Mar 11 08:56:40 crc systemd[1]: Starting Kubernetes Kubelet...
Mar 11 08:56:40 crc restorecon[4744]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Mar 11 08:56:40 crc restorecon[4744]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Mar 11 08:56:40 crc restorecon[4744]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc 
restorecon[4744]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 11 08:56:40 crc restorecon[4744]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 11 08:56:40 crc restorecon[4744]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 11 08:56:40 crc restorecon[4744]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 11 08:56:40 crc 
restorecon[4744]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 11 
08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Mar 11 08:56:40 crc restorecon[4744]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 11 08:56:40 crc 
restorecon[4744]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 11 08:56:40 crc restorecon[4744]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 11 08:56:40 crc restorecon[4744]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 11 08:56:40 crc restorecon[4744]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 11 08:56:40 crc 
restorecon[4744]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Mar 11 08:56:40 crc restorecon[4744]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40
crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Mar 11 08:56:40 crc restorecon[4744]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 11 08:56:40 crc restorecon[4744]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 11 08:56:40 crc restorecon[4744]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 11 08:56:40 crc restorecon[4744]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 11 
08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 11 08:56:40 crc restorecon[4744]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 11 08:56:40 crc restorecon[4744]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 11 08:56:40 crc 
restorecon[4744]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc 
restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc 
restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 11 08:56:40 crc restorecon[4744]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 11 08:56:40 crc restorecon[4744]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 11 08:56:40 crc restorecon[4744]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 11 08:56:40 crc restorecon[4744]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Mar 11 08:56:40 crc restorecon[4744]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:40 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 
crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc 
restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc 
restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc 
restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 11 08:56:41 crc restorecon[4744]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 11 08:56:41 crc 
restorecon[4744]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 11 08:56:41 crc restorecon[4744]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 11 08:56:41 crc restorecon[4744]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 11 08:56:41 crc restorecon[4744]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Mar 11 08:56:41 crc kubenswrapper[4840]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 11 08:56:41 crc kubenswrapper[4840]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Mar 11 08:56:41 crc kubenswrapper[4840]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 11 08:56:41 crc kubenswrapper[4840]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 11 08:56:41 crc kubenswrapper[4840]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 11 08:56:41 crc kubenswrapper[4840]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.772832 4840 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.778375 4840 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.778410 4840 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.778420 4840 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.778429 4840 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.778438 4840 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.778446 4840 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.778454 4840 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.778463 4840 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.778496 4840 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.778508 4840 
feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.778517 4840 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.778525 4840 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.778534 4840 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.778541 4840 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.778552 4840 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.778562 4840 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.778570 4840 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.778580 4840 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.778589 4840 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.778597 4840 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.778605 4840 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.778614 4840 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.778622 4840 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.778630 4840 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.778638 4840 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.778647 4840 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.778655 4840 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.778664 4840 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.778672 4840 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.778680 4840 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.778688 4840 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.778698 4840 feature_gate.go:330] unrecognized feature gate: Example
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.778706 4840 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.778716 4840 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.778724 4840 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.778732 4840 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.778740 4840 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.778748 4840 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.778758 4840 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.778769 4840 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.778779 4840 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.778789 4840 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.778797 4840 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.778807 4840 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.778816 4840 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.778825 4840 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.778834 4840 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.778843 4840 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.778852 4840 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.778860 4840 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.778868 4840 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.778876 4840 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.778883 4840 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.778891 4840 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.778899 4840 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.778906 4840 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.778914 4840 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.778945 4840 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.778957 4840 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.778967 4840 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.778977 4840 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.778986 4840 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.778994 4840 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.779002 4840 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.779011 4840 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.779019 4840 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.779027 4840 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.779036 4840 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.779044 4840 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.779052 4840 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.779059 4840 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.779200 4840 flags.go:64] FLAG: --address="0.0.0.0"
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.779217 4840 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.779230 4840 flags.go:64] FLAG: --anonymous-auth="true"
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.779242 4840 flags.go:64] FLAG: --application-metrics-count-limit="100"
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.779253 4840 flags.go:64] FLAG: --authentication-token-webhook="false"
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.779263 4840 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.779274 4840 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.779285 4840 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.779294 4840 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.779305 4840 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.779315 4840 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.779324 4840 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.779333 4840 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.779342 4840 flags.go:64] FLAG: --cgroup-root=""
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.779351 4840 flags.go:64] FLAG: --cgroups-per-qos="true"
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.779360 4840 flags.go:64] FLAG: --client-ca-file=""
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.779369 4840 flags.go:64] FLAG: --cloud-config=""
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.779378 4840 flags.go:64] FLAG: --cloud-provider=""
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.779386 4840 flags.go:64] FLAG: --cluster-dns="[]"
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.779398 4840 flags.go:64] FLAG: --cluster-domain=""
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.779406 4840 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.779415 4840 flags.go:64] FLAG: --config-dir=""
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.779424 4840 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.779434 4840 flags.go:64] FLAG: --container-log-max-files="5"
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.779450 4840 flags.go:64] FLAG: --container-log-max-size="10Mi"
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.779459 4840 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.779495 4840 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.779506 4840 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.779515 4840 flags.go:64] FLAG: --contention-profiling="false"
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.779524 4840 flags.go:64] FLAG: --cpu-cfs-quota="true"
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.779533 4840 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.779542 4840 flags.go:64] FLAG: --cpu-manager-policy="none"
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.779557 4840 flags.go:64] FLAG: --cpu-manager-policy-options=""
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.779569 4840 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.779578 4840 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.779587 4840 flags.go:64] FLAG: --enable-debugging-handlers="true"
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.779596 4840 flags.go:64] FLAG: --enable-load-reader="false"
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.779604 4840 flags.go:64] FLAG: --enable-server="true"
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.779613 4840 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.779624 4840 flags.go:64] FLAG: --event-burst="100"
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.779635 4840 flags.go:64] FLAG: --event-qps="50"
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.779644 4840 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.779654 4840 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.779662 4840 flags.go:64] FLAG: --eviction-hard=""
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.779673 4840 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.779683 4840 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.779691 4840 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.779701 4840 flags.go:64] FLAG: --eviction-soft=""
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.779709 4840 flags.go:64] FLAG: --eviction-soft-grace-period=""
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.779718 4840 flags.go:64] FLAG: --exit-on-lock-contention="false"
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.779727 4840 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.779736 4840 flags.go:64] FLAG: --experimental-mounter-path=""
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.779744 4840 flags.go:64] FLAG: --fail-cgroupv1="false"
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.779753 4840 flags.go:64] FLAG: --fail-swap-on="true"
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.779762 4840 flags.go:64] FLAG: --feature-gates=""
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.779772 4840 flags.go:64] FLAG: --file-check-frequency="20s"
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.779781 4840 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.779792 4840 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.779801 4840 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.779811 4840 flags.go:64] FLAG: --healthz-port="10248"
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.779821 4840 flags.go:64] FLAG: --help="false"
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.779830 4840 flags.go:64] FLAG: --hostname-override=""
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.779839 4840 flags.go:64] FLAG: --housekeeping-interval="10s"
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.779848 4840 flags.go:64] FLAG: --http-check-frequency="20s"
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.779858 4840 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.779869 4840 flags.go:64] FLAG: --image-credential-provider-config=""
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.779880 4840 flags.go:64] FLAG: --image-gc-high-threshold="85"
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.779892 4840 flags.go:64] FLAG: --image-gc-low-threshold="80"
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.779904 4840 flags.go:64] FLAG: --image-service-endpoint=""
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.779916 4840 flags.go:64] FLAG: --kernel-memcg-notification="false"
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.779926 4840 flags.go:64] FLAG: --kube-api-burst="100"
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.779939 4840 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.779951 4840 flags.go:64] FLAG: --kube-api-qps="50"
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.779961 4840 flags.go:64] FLAG: --kube-reserved=""
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.779972 4840 flags.go:64] FLAG: --kube-reserved-cgroup=""
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.779983 4840 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.779994 4840 flags.go:64] FLAG: --kubelet-cgroups=""
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.780004 4840 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.780015 4840 flags.go:64] FLAG: --lock-file=""
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.780025 4840 flags.go:64] FLAG: --log-cadvisor-usage="false"
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.780037 4840 flags.go:64] FLAG: --log-flush-frequency="5s"
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.780048 4840 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.780065 4840 flags.go:64] FLAG: --log-json-split-stream="false"
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.780075 4840 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.780086 4840 flags.go:64] FLAG: --log-text-split-stream="false"
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.780096 4840 flags.go:64] FLAG: --logging-format="text"
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.780106 4840 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.780118 4840 flags.go:64] FLAG: --make-iptables-util-chains="true"
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.780128 4840 flags.go:64] FLAG: --manifest-url=""
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.780138 4840 flags.go:64] FLAG: --manifest-url-header=""
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.780152 4840 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.780163 4840 flags.go:64] FLAG: --max-open-files="1000000"
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.780176 4840 flags.go:64] FLAG: --max-pods="110"
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.780187 4840 flags.go:64] FLAG: --maximum-dead-containers="-1"
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.780198 4840 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.780209 4840 flags.go:64] FLAG: --memory-manager-policy="None"
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.780221 4840 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.780234 4840 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.780245 4840 flags.go:64] FLAG: --node-ip="192.168.126.11"
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.780256 4840 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.780281 4840 flags.go:64] FLAG: --node-status-max-images="50"
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.780292 4840 flags.go:64] FLAG: --node-status-update-frequency="10s"
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.780305 4840 flags.go:64] FLAG: --oom-score-adj="-999"
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.780318 4840 flags.go:64] FLAG: --pod-cidr=""
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.780333 4840 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.780351 4840 flags.go:64] FLAG: --pod-manifest-path=""
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.780362 4840 flags.go:64] FLAG: --pod-max-pids="-1"
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.780373 4840 flags.go:64] FLAG: --pods-per-core="0"
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.780384 4840 flags.go:64] FLAG: --port="10250"
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.780394 4840 flags.go:64] FLAG: --protect-kernel-defaults="false"
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.780404 4840 flags.go:64] FLAG: --provider-id=""
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.780414 4840 flags.go:64] FLAG: --qos-reserved=""
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.780615 4840 flags.go:64] FLAG: --read-only-port="10255"
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.780628 4840 flags.go:64] FLAG: --register-node="true"
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.780639 4840 flags.go:64] FLAG: --register-schedulable="true"
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.780651 4840 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.780672 4840 flags.go:64] FLAG: --registry-burst="10"
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.780683 4840 flags.go:64] FLAG: --registry-qps="5"
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.780694 4840 flags.go:64] FLAG: --reserved-cpus=""
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.780703 4840 flags.go:64] FLAG: --reserved-memory=""
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.780714 4840 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.780723 4840 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.780732 4840 flags.go:64] FLAG: --rotate-certificates="false"
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.780741 4840 flags.go:64] FLAG: --rotate-server-certificates="false"
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.780750 4840 flags.go:64] FLAG: --runonce="false"
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.780759 4840 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.780768 4840 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.780777 4840 flags.go:64] FLAG: --seccomp-default="false"
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.780787 4840 flags.go:64] FLAG: --serialize-image-pulls="true"
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.780796 4840 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.780805 4840 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.780814 4840 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.780823 4840 flags.go:64] FLAG: --storage-driver-password="root"
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.780833 4840 flags.go:64] FLAG: --storage-driver-secure="false"
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.780843 4840 flags.go:64] FLAG: --storage-driver-table="stats"
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.780853 4840 flags.go:64] FLAG: --storage-driver-user="root"
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.780861 4840 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.780871 4840 flags.go:64] FLAG: --sync-frequency="1m0s"
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.780880 4840 flags.go:64] FLAG: --system-cgroups=""
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.780889 4840 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.780906 4840 flags.go:64] FLAG: --system-reserved-cgroup=""
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.780915 4840 flags.go:64] FLAG: --tls-cert-file=""
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.780924 4840 flags.go:64] FLAG: --tls-cipher-suites="[]"
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.780936 4840 flags.go:64] FLAG: --tls-min-version=""
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.780945 4840 flags.go:64] FLAG: --tls-private-key-file=""
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.780954 4840 flags.go:64] FLAG: --topology-manager-policy="none"
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.780963 4840 flags.go:64] FLAG: --topology-manager-policy-options=""
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.780973 4840 flags.go:64] FLAG: --topology-manager-scope="container"
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.780982 4840 flags.go:64] FLAG: --v="2"
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.780994 4840 flags.go:64] FLAG: --version="false"
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.781005 4840 flags.go:64] FLAG: --vmodule=""
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.781016 4840 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.781026 4840 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.781262 4840 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.781273 4840 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.781282 4840 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.781290 4840 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.781299 4840 feature_gate.go:330] unrecognized feature gate: Example
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.781307 4840 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.781314 4840 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.781326 4840 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.781334 4840 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.781342 4840 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.781350 4840 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.781358 4840 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.781366 4840 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.781374 4840 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.781382 4840 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.781390 4840 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.781397 4840 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.781405 4840 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.781413 4840 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.781421 4840 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.781429 4840 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.781437 4840 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.781445 4840 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.781454 4840 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.781462 4840 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.781502 4840 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.781513 4840 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.781522 4840 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.781530 4840 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.781537 4840 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.781549 4840 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.781558 4840 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.781568 4840 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.781578 4840 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.781589 4840 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.781597 4840 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.781606 4840 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.781614 4840 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.781623 4840 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.781642 4840 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.781651 4840 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.781658 4840 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.781666 4840 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.781677 4840 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.781687 4840 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.781697 4840 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.781706 4840 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.781714 4840 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.781724 4840 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.781732 4840 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.781740 4840 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.781748 4840 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.781756 4840 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.781763 4840 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.781771 4840 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.781779 4840 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.781787 4840 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.781794 4840 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.781802 4840 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.781813 4840 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.781821 4840 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.781829 4840 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.781836 4840 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.781844 4840 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.781852 4840 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.781860 4840 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.781867 4840 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.781875 4840 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.781883 4840 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.781890 4840 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.781898 4840 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.782961 4840 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.801796 4840 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.801840 4840 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.801954 4840 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.801966 4840 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.801977 4840 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.801987 4840 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.801995 4840 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.802004 4840 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.802012 4840 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.802020 4840 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.802029 4840 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.802038 4840 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.802047 4840 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.802056 4840 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.802064 4840 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.802075 4840 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.802088 4840 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.802097 4840 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.802105 4840 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.802114 4840 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.802122 4840 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.802130 4840 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.802138 4840 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.802146 4840 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.802153 4840 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.802161 4840 feature_gate.go:330]
unrecognized feature gate: SignatureStores Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.802169 4840 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.802177 4840 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.802184 4840 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.802194 4840 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.802203 4840 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.802213 4840 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.802224 4840 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.802233 4840 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.802243 4840 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.802253 4840 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.802263 4840 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.802272 4840 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.802281 4840 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.802289 4840 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 
08:56:41.802300 4840 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.802311 4840 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.802319 4840 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.802328 4840 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.802337 4840 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.802349 4840 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.802359 4840 feature_gate.go:330] unrecognized feature gate: Example Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.802367 4840 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.802375 4840 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.802383 4840 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.802391 4840 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.802399 4840 feature_gate.go:330] unrecognized feature gate: InsightsConfig Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.802408 4840 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.802416 4840 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.802424 4840 
feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.802434 4840 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.802443 4840 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.802452 4840 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.802460 4840 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.802501 4840 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.802511 4840 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.802519 4840 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.802530 4840 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.802538 4840 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.802545 4840 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.802553 4840 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.802561 4840 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.802569 4840 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 11 08:56:41 crc kubenswrapper[4840]: 
W0311 08:56:41.802577 4840 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.802585 4840 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.802593 4840 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.802601 4840 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.802609 4840 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.802622 4840 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.802824 4840 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.802836 4840 feature_gate.go:330] unrecognized feature gate: InsightsConfig Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.802845 4840 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.802855 4840 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.802863 4840 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.802870 4840 
feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.802878 4840 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.802886 4840 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.802894 4840 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.802901 4840 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.802909 4840 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.802956 4840 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.802964 4840 feature_gate.go:330] unrecognized feature gate: Example Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.802972 4840 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.802981 4840 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.802989 4840 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.802996 4840 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.803004 4840 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.803012 4840 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.803020 4840 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 11 08:56:41 crc 
kubenswrapper[4840]: W0311 08:56:41.803027 4840 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.803036 4840 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.803044 4840 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.803054 4840 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.803064 4840 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.803075 4840 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.803084 4840 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.803093 4840 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.803101 4840 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.803110 4840 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.803118 4840 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.803127 4840 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.803135 4840 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.803143 4840 feature_gate.go:330] unrecognized feature gate: 
OnClusterBuild Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.803152 4840 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.803160 4840 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.803170 4840 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.803180 4840 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.803188 4840 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.803197 4840 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.803208 4840 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.803216 4840 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.803224 4840 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.803232 4840 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.803241 4840 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.803250 4840 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.803259 4840 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.803267 4840 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 11 
08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.803275 4840 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.803283 4840 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.803293 4840 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.803301 4840 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.803311 4840 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.803320 4840 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.803329 4840 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.803338 4840 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.803346 4840 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.803354 4840 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.803362 4840 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.803369 4840 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.803377 4840 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.803386 4840 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.803394 4840 
feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.803404 4840 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.803412 4840 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.803420 4840 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.803428 4840 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.803437 4840 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.803444 4840 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.803452 4840 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.803460 4840 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.803493 4840 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.804597 4840 server.go:940] "Client rotation is on, will bootstrap in background" Mar 11 08:56:41 crc kubenswrapper[4840]: E0311 08:56:41.809618 4840 bootstrap.go:266] "Unhandled Error" 
err="part of the existing bootstrap client certificate in /var/lib/kubelet/kubeconfig is expired: 2026-02-24 05:52:08 +0000 UTC" logger="UnhandledError" Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.815059 4840 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir" Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.815334 4840 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.819196 4840 server.go:997] "Starting client certificate rotation" Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.819274 4840 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.819541 4840 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.847944 4840 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.851156 4840 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Mar 11 08:56:41 crc kubenswrapper[4840]: E0311 08:56:41.851200 4840 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.30:6443: connect: connection refused" logger="UnhandledError" Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.867966 4840 log.go:25] "Validated CRI v1 runtime API" Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.912233 4840 log.go:25] "Validated CRI 
v1 image API" Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.916218 4840 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.921603 4840 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-03-11-08-52-08-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.921654 4840 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.953319 4840 manager.go:217] Machine: {Timestamp:2026-03-11 08:56:41.949616963 +0000 UTC m=+0.615286818 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654124544 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:e5bb6cc6-19d8-441f-bba6-b926930273a7 BootID:b40dc5ac-6e20-4fe3-8d4f-1dab2691799c Filesystems:[{Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108169 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 
DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:fc:41:2b Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:fc:41:2b Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:f3:17:19 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:79:0a:27 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:2d:a4:8d Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:e4:1b:0c Speed:-1 Mtu:1496} {Name:ens7.23 MacAddress:52:54:00:d3:7f:be Speed:-1 Mtu:1496} {Name:eth10 MacAddress:46:62:42:1c:d4:e6 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:ee:75:6c:f3:30:4a Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654124544 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 
Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified 
Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.953637 4840 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.953774 4840 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.956291 4840 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.956538 4840 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.956580 4840 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.956892 4840 topology_manager.go:138] "Creating topology manager with none policy" Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.956908 4840 container_manager_linux.go:303] "Creating device plugin manager" Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.957409 4840 manager.go:142] "Creating Device Plugin manager" 
path="/var/lib/kubelet/device-plugins/kubelet.sock" Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.957449 4840 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.957654 4840 state_mem.go:36] "Initialized new in-memory state store" Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.957776 4840 server.go:1245] "Using root directory" path="/var/lib/kubelet" Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.961910 4840 kubelet.go:418] "Attempting to sync node with API server" Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.961942 4840 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.962001 4840 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.962016 4840 kubelet.go:324] "Adding apiserver pod source" Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.962040 4840 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.968706 4840 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.30:6443: connect: connection refused Mar 11 08:56:41 crc kubenswrapper[4840]: E0311 08:56:41.968963 4840 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.30:6443: connect: connection refused" logger="UnhandledError" Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.968987 4840 
kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.971586 4840 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.30:6443: connect: connection refused Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.971680 4840 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". Mar 11 08:56:41 crc kubenswrapper[4840]: E0311 08:56:41.971682 4840 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.30:6443: connect: connection refused" logger="UnhandledError" Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.973447 4840 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.975182 4840 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.975228 4840 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.975244 4840 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.975259 4840 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.975283 4840 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.975299 4840 
plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.975316 4840 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.975343 4840 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.975363 4840 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.975381 4840 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.975403 4840 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.975420 4840 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.977651 4840 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.978447 4840 server.go:1280] "Started kubelet" Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.979503 4840 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.979502 4840 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.980551 4840 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.980963 4840 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.30:6443: connect: connection refused Mar 11 08:56:41 crc systemd[1]: Started 
Kubernetes Kubelet. Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.984429 4840 server.go:460] "Adding debug handlers to kubelet server" Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.984619 4840 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.984805 4840 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.985071 4840 volume_manager.go:287] "The desired_state_of_world populator starts" Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.985251 4840 volume_manager.go:289] "Starting Kubelet Volume Manager" Mar 11 08:56:41 crc kubenswrapper[4840]: E0311 08:56:41.985546 4840 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.985643 4840 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 11 08:56:41 crc kubenswrapper[4840]: E0311 08:56:41.987063 4840 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.30:6443: connect: connection refused" interval="200ms" Mar 11 08:56:41 crc kubenswrapper[4840]: W0311 08:56:41.987161 4840 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.30:6443: connect: connection refused Mar 11 08:56:41 crc kubenswrapper[4840]: E0311 08:56:41.987335 4840 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 
38.102.83.30:6443: connect: connection refused" logger="UnhandledError" Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.988980 4840 factory.go:55] Registering systemd factory Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.989009 4840 factory.go:221] Registration of the systemd container factory successfully Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.989337 4840 factory.go:153] Registering CRI-O factory Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.989362 4840 factory.go:221] Registration of the crio container factory successfully Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.989448 4840 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.989514 4840 factory.go:103] Registering Raw factory Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.989536 4840 manager.go:1196] Started watching for new ooms in manager Mar 11 08:56:41 crc kubenswrapper[4840]: I0311 08:56:41.990532 4840 manager.go:319] Starting recovery of all containers Mar 11 08:56:41 crc kubenswrapper[4840]: E0311 08:56:41.989082 4840 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.30:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.189bbda6b3c0caeb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:56:41.978399467 +0000 UTC m=+0.644069322,LastTimestamp:2026-03-11 08:56:41.978399467 +0000 UTC m=+0.644069322,Count:1,Type:Normal,EventTime:0001-01-01 
00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.009201 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.009320 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.009353 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.009373 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.009396 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.009419 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" 
volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.009439 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.009462 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.009546 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.009566 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.009585 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.009605 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" 
volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.009625 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.009648 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.009668 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.009688 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.009708 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.009749 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" 
volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.009775 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.009802 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.009830 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.009849 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.009897 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.009927 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" 
seLinuxMountContext="" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.009979 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.010020 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.010071 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.010106 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.010202 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.010240 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 
08:56:42.010268 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.010287 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.010307 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.010359 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.010443 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.010508 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.010538 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.010558 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.010577 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.010602 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.010629 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.010685 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.010715 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" 
volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.010740 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.010789 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.010809 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.010829 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.010890 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.010911 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" 
volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.010964 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.010983 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.011002 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.011062 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.011087 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.011110 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" 
volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.011134 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.011156 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.011229 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.011267 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.011292 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.011339 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" 
volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext=""
Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.011370 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext=""
Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.011396 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext=""
Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.011424 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext=""
Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.011509 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext=""
Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.011540 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext=""
Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.011567 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext=""
Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.011594 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext=""
Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.011623 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext=""
Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.011654 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext=""
Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.011673 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext=""
Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.011692 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.011712 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext=""
Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.011732 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext=""
Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.011752 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext=""
Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.011771 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.011791 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext=""
Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.011809 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext=""
Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.011829 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext=""
Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.011850 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext=""
Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.011869 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext=""
Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.011895 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext=""
Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.011913 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext=""
Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.011954 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext=""
Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.011986 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext=""
Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.012012 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext=""
Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.012032 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext=""
Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.012051 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext=""
Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.012070 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext=""
Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.012089 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext=""
Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.012110 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext=""
Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.012129 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext=""
Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.012151 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext=""
Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.012171 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext=""
Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.012191 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext=""
Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.012214 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext=""
Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.012234 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext=""
Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.012270 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext=""
Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.012290 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext=""
Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.012312 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext=""
Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.012332 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext=""
Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.012357 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext=""
Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.012385 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext=""
Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.012412 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext=""
Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.012450 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext=""
Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.012518 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext=""
Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.012549 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext=""
Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.012578 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext=""
Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.012599 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext=""
Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.012622 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext=""
Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.012646 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext=""
Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.012667 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext=""
Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.012720 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext=""
Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.012744 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.012765 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext=""
Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.012786 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext=""
Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.012806 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext=""
Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.012826 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext=""
Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.012846 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext=""
Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.012865 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext=""
Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.012883 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.012902 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext=""
Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.012920 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext=""
Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.012940 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext=""
Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.012961 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext=""
Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.012996 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext=""
Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.013023 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext=""
Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.013047 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext=""
Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.013076 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext=""
Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.013097 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext=""
Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.013117 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext=""
Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.013136 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext=""
Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.013154 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext=""
Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.013173 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext=""
Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.013192 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext=""
Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.013223 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext=""
Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.013244 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext=""
Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.013263 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext=""
Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.013281 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext=""
Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.013299 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext=""
Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.013318 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext=""
Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.013337 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext=""
Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.013356 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext=""
Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.013376 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext=""
Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.013394 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext=""
Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.013418 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext=""
Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.013438 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext=""
Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.013458 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext=""
Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.013503 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext=""
Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.013522 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext=""
Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.013540 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext=""
Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.013558 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext=""
Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.013577 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext=""
Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.013599 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext=""
Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.013619 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext=""
Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.013637 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext=""
Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.013667 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext=""
Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.013689 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext=""
Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.013709 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext=""
Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.013734 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext=""
Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.013762 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext=""
Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.013793 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext=""
Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.013897 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext=""
Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.013933 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext=""
Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.013961 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext=""
Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.013988 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext=""
Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.014015 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext=""
Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.014038 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext=""
Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.014057 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext=""
Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.014103 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext=""
Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.014123 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext=""
Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.014141 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext=""
Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.014163 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext=""
Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.014181 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext=""
Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.014249 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext=""
Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.014271 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext=""
Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.014297 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext=""
Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.015419 4840 manager.go:324] Recovery completed
Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.016670 4840 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount"
Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.016728 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext=""
Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.016753 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.016774 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext=""
Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.016805 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext=""
Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.016829 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext=""
Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.016850 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext=""
Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.016872 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext=""
Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.016900 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext=""
Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.016927 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext=""
Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.016947 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext=""
Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.016972 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec"
volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.016991 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.017012 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.017034 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.017053 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.017076 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.017096 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.017114 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.017142 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.017205 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.017242 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.017298 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.017339 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" 
volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.017375 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.017395 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.017415 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.017435 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.017455 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.017523 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" 
seLinuxMountContext="" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.017544 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.017564 4840 reconstruct.go:97] "Volume reconstruction finished" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.017579 4840 reconciler.go:26] "Reconciler: start to sync state" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.026532 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.028133 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.028184 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.028194 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.029445 4840 cpu_manager.go:225] "Starting CPU manager" policy="none" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.029478 4840 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.029496 4840 state_mem.go:36] "Initialized new in-memory state store" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.041681 4840 policy_none.go:49] "None policy: Start" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.042807 4840 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.042849 4840 state_mem.go:35] "Initializing new in-memory state 
store" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.055486 4840 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.058808 4840 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.058873 4840 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.058904 4840 kubelet.go:2335] "Starting kubelet main sync loop" Mar 11 08:56:42 crc kubenswrapper[4840]: E0311 08:56:42.058943 4840 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 11 08:56:42 crc kubenswrapper[4840]: W0311 08:56:42.061405 4840 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.30:6443: connect: connection refused Mar 11 08:56:42 crc kubenswrapper[4840]: E0311 08:56:42.061520 4840 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.30:6443: connect: connection refused" logger="UnhandledError" Mar 11 08:56:42 crc kubenswrapper[4840]: E0311 08:56:42.086685 4840 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.108490 4840 manager.go:334] "Starting Device Plugin manager" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.108559 4840 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 11 
08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.108574 4840 server.go:79] "Starting device plugin registration server" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.109132 4840 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.109148 4840 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.109727 4840 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.110200 4840 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.110607 4840 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 11 08:56:42 crc kubenswrapper[4840]: E0311 08:56:42.116401 4840 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.160076 4840 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.160188 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.162403 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.162453 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 
08:56:42.162490 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.162648 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.162858 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.162903 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.164170 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.164199 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.164208 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.164240 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.164285 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.164303 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.164563 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.164734 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.164810 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.166079 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.166103 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.166186 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.166207 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.166129 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.166240 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.166403 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.166534 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.166592 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.167763 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.167816 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.167838 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.167868 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.167889 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.167902 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.168054 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.168183 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.168217 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.169336 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.169363 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.169374 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.169663 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.169686 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.169696 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.169882 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.169906 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.170992 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.171058 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.171085 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:56:42 crc kubenswrapper[4840]: E0311 08:56:42.188766 4840 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.30:6443: connect: connection refused" interval="400ms" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.211420 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.213982 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.214022 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.214030 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.214058 4840 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 11 08:56:42 crc kubenswrapper[4840]: E0311 08:56:42.214515 4840 
kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.30:6443: connect: connection refused" node="crc" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.219784 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.219829 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.219854 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.219875 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.219898 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: 
\"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.219918 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.219936 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.219955 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.219975 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.219993 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod 
\"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.220013 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.220032 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.220051 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.220069 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.220087 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 
11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.321106 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.321159 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.321179 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.321198 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.321214 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.321230 4840 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.321245 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.321263 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.321280 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.321295 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.321310 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " 
pod="openshift-etcd/etcd-crc" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.321318 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.321369 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.321372 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.321394 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.321326 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.321355 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.321501 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.321355 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.321321 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.321524 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.321539 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.321426 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.321422 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.321356 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.321582 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.321598 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.321632 4840 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.321669 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.321759 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.415076 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.416365 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.416424 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.416434 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.416458 4840 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 11 08:56:42 crc kubenswrapper[4840]: E0311 08:56:42.416900 4840 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.30:6443: connect: connection 
refused" node="crc" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.505277 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.512522 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.526346 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.546847 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.554044 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 11 08:56:42 crc kubenswrapper[4840]: W0311 08:56:42.559597 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-92c11d56509f6a3160f8a5a0b47e26d604846fc32bf585520da6a1f57e5d62da WatchSource:0}: Error finding container 92c11d56509f6a3160f8a5a0b47e26d604846fc32bf585520da6a1f57e5d62da: Status 404 returned error can't find the container with id 92c11d56509f6a3160f8a5a0b47e26d604846fc32bf585520da6a1f57e5d62da Mar 11 08:56:42 crc kubenswrapper[4840]: W0311 08:56:42.574927 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-b8068d74f3dce73cacf11c96d92f64c04a891e6ddc81b686c628468aa06018a8 WatchSource:0}: Error finding container b8068d74f3dce73cacf11c96d92f64c04a891e6ddc81b686c628468aa06018a8: Status 404 returned error can't find the container with id 
b8068d74f3dce73cacf11c96d92f64c04a891e6ddc81b686c628468aa06018a8 Mar 11 08:56:42 crc kubenswrapper[4840]: W0311 08:56:42.578334 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-ffae612cfa4bfbee1462d30b22f79a3fc0e76b9b41cfffaab1f6db24c6afe5d5 WatchSource:0}: Error finding container ffae612cfa4bfbee1462d30b22f79a3fc0e76b9b41cfffaab1f6db24c6afe5d5: Status 404 returned error can't find the container with id ffae612cfa4bfbee1462d30b22f79a3fc0e76b9b41cfffaab1f6db24c6afe5d5 Mar 11 08:56:42 crc kubenswrapper[4840]: E0311 08:56:42.590100 4840 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.30:6443: connect: connection refused" interval="800ms" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.817794 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.819157 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.819191 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.819200 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.819219 4840 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 11 08:56:42 crc kubenswrapper[4840]: E0311 08:56:42.819665 4840 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.30:6443: connect: connection 
refused" node="crc" Mar 11 08:56:42 crc kubenswrapper[4840]: W0311 08:56:42.887045 4840 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.30:6443: connect: connection refused Mar 11 08:56:42 crc kubenswrapper[4840]: E0311 08:56:42.887151 4840 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.30:6443: connect: connection refused" logger="UnhandledError" Mar 11 08:56:42 crc kubenswrapper[4840]: W0311 08:56:42.939615 4840 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.30:6443: connect: connection refused Mar 11 08:56:42 crc kubenswrapper[4840]: E0311 08:56:42.939712 4840 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.30:6443: connect: connection refused" logger="UnhandledError" Mar 11 08:56:42 crc kubenswrapper[4840]: I0311 08:56:42.982266 4840 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.30:6443: connect: connection refused Mar 11 08:56:43 crc kubenswrapper[4840]: I0311 08:56:43.062824 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"09a2e9a0c6007cb722d99e8a0f2d2c68d012f7f15f4f0436cc8543b90df4aa5e"} Mar 11 08:56:43 crc kubenswrapper[4840]: I0311 08:56:43.063872 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"e142efc1088a343a60a02b59e9912c89ddcfd760803c00fa3dc7b691cb276ddb"} Mar 11 08:56:43 crc kubenswrapper[4840]: I0311 08:56:43.065376 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"92c11d56509f6a3160f8a5a0b47e26d604846fc32bf585520da6a1f57e5d62da"} Mar 11 08:56:43 crc kubenswrapper[4840]: I0311 08:56:43.066278 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ffae612cfa4bfbee1462d30b22f79a3fc0e76b9b41cfffaab1f6db24c6afe5d5"} Mar 11 08:56:43 crc kubenswrapper[4840]: I0311 08:56:43.067046 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"b8068d74f3dce73cacf11c96d92f64c04a891e6ddc81b686c628468aa06018a8"} Mar 11 08:56:43 crc kubenswrapper[4840]: W0311 08:56:43.284383 4840 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.30:6443: connect: connection refused Mar 11 08:56:43 crc kubenswrapper[4840]: E0311 08:56:43.284541 4840 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.30:6443: connect: connection refused" logger="UnhandledError" Mar 11 08:56:43 crc kubenswrapper[4840]: E0311 08:56:43.390744 4840 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.30:6443: connect: connection refused" interval="1.6s" Mar 11 08:56:43 crc kubenswrapper[4840]: W0311 08:56:43.554342 4840 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.30:6443: connect: connection refused Mar 11 08:56:43 crc kubenswrapper[4840]: E0311 08:56:43.554423 4840 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.30:6443: connect: connection refused" logger="UnhandledError" Mar 11 08:56:43 crc kubenswrapper[4840]: I0311 08:56:43.620604 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:56:43 crc kubenswrapper[4840]: I0311 08:56:43.622570 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:56:43 crc kubenswrapper[4840]: I0311 08:56:43.622647 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:56:43 crc kubenswrapper[4840]: I0311 08:56:43.622666 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:56:43 crc kubenswrapper[4840]: I0311 08:56:43.622696 4840 
kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 11 08:56:43 crc kubenswrapper[4840]: E0311 08:56:43.623701 4840 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.30:6443: connect: connection refused" node="crc" Mar 11 08:56:43 crc kubenswrapper[4840]: E0311 08:56:43.734389 4840 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.30:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.189bbda6b3c0caeb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:56:41.978399467 +0000 UTC m=+0.644069322,LastTimestamp:2026-03-11 08:56:41.978399467 +0000 UTC m=+0.644069322,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:56:43 crc kubenswrapper[4840]: I0311 08:56:43.900519 4840 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 11 08:56:43 crc kubenswrapper[4840]: E0311 08:56:43.902397 4840 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.30:6443: connect: connection refused" logger="UnhandledError" Mar 11 08:56:43 crc kubenswrapper[4840]: I0311 08:56:43.982204 4840 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.30:6443: connect: connection refused Mar 11 08:56:44 crc kubenswrapper[4840]: I0311 08:56:44.073693 4840 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="ce7a8b6fd7ac469cc09856a0aaf6ca0714c24a4198231a5fa08b3eeddd073b20" exitCode=0 Mar 11 08:56:44 crc kubenswrapper[4840]: I0311 08:56:44.073782 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"ce7a8b6fd7ac469cc09856a0aaf6ca0714c24a4198231a5fa08b3eeddd073b20"} Mar 11 08:56:44 crc kubenswrapper[4840]: I0311 08:56:44.073879 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:56:44 crc kubenswrapper[4840]: I0311 08:56:44.076260 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:56:44 crc kubenswrapper[4840]: I0311 08:56:44.076323 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:56:44 crc kubenswrapper[4840]: I0311 08:56:44.076346 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:56:44 crc kubenswrapper[4840]: I0311 08:56:44.079550 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:56:44 crc kubenswrapper[4840]: I0311 08:56:44.079555 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"91a1312d85b3716ac13f37e631c5c53b4d5541614972597efb39c406e32f738a"} Mar 11 08:56:44 crc kubenswrapper[4840]: I0311 08:56:44.079622 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"7deb8c4d6c80705675e84dfb26c8041214e85b2c425ce6d2f6883fb2ab191c2e"} Mar 11 08:56:44 crc kubenswrapper[4840]: I0311 08:56:44.079655 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"a2b70735a9ebbdbf55e287a817686645c45ca4896c4560cc031967174940500c"} Mar 11 08:56:44 crc kubenswrapper[4840]: I0311 08:56:44.079683 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"f8267a35ad5296c0a02a207b535f1fe5f48c725dfb4ab7254fc2833e84148eff"} Mar 11 08:56:44 crc kubenswrapper[4840]: I0311 08:56:44.080755 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:56:44 crc kubenswrapper[4840]: I0311 08:56:44.080805 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:56:44 crc kubenswrapper[4840]: I0311 08:56:44.080822 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:56:44 crc kubenswrapper[4840]: I0311 08:56:44.083277 4840 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="f5815f294ed0f63415a41c689e1aaea2d9dd1f0ee4999752e4e85065c08cb1d3" exitCode=0 Mar 11 08:56:44 crc kubenswrapper[4840]: I0311 08:56:44.083561 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"f5815f294ed0f63415a41c689e1aaea2d9dd1f0ee4999752e4e85065c08cb1d3"} Mar 11 08:56:44 
crc kubenswrapper[4840]: I0311 08:56:44.083577 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:56:44 crc kubenswrapper[4840]: I0311 08:56:44.091747 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:56:44 crc kubenswrapper[4840]: I0311 08:56:44.091782 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:56:44 crc kubenswrapper[4840]: I0311 08:56:44.091790 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:56:44 crc kubenswrapper[4840]: I0311 08:56:44.094583 4840 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="e27a2ab44a237582284cb5f55d2651f7b5d39c199fdb62a4a65be9921e86945c" exitCode=0 Mar 11 08:56:44 crc kubenswrapper[4840]: I0311 08:56:44.094639 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"e27a2ab44a237582284cb5f55d2651f7b5d39c199fdb62a4a65be9921e86945c"} Mar 11 08:56:44 crc kubenswrapper[4840]: I0311 08:56:44.094793 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:56:44 crc kubenswrapper[4840]: I0311 08:56:44.096934 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:56:44 crc kubenswrapper[4840]: I0311 08:56:44.097036 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:56:44 crc kubenswrapper[4840]: I0311 08:56:44.097138 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:56:44 crc kubenswrapper[4840]: I0311 08:56:44.097572 4840 generic.go:334] 
"Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="656c404aff0b9e454e59bceb73d79f5d045ac4c05de3dee6f0f38019bf78fac4" exitCode=0 Mar 11 08:56:44 crc kubenswrapper[4840]: I0311 08:56:44.097642 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"656c404aff0b9e454e59bceb73d79f5d045ac4c05de3dee6f0f38019bf78fac4"} Mar 11 08:56:44 crc kubenswrapper[4840]: I0311 08:56:44.097682 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:56:44 crc kubenswrapper[4840]: I0311 08:56:44.099114 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:56:44 crc kubenswrapper[4840]: I0311 08:56:44.099182 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:56:44 crc kubenswrapper[4840]: I0311 08:56:44.099208 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:56:44 crc kubenswrapper[4840]: I0311 08:56:44.100934 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:56:44 crc kubenswrapper[4840]: I0311 08:56:44.102414 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:56:44 crc kubenswrapper[4840]: I0311 08:56:44.102526 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:56:44 crc kubenswrapper[4840]: I0311 08:56:44.102567 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:56:44 crc kubenswrapper[4840]: I0311 08:56:44.982083 4840 csi_plugin.go:884] Failed to contact API server when waiting for CSINode 
publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.30:6443: connect: connection refused Mar 11 08:56:44 crc kubenswrapper[4840]: E0311 08:56:44.991938 4840 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.30:6443: connect: connection refused" interval="3.2s" Mar 11 08:56:45 crc kubenswrapper[4840]: I0311 08:56:45.101247 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 11 08:56:45 crc kubenswrapper[4840]: I0311 08:56:45.104911 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"5bd25fbbac425c9ba1169b1106b9ac77a80739a003bd795033d691ee273e0d3e"} Mar 11 08:56:45 crc kubenswrapper[4840]: I0311 08:56:45.104942 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"33384f01914fa6428a8c359b3de0d20963b933f5c6d47519f059e48f85c9f4c7"} Mar 11 08:56:45 crc kubenswrapper[4840]: I0311 08:56:45.104953 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ca2279f323c0a6baf645b62c496c38d3d2ad4efc5033a0819ed1d58f4d862e10"} Mar 11 08:56:45 crc kubenswrapper[4840]: I0311 08:56:45.104965 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"0ed311d75feec58f86d1d9f435c6115a463b4e7cd3003b6dff8447360271b6a2"} Mar 11 08:56:45 crc kubenswrapper[4840]: 
I0311 08:56:45.107945 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"0b8b376bd2621e314b7488c7f19023eb2927deea932f4764289fa82d1309f6d0"} Mar 11 08:56:45 crc kubenswrapper[4840]: I0311 08:56:45.107993 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"191f6779c704bc04c6db18ab7604aa56472fa73c65a49ab54f8213a17dfc89dc"} Mar 11 08:56:45 crc kubenswrapper[4840]: I0311 08:56:45.108007 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"68fa952dc486db4733cdc74294744871c4e8e27f3a797c3bb641c93bc1ba7549"} Mar 11 08:56:45 crc kubenswrapper[4840]: I0311 08:56:45.108026 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:56:45 crc kubenswrapper[4840]: I0311 08:56:45.109183 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:56:45 crc kubenswrapper[4840]: I0311 08:56:45.109232 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:56:45 crc kubenswrapper[4840]: I0311 08:56:45.109249 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:56:45 crc kubenswrapper[4840]: I0311 08:56:45.111095 4840 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="d44bea268b0ee5b265a53c2798864866ec5bd5104161bb6927202375b7b7adee" exitCode=0 Mar 11 08:56:45 crc kubenswrapper[4840]: I0311 08:56:45.111139 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"d44bea268b0ee5b265a53c2798864866ec5bd5104161bb6927202375b7b7adee"} Mar 11 08:56:45 crc kubenswrapper[4840]: I0311 08:56:45.111189 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:56:45 crc kubenswrapper[4840]: I0311 08:56:45.112271 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:56:45 crc kubenswrapper[4840]: I0311 08:56:45.112321 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:56:45 crc kubenswrapper[4840]: I0311 08:56:45.112344 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:56:45 crc kubenswrapper[4840]: I0311 08:56:45.114506 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"c1d24a6fa03febf90b39f2984c5d8adc70cf5e0b019cba4c3d99830dc9e2cbdc"} Mar 11 08:56:45 crc kubenswrapper[4840]: I0311 08:56:45.114516 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:56:45 crc kubenswrapper[4840]: I0311 08:56:45.114720 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:56:45 crc kubenswrapper[4840]: I0311 08:56:45.115705 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:56:45 crc kubenswrapper[4840]: I0311 08:56:45.115758 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:56:45 crc kubenswrapper[4840]: I0311 08:56:45.115781 4840 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Mar 11 08:56:45 crc kubenswrapper[4840]: I0311 08:56:45.115910 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:56:45 crc kubenswrapper[4840]: I0311 08:56:45.115943 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:56:45 crc kubenswrapper[4840]: I0311 08:56:45.115953 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:56:45 crc kubenswrapper[4840]: W0311 08:56:45.197117 4840 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.30:6443: connect: connection refused Mar 11 08:56:45 crc kubenswrapper[4840]: E0311 08:56:45.197207 4840 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.30:6443: connect: connection refused" logger="UnhandledError" Mar 11 08:56:45 crc kubenswrapper[4840]: I0311 08:56:45.224775 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:56:45 crc kubenswrapper[4840]: I0311 08:56:45.226647 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:56:45 crc kubenswrapper[4840]: I0311 08:56:45.226722 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:56:45 crc kubenswrapper[4840]: I0311 08:56:45.226735 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:56:45 crc 
kubenswrapper[4840]: I0311 08:56:45.226765 4840 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 11 08:56:45 crc kubenswrapper[4840]: E0311 08:56:45.227336 4840 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.30:6443: connect: connection refused" node="crc" Mar 11 08:56:45 crc kubenswrapper[4840]: W0311 08:56:45.391119 4840 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.30:6443: connect: connection refused Mar 11 08:56:45 crc kubenswrapper[4840]: E0311 08:56:45.391211 4840 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.30:6443: connect: connection refused" logger="UnhandledError" Mar 11 08:56:45 crc kubenswrapper[4840]: W0311 08:56:45.484578 4840 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.30:6443: connect: connection refused Mar 11 08:56:45 crc kubenswrapper[4840]: E0311 08:56:45.484683 4840 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.30:6443: connect: connection refused" logger="UnhandledError" Mar 11 08:56:45 crc kubenswrapper[4840]: W0311 08:56:45.519340 4840 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.30:6443: connect: connection refused Mar 11 08:56:45 crc kubenswrapper[4840]: E0311 08:56:45.519497 4840 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.30:6443: connect: connection refused" logger="UnhandledError" Mar 11 08:56:46 crc kubenswrapper[4840]: I0311 08:56:46.120450 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b16214bd1db56e442df59aaa4b583a2a0fca599366e4aa8439926413ffc40790"} Mar 11 08:56:46 crc kubenswrapper[4840]: I0311 08:56:46.120697 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:56:46 crc kubenswrapper[4840]: I0311 08:56:46.122103 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:56:46 crc kubenswrapper[4840]: I0311 08:56:46.122179 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:56:46 crc kubenswrapper[4840]: I0311 08:56:46.122203 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:56:46 crc kubenswrapper[4840]: I0311 08:56:46.126044 4840 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="d3ff936a8fe9cf8c33596b663c3d52951be0085b979c3e8af02bb07dea800744" exitCode=0 Mar 11 08:56:46 crc kubenswrapper[4840]: I0311 08:56:46.126170 4840 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 11 08:56:46 
crc kubenswrapper[4840]: I0311 08:56:46.126207 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:56:46 crc kubenswrapper[4840]: I0311 08:56:46.126904 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:56:46 crc kubenswrapper[4840]: I0311 08:56:46.127433 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"d3ff936a8fe9cf8c33596b663c3d52951be0085b979c3e8af02bb07dea800744"} Mar 11 08:56:46 crc kubenswrapper[4840]: I0311 08:56:46.127602 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:56:46 crc kubenswrapper[4840]: I0311 08:56:46.128209 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:56:46 crc kubenswrapper[4840]: I0311 08:56:46.129434 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:56:46 crc kubenswrapper[4840]: I0311 08:56:46.129518 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:56:46 crc kubenswrapper[4840]: I0311 08:56:46.129542 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:56:46 crc kubenswrapper[4840]: I0311 08:56:46.130592 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:56:46 crc kubenswrapper[4840]: I0311 08:56:46.130638 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:56:46 crc kubenswrapper[4840]: I0311 08:56:46.130658 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:56:46 crc 
kubenswrapper[4840]: I0311 08:56:46.131560 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:56:46 crc kubenswrapper[4840]: I0311 08:56:46.131601 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:56:46 crc kubenswrapper[4840]: I0311 08:56:46.131622 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:56:46 crc kubenswrapper[4840]: I0311 08:56:46.132694 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:56:46 crc kubenswrapper[4840]: I0311 08:56:46.132741 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:56:46 crc kubenswrapper[4840]: I0311 08:56:46.132763 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:56:47 crc kubenswrapper[4840]: I0311 08:56:47.135893 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"4c6fc1d000f0451d3d286785671b529deb2cb20d118bd036d5e550ed784a61b2"} Mar 11 08:56:47 crc kubenswrapper[4840]: I0311 08:56:47.135963 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"354ac3a6222a2c7aa4de250024d7e698f06097efad749943955ea0805afbaba7"} Mar 11 08:56:47 crc kubenswrapper[4840]: I0311 08:56:47.135985 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"c65e5262bebdac2c51d34bfc966b44eb120a83b15e0f00d252dc3642540e1332"} Mar 11 08:56:47 crc kubenswrapper[4840]: I0311 08:56:47.136003 4840 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"925bc67aa7f956e253f23be3cfd6ffe0bc899de3325a02aad01f274621d8672c"} Mar 11 08:56:47 crc kubenswrapper[4840]: I0311 08:56:47.136066 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:56:47 crc kubenswrapper[4840]: I0311 08:56:47.136125 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 11 08:56:47 crc kubenswrapper[4840]: I0311 08:56:47.137500 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:56:47 crc kubenswrapper[4840]: I0311 08:56:47.137562 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:56:47 crc kubenswrapper[4840]: I0311 08:56:47.137581 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:56:47 crc kubenswrapper[4840]: I0311 08:56:47.215594 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 11 08:56:48 crc kubenswrapper[4840]: I0311 08:56:48.146334 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"d160fd84f510a2f2ca918481d111f942afe11c3b37522e1ed4a515cc1e934f96"} Mar 11 08:56:48 crc kubenswrapper[4840]: I0311 08:56:48.146451 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:56:48 crc kubenswrapper[4840]: I0311 08:56:48.146504 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:56:48 crc kubenswrapper[4840]: I0311 08:56:48.148043 4840 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:56:48 crc kubenswrapper[4840]: I0311 08:56:48.148087 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:56:48 crc kubenswrapper[4840]: I0311 08:56:48.148098 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:56:48 crc kubenswrapper[4840]: I0311 08:56:48.148878 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:56:48 crc kubenswrapper[4840]: I0311 08:56:48.148940 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:56:48 crc kubenswrapper[4840]: I0311 08:56:48.148966 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:56:48 crc kubenswrapper[4840]: I0311 08:56:48.161548 4840 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 11 08:56:48 crc kubenswrapper[4840]: I0311 08:56:48.332110 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Mar 11 08:56:48 crc kubenswrapper[4840]: I0311 08:56:48.428340 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:56:48 crc kubenswrapper[4840]: I0311 08:56:48.429574 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:56:48 crc kubenswrapper[4840]: I0311 08:56:48.429615 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:56:48 crc kubenswrapper[4840]: I0311 08:56:48.429624 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:56:48 crc kubenswrapper[4840]: I0311 08:56:48.429650 4840 
kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 11 08:56:49 crc kubenswrapper[4840]: I0311 08:56:49.149436 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:56:49 crc kubenswrapper[4840]: I0311 08:56:49.149691 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:56:49 crc kubenswrapper[4840]: I0311 08:56:49.150695 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:56:49 crc kubenswrapper[4840]: I0311 08:56:49.150730 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:56:49 crc kubenswrapper[4840]: I0311 08:56:49.150742 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:56:49 crc kubenswrapper[4840]: I0311 08:56:49.151717 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:56:49 crc kubenswrapper[4840]: I0311 08:56:49.151752 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:56:49 crc kubenswrapper[4840]: I0311 08:56:49.151766 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:56:50 crc kubenswrapper[4840]: I0311 08:56:50.151876 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:56:50 crc kubenswrapper[4840]: I0311 08:56:50.152959 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:56:50 crc kubenswrapper[4840]: I0311 08:56:50.152999 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:56:50 crc kubenswrapper[4840]: I0311 
08:56:50.153015 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:56:50 crc kubenswrapper[4840]: I0311 08:56:50.956357 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 11 08:56:50 crc kubenswrapper[4840]: I0311 08:56:50.956781 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:56:50 crc kubenswrapper[4840]: I0311 08:56:50.958509 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:56:50 crc kubenswrapper[4840]: I0311 08:56:50.958557 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:56:50 crc kubenswrapper[4840]: I0311 08:56:50.958570 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:56:51 crc kubenswrapper[4840]: I0311 08:56:51.027964 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 11 08:56:51 crc kubenswrapper[4840]: I0311 08:56:51.028167 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:56:51 crc kubenswrapper[4840]: I0311 08:56:51.029664 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:56:51 crc kubenswrapper[4840]: I0311 08:56:51.029723 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:56:51 crc kubenswrapper[4840]: I0311 08:56:51.029741 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:56:51 crc kubenswrapper[4840]: I0311 08:56:51.053188 4840 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 11 08:56:51 crc kubenswrapper[4840]: I0311 08:56:51.053428 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:56:51 crc kubenswrapper[4840]: I0311 08:56:51.055303 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:56:51 crc kubenswrapper[4840]: I0311 08:56:51.055358 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:56:51 crc kubenswrapper[4840]: I0311 08:56:51.055383 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:56:51 crc kubenswrapper[4840]: I0311 08:56:51.946059 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 11 08:56:51 crc kubenswrapper[4840]: I0311 08:56:51.946236 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:56:51 crc kubenswrapper[4840]: I0311 08:56:51.947745 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:56:51 crc kubenswrapper[4840]: I0311 08:56:51.947813 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:56:51 crc kubenswrapper[4840]: I0311 08:56:51.947826 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:56:52 crc kubenswrapper[4840]: E0311 08:56:52.116716 4840 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 11 08:56:53 crc kubenswrapper[4840]: I0311 08:56:53.062914 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 11 08:56:53 crc kubenswrapper[4840]: I0311 08:56:53.063045 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:56:53 crc kubenswrapper[4840]: I0311 08:56:53.064174 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:56:53 crc kubenswrapper[4840]: I0311 08:56:53.064225 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:56:53 crc kubenswrapper[4840]: I0311 08:56:53.064239 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:56:53 crc kubenswrapper[4840]: I0311 08:56:53.069603 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 11 08:56:53 crc kubenswrapper[4840]: I0311 08:56:53.160604 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:56:53 crc kubenswrapper[4840]: I0311 08:56:53.163541 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:56:53 crc kubenswrapper[4840]: I0311 08:56:53.163605 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:56:53 crc kubenswrapper[4840]: I0311 08:56:53.163629 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:56:53 crc kubenswrapper[4840]: I0311 08:56:53.165737 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 11 08:56:54 crc kubenswrapper[4840]: I0311 08:56:54.028282 4840 patch_prober.go:28] interesting pod/kube-controller-manager-crc 
container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 11 08:56:54 crc kubenswrapper[4840]: I0311 08:56:54.028380 4840 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 11 08:56:54 crc kubenswrapper[4840]: I0311 08:56:54.164450 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:56:54 crc kubenswrapper[4840]: I0311 08:56:54.165974 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:56:54 crc kubenswrapper[4840]: I0311 08:56:54.166008 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:56:54 crc kubenswrapper[4840]: I0311 08:56:54.166018 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:56:55 crc kubenswrapper[4840]: I0311 08:56:55.982958 4840 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Mar 11 08:56:56 crc kubenswrapper[4840]: E0311 08:56:56.020085 4840 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-03-11T08:56:56Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.189bbda6b3c0caeb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:56:41.978399467 +0000 UTC m=+0.644069322,LastTimestamp:2026-03-11 08:56:41.978399467 +0000 UTC m=+0.644069322,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:56:56 crc kubenswrapper[4840]: E0311 08:56:56.022395 4840 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:56:56Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 11 08:56:56 crc kubenswrapper[4840]: E0311 08:56:56.024510 4840 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:56:56Z is after 2026-02-23T05:33:13Z" interval="6.4s" Mar 11 08:56:56 crc kubenswrapper[4840]: W0311 08:56:56.026181 4840 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-03-11T08:56:56Z is after 2026-02-23T05:33:13Z Mar 11 08:56:56 crc kubenswrapper[4840]: E0311 08:56:56.026396 4840 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:56:56Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 11 08:56:56 crc kubenswrapper[4840]: W0311 08:56:56.026899 4840 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:56:56Z is after 2026-02-23T05:33:13Z Mar 11 08:56:56 crc kubenswrapper[4840]: E0311 08:56:56.026991 4840 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:56:56Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 11 08:56:56 crc kubenswrapper[4840]: E0311 08:56:56.028153 4840 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:56:56Z is after 2026-02-23T05:33:13Z" node="crc" Mar 11 08:56:56 crc kubenswrapper[4840]: W0311 08:56:56.029541 4840 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:56:56Z is after 2026-02-23T05:33:13Z Mar 11 08:56:56 crc kubenswrapper[4840]: E0311 08:56:56.029604 4840 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:56:56Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 11 08:56:56 crc kubenswrapper[4840]: W0311 08:56:56.032264 4840 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:56:56Z is after 2026-02-23T05:33:13Z Mar 11 08:56:56 crc kubenswrapper[4840]: E0311 08:56:56.032538 4840 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:56:56Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 11 08:56:56 crc kubenswrapper[4840]: I0311 08:56:56.035811 4840 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" 
start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Mar 11 08:56:56 crc kubenswrapper[4840]: I0311 08:56:56.036040 4840 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Mar 11 08:56:56 crc kubenswrapper[4840]: I0311 08:56:56.042155 4840 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:58852->192.168.126.11:17697: read: connection reset by peer" start-of-body= Mar 11 08:56:56 crc kubenswrapper[4840]: I0311 08:56:56.042220 4840 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:58852->192.168.126.11:17697: read: connection reset by peer" Mar 11 08:56:56 crc kubenswrapper[4840]: I0311 08:56:56.058589 4840 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found]","reason":"Forbidden","details":{},"code":403} Mar 11 08:56:56 crc 
kubenswrapper[4840]: I0311 08:56:56.058667 4840 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Mar 11 08:56:56 crc kubenswrapper[4840]: I0311 08:56:56.170405 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Mar 11 08:56:56 crc kubenswrapper[4840]: I0311 08:56:56.171990 4840 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b16214bd1db56e442df59aaa4b583a2a0fca599366e4aa8439926413ffc40790" exitCode=255 Mar 11 08:56:56 crc kubenswrapper[4840]: I0311 08:56:56.172044 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"b16214bd1db56e442df59aaa4b583a2a0fca599366e4aa8439926413ffc40790"} Mar 11 08:56:56 crc kubenswrapper[4840]: I0311 08:56:56.172185 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:56:56 crc kubenswrapper[4840]: I0311 08:56:56.173060 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:56:56 crc kubenswrapper[4840]: I0311 08:56:56.173090 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:56:56 crc kubenswrapper[4840]: I0311 08:56:56.173099 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:56:56 crc kubenswrapper[4840]: I0311 08:56:56.173696 4840 scope.go:117] "RemoveContainer" containerID="b16214bd1db56e442df59aaa4b583a2a0fca599366e4aa8439926413ffc40790" Mar 11 08:56:56 crc 
kubenswrapper[4840]: I0311 08:56:56.985641 4840 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:56:56Z is after 2026-02-23T05:33:13Z Mar 11 08:56:57 crc kubenswrapper[4840]: I0311 08:56:57.175933 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Mar 11 08:56:57 crc kubenswrapper[4840]: I0311 08:56:57.177258 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"c6013875a10aae4aac3a415db83db5591778176eca34426ca2b8bd68d1166b6b"} Mar 11 08:56:57 crc kubenswrapper[4840]: I0311 08:56:57.177397 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:56:57 crc kubenswrapper[4840]: I0311 08:56:57.178286 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:56:57 crc kubenswrapper[4840]: I0311 08:56:57.178329 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:56:57 crc kubenswrapper[4840]: I0311 08:56:57.178345 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:56:57 crc kubenswrapper[4840]: I0311 08:56:57.470938 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Mar 11 08:56:57 crc kubenswrapper[4840]: I0311 08:56:57.471157 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:56:57 crc kubenswrapper[4840]: I0311 
08:56:57.472181 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:56:57 crc kubenswrapper[4840]: I0311 08:56:57.472212 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:56:57 crc kubenswrapper[4840]: I0311 08:56:57.472223 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:56:57 crc kubenswrapper[4840]: I0311 08:56:57.512366 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Mar 11 08:56:57 crc kubenswrapper[4840]: I0311 08:56:57.985013 4840 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:56:57Z is after 2026-02-23T05:33:13Z Mar 11 08:56:58 crc kubenswrapper[4840]: I0311 08:56:58.187597 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Mar 11 08:56:58 crc kubenswrapper[4840]: I0311 08:56:58.189372 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Mar 11 08:56:58 crc kubenswrapper[4840]: I0311 08:56:58.192839 4840 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="c6013875a10aae4aac3a415db83db5591778176eca34426ca2b8bd68d1166b6b" exitCode=255 Mar 11 08:56:58 crc kubenswrapper[4840]: I0311 08:56:58.192924 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"c6013875a10aae4aac3a415db83db5591778176eca34426ca2b8bd68d1166b6b"} Mar 11 08:56:58 crc kubenswrapper[4840]: I0311 08:56:58.193036 4840 scope.go:117] "RemoveContainer" containerID="b16214bd1db56e442df59aaa4b583a2a0fca599366e4aa8439926413ffc40790" Mar 11 08:56:58 crc kubenswrapper[4840]: I0311 08:56:58.193095 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:56:58 crc kubenswrapper[4840]: I0311 08:56:58.193192 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:56:58 crc kubenswrapper[4840]: I0311 08:56:58.195002 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:56:58 crc kubenswrapper[4840]: I0311 08:56:58.195105 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:56:58 crc kubenswrapper[4840]: I0311 08:56:58.195134 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:56:58 crc kubenswrapper[4840]: I0311 08:56:58.195177 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:56:58 crc kubenswrapper[4840]: I0311 08:56:58.195250 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:56:58 crc kubenswrapper[4840]: I0311 08:56:58.195286 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:56:58 crc kubenswrapper[4840]: I0311 08:56:58.197165 4840 scope.go:117] "RemoveContainer" containerID="c6013875a10aae4aac3a415db83db5591778176eca34426ca2b8bd68d1166b6b" Mar 11 08:56:58 crc kubenswrapper[4840]: E0311 08:56:58.197660 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 11 08:56:58 crc kubenswrapper[4840]: I0311 08:56:58.222325 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Mar 11 08:56:58 crc kubenswrapper[4840]: I0311 08:56:58.987499 4840 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:56:58Z is after 2026-02-23T05:33:13Z Mar 11 08:56:59 crc kubenswrapper[4840]: I0311 08:56:59.197212 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Mar 11 08:56:59 crc kubenswrapper[4840]: I0311 08:56:59.199419 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:56:59 crc kubenswrapper[4840]: I0311 08:56:59.200756 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:56:59 crc kubenswrapper[4840]: I0311 08:56:59.200819 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:56:59 crc kubenswrapper[4840]: I0311 08:56:59.200841 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:56:59 crc kubenswrapper[4840]: I0311 08:56:59.987744 4840 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:56:59Z is after 2026-02-23T05:33:13Z Mar 11 08:57:00 crc kubenswrapper[4840]: I0311 08:57:00.789630 4840 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 11 08:57:00 crc kubenswrapper[4840]: I0311 08:57:00.789861 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:57:00 crc kubenswrapper[4840]: I0311 08:57:00.791256 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:57:00 crc kubenswrapper[4840]: I0311 08:57:00.791296 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:57:00 crc kubenswrapper[4840]: I0311 08:57:00.791306 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:57:00 crc kubenswrapper[4840]: I0311 08:57:00.791807 4840 scope.go:117] "RemoveContainer" containerID="c6013875a10aae4aac3a415db83db5591778176eca34426ca2b8bd68d1166b6b" Mar 11 08:57:00 crc kubenswrapper[4840]: E0311 08:57:00.791985 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 11 08:57:00 crc kubenswrapper[4840]: I0311 08:57:00.986799 4840 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:57:00Z is after 2026-02-23T05:33:13Z Mar 11 08:57:01 crc kubenswrapper[4840]: I0311 08:57:01.058792 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 11 08:57:01 crc kubenswrapper[4840]: I0311 08:57:01.205321 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:57:01 crc kubenswrapper[4840]: I0311 08:57:01.207144 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:57:01 crc kubenswrapper[4840]: I0311 08:57:01.207196 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:57:01 crc kubenswrapper[4840]: I0311 08:57:01.207217 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:57:01 crc kubenswrapper[4840]: I0311 08:57:01.208121 4840 scope.go:117] "RemoveContainer" containerID="c6013875a10aae4aac3a415db83db5591778176eca34426ca2b8bd68d1166b6b" Mar 11 08:57:01 crc kubenswrapper[4840]: E0311 08:57:01.208437 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 11 08:57:01 crc kubenswrapper[4840]: I0311 08:57:01.212764 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 11 08:57:01 crc kubenswrapper[4840]: I0311 
08:57:01.985676 4840 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:57:01Z is after 2026-02-23T05:33:13Z Mar 11 08:57:02 crc kubenswrapper[4840]: E0311 08:57:02.117451 4840 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 11 08:57:02 crc kubenswrapper[4840]: I0311 08:57:02.208072 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:57:02 crc kubenswrapper[4840]: I0311 08:57:02.209517 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:57:02 crc kubenswrapper[4840]: I0311 08:57:02.209565 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:57:02 crc kubenswrapper[4840]: I0311 08:57:02.209584 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:57:02 crc kubenswrapper[4840]: I0311 08:57:02.210592 4840 scope.go:117] "RemoveContainer" containerID="c6013875a10aae4aac3a415db83db5591778176eca34426ca2b8bd68d1166b6b" Mar 11 08:57:02 crc kubenswrapper[4840]: E0311 08:57:02.210915 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 11 08:57:02 crc kubenswrapper[4840]: I0311 08:57:02.428578 4840 kubelet_node_status.go:401] "Setting node annotation to 
enable volume controller attach/detach" Mar 11 08:57:02 crc kubenswrapper[4840]: E0311 08:57:02.428609 4840 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:57:02Z is after 2026-02-23T05:33:13Z" interval="7s" Mar 11 08:57:02 crc kubenswrapper[4840]: I0311 08:57:02.430178 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:57:02 crc kubenswrapper[4840]: I0311 08:57:02.430216 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:57:02 crc kubenswrapper[4840]: I0311 08:57:02.430225 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:57:02 crc kubenswrapper[4840]: I0311 08:57:02.430253 4840 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 11 08:57:02 crc kubenswrapper[4840]: E0311 08:57:02.432841 4840 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:57:02Z is after 2026-02-23T05:33:13Z" node="crc" Mar 11 08:57:02 crc kubenswrapper[4840]: I0311 08:57:02.984875 4840 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:57:02Z is after 2026-02-23T05:33:13Z Mar 11 08:57:03 crc kubenswrapper[4840]: W0311 08:57:03.265860 4840 reflector.go:561] k8s.io/client-go/informers/factory.go:160: 
failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:57:03Z is after 2026-02-23T05:33:13Z Mar 11 08:57:03 crc kubenswrapper[4840]: E0311 08:57:03.265930 4840 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:57:03Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 11 08:57:03 crc kubenswrapper[4840]: I0311 08:57:03.986369 4840 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:57:03Z is after 2026-02-23T05:33:13Z Mar 11 08:57:04 crc kubenswrapper[4840]: I0311 08:57:04.029159 4840 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 11 08:57:04 crc kubenswrapper[4840]: I0311 08:57:04.029241 4840 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for 
connection (Client.Timeout exceeded while awaiting headers)" Mar 11 08:57:04 crc kubenswrapper[4840]: W0311 08:57:04.085507 4840 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:57:04Z is after 2026-02-23T05:33:13Z Mar 11 08:57:04 crc kubenswrapper[4840]: E0311 08:57:04.085616 4840 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:57:04Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 11 08:57:04 crc kubenswrapper[4840]: I0311 08:57:04.358102 4840 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 11 08:57:04 crc kubenswrapper[4840]: E0311 08:57:04.363280 4840 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:57:04Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 11 08:57:04 crc kubenswrapper[4840]: I0311 08:57:04.987732 4840 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired 
or is not yet valid: current time 2026-03-11T08:57:04Z is after 2026-02-23T05:33:13Z Mar 11 08:57:05 crc kubenswrapper[4840]: I0311 08:57:05.612635 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 11 08:57:05 crc kubenswrapper[4840]: I0311 08:57:05.612940 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:57:05 crc kubenswrapper[4840]: I0311 08:57:05.614666 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:57:05 crc kubenswrapper[4840]: I0311 08:57:05.614724 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:57:05 crc kubenswrapper[4840]: I0311 08:57:05.614743 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:57:05 crc kubenswrapper[4840]: I0311 08:57:05.615627 4840 scope.go:117] "RemoveContainer" containerID="c6013875a10aae4aac3a415db83db5591778176eca34426ca2b8bd68d1166b6b" Mar 11 08:57:05 crc kubenswrapper[4840]: E0311 08:57:05.615927 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 11 08:57:05 crc kubenswrapper[4840]: I0311 08:57:05.985309 4840 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:57:05Z is after 2026-02-23T05:33:13Z Mar 11 
08:57:06 crc kubenswrapper[4840]: E0311 08:57:06.024835 4840 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:57:06Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.189bbda6b3c0caeb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:56:41.978399467 +0000 UTC m=+0.644069322,LastTimestamp:2026-03-11 08:56:41.978399467 +0000 UTC m=+0.644069322,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:57:06 crc kubenswrapper[4840]: I0311 08:57:06.986817 4840 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:57:06Z is after 2026-02-23T05:33:13Z Mar 11 08:57:07 crc kubenswrapper[4840]: I0311 08:57:07.988041 4840 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:57:07Z is after 2026-02-23T05:33:13Z Mar 11 08:57:08 crc kubenswrapper[4840]: W0311 08:57:08.408844 4840 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:57:08Z is after 2026-02-23T05:33:13Z Mar 11 08:57:08 crc kubenswrapper[4840]: E0311 08:57:08.408988 4840 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:57:08Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 11 08:57:08 crc kubenswrapper[4840]: W0311 08:57:08.694025 4840 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:57:08Z is after 2026-02-23T05:33:13Z Mar 11 08:57:08 crc kubenswrapper[4840]: E0311 08:57:08.694158 4840 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:57:08Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 11 08:57:08 crc kubenswrapper[4840]: I0311 08:57:08.986970 4840 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is 
not yet valid: current time 2026-03-11T08:57:08Z is after 2026-02-23T05:33:13Z Mar 11 08:57:09 crc kubenswrapper[4840]: I0311 08:57:09.433600 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:57:09 crc kubenswrapper[4840]: I0311 08:57:09.435374 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:57:09 crc kubenswrapper[4840]: I0311 08:57:09.435442 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:57:09 crc kubenswrapper[4840]: I0311 08:57:09.435572 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:57:09 crc kubenswrapper[4840]: I0311 08:57:09.435611 4840 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 11 08:57:09 crc kubenswrapper[4840]: E0311 08:57:09.435935 4840 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:57:09Z is after 2026-02-23T05:33:13Z" interval="7s" Mar 11 08:57:09 crc kubenswrapper[4840]: E0311 08:57:09.441026 4840 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:57:09Z is after 2026-02-23T05:33:13Z" node="crc" Mar 11 08:57:09 crc kubenswrapper[4840]: I0311 08:57:09.985209 4840 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not 
yet valid: current time 2026-03-11T08:57:09Z is after 2026-02-23T05:33:13Z Mar 11 08:57:10 crc kubenswrapper[4840]: I0311 08:57:10.987426 4840 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:57:10Z is after 2026-02-23T05:33:13Z Mar 11 08:57:11 crc kubenswrapper[4840]: I0311 08:57:11.985708 4840 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:57:11Z is after 2026-02-23T05:33:13Z Mar 11 08:57:12 crc kubenswrapper[4840]: E0311 08:57:12.117967 4840 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 11 08:57:12 crc kubenswrapper[4840]: I0311 08:57:12.985606 4840 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:57:12Z is after 2026-02-23T05:33:13Z Mar 11 08:57:13 crc kubenswrapper[4840]: I0311 08:57:13.987052 4840 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:57:13Z is after 2026-02-23T05:33:13Z Mar 11 08:57:14 crc kubenswrapper[4840]: I0311 08:57:14.029391 4840 patch_prober.go:28] interesting pod/kube-controller-manager-crc 
container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 11 08:57:14 crc kubenswrapper[4840]: I0311 08:57:14.029551 4840 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 11 08:57:14 crc kubenswrapper[4840]: I0311 08:57:14.029668 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 11 08:57:14 crc kubenswrapper[4840]: I0311 08:57:14.029985 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:57:14 crc kubenswrapper[4840]: I0311 08:57:14.038394 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:57:14 crc kubenswrapper[4840]: I0311 08:57:14.038497 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:57:14 crc kubenswrapper[4840]: I0311 08:57:14.038521 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:57:14 crc kubenswrapper[4840]: I0311 08:57:14.039363 4840 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"a2b70735a9ebbdbf55e287a817686645c45ca4896c4560cc031967174940500c"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container 
cluster-policy-controller failed startup probe, will be restarted" Mar 11 08:57:14 crc kubenswrapper[4840]: I0311 08:57:14.039711 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" containerID="cri-o://a2b70735a9ebbdbf55e287a817686645c45ca4896c4560cc031967174940500c" gracePeriod=30 Mar 11 08:57:14 crc kubenswrapper[4840]: I0311 08:57:14.245288 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Mar 11 08:57:14 crc kubenswrapper[4840]: I0311 08:57:14.246069 4840 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="a2b70735a9ebbdbf55e287a817686645c45ca4896c4560cc031967174940500c" exitCode=255 Mar 11 08:57:14 crc kubenswrapper[4840]: I0311 08:57:14.246204 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"a2b70735a9ebbdbf55e287a817686645c45ca4896c4560cc031967174940500c"} Mar 11 08:57:14 crc kubenswrapper[4840]: I0311 08:57:14.984344 4840 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:57:14Z is after 2026-02-23T05:33:13Z Mar 11 08:57:15 crc kubenswrapper[4840]: I0311 08:57:15.252278 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Mar 11 08:57:15 crc kubenswrapper[4840]: I0311 
08:57:15.252732 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"9be0868d10e5abebdb9b4fba34e5c202c8cd14fbc120551405b9709a816dc4c4"} Mar 11 08:57:15 crc kubenswrapper[4840]: I0311 08:57:15.252981 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:57:15 crc kubenswrapper[4840]: I0311 08:57:15.254294 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:57:15 crc kubenswrapper[4840]: I0311 08:57:15.254342 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:57:15 crc kubenswrapper[4840]: I0311 08:57:15.254359 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:57:15 crc kubenswrapper[4840]: I0311 08:57:15.984772 4840 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:57:15Z is after 2026-02-23T05:33:13Z Mar 11 08:57:16 crc kubenswrapper[4840]: E0311 08:57:16.029418 4840 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:57:16Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.189bbda6b3c0caeb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:56:41.978399467 +0000 UTC m=+0.644069322,LastTimestamp:2026-03-11 08:56:41.978399467 +0000 UTC m=+0.644069322,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:57:16 crc kubenswrapper[4840]: I0311 08:57:16.255539 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:57:16 crc kubenswrapper[4840]: I0311 08:57:16.256447 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:57:16 crc kubenswrapper[4840]: I0311 08:57:16.256590 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:57:16 crc kubenswrapper[4840]: I0311 08:57:16.256610 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:57:16 crc kubenswrapper[4840]: I0311 08:57:16.441673 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:57:16 crc kubenswrapper[4840]: E0311 08:57:16.442768 4840 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:57:16Z is after 2026-02-23T05:33:13Z" interval="7s" Mar 11 08:57:16 crc kubenswrapper[4840]: I0311 08:57:16.443793 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:57:16 crc kubenswrapper[4840]: I0311 08:57:16.443851 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:57:16 crc kubenswrapper[4840]: I0311 
08:57:16.443876 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:57:16 crc kubenswrapper[4840]: I0311 08:57:16.443916 4840 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 11 08:57:16 crc kubenswrapper[4840]: E0311 08:57:16.447864 4840 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:57:16Z is after 2026-02-23T05:33:13Z" node="crc" Mar 11 08:57:16 crc kubenswrapper[4840]: I0311 08:57:16.985209 4840 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:57:16Z is after 2026-02-23T05:33:13Z Mar 11 08:57:17 crc kubenswrapper[4840]: I0311 08:57:17.984879 4840 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:57:17Z is after 2026-02-23T05:33:13Z Mar 11 08:57:18 crc kubenswrapper[4840]: I0311 08:57:18.986398 4840 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:57:18Z is after 2026-02-23T05:33:13Z Mar 11 08:57:19 crc kubenswrapper[4840]: I0311 08:57:19.987325 4840 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:57:19Z is after 2026-02-23T05:33:13Z Mar 11 08:57:20 crc kubenswrapper[4840]: I0311 08:57:20.060317 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:57:20 crc kubenswrapper[4840]: I0311 08:57:20.062611 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:57:20 crc kubenswrapper[4840]: I0311 08:57:20.062697 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:57:20 crc kubenswrapper[4840]: I0311 08:57:20.062726 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:57:20 crc kubenswrapper[4840]: I0311 08:57:20.063902 4840 scope.go:117] "RemoveContainer" containerID="c6013875a10aae4aac3a415db83db5591778176eca34426ca2b8bd68d1166b6b" Mar 11 08:57:20 crc kubenswrapper[4840]: I0311 08:57:20.533910 4840 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 11 08:57:20 crc kubenswrapper[4840]: E0311 08:57:20.540357 4840 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:57:20Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 11 08:57:20 crc kubenswrapper[4840]: E0311 08:57:20.541694 4840 certificate_manager.go:440] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Reached backoff limit, 
still unable to rotate certs: timed out waiting for the condition" logger="UnhandledError" Mar 11 08:57:20 crc kubenswrapper[4840]: I0311 08:57:20.988617 4840 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:57:20Z is after 2026-02-23T05:33:13Z Mar 11 08:57:21 crc kubenswrapper[4840]: I0311 08:57:21.028225 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 11 08:57:21 crc kubenswrapper[4840]: I0311 08:57:21.028558 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:57:21 crc kubenswrapper[4840]: I0311 08:57:21.030365 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:57:21 crc kubenswrapper[4840]: I0311 08:57:21.030400 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:57:21 crc kubenswrapper[4840]: I0311 08:57:21.030410 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:57:21 crc kubenswrapper[4840]: W0311 08:57:21.267391 4840 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:57:21Z is after 2026-02-23T05:33:13Z Mar 11 08:57:21 crc kubenswrapper[4840]: E0311 08:57:21.267516 4840 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list 
*v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:57:21Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 11 08:57:21 crc kubenswrapper[4840]: I0311 08:57:21.269097 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Mar 11 08:57:21 crc kubenswrapper[4840]: I0311 08:57:21.269702 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Mar 11 08:57:21 crc kubenswrapper[4840]: I0311 08:57:21.271363 4840 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="3ed2e229a2a2c71fc8b8b9cbc58b12b35f14c3d357fb894914aee55334a09e91" exitCode=255 Mar 11 08:57:21 crc kubenswrapper[4840]: I0311 08:57:21.271446 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"3ed2e229a2a2c71fc8b8b9cbc58b12b35f14c3d357fb894914aee55334a09e91"} Mar 11 08:57:21 crc kubenswrapper[4840]: I0311 08:57:21.271588 4840 scope.go:117] "RemoveContainer" containerID="c6013875a10aae4aac3a415db83db5591778176eca34426ca2b8bd68d1166b6b" Mar 11 08:57:21 crc kubenswrapper[4840]: I0311 08:57:21.271726 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:57:21 crc kubenswrapper[4840]: I0311 08:57:21.272588 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:57:21 crc kubenswrapper[4840]: I0311 08:57:21.272623 4840 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Mar 11 08:57:21 crc kubenswrapper[4840]: I0311 08:57:21.272637 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:57:21 crc kubenswrapper[4840]: I0311 08:57:21.273217 4840 scope.go:117] "RemoveContainer" containerID="3ed2e229a2a2c71fc8b8b9cbc58b12b35f14c3d357fb894914aee55334a09e91" Mar 11 08:57:21 crc kubenswrapper[4840]: E0311 08:57:21.273411 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 11 08:57:21 crc kubenswrapper[4840]: I0311 08:57:21.986359 4840 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:57:21Z is after 2026-02-23T05:33:13Z Mar 11 08:57:22 crc kubenswrapper[4840]: E0311 08:57:22.118108 4840 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 11 08:57:22 crc kubenswrapper[4840]: I0311 08:57:22.276616 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Mar 11 08:57:22 crc kubenswrapper[4840]: I0311 08:57:22.985276 4840 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-11T08:57:22Z is after 2026-02-23T05:33:13Z Mar 11 08:57:23 crc kubenswrapper[4840]: E0311 08:57:23.447881 4840 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:57:23Z is after 2026-02-23T05:33:13Z" interval="7s" Mar 11 08:57:23 crc kubenswrapper[4840]: I0311 08:57:23.449010 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:57:23 crc kubenswrapper[4840]: I0311 08:57:23.453995 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:57:23 crc kubenswrapper[4840]: I0311 08:57:23.454058 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:57:23 crc kubenswrapper[4840]: I0311 08:57:23.454097 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:57:23 crc kubenswrapper[4840]: I0311 08:57:23.454144 4840 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 11 08:57:23 crc kubenswrapper[4840]: E0311 08:57:23.458611 4840 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:57:23Z is after 2026-02-23T05:33:13Z" node="crc" Mar 11 08:57:24 crc kubenswrapper[4840]: I0311 08:57:24.042263 4840 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: 
request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 11 08:57:24 crc kubenswrapper[4840]: I0311 08:57:24.042359 4840 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 11 08:57:24 crc kubenswrapper[4840]: I0311 08:57:24.046102 4840 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:57:24Z is after 2026-02-23T05:33:13Z Mar 11 08:57:24 crc kubenswrapper[4840]: I0311 08:57:24.984422 4840 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:57:24Z is after 2026-02-23T05:33:13Z Mar 11 08:57:25 crc kubenswrapper[4840]: I0311 08:57:25.102100 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 11 08:57:25 crc kubenswrapper[4840]: I0311 08:57:25.102430 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:57:25 crc kubenswrapper[4840]: I0311 08:57:25.104343 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:57:25 crc kubenswrapper[4840]: I0311 08:57:25.104432 4840 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:57:25 crc kubenswrapper[4840]: I0311 08:57:25.104456 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:57:25 crc kubenswrapper[4840]: I0311 08:57:25.613264 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 11 08:57:25 crc kubenswrapper[4840]: I0311 08:57:25.613698 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:57:25 crc kubenswrapper[4840]: I0311 08:57:25.615818 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:57:25 crc kubenswrapper[4840]: I0311 08:57:25.615907 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:57:25 crc kubenswrapper[4840]: I0311 08:57:25.615934 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:57:25 crc kubenswrapper[4840]: I0311 08:57:25.617016 4840 scope.go:117] "RemoveContainer" containerID="3ed2e229a2a2c71fc8b8b9cbc58b12b35f14c3d357fb894914aee55334a09e91" Mar 11 08:57:25 crc kubenswrapper[4840]: E0311 08:57:25.617339 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 11 08:57:25 crc kubenswrapper[4840]: I0311 08:57:25.985060 4840 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:57:25Z is after 2026-02-23T05:33:13Z Mar 11 08:57:26 crc kubenswrapper[4840]: E0311 08:57:26.036206 4840 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:57:26Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.189bbda6b3c0caeb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:56:41.978399467 +0000 UTC m=+0.644069322,LastTimestamp:2026-03-11 08:56:41.978399467 +0000 UTC m=+0.644069322,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:57:26 crc kubenswrapper[4840]: W0311 08:57:26.447498 4840 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:57:26Z is after 2026-02-23T05:33:13Z Mar 11 08:57:26 crc kubenswrapper[4840]: E0311 08:57:26.447677 4840 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is 
not yet valid: current time 2026-03-11T08:57:26Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 11 08:57:26 crc kubenswrapper[4840]: W0311 08:57:26.588178 4840 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:57:26Z is after 2026-02-23T05:33:13Z Mar 11 08:57:26 crc kubenswrapper[4840]: E0311 08:57:26.588354 4840 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:57:26Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 11 08:57:26 crc kubenswrapper[4840]: I0311 08:57:26.986257 4840 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:57:26Z is after 2026-02-23T05:33:13Z Mar 11 08:57:27 crc kubenswrapper[4840]: I0311 08:57:27.985993 4840 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:57:27Z is after 2026-02-23T05:33:13Z Mar 11 08:57:28 crc kubenswrapper[4840]: I0311 08:57:28.987666 4840 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:57:28Z is after 2026-02-23T05:33:13Z Mar 11 08:57:29 crc kubenswrapper[4840]: W0311 08:57:29.058130 4840 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:57:29Z is after 2026-02-23T05:33:13Z Mar 11 08:57:29 crc kubenswrapper[4840]: E0311 08:57:29.058256 4840 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:57:29Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 11 08:57:29 crc kubenswrapper[4840]: I0311 08:57:29.984815 4840 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:57:29Z is after 2026-02-23T05:33:13Z Mar 11 08:57:30 crc kubenswrapper[4840]: E0311 08:57:30.451670 4840 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:57:30Z is after 2026-02-23T05:33:13Z" interval="7s" Mar 11 08:57:30 crc 
kubenswrapper[4840]: I0311 08:57:30.458998 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:57:30 crc kubenswrapper[4840]: I0311 08:57:30.460762 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:57:30 crc kubenswrapper[4840]: I0311 08:57:30.460800 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:57:30 crc kubenswrapper[4840]: I0311 08:57:30.460814 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:57:30 crc kubenswrapper[4840]: I0311 08:57:30.460847 4840 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 11 08:57:30 crc kubenswrapper[4840]: E0311 08:57:30.463544 4840 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:57:30Z is after 2026-02-23T05:33:13Z" node="crc" Mar 11 08:57:30 crc kubenswrapper[4840]: I0311 08:57:30.790488 4840 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 11 08:57:30 crc kubenswrapper[4840]: I0311 08:57:30.791091 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:57:30 crc kubenswrapper[4840]: I0311 08:57:30.792504 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:57:30 crc kubenswrapper[4840]: I0311 08:57:30.792575 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:57:30 crc kubenswrapper[4840]: I0311 08:57:30.792595 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Mar 11 08:57:30 crc kubenswrapper[4840]: I0311 08:57:30.793453 4840 scope.go:117] "RemoveContainer" containerID="3ed2e229a2a2c71fc8b8b9cbc58b12b35f14c3d357fb894914aee55334a09e91" Mar 11 08:57:30 crc kubenswrapper[4840]: E0311 08:57:30.793763 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 11 08:57:30 crc kubenswrapper[4840]: I0311 08:57:30.971512 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 11 08:57:30 crc kubenswrapper[4840]: I0311 08:57:30.971798 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:57:30 crc kubenswrapper[4840]: I0311 08:57:30.973415 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:57:30 crc kubenswrapper[4840]: I0311 08:57:30.973662 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:57:30 crc kubenswrapper[4840]: I0311 08:57:30.973821 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:57:30 crc kubenswrapper[4840]: I0311 08:57:30.986201 4840 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:57:30Z is after 2026-02-23T05:33:13Z Mar 11 08:57:31 crc kubenswrapper[4840]: I0311 
08:57:31.984952 4840 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:57:31Z is after 2026-02-23T05:33:13Z Mar 11 08:57:32 crc kubenswrapper[4840]: E0311 08:57:32.119118 4840 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 11 08:57:32 crc kubenswrapper[4840]: I0311 08:57:32.985398 4840 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:57:32Z is after 2026-02-23T05:33:13Z Mar 11 08:57:33 crc kubenswrapper[4840]: I0311 08:57:33.987216 4840 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:57:33Z is after 2026-02-23T05:33:13Z Mar 11 08:57:34 crc kubenswrapper[4840]: I0311 08:57:34.030305 4840 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 11 08:57:34 crc kubenswrapper[4840]: I0311 08:57:34.030412 4840 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" 
containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 11 08:57:34 crc kubenswrapper[4840]: I0311 08:57:34.985909 4840 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:57:34Z is after 2026-02-23T05:33:13Z Mar 11 08:57:35 crc kubenswrapper[4840]: I0311 08:57:35.985832 4840 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 11 08:57:36 crc kubenswrapper[4840]: E0311 08:57:36.041862 4840 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189bbda6b3c0caeb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:56:41.978399467 +0000 UTC m=+0.644069322,LastTimestamp:2026-03-11 08:56:41.978399467 +0000 UTC m=+0.644069322,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:57:36 crc kubenswrapper[4840]: E0311 08:57:36.046652 4840 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group 
\"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189bbda6b6b83e59 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:56:42.028170841 +0000 UTC m=+0.693840656,LastTimestamp:2026-03-11 08:56:42.028170841 +0000 UTC m=+0.693840656,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:57:36 crc kubenswrapper[4840]: E0311 08:57:36.051174 4840 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189bbda6b6b88bbc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:56:42.028190652 +0000 UTC m=+0.693860467,LastTimestamp:2026-03-11 08:56:42.028190652 +0000 UTC m=+0.693860467,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:57:36 crc kubenswrapper[4840]: E0311 08:57:36.057210 4840 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189bbda6b6b8aac0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:56:42.028198592 +0000 UTC m=+0.693868407,LastTimestamp:2026-03-11 08:56:42.028198592 +0000 UTC m=+0.693868407,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:57:36 crc kubenswrapper[4840]: E0311 08:57:36.062624 4840 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189bbda6bccff2cf default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:56:42.130387663 +0000 UTC m=+0.796057488,LastTimestamp:2026-03-11 08:56:42.130387663 +0000 UTC m=+0.796057488,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:57:36 crc kubenswrapper[4840]: E0311 08:57:36.067625 4840 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189bbda6b6b83e59\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189bbda6b6b83e59 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: 
NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:56:42.028170841 +0000 UTC m=+0.693840656,LastTimestamp:2026-03-11 08:56:42.16243782 +0000 UTC m=+0.828107645,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:57:36 crc kubenswrapper[4840]: E0311 08:57:36.072070 4840 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189bbda6b6b88bbc\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189bbda6b6b88bbc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:56:42.028190652 +0000 UTC m=+0.693860467,LastTimestamp:2026-03-11 08:56:42.16246124 +0000 UTC m=+0.828131065,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:57:36 crc kubenswrapper[4840]: E0311 08:57:36.078743 4840 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189bbda6b6b8aac0\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189bbda6b6b8aac0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:56:42.028198592 +0000 UTC m=+0.693868407,LastTimestamp:2026-03-11 
08:56:42.162496891 +0000 UTC m=+0.828166716,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:57:36 crc kubenswrapper[4840]: E0311 08:57:36.083151 4840 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189bbda6b6b83e59\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189bbda6b6b83e59 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:56:42.028170841 +0000 UTC m=+0.693840656,LastTimestamp:2026-03-11 08:56:42.164187595 +0000 UTC m=+0.829857410,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:57:36 crc kubenswrapper[4840]: E0311 08:57:36.088621 4840 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189bbda6b6b88bbc\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189bbda6b6b88bbc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:56:42.028190652 +0000 UTC m=+0.693860467,LastTimestamp:2026-03-11 08:56:42.164205545 +0000 UTC m=+0.829875360,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:57:36 crc kubenswrapper[4840]: E0311 08:57:36.094111 4840 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189bbda6b6b8aac0\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189bbda6b6b8aac0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:56:42.028198592 +0000 UTC m=+0.693868407,LastTimestamp:2026-03-11 08:56:42.164213675 +0000 UTC m=+0.829883490,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:57:36 crc kubenswrapper[4840]: E0311 08:57:36.098867 4840 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189bbda6b6b83e59\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189bbda6b6b83e59 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:56:42.028170841 +0000 UTC m=+0.693840656,LastTimestamp:2026-03-11 08:56:42.164271277 +0000 UTC m=+0.829941132,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:57:36 crc kubenswrapper[4840]: E0311 08:57:36.103486 4840 
event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189bbda6b6b88bbc\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189bbda6b6b88bbc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:56:42.028190652 +0000 UTC m=+0.693860467,LastTimestamp:2026-03-11 08:56:42.164296538 +0000 UTC m=+0.829966403,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:57:36 crc kubenswrapper[4840]: E0311 08:57:36.107326 4840 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189bbda6b6b8aac0\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189bbda6b6b8aac0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:56:42.028198592 +0000 UTC m=+0.693868407,LastTimestamp:2026-03-11 08:56:42.164313399 +0000 UTC m=+0.829983254,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:57:36 crc kubenswrapper[4840]: E0311 08:57:36.111970 4840 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189bbda6b6b83e59\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in 
API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189bbda6b6b83e59 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:56:42.028170841 +0000 UTC m=+0.693840656,LastTimestamp:2026-03-11 08:56:42.166116435 +0000 UTC m=+0.831786290,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:57:36 crc kubenswrapper[4840]: E0311 08:57:36.116306 4840 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189bbda6b6b83e59\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189bbda6b6b83e59 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:56:42.028170841 +0000 UTC m=+0.693840656,LastTimestamp:2026-03-11 08:56:42.166151976 +0000 UTC m=+0.831821831,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:57:36 crc kubenswrapper[4840]: E0311 08:57:36.120518 4840 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189bbda6b6b88bbc\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189bbda6b6b88bbc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:56:42.028190652 +0000 UTC m=+0.693860467,LastTimestamp:2026-03-11 08:56:42.166198548 +0000 UTC m=+0.831868403,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:57:36 crc kubenswrapper[4840]: E0311 08:57:36.125513 4840 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189bbda6b6b8aac0\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189bbda6b6b8aac0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:56:42.028198592 +0000 UTC m=+0.693868407,LastTimestamp:2026-03-11 08:56:42.166216058 +0000 UTC m=+0.831885913,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:57:36 crc kubenswrapper[4840]: E0311 08:57:36.129918 4840 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189bbda6b6b88bbc\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189bbda6b6b88bbc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status 
is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:56:42.028190652 +0000 UTC m=+0.693860467,LastTimestamp:2026-03-11 08:56:42.166230949 +0000 UTC m=+0.831900814,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:57:36 crc kubenswrapper[4840]: E0311 08:57:36.134560 4840 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189bbda6b6b8aac0\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189bbda6b6b8aac0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:56:42.028198592 +0000 UTC m=+0.693868407,LastTimestamp:2026-03-11 08:56:42.166250039 +0000 UTC m=+0.831919894,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:57:36 crc kubenswrapper[4840]: E0311 08:57:36.138512 4840 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189bbda6b6b83e59\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189bbda6b6b83e59 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:56:42.028170841 +0000 UTC 
m=+0.693840656,LastTimestamp:2026-03-11 08:56:42.167804758 +0000 UTC m=+0.833474613,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:57:36 crc kubenswrapper[4840]: E0311 08:57:36.143615 4840 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189bbda6b6b88bbc\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189bbda6b6b88bbc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:56:42.028190652 +0000 UTC m=+0.693860467,LastTimestamp:2026-03-11 08:56:42.167830699 +0000 UTC m=+0.833500544,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:57:36 crc kubenswrapper[4840]: E0311 08:57:36.148829 4840 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189bbda6b6b8aac0\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189bbda6b6b8aac0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:56:42.028198592 +0000 UTC m=+0.693868407,LastTimestamp:2026-03-11 08:56:42.16785036 +0000 UTC m=+0.833520205,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:57:36 crc kubenswrapper[4840]: E0311 08:57:36.153789 4840 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189bbda6b6b83e59\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189bbda6b6b83e59 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:56:42.028170841 +0000 UTC m=+0.693840656,LastTimestamp:2026-03-11 08:56:42.167882831 +0000 UTC m=+0.833552656,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:57:36 crc kubenswrapper[4840]: E0311 08:57:36.158368 4840 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189bbda6b6b88bbc\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189bbda6b6b88bbc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:56:42.028190652 +0000 UTC m=+0.693860467,LastTimestamp:2026-03-11 08:56:42.167896231 +0000 UTC m=+0.833566056,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:57:36 crc kubenswrapper[4840]: E0311 08:57:36.164364 4840 
event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189bbda6d6daf8eb openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:56:42.567317739 +0000 UTC m=+1.232987594,LastTimestamp:2026-03-11 08:56:42.567317739 +0000 UTC m=+1.232987594,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:57:36 crc kubenswrapper[4840]: E0311 08:57:36.169216 4840 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189bbda6d6e4961c openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on 
machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:56:42.567947804 +0000 UTC m=+1.233617619,LastTimestamp:2026-03-11 08:56:42.567947804 +0000 UTC m=+1.233617619,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:57:36 crc kubenswrapper[4840]: E0311 08:57:36.174131 4840 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189bbda6d77eddbe openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:56:42.578058686 +0000 UTC m=+1.243728491,LastTimestamp:2026-03-11 08:56:42.578058686 +0000 UTC m=+1.243728491,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:57:36 crc kubenswrapper[4840]: E0311 08:57:36.178951 4840 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189bbda6d7c171f8 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:56:42.582422008 +0000 UTC m=+1.248091823,LastTimestamp:2026-03-11 08:56:42.582422008 +0000 UTC m=+1.248091823,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:57:36 crc kubenswrapper[4840]: E0311 08:57:36.182942 4840 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189bbda6d7cca501 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:56:42.583155969 +0000 UTC m=+1.248825804,LastTimestamp:2026-03-11 08:56:42.583155969 +0000 UTC m=+1.248825804,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:57:36 crc kubenswrapper[4840]: E0311 08:57:36.185287 4840 event.go:359] "Server rejected event 
(will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189bbda6f941596a openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:56:43.144452458 +0000 UTC m=+1.810122273,LastTimestamp:2026-03-11 08:56:43.144452458 +0000 UTC m=+1.810122273,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:57:36 crc kubenswrapper[4840]: E0311 08:57:36.187121 4840 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189bbda6f9613b64 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Created,Message:Created container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:56:43.146541924 +0000 UTC m=+1.812211739,LastTimestamp:2026-03-11 08:56:43.146541924 +0000 UTC m=+1.812211739,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:57:36 crc kubenswrapper[4840]: E0311 08:57:36.189577 4840 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189bbda6f9732b53 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:56:43.147717459 +0000 UTC m=+1.813387274,LastTimestamp:2026-03-11 08:56:43.147717459 +0000 UTC m=+1.813387274,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:57:36 crc kubenswrapper[4840]: E0311 08:57:36.191009 4840 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189bbda6f978e4ea openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:56:43.14809265 +0000 UTC m=+1.813762465,LastTimestamp:2026-03-11 08:56:43.14809265 +0000 UTC m=+1.813762465,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:57:36 crc kubenswrapper[4840]: E0311 08:57:36.193863 4840 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189bbda6f9863f59 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:56:43.148967769 +0000 UTC m=+1.814637574,LastTimestamp:2026-03-11 08:56:43.148967769 +0000 UTC m=+1.814637574,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:57:36 crc kubenswrapper[4840]: E0311 08:57:36.198253 4840 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189bbda6f9f50d10 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:56:43.156229392 +0000 UTC 
m=+1.821899207,LastTimestamp:2026-03-11 08:56:43.156229392 +0000 UTC m=+1.821899207,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:57:36 crc kubenswrapper[4840]: E0311 08:57:36.202368 4840 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189bbda6fa16911c openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:56:43.158425884 +0000 UTC m=+1.824095709,LastTimestamp:2026-03-11 08:56:43.158425884 +0000 UTC m=+1.824095709,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:57:36 crc kubenswrapper[4840]: E0311 08:57:36.206848 4840 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189bbda6fa20285d openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Started,Message:Started container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:56:43.159054429 +0000 UTC m=+1.824724244,LastTimestamp:2026-03-11 08:56:43.159054429 +0000 UTC m=+1.824724244,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:57:36 crc kubenswrapper[4840]: E0311 08:57:36.211239 4840 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189bbda6fa43605f openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:56:43.161362527 +0000 UTC m=+1.827032342,LastTimestamp:2026-03-11 08:56:43.161362527 +0000 UTC m=+1.827032342,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:57:36 crc kubenswrapper[4840]: E0311 08:57:36.215652 4840 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189bbda6fa91ac42 openshift-etcd 0 
0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:56:43.166493762 +0000 UTC m=+1.832163577,LastTimestamp:2026-03-11 08:56:43.166493762 +0000 UTC m=+1.832163577,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:57:36 crc kubenswrapper[4840]: E0311 08:57:36.220336 4840 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189bbda6fafe8139 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:56:43.173626169 +0000 UTC m=+1.839295984,LastTimestamp:2026-03-11 08:56:43.173626169 +0000 UTC m=+1.839295984,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:57:36 crc kubenswrapper[4840]: E0311 08:57:36.225067 4840 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189bbda70cfd2e71 
openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:56:43.475529329 +0000 UTC m=+2.141199144,LastTimestamp:2026-03-11 08:56:43.475529329 +0000 UTC m=+2.141199144,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:57:36 crc kubenswrapper[4840]: E0311 08:57:36.229632 4840 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189bbda70d821621 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:56:43.484239393 +0000 UTC m=+2.149909208,LastTimestamp:2026-03-11 08:56:43.484239393 +0000 UTC m=+2.149909208,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:57:36 crc kubenswrapper[4840]: E0311 08:57:36.233541 4840 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create 
resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189bbda70d8faa60 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:56:43.485129312 +0000 UTC m=+2.150799127,LastTimestamp:2026-03-11 08:56:43.485129312 +0000 UTC m=+2.150799127,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:57:36 crc kubenswrapper[4840]: E0311 08:57:36.239189 4840 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189bbda71a160f93 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Created,Message:Created container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:56:43.695263635 +0000 UTC m=+2.360933460,LastTimestamp:2026-03-11 08:56:43.695263635 +0000 UTC 
m=+2.360933460,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:57:36 crc kubenswrapper[4840]: E0311 08:57:36.244999 4840 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189bbda71aaa9902 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Started,Message:Started container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:56:43.704998146 +0000 UTC m=+2.370667961,LastTimestamp:2026-03-11 08:56:43.704998146 +0000 UTC m=+2.370667961,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:57:36 crc kubenswrapper[4840]: E0311 08:57:36.249201 4840 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189bbda71abc4002 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Pulled,Message:Container image 
\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:56:43.70615501 +0000 UTC m=+2.371824865,LastTimestamp:2026-03-11 08:56:43.70615501 +0000 UTC m=+2.371824865,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:57:36 crc kubenswrapper[4840]: E0311 08:57:36.253048 4840 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189bbda72656d9ee openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Created,Message:Created container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:56:43.900836334 +0000 UTC m=+2.566506139,LastTimestamp:2026-03-11 08:56:43.900836334 +0000 UTC m=+2.566506139,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:57:36 crc kubenswrapper[4840]: E0311 08:57:36.257455 4840 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189bbda7272fec03 
openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Started,Message:Started container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:56:43.915062275 +0000 UTC m=+2.580732100,LastTimestamp:2026-03-11 08:56:43.915062275 +0000 UTC m=+2.580732100,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:57:36 crc kubenswrapper[4840]: E0311 08:57:36.262180 4840 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189bbda7310b55ab openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:56:44.080436651 +0000 UTC m=+2.746106516,LastTimestamp:2026-03-11 08:56:44.080436651 +0000 UTC m=+2.746106516,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:57:36 crc kubenswrapper[4840]: E0311 08:57:36.267628 4840 event.go:359] "Server rejected event (will not retry!)" 
err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189bbda731cc795a openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:56:44.093094234 +0000 UTC m=+2.758764099,LastTimestamp:2026-03-11 08:56:44.093094234 +0000 UTC m=+2.758764099,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:57:36 crc kubenswrapper[4840]: E0311 08:57:36.272967 4840 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189bbda732421848 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:56:44.100802632 +0000 UTC 
m=+2.766472447,LastTimestamp:2026-03-11 08:56:44.100802632 +0000 UTC m=+2.766472447,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:57:36 crc kubenswrapper[4840]: E0311 08:57:36.277616 4840 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189bbda7325230dd openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:56:44.101857501 +0000 UTC m=+2.767527326,LastTimestamp:2026-03-11 08:56:44.101857501 +0000 UTC m=+2.767527326,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:57:36 crc kubenswrapper[4840]: E0311 08:57:36.282393 4840 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189bbda7436b39ba openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:56:44.388710842 +0000 UTC m=+3.054380667,LastTimestamp:2026-03-11 08:56:44.388710842 +0000 UTC m=+3.054380667,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:57:36 crc kubenswrapper[4840]: E0311 08:57:36.286924 4840 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189bbda74375cd6b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:56:44.389404011 +0000 UTC m=+3.055073826,LastTimestamp:2026-03-11 08:56:44.389404011 +0000 UTC m=+3.055073826,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:57:36 crc kubenswrapper[4840]: E0311 08:57:36.290948 4840 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189bbda743d9d43b 
openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:56:44.395959355 +0000 UTC m=+3.061629190,LastTimestamp:2026-03-11 08:56:44.395959355 +0000 UTC m=+3.061629190,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:57:36 crc kubenswrapper[4840]: E0311 08:57:36.292066 4840 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189bbda743eac905 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Created,Message:Created container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:56:44.397070597 +0000 UTC m=+3.062740412,LastTimestamp:2026-03-11 08:56:44.397070597 +0000 UTC m=+3.062740412,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:57:36 crc kubenswrapper[4840]: E0311 08:57:36.296858 4840 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" 
event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189bbda7445a59e6 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:56:44.404382182 +0000 UTC m=+3.070051997,LastTimestamp:2026-03-11 08:56:44.404382182 +0000 UTC m=+3.070051997,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:57:36 crc kubenswrapper[4840]: E0311 08:57:36.301210 4840 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189bbda7446efd86 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:56:44.40573479 +0000 UTC m=+3.071404605,LastTimestamp:2026-03-11 08:56:44.40573479 +0000 UTC m=+3.071404605,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:57:36 crc kubenswrapper[4840]: E0311 08:57:36.305414 4840 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" 
in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189bbda7447d7cc3 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:56:44.406684867 +0000 UTC m=+3.072354682,LastTimestamp:2026-03-11 08:56:44.406684867 +0000 UTC m=+3.072354682,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:57:36 crc kubenswrapper[4840]: E0311 08:57:36.309062 4840 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189bbda74483cd9b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:56:44.407098779 +0000 UTC m=+3.072768614,LastTimestamp:2026-03-11 08:56:44.407098779 +0000 UTC 
m=+3.072768614,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:57:36 crc kubenswrapper[4840]: E0311 08:57:36.313305 4840 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189bbda744d9b5c9 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:56:44.412728777 +0000 UTC m=+3.078398592,LastTimestamp:2026-03-11 08:56:44.412728777 +0000 UTC m=+3.078398592,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:57:36 crc kubenswrapper[4840]: E0311 08:57:36.318407 4840 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189bbda745ba1cd4 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Started,Message:Started container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:56:44.42743522 +0000 UTC 
m=+3.093105035,LastTimestamp:2026-03-11 08:56:44.42743522 +0000 UTC m=+3.093105035,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:57:36 crc kubenswrapper[4840]: E0311 08:57:36.323281 4840 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189bbda74f32f113 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Created,Message:Created container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:56:44.586348819 +0000 UTC m=+3.252018644,LastTimestamp:2026-03-11 08:56:44.586348819 +0000 UTC m=+3.252018644,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:57:36 crc kubenswrapper[4840]: E0311 08:57:36.326788 4840 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189bbda74f42e95a openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Created,Message:Created container 
kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:56:44.587395418 +0000 UTC m=+3.253065233,LastTimestamp:2026-03-11 08:56:44.587395418 +0000 UTC m=+3.253065233,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:57:36 crc kubenswrapper[4840]: E0311 08:57:36.331155 4840 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189bbda74fe3e3bb openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Started,Message:Started container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:56:44.597945275 +0000 UTC m=+3.263615090,LastTimestamp:2026-03-11 08:56:44.597945275 +0000 UTC m=+3.263615090,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:57:36 crc kubenswrapper[4840]: E0311 08:57:36.335850 4840 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189bbda74ff470af openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:56:44.599029935 +0000 UTC m=+3.264699750,LastTimestamp:2026-03-11 08:56:44.599029935 +0000 UTC m=+3.264699750,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:57:36 crc kubenswrapper[4840]: E0311 08:57:36.340656 4840 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189bbda74ffe6816 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Started,Message:Started container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:56:44.599683094 +0000 UTC m=+3.265352919,LastTimestamp:2026-03-11 08:56:44.599683094 +0000 UTC m=+3.265352919,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:57:36 crc kubenswrapper[4840]: E0311 08:57:36.345651 4840 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User 
\"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189bbda750078a00 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:56:44.6002816 +0000 UTC m=+3.265951415,LastTimestamp:2026-03-11 08:56:44.6002816 +0000 UTC m=+3.265951415,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:57:36 crc kubenswrapper[4840]: E0311 08:57:36.350345 4840 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189bbda75c1cbe50 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Created,Message:Created container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:56:44.80299784 +0000 UTC m=+3.468667675,LastTimestamp:2026-03-11 08:56:44.80299784 +0000 UTC m=+3.468667675,Count:1,Type:Normal,EventTime:0001-01-01 
00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:57:36 crc kubenswrapper[4840]: E0311 08:57:36.355533 4840 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189bbda75c385f2b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Created,Message:Created container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:56:44.804808491 +0000 UTC m=+3.470478306,LastTimestamp:2026-03-11 08:56:44.804808491 +0000 UTC m=+3.470478306,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:57:36 crc kubenswrapper[4840]: E0311 08:57:36.360358 4840 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189bbda75db8313e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Started,Message:Started container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 
08:56:44.829962558 +0000 UTC m=+3.495632373,LastTimestamp:2026-03-11 08:56:44.829962558 +0000 UTC m=+3.495632373,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:57:36 crc kubenswrapper[4840]: E0311 08:57:36.364129 4840 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189bbda75dcd3dcb openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:56:44.831342027 +0000 UTC m=+3.497011842,LastTimestamp:2026-03-11 08:56:44.831342027 +0000 UTC m=+3.497011842,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:57:36 crc kubenswrapper[4840]: E0311 08:57:36.368018 4840 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189bbda75de399fa openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Started,Message:Started container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:56:44.832807418 +0000 UTC m=+3.498477233,LastTimestamp:2026-03-11 08:56:44.832807418 +0000 UTC m=+3.498477233,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:57:36 crc kubenswrapper[4840]: E0311 08:57:36.371384 4840 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189bbda76817e916 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:56:45.004007702 +0000 UTC m=+3.669677517,LastTimestamp:2026-03-11 08:56:45.004007702 +0000 UTC m=+3.669677517,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:57:36 crc kubenswrapper[4840]: E0311 08:57:36.375033 4840 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" 
event="&Event{ObjectMeta:{kube-apiserver-crc.189bbda76947f3e4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:56:45.023933412 +0000 UTC m=+3.689603227,LastTimestamp:2026-03-11 08:56:45.023933412 +0000 UTC m=+3.689603227,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:57:36 crc kubenswrapper[4840]: E0311 08:57:36.380068 4840 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189bbda7695d0c15 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:56:45.025315861 +0000 UTC m=+3.690985676,LastTimestamp:2026-03-11 08:56:45.025315861 +0000 UTC m=+3.690985676,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:57:36 crc kubenswrapper[4840]: E0311 
08:57:36.385620 4840 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189bbda76ea0a586 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:56:45.113632134 +0000 UTC m=+3.779301949,LastTimestamp:2026-03-11 08:56:45.113632134 +0000 UTC m=+3.779301949,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:57:36 crc kubenswrapper[4840]: E0311 08:57:36.392864 4840 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189bbda7746bc3f9 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:56:45.210829817 +0000 UTC m=+3.876499632,LastTimestamp:2026-03-11 08:56:45.210829817 +0000 UTC 
m=+3.876499632,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:57:36 crc kubenswrapper[4840]: E0311 08:57:36.397100 4840 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189bbda7752941c6 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:56:45.223248326 +0000 UTC m=+3.888918131,LastTimestamp:2026-03-11 08:56:45.223248326 +0000 UTC m=+3.888918131,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:57:36 crc kubenswrapper[4840]: E0311 08:57:36.400753 4840 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189bbda77816b13f openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Created,Message:Created container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:56:45.272363327 +0000 UTC m=+3.938033142,LastTimestamp:2026-03-11 
08:56:45.272363327 +0000 UTC m=+3.938033142,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:57:36 crc kubenswrapper[4840]: E0311 08:57:36.404345 4840 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189bbda778be08da openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Started,Message:Started container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:56:45.283330266 +0000 UTC m=+3.949000081,LastTimestamp:2026-03-11 08:56:45.283330266 +0000 UTC m=+3.949000081,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:57:36 crc kubenswrapper[4840]: E0311 08:57:36.409982 4840 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189bbda7ab6e150e openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 
08:56:46.133728526 +0000 UTC m=+4.799398341,LastTimestamp:2026-03-11 08:56:46.133728526 +0000 UTC m=+4.799398341,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:57:36 crc kubenswrapper[4840]: E0311 08:57:36.414229 4840 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189bbda7b8ace5ff openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:56:46.355949055 +0000 UTC m=+5.021618870,LastTimestamp:2026-03-11 08:56:46.355949055 +0000 UTC m=+5.021618870,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:57:36 crc kubenswrapper[4840]: E0311 08:57:36.418618 4840 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189bbda7b95a2142 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:56:46.367301954 +0000 UTC m=+5.032971779,LastTimestamp:2026-03-11 08:56:46.367301954 +0000 UTC 
m=+5.032971779,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:57:36 crc kubenswrapper[4840]: E0311 08:57:36.421799 4840 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189bbda7b9729637 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:56:46.368904759 +0000 UTC m=+5.034574574,LastTimestamp:2026-03-11 08:56:46.368904759 +0000 UTC m=+5.034574574,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:57:36 crc kubenswrapper[4840]: E0311 08:57:36.424677 4840 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189bbda7c6c88d24 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:56:46.59264234 +0000 UTC m=+5.258312155,LastTimestamp:2026-03-11 
08:56:46.59264234 +0000 UTC m=+5.258312155,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:57:36 crc kubenswrapper[4840]: E0311 08:57:36.433123 4840 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189bbda7c79b4e9c openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:56:46.606454428 +0000 UTC m=+5.272124253,LastTimestamp:2026-03-11 08:56:46.606454428 +0000 UTC m=+5.272124253,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:57:36 crc kubenswrapper[4840]: E0311 08:57:36.437266 4840 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189bbda7c7b05ed5 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:56:46.607834837 +0000 UTC 
m=+5.273504652,LastTimestamp:2026-03-11 08:56:46.607834837 +0000 UTC m=+5.273504652,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:57:36 crc kubenswrapper[4840]: E0311 08:57:36.440808 4840 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189bbda7d1d2e960 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Created,Message:Created container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:56:46.777870688 +0000 UTC m=+5.443540513,LastTimestamp:2026-03-11 08:56:46.777870688 +0000 UTC m=+5.443540513,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:57:36 crc kubenswrapper[4840]: E0311 08:57:36.444495 4840 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189bbda7d2da728d openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Started,Message:Started container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:56:46.795141773 +0000 UTC m=+5.460811598,LastTimestamp:2026-03-11 08:56:46.795141773 +0000 UTC 
m=+5.460811598,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:57:36 crc kubenswrapper[4840]: E0311 08:57:36.449114 4840 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189bbda7d2ed55e2 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:56:46.796379618 +0000 UTC m=+5.462049443,LastTimestamp:2026-03-11 08:56:46.796379618 +0000 UTC m=+5.462049443,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:57:36 crc kubenswrapper[4840]: E0311 08:57:36.452347 4840 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189bbda7e18c8f9e openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Created,Message:Created container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:56:47.041695646 +0000 UTC 
m=+5.707365481,LastTimestamp:2026-03-11 08:56:47.041695646 +0000 UTC m=+5.707365481,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:57:36 crc kubenswrapper[4840]: E0311 08:57:36.455570 4840 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189bbda7e27a186f openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Started,Message:Started container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:56:47.057262703 +0000 UTC m=+5.722932528,LastTimestamp:2026-03-11 08:56:47.057262703 +0000 UTC m=+5.722932528,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:57:36 crc kubenswrapper[4840]: E0311 08:57:36.458846 4840 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189bbda7e2905716 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\" already present on 
machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:56:47.058720534 +0000 UTC m=+5.724390349,LastTimestamp:2026-03-11 08:56:47.058720534 +0000 UTC m=+5.724390349,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:57:36 crc kubenswrapper[4840]: E0311 08:57:36.463295 4840 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189bbda7efa48922 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Created,Message:Created container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:56:47.278147874 +0000 UTC m=+5.943817699,LastTimestamp:2026-03-11 08:56:47.278147874 +0000 UTC m=+5.943817699,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:57:36 crc kubenswrapper[4840]: E0311 08:57:36.467115 4840 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189bbda7f086aff7 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Started,Message:Started container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 
08:56:47.292968951 +0000 UTC m=+5.958638776,LastTimestamp:2026-03-11 08:56:47.292968951 +0000 UTC m=+5.958638776,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:57:36 crc kubenswrapper[4840]: E0311 08:57:36.473672 4840 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Mar 11 08:57:36 crc kubenswrapper[4840]: &Event{ObjectMeta:{kube-controller-manager-crc.189bbda981fca6ba openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Mar 11 08:57:36 crc kubenswrapper[4840]: body: Mar 11 08:57:36 crc kubenswrapper[4840]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:56:54.02836345 +0000 UTC m=+12.694033275,LastTimestamp:2026-03-11 08:56:54.02836345 +0000 UTC m=+12.694033275,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 11 08:57:36 crc kubenswrapper[4840]: > Mar 11 08:57:36 crc kubenswrapper[4840]: E0311 08:57:36.477866 4840 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189bbda981fd728f 
openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:56:54.028415631 +0000 UTC m=+12.694085456,LastTimestamp:2026-03-11 08:56:54.028415631 +0000 UTC m=+12.694085456,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:57:36 crc kubenswrapper[4840]: E0311 08:57:36.482414 4840 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Mar 11 08:57:36 crc kubenswrapper[4840]: &Event{ObjectMeta:{kube-apiserver-crc.189bbda9f9a6dce1 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Mar 11 08:57:36 crc kubenswrapper[4840]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Mar 11 08:57:36 crc kubenswrapper[4840]: Mar 11 08:57:36 crc kubenswrapper[4840]: 
,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:56:56.036007137 +0000 UTC m=+14.701676992,LastTimestamp:2026-03-11 08:56:56.036007137 +0000 UTC m=+14.701676992,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 11 08:57:36 crc kubenswrapper[4840]: > Mar 11 08:57:36 crc kubenswrapper[4840]: E0311 08:57:36.487023 4840 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189bbda9f9aa1f0c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:56:56.036220684 +0000 UTC m=+14.701890559,LastTimestamp:2026-03-11 08:56:56.036220684 +0000 UTC m=+14.701890559,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:57:36 crc kubenswrapper[4840]: E0311 08:57:36.490449 4840 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Mar 11 08:57:36 crc kubenswrapper[4840]: &Event{ObjectMeta:{kube-apiserver-crc.189bbda9fa056d42 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Readiness probe error: Get "https://192.168.126.11:17697/healthz": read tcp 192.168.126.11:58852->192.168.126.11:17697: read: connection reset by peer Mar 11 08:57:36 crc kubenswrapper[4840]: body: Mar 11 08:57:36 crc kubenswrapper[4840]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:56:56.042204482 +0000 UTC m=+14.707874297,LastTimestamp:2026-03-11 08:56:56.042204482 +0000 UTC m=+14.707874297,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 11 08:57:36 crc kubenswrapper[4840]: > Mar 11 08:57:36 crc kubenswrapper[4840]: E0311 08:57:36.497043 4840 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189bbda9fa06226f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:58852->192.168.126.11:17697: read: connection reset by peer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:56:56.042250863 +0000 UTC m=+14.707920678,LastTimestamp:2026-03-11 08:56:56.042250863 +0000 UTC m=+14.707920678,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:57:36 crc kubenswrapper[4840]: E0311 08:57:36.501005 4840 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Mar 11 08:57:36 crc kubenswrapper[4840]: &Event{ObjectMeta:{kube-apiserver-crc.189bbda9fb005768 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Mar 11 08:57:36 crc kubenswrapper[4840]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found]","reason":"Forbidden","details":{},"code":403} Mar 11 08:57:36 crc kubenswrapper[4840]: Mar 11 08:57:36 crc kubenswrapper[4840]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:56:56.058648424 +0000 UTC m=+14.724318259,LastTimestamp:2026-03-11 08:56:56.058648424 +0000 UTC m=+14.724318259,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 11 08:57:36 crc kubenswrapper[4840]: > Mar 11 08:57:36 crc kubenswrapper[4840]: E0311 08:57:36.504682 4840 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189bbda9f9aa1f0c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace 
\"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189bbda9f9aa1f0c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:56:56.036220684 +0000 UTC m=+14.701890559,LastTimestamp:2026-03-11 08:56:56.058695565 +0000 UTC m=+14.724365390,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:57:36 crc kubenswrapper[4840]: E0311 08:57:36.509312 4840 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189bbda7695d0c15\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189bbda7695d0c15 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:56:45.025315861 +0000 UTC m=+3.690985676,LastTimestamp:2026-03-11 08:56:56.174739348 +0000 UTC m=+14.840409163,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:57:36 crc kubenswrapper[4840]: E0311 08:57:36.514666 4840 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189bbda981fca6ba\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Mar 11 08:57:36 crc kubenswrapper[4840]: &Event{ObjectMeta:{kube-controller-manager-crc.189bbda981fca6ba openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Mar 11 08:57:36 crc kubenswrapper[4840]: body: Mar 11 08:57:36 crc kubenswrapper[4840]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:56:54.02836345 +0000 UTC m=+12.694033275,LastTimestamp:2026-03-11 08:57:04.029219079 +0000 UTC m=+22.694888884,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 11 08:57:36 crc kubenswrapper[4840]: > Mar 11 08:57:36 crc kubenswrapper[4840]: E0311 08:57:36.517996 4840 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189bbda981fd728f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189bbda981fd728f openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] 
map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:56:54.028415631 +0000 UTC m=+12.694085456,LastTimestamp:2026-03-11 08:57:04.02926688 +0000 UTC m=+22.694936695,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:57:36 crc kubenswrapper[4840]: E0311 08:57:36.523861 4840 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189bbda981fca6ba\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Mar 11 08:57:36 crc kubenswrapper[4840]: &Event{ObjectMeta:{kube-controller-manager-crc.189bbda981fca6ba openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Mar 11 08:57:36 crc kubenswrapper[4840]: body: Mar 11 08:57:36 crc kubenswrapper[4840]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:56:54.02836345 +0000 UTC m=+12.694033275,LastTimestamp:2026-03-11 
08:57:14.029504533 +0000 UTC m=+32.695174388,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 11 08:57:36 crc kubenswrapper[4840]: > Mar 11 08:57:36 crc kubenswrapper[4840]: E0311 08:57:36.527601 4840 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189bbda981fd728f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189bbda981fd728f openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:56:54.028415631 +0000 UTC m=+12.694085456,LastTimestamp:2026-03-11 08:57:14.029608766 +0000 UTC m=+32.695278611,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:57:36 crc kubenswrapper[4840]: E0311 08:57:36.533910 4840 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189bbdae2ac11263 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Killing,Message:Container cluster-policy-controller failed startup probe, will be restarted,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:57:14.039677539 +0000 UTC m=+32.705347394,LastTimestamp:2026-03-11 08:57:14.039677539 +0000 UTC m=+32.705347394,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:57:36 crc kubenswrapper[4840]: E0311 08:57:36.537369 4840 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189bbda6fa16911c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189bbda6fa16911c openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:56:43.158425884 +0000 UTC m=+1.824095709,LastTimestamp:2026-03-11 08:57:14.163012837 +0000 UTC m=+32.828682662,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:57:36 crc kubenswrapper[4840]: E0311 08:57:36.541270 4840 
event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189bbda70cfd2e71\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189bbda70cfd2e71 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:56:43.475529329 +0000 UTC m=+2.141199144,LastTimestamp:2026-03-11 08:57:14.409159608 +0000 UTC m=+33.074829433,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:57:36 crc kubenswrapper[4840]: E0311 08:57:36.544932 4840 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189bbda70d821621\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189bbda70d821621 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:56:43.484239393 +0000 UTC 
m=+2.149909208,LastTimestamp:2026-03-11 08:57:14.421387651 +0000 UTC m=+33.087057466,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:57:36 crc kubenswrapper[4840]: E0311 08:57:36.549676 4840 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189bbda981fca6ba\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Mar 11 08:57:36 crc kubenswrapper[4840]: &Event{ObjectMeta:{kube-controller-manager-crc.189bbda981fca6ba openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Mar 11 08:57:36 crc kubenswrapper[4840]: body: Mar 11 08:57:36 crc kubenswrapper[4840]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:56:54.02836345 +0000 UTC m=+12.694033275,LastTimestamp:2026-03-11 08:57:24.042336632 +0000 UTC m=+42.708006467,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 11 08:57:36 crc kubenswrapper[4840]: > Mar 11 08:57:36 crc kubenswrapper[4840]: E0311 08:57:36.553284 4840 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189bbda981fd728f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" 
event="&Event{ObjectMeta:{kube-controller-manager-crc.189bbda981fd728f openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:56:54.028415631 +0000 UTC m=+12.694085456,LastTimestamp:2026-03-11 08:57:24.042402103 +0000 UTC m=+42.708071928,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 08:57:36 crc kubenswrapper[4840]: E0311 08:57:36.560664 4840 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189bbda981fca6ba\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Mar 11 08:57:36 crc kubenswrapper[4840]: &Event{ObjectMeta:{kube-controller-manager-crc.189bbda981fca6ba openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Mar 11 08:57:36 crc kubenswrapper[4840]: body: Mar 11 08:57:36 crc kubenswrapper[4840]: 
,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 08:56:54.02836345 +0000 UTC m=+12.694033275,LastTimestamp:2026-03-11 08:57:34.03039087 +0000 UTC m=+52.696060685,Count:5,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 11 08:57:36 crc kubenswrapper[4840]: > Mar 11 08:57:36 crc kubenswrapper[4840]: I0311 08:57:36.986312 4840 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 11 08:57:37 crc kubenswrapper[4840]: E0311 08:57:37.457795 4840 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Mar 11 08:57:37 crc kubenswrapper[4840]: I0311 08:57:37.463916 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:57:37 crc kubenswrapper[4840]: I0311 08:57:37.465232 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:57:37 crc kubenswrapper[4840]: I0311 08:57:37.465289 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:57:37 crc kubenswrapper[4840]: I0311 08:57:37.465308 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:57:37 crc kubenswrapper[4840]: I0311 08:57:37.465346 4840 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 11 08:57:37 crc kubenswrapper[4840]: E0311 08:57:37.470711 4840 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User 
\"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Mar 11 08:57:37 crc kubenswrapper[4840]: I0311 08:57:37.985790 4840 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 11 08:57:38 crc kubenswrapper[4840]: I0311 08:57:38.985031 4840 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 11 08:57:39 crc kubenswrapper[4840]: I0311 08:57:39.988866 4840 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 11 08:57:40 crc kubenswrapper[4840]: I0311 08:57:40.988828 4840 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 11 08:57:41 crc kubenswrapper[4840]: I0311 08:57:41.986638 4840 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 11 08:57:42 crc kubenswrapper[4840]: E0311 08:57:42.120163 4840 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 11 08:57:42 crc kubenswrapper[4840]: I0311 08:57:42.990015 4840 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" 
is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 11 08:57:43 crc kubenswrapper[4840]: I0311 08:57:43.986062 4840 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 11 08:57:44 crc kubenswrapper[4840]: I0311 08:57:44.028153 4840 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 11 08:57:44 crc kubenswrapper[4840]: I0311 08:57:44.028232 4840 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 11 08:57:44 crc kubenswrapper[4840]: I0311 08:57:44.028416 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 11 08:57:44 crc kubenswrapper[4840]: I0311 08:57:44.028627 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:57:44 crc kubenswrapper[4840]: I0311 08:57:44.029853 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:57:44 crc kubenswrapper[4840]: I0311 08:57:44.029889 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Mar 11 08:57:44 crc kubenswrapper[4840]: I0311 08:57:44.029903 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:57:44 crc kubenswrapper[4840]: I0311 08:57:44.030452 4840 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"9be0868d10e5abebdb9b4fba34e5c202c8cd14fbc120551405b9709a816dc4c4"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted" Mar 11 08:57:44 crc kubenswrapper[4840]: I0311 08:57:44.030576 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" containerID="cri-o://9be0868d10e5abebdb9b4fba34e5c202c8cd14fbc120551405b9709a816dc4c4" gracePeriod=30 Mar 11 08:57:44 crc kubenswrapper[4840]: I0311 08:57:44.059679 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:57:44 crc kubenswrapper[4840]: I0311 08:57:44.105419 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:57:44 crc kubenswrapper[4840]: I0311 08:57:44.105872 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:57:44 crc kubenswrapper[4840]: I0311 08:57:44.106013 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:57:44 crc kubenswrapper[4840]: I0311 08:57:44.107068 4840 scope.go:117] "RemoveContainer" containerID="3ed2e229a2a2c71fc8b8b9cbc58b12b35f14c3d357fb894914aee55334a09e91" Mar 11 08:57:44 crc kubenswrapper[4840]: I0311 08:57:44.343904 4840 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Mar 11 08:57:44 crc kubenswrapper[4840]: I0311 08:57:44.346082 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"58277efdbfe7c94ad785e7d31bb5ba7313d04bf930896d6ded66fd44dd6239b5"} Mar 11 08:57:44 crc kubenswrapper[4840]: I0311 08:57:44.346525 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:57:44 crc kubenswrapper[4840]: I0311 08:57:44.347616 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/1.log" Mar 11 08:57:44 crc kubenswrapper[4840]: I0311 08:57:44.348590 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:57:44 crc kubenswrapper[4840]: I0311 08:57:44.348634 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:57:44 crc kubenswrapper[4840]: I0311 08:57:44.348652 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:57:44 crc kubenswrapper[4840]: I0311 08:57:44.349793 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Mar 11 08:57:44 crc kubenswrapper[4840]: I0311 08:57:44.350310 4840 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="9be0868d10e5abebdb9b4fba34e5c202c8cd14fbc120551405b9709a816dc4c4" exitCode=255 Mar 11 08:57:44 crc kubenswrapper[4840]: I0311 08:57:44.350337 4840 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"9be0868d10e5abebdb9b4fba34e5c202c8cd14fbc120551405b9709a816dc4c4"} Mar 11 08:57:44 crc kubenswrapper[4840]: I0311 08:57:44.350394 4840 scope.go:117] "RemoveContainer" containerID="a2b70735a9ebbdbf55e287a817686645c45ca4896c4560cc031967174940500c" Mar 11 08:57:44 crc kubenswrapper[4840]: E0311 08:57:44.465489 4840 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Mar 11 08:57:44 crc kubenswrapper[4840]: I0311 08:57:44.471506 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:57:44 crc kubenswrapper[4840]: I0311 08:57:44.473021 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:57:44 crc kubenswrapper[4840]: I0311 08:57:44.473058 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:57:44 crc kubenswrapper[4840]: I0311 08:57:44.473071 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:57:44 crc kubenswrapper[4840]: I0311 08:57:44.473098 4840 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 11 08:57:44 crc kubenswrapper[4840]: E0311 08:57:44.477264 4840 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Mar 11 08:57:44 crc kubenswrapper[4840]: I0311 08:57:44.988855 4840 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: 
csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 11 08:57:45 crc kubenswrapper[4840]: I0311 08:57:45.355807 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/1.log" Mar 11 08:57:45 crc kubenswrapper[4840]: I0311 08:57:45.357167 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"958f66fb4312fb869f6679ae90fca57b2ffb5b5f1946ddc4e1d4345b2a291016"} Mar 11 08:57:45 crc kubenswrapper[4840]: I0311 08:57:45.357338 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:57:45 crc kubenswrapper[4840]: I0311 08:57:45.358618 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Mar 11 08:57:45 crc kubenswrapper[4840]: I0311 08:57:45.358770 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:57:45 crc kubenswrapper[4840]: I0311 08:57:45.358830 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:57:45 crc kubenswrapper[4840]: I0311 08:57:45.358850 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:57:45 crc kubenswrapper[4840]: I0311 08:57:45.359393 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Mar 11 08:57:45 crc kubenswrapper[4840]: I0311 08:57:45.361122 4840 generic.go:334] "Generic 
(PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="58277efdbfe7c94ad785e7d31bb5ba7313d04bf930896d6ded66fd44dd6239b5" exitCode=255 Mar 11 08:57:45 crc kubenswrapper[4840]: I0311 08:57:45.361157 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"58277efdbfe7c94ad785e7d31bb5ba7313d04bf930896d6ded66fd44dd6239b5"} Mar 11 08:57:45 crc kubenswrapper[4840]: I0311 08:57:45.361185 4840 scope.go:117] "RemoveContainer" containerID="3ed2e229a2a2c71fc8b8b9cbc58b12b35f14c3d357fb894914aee55334a09e91" Mar 11 08:57:45 crc kubenswrapper[4840]: I0311 08:57:45.361366 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:57:45 crc kubenswrapper[4840]: I0311 08:57:45.366051 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:57:45 crc kubenswrapper[4840]: I0311 08:57:45.366111 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:57:45 crc kubenswrapper[4840]: I0311 08:57:45.366123 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:57:45 crc kubenswrapper[4840]: I0311 08:57:45.367078 4840 scope.go:117] "RemoveContainer" containerID="58277efdbfe7c94ad785e7d31bb5ba7313d04bf930896d6ded66fd44dd6239b5" Mar 11 08:57:45 crc kubenswrapper[4840]: E0311 08:57:45.367445 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 11 08:57:45 crc 
kubenswrapper[4840]: I0311 08:57:45.612309 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 11 08:57:45 crc kubenswrapper[4840]: I0311 08:57:45.985450 4840 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 11 08:57:46 crc kubenswrapper[4840]: I0311 08:57:46.366116 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Mar 11 08:57:46 crc kubenswrapper[4840]: I0311 08:57:46.369678 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:57:46 crc kubenswrapper[4840]: I0311 08:57:46.369684 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:57:46 crc kubenswrapper[4840]: I0311 08:57:46.371176 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:57:46 crc kubenswrapper[4840]: I0311 08:57:46.371221 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:57:46 crc kubenswrapper[4840]: I0311 08:57:46.371234 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:57:46 crc kubenswrapper[4840]: I0311 08:57:46.371993 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:57:46 crc kubenswrapper[4840]: I0311 08:57:46.372017 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:57:46 crc kubenswrapper[4840]: I0311 08:57:46.372027 4840 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:57:46 crc kubenswrapper[4840]: I0311 08:57:46.372374 4840 scope.go:117] "RemoveContainer" containerID="58277efdbfe7c94ad785e7d31bb5ba7313d04bf930896d6ded66fd44dd6239b5" Mar 11 08:57:46 crc kubenswrapper[4840]: E0311 08:57:46.372555 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 11 08:57:46 crc kubenswrapper[4840]: I0311 08:57:46.987413 4840 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 11 08:57:47 crc kubenswrapper[4840]: I0311 08:57:47.986530 4840 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 11 08:57:48 crc kubenswrapper[4840]: I0311 08:57:48.986150 4840 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 11 08:57:49 crc kubenswrapper[4840]: W0311 08:57:49.481170 4840 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Mar 11 08:57:49 crc kubenswrapper[4840]: E0311 
08:57:49.481250 4840 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" Mar 11 08:57:49 crc kubenswrapper[4840]: I0311 08:57:49.989623 4840 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 11 08:57:50 crc kubenswrapper[4840]: I0311 08:57:50.790544 4840 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 11 08:57:50 crc kubenswrapper[4840]: I0311 08:57:50.790786 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:57:50 crc kubenswrapper[4840]: I0311 08:57:50.792632 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:57:50 crc kubenswrapper[4840]: I0311 08:57:50.792755 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:57:50 crc kubenswrapper[4840]: I0311 08:57:50.792888 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:57:50 crc kubenswrapper[4840]: I0311 08:57:50.793682 4840 scope.go:117] "RemoveContainer" containerID="58277efdbfe7c94ad785e7d31bb5ba7313d04bf930896d6ded66fd44dd6239b5" Mar 11 08:57:50 crc kubenswrapper[4840]: E0311 08:57:50.794051 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 11 08:57:50 crc kubenswrapper[4840]: I0311 08:57:50.985987 4840 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 11 08:57:51 crc kubenswrapper[4840]: I0311 08:57:51.029068 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 11 08:57:51 crc kubenswrapper[4840]: I0311 08:57:51.029306 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:57:51 crc kubenswrapper[4840]: I0311 08:57:51.030819 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:57:51 crc kubenswrapper[4840]: I0311 08:57:51.030932 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:57:51 crc kubenswrapper[4840]: I0311 08:57:51.031021 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:57:51 crc kubenswrapper[4840]: I0311 08:57:51.034701 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 11 08:57:51 crc kubenswrapper[4840]: I0311 08:57:51.383264 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:57:51 crc kubenswrapper[4840]: I0311 08:57:51.383390 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 11 08:57:51 crc kubenswrapper[4840]: I0311 08:57:51.384170 
4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:57:51 crc kubenswrapper[4840]: I0311 08:57:51.384268 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:57:51 crc kubenswrapper[4840]: I0311 08:57:51.384358 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:57:51 crc kubenswrapper[4840]: E0311 08:57:51.470356 4840 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Mar 11 08:57:51 crc kubenswrapper[4840]: I0311 08:57:51.477382 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:57:51 crc kubenswrapper[4840]: I0311 08:57:51.478684 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:57:51 crc kubenswrapper[4840]: I0311 08:57:51.478715 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:57:51 crc kubenswrapper[4840]: I0311 08:57:51.478727 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:57:51 crc kubenswrapper[4840]: I0311 08:57:51.478752 4840 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 11 08:57:51 crc kubenswrapper[4840]: E0311 08:57:51.482554 4840 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Mar 11 08:57:51 crc kubenswrapper[4840]: I0311 08:57:51.985436 4840 csi_plugin.go:884] Failed to contact API server when 
waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 11 08:57:52 crc kubenswrapper[4840]: E0311 08:57:52.121259 4840 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 11 08:57:52 crc kubenswrapper[4840]: I0311 08:57:52.385188 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:57:52 crc kubenswrapper[4840]: I0311 08:57:52.386167 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:57:52 crc kubenswrapper[4840]: I0311 08:57:52.386201 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:57:52 crc kubenswrapper[4840]: I0311 08:57:52.386214 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:57:52 crc kubenswrapper[4840]: I0311 08:57:52.544088 4840 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 11 08:57:52 crc kubenswrapper[4840]: I0311 08:57:52.557289 4840 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Mar 11 08:57:52 crc kubenswrapper[4840]: I0311 08:57:52.986142 4840 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 11 08:57:53 crc kubenswrapper[4840]: I0311 08:57:53.985134 4840 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" 
at the cluster scope Mar 11 08:57:54 crc kubenswrapper[4840]: I0311 08:57:54.987171 4840 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 11 08:57:55 crc kubenswrapper[4840]: I0311 08:57:55.104796 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 11 08:57:55 crc kubenswrapper[4840]: I0311 08:57:55.104984 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:57:55 crc kubenswrapper[4840]: I0311 08:57:55.106207 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:57:55 crc kubenswrapper[4840]: I0311 08:57:55.106243 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:57:55 crc kubenswrapper[4840]: I0311 08:57:55.106256 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:57:55 crc kubenswrapper[4840]: I0311 08:57:55.989593 4840 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 11 08:57:56 crc kubenswrapper[4840]: I0311 08:57:56.914614 4840 csr.go:261] certificate signing request csr-n7bqx is approved, waiting to be issued Mar 11 08:57:56 crc kubenswrapper[4840]: I0311 08:57:56.925527 4840 csr.go:257] certificate signing request csr-n7bqx is issued Mar 11 08:57:57 crc kubenswrapper[4840]: I0311 08:57:57.022904 4840 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Mar 11 08:57:57 crc kubenswrapper[4840]: I0311 08:57:57.818259 4840 
transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Mar 11 08:57:57 crc kubenswrapper[4840]: I0311 08:57:57.927317 4840 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2027-01-01 18:51:50.58386365 +0000 UTC Mar 11 08:57:57 crc kubenswrapper[4840]: I0311 08:57:57.927414 4840 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7113h53m52.656456941s for next certificate rotation Mar 11 08:57:58 crc kubenswrapper[4840]: I0311 08:57:58.483374 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:57:58 crc kubenswrapper[4840]: I0311 08:57:58.485166 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:57:58 crc kubenswrapper[4840]: I0311 08:57:58.485239 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:57:58 crc kubenswrapper[4840]: I0311 08:57:58.485264 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:57:58 crc kubenswrapper[4840]: I0311 08:57:58.485447 4840 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 11 08:57:58 crc kubenswrapper[4840]: I0311 08:57:58.496526 4840 kubelet_node_status.go:115] "Node was previously registered" node="crc" Mar 11 08:57:58 crc kubenswrapper[4840]: I0311 08:57:58.496674 4840 kubelet_node_status.go:79] "Successfully registered node" node="crc" Mar 11 08:57:58 crc kubenswrapper[4840]: E0311 08:57:58.496706 4840 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Mar 11 08:57:58 crc kubenswrapper[4840]: I0311 08:57:58.501130 4840 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Mar 11 08:57:58 crc kubenswrapper[4840]: I0311 08:57:58.501184 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:57:58 crc kubenswrapper[4840]: I0311 08:57:58.501209 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:57:58 crc kubenswrapper[4840]: I0311 08:57:58.501255 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:57:58 crc kubenswrapper[4840]: I0311 08:57:58.501281 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:57:58Z","lastTransitionTime":"2026-03-11T08:57:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:57:58 crc kubenswrapper[4840]: E0311 08:57:58.521010 4840 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:57:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:57:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:57:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:57:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:57:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:57:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:57:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:57:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b40dc5ac-6e20-4fe3-8d4f-1dab2691799c\\\",\\\"systemUUID\\\":\\\"e5bb6cc6-19d8-441f-bba6-b926930273a7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 11 08:57:58 crc kubenswrapper[4840]: I0311 08:57:58.531998 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:57:58 crc kubenswrapper[4840]: I0311 08:57:58.532061 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:57:58 crc kubenswrapper[4840]: I0311 08:57:58.532088 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:57:58 crc kubenswrapper[4840]: I0311 08:57:58.532117 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:57:58 crc kubenswrapper[4840]: I0311 08:57:58.532138 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:57:58Z","lastTransitionTime":"2026-03-11T08:57:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:57:58 crc kubenswrapper[4840]: E0311 08:57:58.549522 4840 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:57:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:57:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:57:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:57:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:57:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:57:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:57:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:57:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b40dc5ac-6e20-4fe3-8d4f-1dab2691799c\\\",\\\"systemUUID\\\":\\\"e5bb6cc6-19d8-441f-bba6-b926930273a7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 11 08:57:58 crc kubenswrapper[4840]: I0311 08:57:58.560752 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:57:58 crc kubenswrapper[4840]: I0311 08:57:58.560827 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:57:58 crc kubenswrapper[4840]: I0311 08:57:58.560865 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:57:58 crc kubenswrapper[4840]: I0311 08:57:58.560897 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:57:58 crc kubenswrapper[4840]: I0311 08:57:58.560925 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:57:58Z","lastTransitionTime":"2026-03-11T08:57:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:57:58 crc kubenswrapper[4840]: E0311 08:57:58.577656 4840 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:57:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:57:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:57:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:57:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:57:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:57:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:57:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:57:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b40dc5ac-6e20-4fe3-8d4f-1dab2691799c\\\",\\\"systemUUID\\\":\\\"e5bb6cc6-19d8-441f-bba6-b926930273a7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 11 08:57:58 crc kubenswrapper[4840]: I0311 08:57:58.588628 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:57:58 crc kubenswrapper[4840]: I0311 08:57:58.588683 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:57:58 crc kubenswrapper[4840]: I0311 08:57:58.588699 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:57:58 crc kubenswrapper[4840]: I0311 08:57:58.588722 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:57:58 crc kubenswrapper[4840]: I0311 08:57:58.588744 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:57:58Z","lastTransitionTime":"2026-03-11T08:57:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:57:58 crc kubenswrapper[4840]: E0311 08:57:58.603968 4840 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:57:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:57:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:57:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:57:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:57:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:57:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:57:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:57:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b40dc5ac-6e20-4fe3-8d4f-1dab2691799c\\\",\\\"systemUUID\\\":\\\"e5bb6cc6-19d8-441f-bba6-b926930273a7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 11 08:57:58 crc kubenswrapper[4840]: E0311 08:57:58.604187 4840 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 11 08:57:58 crc kubenswrapper[4840]: E0311 08:57:58.604232 4840 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:57:58 crc kubenswrapper[4840]: E0311 08:57:58.705251 4840 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:57:58 crc kubenswrapper[4840]: E0311 08:57:58.805765 4840 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:57:58 crc kubenswrapper[4840]: E0311 08:57:58.906189 4840 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:57:59 crc kubenswrapper[4840]: E0311 08:57:59.006851 4840 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:57:59 crc kubenswrapper[4840]: I0311 08:57:59.060677 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:57:59 crc kubenswrapper[4840]: I0311 08:57:59.062588 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:57:59 crc kubenswrapper[4840]: I0311 08:57:59.062644 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:57:59 crc kubenswrapper[4840]: I0311 08:57:59.062662 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:57:59 crc kubenswrapper[4840]: E0311 08:57:59.108065 
4840 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:57:59 crc kubenswrapper[4840]: E0311 08:57:59.208580 4840 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:57:59 crc kubenswrapper[4840]: E0311 08:57:59.309653 4840 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:57:59 crc kubenswrapper[4840]: E0311 08:57:59.410693 4840 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:57:59 crc kubenswrapper[4840]: E0311 08:57:59.510896 4840 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:57:59 crc kubenswrapper[4840]: E0311 08:57:59.611936 4840 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:57:59 crc kubenswrapper[4840]: E0311 08:57:59.712923 4840 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:57:59 crc kubenswrapper[4840]: E0311 08:57:59.813287 4840 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:57:59 crc kubenswrapper[4840]: E0311 08:57:59.913770 4840 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:58:00 crc kubenswrapper[4840]: E0311 08:58:00.014224 4840 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:58:00 crc kubenswrapper[4840]: E0311 08:58:00.114496 4840 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:58:00 crc kubenswrapper[4840]: E0311 08:58:00.215157 4840 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:58:00 crc 
kubenswrapper[4840]: E0311 08:58:00.315966 4840 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:58:00 crc kubenswrapper[4840]: E0311 08:58:00.416962 4840 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:58:00 crc kubenswrapper[4840]: E0311 08:58:00.517709 4840 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:58:00 crc kubenswrapper[4840]: E0311 08:58:00.618010 4840 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:58:00 crc kubenswrapper[4840]: E0311 08:58:00.719125 4840 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:58:00 crc kubenswrapper[4840]: E0311 08:58:00.820393 4840 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:58:00 crc kubenswrapper[4840]: E0311 08:58:00.920681 4840 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:58:01 crc kubenswrapper[4840]: E0311 08:58:01.021705 4840 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:58:01 crc kubenswrapper[4840]: E0311 08:58:01.122736 4840 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:58:01 crc kubenswrapper[4840]: E0311 08:58:01.223134 4840 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:58:01 crc kubenswrapper[4840]: E0311 08:58:01.323720 4840 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:58:01 crc kubenswrapper[4840]: E0311 08:58:01.424724 4840 kubelet_node_status.go:503] "Error getting the current node from lister" 
err="node \"crc\" not found" Mar 11 08:58:01 crc kubenswrapper[4840]: E0311 08:58:01.525430 4840 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:58:01 crc kubenswrapper[4840]: E0311 08:58:01.625604 4840 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:58:01 crc kubenswrapper[4840]: E0311 08:58:01.726790 4840 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:58:01 crc kubenswrapper[4840]: E0311 08:58:01.827221 4840 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:58:01 crc kubenswrapper[4840]: E0311 08:58:01.927356 4840 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:58:02 crc kubenswrapper[4840]: E0311 08:58:02.028440 4840 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:58:02 crc kubenswrapper[4840]: E0311 08:58:02.122309 4840 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 11 08:58:02 crc kubenswrapper[4840]: E0311 08:58:02.128642 4840 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:58:02 crc kubenswrapper[4840]: E0311 08:58:02.229565 4840 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:58:02 crc kubenswrapper[4840]: E0311 08:58:02.330575 4840 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:58:02 crc kubenswrapper[4840]: E0311 08:58:02.431504 4840 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:58:02 crc kubenswrapper[4840]: E0311 08:58:02.532256 4840 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:58:02 crc kubenswrapper[4840]: E0311 08:58:02.633186 4840 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:58:02 crc kubenswrapper[4840]: E0311 08:58:02.734331 4840 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:58:02 crc kubenswrapper[4840]: E0311 08:58:02.835307 4840 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:58:02 crc kubenswrapper[4840]: E0311 08:58:02.935595 4840 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:58:03 crc kubenswrapper[4840]: E0311 08:58:03.036752 4840 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:58:03 crc kubenswrapper[4840]: I0311 08:58:03.059174 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:58:03 crc kubenswrapper[4840]: I0311 08:58:03.060545 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:03 crc kubenswrapper[4840]: I0311 08:58:03.060659 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:03 crc kubenswrapper[4840]: I0311 08:58:03.060682 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:03 crc kubenswrapper[4840]: I0311 08:58:03.061680 4840 scope.go:117] "RemoveContainer" containerID="58277efdbfe7c94ad785e7d31bb5ba7313d04bf930896d6ded66fd44dd6239b5" Mar 11 08:58:03 crc kubenswrapper[4840]: E0311 08:58:03.061963 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 11 08:58:03 crc kubenswrapper[4840]: E0311 08:58:03.137459 4840 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:58:03 crc kubenswrapper[4840]: E0311 08:58:03.238644 4840 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:58:03 crc kubenswrapper[4840]: E0311 08:58:03.339714 4840 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:58:03 crc kubenswrapper[4840]: E0311 08:58:03.440791 4840 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:58:03 crc kubenswrapper[4840]: E0311 08:58:03.541796 4840 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:58:03 crc kubenswrapper[4840]: E0311 08:58:03.642363 4840 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:58:03 crc kubenswrapper[4840]: E0311 08:58:03.743727 4840 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:58:03 crc kubenswrapper[4840]: E0311 08:58:03.845230 4840 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:58:03 crc kubenswrapper[4840]: E0311 08:58:03.946592 4840 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:58:04 crc kubenswrapper[4840]: E0311 08:58:04.046796 4840 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" 
not found" Mar 11 08:58:04 crc kubenswrapper[4840]: E0311 08:58:04.147585 4840 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:58:04 crc kubenswrapper[4840]: I0311 08:58:04.190329 4840 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Mar 11 08:58:04 crc kubenswrapper[4840]: E0311 08:58:04.248827 4840 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:58:04 crc kubenswrapper[4840]: E0311 08:58:04.349539 4840 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:58:04 crc kubenswrapper[4840]: E0311 08:58:04.450346 4840 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:58:04 crc kubenswrapper[4840]: E0311 08:58:04.551576 4840 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:58:04 crc kubenswrapper[4840]: E0311 08:58:04.652822 4840 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:58:04 crc kubenswrapper[4840]: E0311 08:58:04.753270 4840 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:58:04 crc kubenswrapper[4840]: E0311 08:58:04.854436 4840 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:58:04 crc kubenswrapper[4840]: E0311 08:58:04.954767 4840 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:58:05 crc kubenswrapper[4840]: E0311 08:58:05.055233 4840 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:58:05 crc kubenswrapper[4840]: E0311 08:58:05.156394 4840 kubelet_node_status.go:503] "Error getting the 
current node from lister" err="node \"crc\" not found" Mar 11 08:58:05 crc kubenswrapper[4840]: E0311 08:58:05.257026 4840 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:58:05 crc kubenswrapper[4840]: E0311 08:58:05.357281 4840 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:58:05 crc kubenswrapper[4840]: E0311 08:58:05.458254 4840 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:58:05 crc kubenswrapper[4840]: E0311 08:58:05.558417 4840 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:58:05 crc kubenswrapper[4840]: E0311 08:58:05.658813 4840 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:58:05 crc kubenswrapper[4840]: E0311 08:58:05.759862 4840 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:58:05 crc kubenswrapper[4840]: E0311 08:58:05.860317 4840 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:58:05 crc kubenswrapper[4840]: E0311 08:58:05.960525 4840 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:58:06 crc kubenswrapper[4840]: E0311 08:58:06.061025 4840 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:58:06 crc kubenswrapper[4840]: E0311 08:58:06.161760 4840 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:58:06 crc kubenswrapper[4840]: E0311 08:58:06.262214 4840 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:58:06 crc kubenswrapper[4840]: E0311 08:58:06.362871 4840 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:58:06 crc kubenswrapper[4840]: E0311 08:58:06.463437 4840 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:58:06 crc kubenswrapper[4840]: E0311 08:58:06.564940 4840 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:58:06 crc kubenswrapper[4840]: E0311 08:58:06.665712 4840 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:58:06 crc kubenswrapper[4840]: E0311 08:58:06.765884 4840 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:58:06 crc kubenswrapper[4840]: E0311 08:58:06.866863 4840 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:58:06 crc kubenswrapper[4840]: E0311 08:58:06.968220 4840 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:58:07 crc kubenswrapper[4840]: I0311 08:58:07.059567 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 11 08:58:07 crc kubenswrapper[4840]: I0311 08:58:07.061114 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:07 crc kubenswrapper[4840]: I0311 08:58:07.061168 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:07 crc kubenswrapper[4840]: I0311 08:58:07.061183 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:07 crc kubenswrapper[4840]: E0311 08:58:07.069198 4840 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:58:07 crc 
kubenswrapper[4840]: E0311 08:58:07.169431 4840 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:58:07 crc kubenswrapper[4840]: E0311 08:58:07.270489 4840 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:58:07 crc kubenswrapper[4840]: E0311 08:58:07.371283 4840 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:58:07 crc kubenswrapper[4840]: E0311 08:58:07.471699 4840 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:58:07 crc kubenswrapper[4840]: E0311 08:58:07.571816 4840 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:58:07 crc kubenswrapper[4840]: E0311 08:58:07.673492 4840 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:58:07 crc kubenswrapper[4840]: E0311 08:58:07.773664 4840 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:58:07 crc kubenswrapper[4840]: E0311 08:58:07.874656 4840 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:58:07 crc kubenswrapper[4840]: E0311 08:58:07.975648 4840 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:58:08 crc kubenswrapper[4840]: E0311 08:58:08.076109 4840 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:58:08 crc kubenswrapper[4840]: E0311 08:58:08.176647 4840 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:58:08 crc kubenswrapper[4840]: E0311 08:58:08.277785 4840 kubelet_node_status.go:503] "Error getting the current node from lister" 
err="node \"crc\" not found" Mar 11 08:58:08 crc kubenswrapper[4840]: E0311 08:58:08.378098 4840 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:58:08 crc kubenswrapper[4840]: I0311 08:58:08.386770 4840 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Mar 11 08:58:08 crc kubenswrapper[4840]: E0311 08:58:08.479087 4840 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:58:08 crc kubenswrapper[4840]: E0311 08:58:08.579258 4840 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:58:08 crc kubenswrapper[4840]: E0311 08:58:08.680416 4840 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 11 08:58:08 crc kubenswrapper[4840]: I0311 08:58:08.718518 4840 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Mar 11 08:58:08 crc kubenswrapper[4840]: I0311 08:58:08.784089 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:08 crc kubenswrapper[4840]: I0311 08:58:08.784135 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:08 crc kubenswrapper[4840]: I0311 08:58:08.784149 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:08 crc kubenswrapper[4840]: I0311 08:58:08.784171 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:08 crc kubenswrapper[4840]: I0311 08:58:08.784188 4840 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:08Z","lastTransitionTime":"2026-03-11T08:58:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:58:08 crc kubenswrapper[4840]: I0311 08:58:08.801180 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:08 crc kubenswrapper[4840]: I0311 08:58:08.801213 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:08 crc kubenswrapper[4840]: I0311 08:58:08.801221 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:08 crc kubenswrapper[4840]: I0311 08:58:08.801238 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:08 crc kubenswrapper[4840]: I0311 08:58:08.801249 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:08Z","lastTransitionTime":"2026-03-11T08:58:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:08 crc kubenswrapper[4840]: E0311 08:58:08.811502 4840 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:58:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:58:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:58:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:58:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b40dc5ac-6e20-4fe3-8d4f-1dab2691799c\\\",\\\"systemUUID\\\":\\\"e5bb6cc6-19d8-441f-bba6-b926930273a7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 11 08:58:08 crc kubenswrapper[4840]: I0311 08:58:08.815550 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:08 crc kubenswrapper[4840]: I0311 08:58:08.815586 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:08 crc kubenswrapper[4840]: I0311 08:58:08.815599 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:08 crc kubenswrapper[4840]: I0311 08:58:08.815624 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:08 crc kubenswrapper[4840]: I0311 08:58:08.815639 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:08Z","lastTransitionTime":"2026-03-11T08:58:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:08 crc kubenswrapper[4840]: E0311 08:58:08.825930 4840 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:58:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:58:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:58:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:58:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b40dc5ac-6e20-4fe3-8d4f-1dab2691799c\\\",\\\"systemUUID\\\":\\\"e5bb6cc6-19d8-441f-bba6-b926930273a7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 11 08:58:08 crc kubenswrapper[4840]: I0311 08:58:08.829497 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:08 crc kubenswrapper[4840]: I0311 08:58:08.829530 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:08 crc kubenswrapper[4840]: I0311 08:58:08.829545 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:08 crc kubenswrapper[4840]: I0311 08:58:08.829565 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:08 crc kubenswrapper[4840]: I0311 08:58:08.829581 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:08Z","lastTransitionTime":"2026-03-11T08:58:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:08 crc kubenswrapper[4840]: E0311 08:58:08.839680 4840 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:58:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:58:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:58:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:58:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b40dc5ac-6e20-4fe3-8d4f-1dab2691799c\\\",\\\"systemUUID\\\":\\\"e5bb6cc6-19d8-441f-bba6-b926930273a7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 11 08:58:08 crc kubenswrapper[4840]: I0311 08:58:08.844186 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:08 crc kubenswrapper[4840]: I0311 08:58:08.844245 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:08 crc kubenswrapper[4840]: I0311 08:58:08.844259 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:08 crc kubenswrapper[4840]: I0311 08:58:08.844280 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:08 crc kubenswrapper[4840]: I0311 08:58:08.844294 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:08Z","lastTransitionTime":"2026-03-11T08:58:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:08 crc kubenswrapper[4840]: E0311 08:58:08.855350 4840 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:58:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:58:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:58:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:58:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b40dc5ac-6e20-4fe3-8d4f-1dab2691799c\\\",\\\"systemUUID\\\":\\\"e5bb6cc6-19d8-441f-bba6-b926930273a7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 11 08:58:08 crc kubenswrapper[4840]: I0311 08:58:08.859307 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:08 crc kubenswrapper[4840]: I0311 08:58:08.859339 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:08 crc kubenswrapper[4840]: I0311 08:58:08.859348 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:08 crc kubenswrapper[4840]: I0311 08:58:08.859368 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:08 crc kubenswrapper[4840]: I0311 08:58:08.859380 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:08Z","lastTransitionTime":"2026-03-11T08:58:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:08 crc kubenswrapper[4840]: E0311 08:58:08.870653 4840 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:58:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:58:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:58:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:58:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b40dc5ac-6e20-4fe3-8d4f-1dab2691799c\\\",\\\"systemUUID\\\":\\\"e5bb6cc6-19d8-441f-bba6-b926930273a7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 11 08:58:08 crc kubenswrapper[4840]: E0311 08:58:08.870827 4840 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 11 08:58:08 crc kubenswrapper[4840]: I0311 08:58:08.886503 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:08 crc kubenswrapper[4840]: I0311 08:58:08.886577 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:08 crc kubenswrapper[4840]: I0311 08:58:08.886592 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:08 crc kubenswrapper[4840]: I0311 08:58:08.886619 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:08 crc kubenswrapper[4840]: I0311 08:58:08.886631 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:08Z","lastTransitionTime":"2026-03-11T08:58:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:08 crc kubenswrapper[4840]: I0311 08:58:08.989483 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:08 crc kubenswrapper[4840]: I0311 08:58:08.989532 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:08 crc kubenswrapper[4840]: I0311 08:58:08.989543 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:08 crc kubenswrapper[4840]: I0311 08:58:08.989563 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:08 crc kubenswrapper[4840]: I0311 08:58:08.989576 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:08Z","lastTransitionTime":"2026-03-11T08:58:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.068732 4840 apiserver.go:52] "Watching apiserver" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.074183 4840 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.074537 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"] Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.075023 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.075077 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:58:09 crc kubenswrapper[4840]: E0311 08:58:09.075104 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 11 08:58:09 crc kubenswrapper[4840]: E0311 08:58:09.075143 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.075019 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.075384 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.075733 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:58:09 crc kubenswrapper[4840]: E0311 08:58:09.075781 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.075844 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.081440 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.081440 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.082413 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.086448 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.086578 4840 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.086944 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.087193 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.087364 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.087398 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.093782 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.093828 4840 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.093837 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.093856 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.093869 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:09Z","lastTransitionTime":"2026-03-11T08:58:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.097308 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.097965 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.098020 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.098051 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.098076 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.098097 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.098118 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.098141 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.098164 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.098198 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.098222 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.098245 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.098267 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.098292 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.098317 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.098349 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.098373 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.098396 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.098425 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.098449 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Mar 11 
08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.098506 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.098530 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.098556 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.098584 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.098608 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.098634 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.098658 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.098680 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.098704 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.098730 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.098752 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 
08:58:09.098775 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.098800 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.099535 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.099764 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.099991 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). 
InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.100182 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.100267 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.102125 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.102153 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.100525 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.100782 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.102550 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.102884 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.102969 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.103186 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.103345 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.103373 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.103796 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.100247 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.098826 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.103989 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.101023 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.104066 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.101043 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.101097 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.100730 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.104135 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.104197 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.104242 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.104353 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.104665 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.104679 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.104917 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.105052 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.105178 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.105239 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.106849 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.105298 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.109271 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.110116 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.110193 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.110244 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.110282 4840 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.110321 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.110357 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.110392 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.110425 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.110458 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") 
pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.110632 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.110667 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.110702 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.110736 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.110774 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.110809 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.110845 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.110902 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.110967 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.111004 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.111039 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" 
(UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.111072 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.111108 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.111146 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.111203 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.111236 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.111269 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: 
\"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.111301 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.111332 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.111367 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.111399 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.111431 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 11 08:58:09 crc 
kubenswrapper[4840]: I0311 08:58:09.111494 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.111526 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.111557 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.111589 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.111623 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.111654 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.111684 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.111719 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.111755 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.111789 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.111820 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 
08:58:09.113526 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.114559 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.115581 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.116100 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.116760 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.117645 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.118573 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.118650 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.118831 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.118950 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.119293 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.119301 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.119659 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.119895 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.120010 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.120091 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.120107 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.120141 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.120175 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.120205 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.120340 4840 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.120362 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.120390 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.120425 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.120452 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Mar 11 08:58:09 crc 
kubenswrapper[4840]: I0311 08:58:09.120503 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.120532 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.120559 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.120585 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.120617 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.120690 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: 
\"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.120718 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.120743 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.120773 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.120799 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.120829 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: 
\"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.120859 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.120890 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.120921 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.120931 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.120949 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.121004 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.121054 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.121108 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.121239 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Mar 11 08:58:09 crc 
kubenswrapper[4840]: I0311 08:58:09.121262 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.121290 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.121343 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.121391 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.121442 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.121530 4840 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.121571 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.121581 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.121612 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.121639 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.121682 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.121718 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.121748 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.121774 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.121808 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.121889 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.121918 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.121940 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.121961 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.121983 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 11 08:58:09 crc 
kubenswrapper[4840]: I0311 08:58:09.121984 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.122004 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.122030 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.122053 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.122074 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.122097 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: 
\"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.122119 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.122144 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.122165 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.122192 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.122213 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.122240 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.122263 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.122283 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.122306 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.122329 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.122355 4840 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.122383 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.122407 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.122431 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.122453 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.122498 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" 
(UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.122518 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.122540 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.122560 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.122586 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.122608 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.122634 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: 
\"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.122658 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.122681 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.122704 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.122728 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.122755 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Mar 11 08:58:09 crc 
kubenswrapper[4840]: I0311 08:58:09.122778 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.122805 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.122828 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.122853 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.122875 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.122896 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.122924 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.122948 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.122970 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.125672 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.121997 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.122293 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.122434 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.122761 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.122988 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: E0311 08:58:09.123008 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-11 08:58:09.622975075 +0000 UTC m=+88.288645080 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.136314 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.136353 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.136388 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.136399 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.136623 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.123262 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.136675 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.123268 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.123597 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.123910 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.124400 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.124519 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.124707 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.124862 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.125342 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.125917 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.136780 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.126114 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.126420 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.126431 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.126676 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.127023 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.127530 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.129867 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.130167 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.130222 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.130542 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.130432 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.130581 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.131299 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.131425 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.131610 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.131849 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.131949 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.131972 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.132243 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.132223 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.132356 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.132373 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.132426 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.133633 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.133947 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.134013 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.134176 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.134437 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.135144 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.135206 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.135237 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.135898 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.135909 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.135965 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.135964 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.135987 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.136916 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.136946 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.137235 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.137301 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.137467 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.137522 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.137553 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.137716 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.137796 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.137985 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.138124 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.138139 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.138234 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.138401 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.136861 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.138561 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.138594 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.138627 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.138661 4840 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.138784 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.138826 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.138860 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.138893 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.138915 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: 
"bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.138929 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.138966 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.139003 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.139032 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.139032 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.139095 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.139120 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.139159 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.139195 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.139224 4840 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.139259 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.139377 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.139556 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.139596 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.139628 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.139656 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.139684 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.139716 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.139745 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.139779 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.139808 4840 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.139836 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.139866 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.137812 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.139908 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.140023 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.140071 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.140155 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.140198 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.140227 4840 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.140269 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.140300 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.140309 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.140329 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.140356 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.140385 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.140410 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.140435 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod 
\"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.140458 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.140500 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.140504 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.140605 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: E0311 08:58:09.140651 4840 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.140505 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.140652 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.140892 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: E0311 08:58:09.141119 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-03-11 08:58:09.641093996 +0000 UTC m=+88.306763821 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.141236 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.141817 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.141563 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.141631 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.141691 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.141527 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.141967 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.142006 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.142884 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.143297 4840 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Mar 11 08:58:09 crc kubenswrapper[4840]: E0311 08:58:09.144095 4840 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 11 08:58:09 crc kubenswrapper[4840]: E0311 08:58:09.144181 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-11 08:58:09.644147483 +0000 UTC m=+88.309817308 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.144354 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.144523 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.144850 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.145231 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.145375 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.145409 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.145475 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.145501 4840 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.145517 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.145529 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.145545 4840 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.145560 4840 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.145572 4840 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc 
kubenswrapper[4840]: I0311 08:58:09.145588 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.145601 4840 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.145617 4840 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.145629 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.145640 4840 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.145651 4840 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.145662 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.145672 4840 
reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.145682 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.145691 4840 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.145700 4840 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.145710 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.145720 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.145729 4840 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.145739 4840 reconciler_common.go:293] "Volume detached for volume \"console-config\" 
(UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.145748 4840 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.145759 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.145769 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.145781 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.145791 4840 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.145800 4840 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.145811 4840 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" 
DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.145821 4840 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.145830 4840 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.145840 4840 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.145849 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.145866 4840 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.145877 4840 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.145889 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.145901 
4840 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.145912 4840 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.145931 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.145942 4840 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.145953 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.145962 4840 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.145971 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.145980 4840 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.145994 4840 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.146004 4840 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.146014 4840 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.146023 4840 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.146034 4840 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.146044 4840 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.146055 4840 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" 
Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.146064 4840 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.146074 4840 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.146083 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.146094 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.146105 4840 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.146116 4840 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.146127 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.146136 4840 
reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.146146 4840 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.146156 4840 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.146164 4840 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.146176 4840 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.146188 4840 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.146201 4840 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.146211 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: 
\"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.146221 4840 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.146230 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.146238 4840 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.146246 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.146255 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.146265 4840 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.146273 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: 
\"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.146282 4840 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.146294 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.146307 4840 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.146317 4840 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.146325 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.146334 4840 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.146350 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" 
DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.146359 4840 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.146369 4840 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.146378 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.146386 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.146395 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.146404 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.146413 4840 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc 
kubenswrapper[4840]: I0311 08:58:09.146423 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.146432 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.146441 4840 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.146453 4840 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.146462 4840 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.146508 4840 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.146519 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.146528 4840 
reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.146539 4840 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.146549 4840 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.146558 4840 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.146567 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.146580 4840 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.146589 4840 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.146597 4840 reconciler_common.go:293] "Volume detached for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.146608 4840 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.146620 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.146632 4840 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.146642 4840 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.146652 4840 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.146661 4840 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.146670 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 11 
08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.146678 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.146687 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.146696 4840 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.146706 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.146716 4840 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.146725 4840 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.146735 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.146743 4840 reconciler_common.go:293] 
"Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.158794 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.160019 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.160353 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.160688 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.161144 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.161210 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.161315 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.149790 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.146752 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.161653 4840 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.161680 4840 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.161706 4840 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.161726 4840 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.161737 4840 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.161752 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.161763 
4840 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.161780 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.161796 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.161807 4840 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.161817 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.161832 4840 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.161843 4840 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.161854 4840 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.161868 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.161882 4840 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.161892 4840 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.161766 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.161992 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.162253 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.149986 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.162543 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.162874 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.162946 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.163270 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.164951 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.150616 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.163828 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: E0311 08:58:09.164822 4840 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.167547 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 11 08:58:09 crc kubenswrapper[4840]: E0311 08:58:09.167589 4840 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.165306 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.167681 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.165584 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.165941 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.165999 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.166604 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.167149 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: E0311 08:58:09.168912 4840 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 11 08:58:09 crc kubenswrapper[4840]: E0311 08:58:09.169173 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-11 08:58:09.669140028 +0000 UTC m=+88.334809843 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 11 08:58:09 crc kubenswrapper[4840]: E0311 08:58:09.169225 4840 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 11 08:58:09 crc kubenswrapper[4840]: E0311 08:58:09.169453 4840 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.169246 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 11 08:58:09 crc kubenswrapper[4840]: E0311 08:58:09.169575 4840 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.169414 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: 
"c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.170096 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: E0311 08:58:09.170346 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-11 08:58:09.670306208 +0000 UTC m=+88.335976013 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.171712 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.171762 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.172164 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.172236 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.172763 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.173325 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.173611 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.174335 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.174536 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.177243 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.182401 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.183031 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.183308 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.183554 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.183910 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.184142 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.184241 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.184774 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.185037 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.193618 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.197555 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.198803 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.198863 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.198877 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.198904 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.198917 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:09Z","lastTransitionTime":"2026-03-11T08:58:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.205878 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.206298 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.210677 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.219518 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.224046 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.239046 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.250140 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.262752 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.262928 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.262916 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: 
\"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.263294 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.263363 4840 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.263383 4840 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.263400 4840 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.263415 4840 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.263433 4840 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath 
\"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.263504 4840 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.263521 4840 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.263535 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.263549 4840 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.263583 4840 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.263607 4840 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.263647 4840 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.263662 4840 reconciler_common.go:293] 
"Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.263683 4840 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.263720 4840 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.263735 4840 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.263750 4840 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.263764 4840 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.263799 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.263813 4840 reconciler_common.go:293] "Volume detached for 
volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.263828 4840 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.263842 4840 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.263880 4840 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.263895 4840 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.263909 4840 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.263923 4840 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.263958 4840 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.263971 4840 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.263988 4840 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.264000 4840 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.264013 4840 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.264049 4840 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.264062 4840 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.264079 4840 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" 
DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.264092 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.264129 4840 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.264145 4840 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.264158 4840 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.264172 4840 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.264208 4840 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.264223 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.264237 4840 
reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.264250 4840 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.264285 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.264300 4840 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.264313 4840 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.264326 4840 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.264338 4840 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.264374 4840 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.264393 4840 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.264405 4840 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.264419 4840 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.264450 4840 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.264489 4840 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.264503 4840 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.264518 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: 
I0311 08:58:09.264531 4840 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.264544 4840 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.264577 4840 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.301029 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.301407 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.301516 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.301622 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.301679 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:09Z","lastTransitionTime":"2026-03-11T08:58:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.393387 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.407558 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.410582 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.410786 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.410877 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.410958 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:09Z","lastTransitionTime":"2026-03-11T08:58:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.425751 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.436096 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"80b734f6b40360e4b0a8f63edaf6882a4b0367b631233e9bbfb4fa272f770ef0"} Mar 11 08:58:09 crc kubenswrapper[4840]: W0311 08:58:09.448488 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-c77730b3ecb25aaa93437c25925f978260c51913537f2c4e048affb9bae94584 WatchSource:0}: Error finding container c77730b3ecb25aaa93437c25925f978260c51913537f2c4e048affb9bae94584: Status 404 returned error can't find the container with id c77730b3ecb25aaa93437c25925f978260c51913537f2c4e048affb9bae94584 Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.459450 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.512847 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.512877 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.512889 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.512906 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.512917 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:09Z","lastTransitionTime":"2026-03-11T08:58:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.616030 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.616060 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.616069 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.616088 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.616101 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:09Z","lastTransitionTime":"2026-03-11T08:58:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.668465 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.668586 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:58:09 crc kubenswrapper[4840]: E0311 08:58:09.668641 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-11 08:58:10.668602724 +0000 UTC m=+89.334272569 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:58:09 crc kubenswrapper[4840]: E0311 08:58:09.668692 4840 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 11 08:58:09 crc kubenswrapper[4840]: E0311 08:58:09.668732 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-11 08:58:10.668721797 +0000 UTC m=+89.334391622 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.668733 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:58:09 crc kubenswrapper[4840]: E0311 08:58:09.668842 4840 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 11 08:58:09 crc kubenswrapper[4840]: E0311 08:58:09.668890 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-11 08:58:10.668879411 +0000 UTC m=+89.334549226 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.719290 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.719369 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.719382 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.719399 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.719413 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:09Z","lastTransitionTime":"2026-03-11T08:58:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.769213 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.769307 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 08:58:09 crc kubenswrapper[4840]: E0311 08:58:09.769442 4840 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 11 08:58:09 crc kubenswrapper[4840]: E0311 08:58:09.769488 4840 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 11 08:58:09 crc kubenswrapper[4840]: E0311 08:58:09.769509 4840 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 11 08:58:09 crc kubenswrapper[4840]: E0311 08:58:09.769517 4840 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 11 08:58:09 crc kubenswrapper[4840]: E0311 08:58:09.769530 4840 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr 
for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 11 08:58:09 crc kubenswrapper[4840]: E0311 08:58:09.769538 4840 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 11 08:58:09 crc kubenswrapper[4840]: E0311 08:58:09.769607 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-11 08:58:10.769585951 +0000 UTC m=+89.435255776 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 11 08:58:09 crc kubenswrapper[4840]: E0311 08:58:09.769632 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-11 08:58:10.769621822 +0000 UTC m=+89.435291647 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.822038 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.822110 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.822132 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.822157 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.822177 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:09Z","lastTransitionTime":"2026-03-11T08:58:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.925528 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.925576 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.925590 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.925607 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:09 crc kubenswrapper[4840]: I0311 08:58:09.925617 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:09Z","lastTransitionTime":"2026-03-11T08:58:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:10 crc kubenswrapper[4840]: I0311 08:58:10.028696 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:10 crc kubenswrapper[4840]: I0311 08:58:10.028756 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:10 crc kubenswrapper[4840]: I0311 08:58:10.028775 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:10 crc kubenswrapper[4840]: I0311 08:58:10.028794 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:10 crc kubenswrapper[4840]: I0311 08:58:10.028808 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:10Z","lastTransitionTime":"2026-03-11T08:58:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:10 crc kubenswrapper[4840]: I0311 08:58:10.065587 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Mar 11 08:58:10 crc kubenswrapper[4840]: I0311 08:58:10.066267 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Mar 11 08:58:10 crc kubenswrapper[4840]: I0311 08:58:10.066949 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Mar 11 08:58:10 crc kubenswrapper[4840]: I0311 08:58:10.067580 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Mar 11 08:58:10 crc kubenswrapper[4840]: I0311 08:58:10.068229 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Mar 11 08:58:10 crc kubenswrapper[4840]: I0311 08:58:10.068856 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Mar 11 08:58:10 crc kubenswrapper[4840]: I0311 08:58:10.069586 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Mar 11 08:58:10 crc kubenswrapper[4840]: I0311 08:58:10.070244 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" 
path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Mar 11 08:58:10 crc kubenswrapper[4840]: I0311 08:58:10.070957 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Mar 11 08:58:10 crc kubenswrapper[4840]: I0311 08:58:10.071557 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Mar 11 08:58:10 crc kubenswrapper[4840]: I0311 08:58:10.072127 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Mar 11 08:58:10 crc kubenswrapper[4840]: I0311 08:58:10.072960 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Mar 11 08:58:10 crc kubenswrapper[4840]: I0311 08:58:10.073525 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Mar 11 08:58:10 crc kubenswrapper[4840]: I0311 08:58:10.074109 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Mar 11 08:58:10 crc kubenswrapper[4840]: I0311 08:58:10.076034 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Mar 11 08:58:10 crc kubenswrapper[4840]: I0311 08:58:10.077524 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" 
path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Mar 11 08:58:10 crc kubenswrapper[4840]: I0311 08:58:10.080525 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Mar 11 08:58:10 crc kubenswrapper[4840]: I0311 08:58:10.081622 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Mar 11 08:58:10 crc kubenswrapper[4840]: I0311 08:58:10.083133 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Mar 11 08:58:10 crc kubenswrapper[4840]: I0311 08:58:10.085861 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Mar 11 08:58:10 crc kubenswrapper[4840]: I0311 08:58:10.086653 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Mar 11 08:58:10 crc kubenswrapper[4840]: I0311 08:58:10.087232 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Mar 11 08:58:10 crc kubenswrapper[4840]: I0311 08:58:10.087708 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Mar 11 08:58:10 crc kubenswrapper[4840]: I0311 08:58:10.088401 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" 
path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Mar 11 08:58:10 crc kubenswrapper[4840]: I0311 08:58:10.088953 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Mar 11 08:58:10 crc kubenswrapper[4840]: I0311 08:58:10.089590 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Mar 11 08:58:10 crc kubenswrapper[4840]: I0311 08:58:10.090268 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Mar 11 08:58:10 crc kubenswrapper[4840]: I0311 08:58:10.090745 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Mar 11 08:58:10 crc kubenswrapper[4840]: I0311 08:58:10.091303 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Mar 11 08:58:10 crc kubenswrapper[4840]: I0311 08:58:10.091824 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Mar 11 08:58:10 crc kubenswrapper[4840]: I0311 08:58:10.092288 4840 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Mar 11 08:58:10 crc kubenswrapper[4840]: I0311 08:58:10.092401 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Mar 11 08:58:10 crc kubenswrapper[4840]: I0311 08:58:10.093887 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Mar 11 08:58:10 crc kubenswrapper[4840]: I0311 08:58:10.094448 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Mar 11 08:58:10 crc kubenswrapper[4840]: I0311 08:58:10.094978 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Mar 11 08:58:10 crc kubenswrapper[4840]: I0311 08:58:10.099122 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Mar 11 08:58:10 crc kubenswrapper[4840]: I0311 08:58:10.100257 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Mar 11 08:58:10 crc kubenswrapper[4840]: I0311 08:58:10.100901 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Mar 11 08:58:10 crc kubenswrapper[4840]: I0311 08:58:10.101963 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Mar 11 08:58:10 crc kubenswrapper[4840]: I0311 08:58:10.102686 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Mar 11 08:58:10 crc kubenswrapper[4840]: I0311 08:58:10.103193 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Mar 11 08:58:10 crc kubenswrapper[4840]: I0311 08:58:10.104218 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Mar 11 08:58:10 crc kubenswrapper[4840]: I0311 08:58:10.105307 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Mar 11 08:58:10 crc kubenswrapper[4840]: I0311 08:58:10.105981 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Mar 11 08:58:10 crc kubenswrapper[4840]: I0311 08:58:10.106822 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Mar 11 08:58:10 crc kubenswrapper[4840]: I0311 08:58:10.107408 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Mar 11 08:58:10 crc kubenswrapper[4840]: I0311 08:58:10.108733 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Mar 11 08:58:10 crc kubenswrapper[4840]: I0311 08:58:10.109995 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Mar 11 08:58:10 crc kubenswrapper[4840]: I0311 08:58:10.110920 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Mar 11 08:58:10 crc kubenswrapper[4840]: I0311 08:58:10.111417 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Mar 11 08:58:10 crc kubenswrapper[4840]: I0311 08:58:10.111900 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Mar 11 08:58:10 crc kubenswrapper[4840]: I0311 08:58:10.112841 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Mar 11 08:58:10 crc kubenswrapper[4840]: I0311 08:58:10.113679 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Mar 11 08:58:10 crc kubenswrapper[4840]: I0311 08:58:10.114536 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Mar 11 08:58:10 crc kubenswrapper[4840]: I0311 08:58:10.132299 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:10 crc kubenswrapper[4840]: I0311 08:58:10.132341 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:10 crc 
kubenswrapper[4840]: I0311 08:58:10.132353 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:10 crc kubenswrapper[4840]: I0311 08:58:10.132370 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:10 crc kubenswrapper[4840]: I0311 08:58:10.132382 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:10Z","lastTransitionTime":"2026-03-11T08:58:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:58:10 crc kubenswrapper[4840]: I0311 08:58:10.234809 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:10 crc kubenswrapper[4840]: I0311 08:58:10.234841 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:10 crc kubenswrapper[4840]: I0311 08:58:10.234849 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:10 crc kubenswrapper[4840]: I0311 08:58:10.234861 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:10 crc kubenswrapper[4840]: I0311 08:58:10.234870 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:10Z","lastTransitionTime":"2026-03-11T08:58:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:10 crc kubenswrapper[4840]: I0311 08:58:10.336900 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:10 crc kubenswrapper[4840]: I0311 08:58:10.336935 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:10 crc kubenswrapper[4840]: I0311 08:58:10.336944 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:10 crc kubenswrapper[4840]: I0311 08:58:10.336957 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:10 crc kubenswrapper[4840]: I0311 08:58:10.336966 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:10Z","lastTransitionTime":"2026-03-11T08:58:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:10 crc kubenswrapper[4840]: I0311 08:58:10.438689 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:10 crc kubenswrapper[4840]: I0311 08:58:10.438760 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:10 crc kubenswrapper[4840]: I0311 08:58:10.438778 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:10 crc kubenswrapper[4840]: I0311 08:58:10.438802 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:10 crc kubenswrapper[4840]: I0311 08:58:10.438820 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:10Z","lastTransitionTime":"2026-03-11T08:58:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:10 crc kubenswrapper[4840]: I0311 08:58:10.441042 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"6a735ab91afbbc50a948e293cb4907a10212b9a77c9e7506e21d75a4de4c74c6"} Mar 11 08:58:10 crc kubenswrapper[4840]: I0311 08:58:10.441101 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"a8a498e6ab267b758bd2ba69dcbd7e6af3089b6c38bbf50dea80f1304c2190f7"} Mar 11 08:58:10 crc kubenswrapper[4840]: I0311 08:58:10.441115 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"c77730b3ecb25aaa93437c25925f978260c51913537f2c4e048affb9bae94584"} Mar 11 08:58:10 crc kubenswrapper[4840]: I0311 08:58:10.448011 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"a26aaac35b1d936b4ca0b8a96fccd108317a1456b2bac1c21eb28017ac6fd32c"} Mar 11 08:58:10 crc kubenswrapper[4840]: I0311 08:58:10.449113 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"930f6ac38038939f758293b28d5ef83d18f8fababbdc82c64b43e001eec81664"} Mar 11 08:58:10 crc kubenswrapper[4840]: I0311 08:58:10.462253 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:10Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:10 crc kubenswrapper[4840]: I0311 08:58:10.478826 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:10Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:10 crc kubenswrapper[4840]: I0311 08:58:10.496038 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:10Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:10 crc kubenswrapper[4840]: I0311 08:58:10.513996 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:10Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:10 crc kubenswrapper[4840]: I0311 08:58:10.528707 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:10Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:10 crc kubenswrapper[4840]: I0311 08:58:10.540882 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:10 crc kubenswrapper[4840]: I0311 08:58:10.540931 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:10 crc kubenswrapper[4840]: I0311 08:58:10.540941 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:10 crc kubenswrapper[4840]: I0311 08:58:10.540961 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:10 crc kubenswrapper[4840]: I0311 08:58:10.540972 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:10Z","lastTransitionTime":"2026-03-11T08:58:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:58:10 crc kubenswrapper[4840]: I0311 08:58:10.544403 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a735ab91afbbc50a948e293cb4907a10212b9a77c9e7506e21d75a4de4c74c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://a8a498e6ab267b758bd2ba69dcbd7e6af3089b6c38bbf50dea80f1304c2190f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:10Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:10 crc kubenswrapper[4840]: I0311 08:58:10.561379 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a26aaac35b1d936b4ca0b8a96fccd108317a1456b2bac1c21eb28017ac6fd32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:10Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:10 crc kubenswrapper[4840]: I0311 08:58:10.578354 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:10Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:10 crc kubenswrapper[4840]: I0311 08:58:10.597728 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:10Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:10 crc kubenswrapper[4840]: I0311 08:58:10.613540 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a735ab91afbbc50a948e293cb4907a10212b9a77c9e7506e21d75a4de4c74c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a498e6ab267b758bd2ba69dcbd7e6af3089b6c38bbf50dea80f1304c2190f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:10Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:10 crc kubenswrapper[4840]: I0311 08:58:10.631604 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:10Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:10 crc kubenswrapper[4840]: I0311 08:58:10.644352 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:10 crc kubenswrapper[4840]: I0311 
08:58:10.644412 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:10 crc kubenswrapper[4840]: I0311 08:58:10.644426 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:10 crc kubenswrapper[4840]: I0311 08:58:10.644451 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:10 crc kubenswrapper[4840]: I0311 08:58:10.644490 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:10Z","lastTransitionTime":"2026-03-11T08:58:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:58:10 crc kubenswrapper[4840]: I0311 08:58:10.650373 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:10Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:10 crc kubenswrapper[4840]: I0311 08:58:10.677780 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 11 08:58:10 crc kubenswrapper[4840]: I0311 08:58:10.677883 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:58:10 crc kubenswrapper[4840]: I0311 08:58:10.677938 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:58:10 crc kubenswrapper[4840]: E0311 08:58:10.678040 4840 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 11 08:58:10 crc kubenswrapper[4840]: E0311 08:58:10.678067 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-11 08:58:12.678022421 +0000 UTC m=+91.343692246 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:58:10 crc kubenswrapper[4840]: E0311 08:58:10.678126 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-11 08:58:12.678106284 +0000 UTC m=+91.343776279 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 11 08:58:10 crc kubenswrapper[4840]: E0311 08:58:10.678177 4840 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 11 08:58:10 crc kubenswrapper[4840]: E0311 08:58:10.678297 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-11 08:58:12.678270548 +0000 UTC m=+91.343940533 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 11 08:58:10 crc kubenswrapper[4840]: I0311 08:58:10.748056 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:10 crc kubenswrapper[4840]: I0311 08:58:10.748088 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:10 crc kubenswrapper[4840]: I0311 08:58:10.748096 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:10 crc kubenswrapper[4840]: I0311 08:58:10.748112 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:10 crc kubenswrapper[4840]: I0311 08:58:10.748123 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:10Z","lastTransitionTime":"2026-03-11T08:58:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:10 crc kubenswrapper[4840]: I0311 08:58:10.779177 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:58:10 crc kubenswrapper[4840]: I0311 08:58:10.779216 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 08:58:10 crc kubenswrapper[4840]: E0311 08:58:10.779332 4840 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 11 08:58:10 crc kubenswrapper[4840]: E0311 08:58:10.779361 4840 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 11 08:58:10 crc kubenswrapper[4840]: E0311 08:58:10.779372 4840 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 11 08:58:10 crc kubenswrapper[4840]: E0311 08:58:10.779444 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr 
podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-11 08:58:12.779401698 +0000 UTC m=+91.445071513 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 11 08:58:10 crc kubenswrapper[4840]: E0311 08:58:10.779332 4840 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 11 08:58:10 crc kubenswrapper[4840]: E0311 08:58:10.779479 4840 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 11 08:58:10 crc kubenswrapper[4840]: E0311 08:58:10.779489 4840 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 11 08:58:10 crc kubenswrapper[4840]: E0311 08:58:10.779515 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-11 08:58:12.779507401 +0000 UTC m=+91.445177216 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 11 08:58:10 crc kubenswrapper[4840]: I0311 08:58:10.850300 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:10 crc kubenswrapper[4840]: I0311 08:58:10.850351 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:10 crc kubenswrapper[4840]: I0311 08:58:10.850379 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:10 crc kubenswrapper[4840]: I0311 08:58:10.850401 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:10 crc kubenswrapper[4840]: I0311 08:58:10.850413 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:10Z","lastTransitionTime":"2026-03-11T08:58:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:10 crc kubenswrapper[4840]: I0311 08:58:10.953646 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:10 crc kubenswrapper[4840]: I0311 08:58:10.953751 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:10 crc kubenswrapper[4840]: I0311 08:58:10.953963 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:10 crc kubenswrapper[4840]: I0311 08:58:10.953982 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:10 crc kubenswrapper[4840]: I0311 08:58:10.953996 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:10Z","lastTransitionTime":"2026-03-11T08:58:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:58:11 crc kubenswrapper[4840]: I0311 08:58:11.059487 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:58:11 crc kubenswrapper[4840]: I0311 08:58:11.059690 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:11 crc kubenswrapper[4840]: E0311 08:58:11.059700 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 11 08:58:11 crc kubenswrapper[4840]: I0311 08:58:11.059720 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:11 crc kubenswrapper[4840]: I0311 08:58:11.059733 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:11 crc kubenswrapper[4840]: I0311 08:58:11.059736 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 08:58:11 crc kubenswrapper[4840]: I0311 08:58:11.059757 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:11 crc kubenswrapper[4840]: I0311 08:58:11.059777 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:58:11 crc kubenswrapper[4840]: I0311 08:58:11.059773 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:11Z","lastTransitionTime":"2026-03-11T08:58:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:58:11 crc kubenswrapper[4840]: E0311 08:58:11.059931 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 11 08:58:11 crc kubenswrapper[4840]: E0311 08:58:11.059999 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 11 08:58:11 crc kubenswrapper[4840]: I0311 08:58:11.162741 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:11 crc kubenswrapper[4840]: I0311 08:58:11.162773 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:11 crc kubenswrapper[4840]: I0311 08:58:11.162782 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:11 crc kubenswrapper[4840]: I0311 08:58:11.162796 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:11 crc kubenswrapper[4840]: I0311 08:58:11.162806 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:11Z","lastTransitionTime":"2026-03-11T08:58:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:11 crc kubenswrapper[4840]: I0311 08:58:11.265420 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:11 crc kubenswrapper[4840]: I0311 08:58:11.265483 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:11 crc kubenswrapper[4840]: I0311 08:58:11.265495 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:11 crc kubenswrapper[4840]: I0311 08:58:11.265511 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:11 crc kubenswrapper[4840]: I0311 08:58:11.265522 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:11Z","lastTransitionTime":"2026-03-11T08:58:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:11 crc kubenswrapper[4840]: I0311 08:58:11.368710 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:11 crc kubenswrapper[4840]: I0311 08:58:11.368757 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:11 crc kubenswrapper[4840]: I0311 08:58:11.368771 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:11 crc kubenswrapper[4840]: I0311 08:58:11.368792 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:11 crc kubenswrapper[4840]: I0311 08:58:11.368807 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:11Z","lastTransitionTime":"2026-03-11T08:58:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:11 crc kubenswrapper[4840]: I0311 08:58:11.471093 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:11 crc kubenswrapper[4840]: I0311 08:58:11.471136 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:11 crc kubenswrapper[4840]: I0311 08:58:11.471148 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:11 crc kubenswrapper[4840]: I0311 08:58:11.471164 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:11 crc kubenswrapper[4840]: I0311 08:58:11.471176 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:11Z","lastTransitionTime":"2026-03-11T08:58:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:11 crc kubenswrapper[4840]: I0311 08:58:11.573525 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:11 crc kubenswrapper[4840]: I0311 08:58:11.573895 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:11 crc kubenswrapper[4840]: I0311 08:58:11.574028 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:11 crc kubenswrapper[4840]: I0311 08:58:11.574172 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:11 crc kubenswrapper[4840]: I0311 08:58:11.574289 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:11Z","lastTransitionTime":"2026-03-11T08:58:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:11 crc kubenswrapper[4840]: I0311 08:58:11.677248 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:11 crc kubenswrapper[4840]: I0311 08:58:11.677310 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:11 crc kubenswrapper[4840]: I0311 08:58:11.677325 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:11 crc kubenswrapper[4840]: I0311 08:58:11.677347 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:11 crc kubenswrapper[4840]: I0311 08:58:11.677362 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:11Z","lastTransitionTime":"2026-03-11T08:58:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:11 crc kubenswrapper[4840]: I0311 08:58:11.780559 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:11 crc kubenswrapper[4840]: I0311 08:58:11.780612 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:11 crc kubenswrapper[4840]: I0311 08:58:11.780625 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:11 crc kubenswrapper[4840]: I0311 08:58:11.780639 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:11 crc kubenswrapper[4840]: I0311 08:58:11.780649 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:11Z","lastTransitionTime":"2026-03-11T08:58:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:11 crc kubenswrapper[4840]: I0311 08:58:11.883995 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:11 crc kubenswrapper[4840]: I0311 08:58:11.884044 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:11 crc kubenswrapper[4840]: I0311 08:58:11.884060 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:11 crc kubenswrapper[4840]: I0311 08:58:11.884080 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:11 crc kubenswrapper[4840]: I0311 08:58:11.884096 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:11Z","lastTransitionTime":"2026-03-11T08:58:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:11 crc kubenswrapper[4840]: I0311 08:58:11.986076 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:11 crc kubenswrapper[4840]: I0311 08:58:11.986145 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:11 crc kubenswrapper[4840]: I0311 08:58:11.986170 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:11 crc kubenswrapper[4840]: I0311 08:58:11.986200 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:11 crc kubenswrapper[4840]: I0311 08:58:11.986224 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:11Z","lastTransitionTime":"2026-03-11T08:58:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:12 crc kubenswrapper[4840]: I0311 08:58:12.074665 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:12Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:12 crc kubenswrapper[4840]: I0311 08:58:12.086019 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:12Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:12 crc kubenswrapper[4840]: I0311 08:58:12.088748 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:12 crc kubenswrapper[4840]: I0311 08:58:12.088780 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:12 crc kubenswrapper[4840]: I0311 08:58:12.088791 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:12 crc 
kubenswrapper[4840]: I0311 08:58:12.088805 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:12 crc kubenswrapper[4840]: I0311 08:58:12.088816 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:12Z","lastTransitionTime":"2026-03-11T08:58:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:58:12 crc kubenswrapper[4840]: I0311 08:58:12.098976 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a735ab91afbbc50a948e293cb4907a10212b9a77c9e7506e21d75a4de4c74c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a498e6ab267b758bd2ba69dcbd7e6af3089b6c38bbf50dea80f1304c2190f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:12Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:12 crc kubenswrapper[4840]: I0311 
08:58:12.110397 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:12Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:12 crc kubenswrapper[4840]: I0311 08:58:12.122592 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a26aaac35b1d936b4ca0b8a96fccd108317a1456b2bac1c21eb28017ac6fd32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:12Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:12 crc kubenswrapper[4840]: I0311 08:58:12.136193 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:12Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:12 crc kubenswrapper[4840]: I0311 08:58:12.192269 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:12 crc kubenswrapper[4840]: I0311 08:58:12.192346 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:12 crc kubenswrapper[4840]: I0311 08:58:12.192364 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:12 crc kubenswrapper[4840]: I0311 08:58:12.192389 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:12 crc kubenswrapper[4840]: I0311 08:58:12.192410 4840 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:12Z","lastTransitionTime":"2026-03-11T08:58:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:58:12 crc kubenswrapper[4840]: I0311 08:58:12.294876 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:12 crc kubenswrapper[4840]: I0311 08:58:12.294947 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:12 crc kubenswrapper[4840]: I0311 08:58:12.294958 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:12 crc kubenswrapper[4840]: I0311 08:58:12.294974 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:12 crc kubenswrapper[4840]: I0311 08:58:12.294984 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:12Z","lastTransitionTime":"2026-03-11T08:58:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:12 crc kubenswrapper[4840]: I0311 08:58:12.398383 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:12 crc kubenswrapper[4840]: I0311 08:58:12.398497 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:12 crc kubenswrapper[4840]: I0311 08:58:12.398519 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:12 crc kubenswrapper[4840]: I0311 08:58:12.398556 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:12 crc kubenswrapper[4840]: I0311 08:58:12.398577 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:12Z","lastTransitionTime":"2026-03-11T08:58:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:12 crc kubenswrapper[4840]: I0311 08:58:12.501425 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:12 crc kubenswrapper[4840]: I0311 08:58:12.501489 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:12 crc kubenswrapper[4840]: I0311 08:58:12.501501 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:12 crc kubenswrapper[4840]: I0311 08:58:12.501516 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:12 crc kubenswrapper[4840]: I0311 08:58:12.501527 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:12Z","lastTransitionTime":"2026-03-11T08:58:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:12 crc kubenswrapper[4840]: I0311 08:58:12.604061 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:12 crc kubenswrapper[4840]: I0311 08:58:12.604146 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:12 crc kubenswrapper[4840]: I0311 08:58:12.604158 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:12 crc kubenswrapper[4840]: I0311 08:58:12.604176 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:12 crc kubenswrapper[4840]: I0311 08:58:12.604186 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:12Z","lastTransitionTime":"2026-03-11T08:58:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:12 crc kubenswrapper[4840]: I0311 08:58:12.697226 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 11 08:58:12 crc kubenswrapper[4840]: I0311 08:58:12.697318 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:58:12 crc kubenswrapper[4840]: I0311 08:58:12.697396 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:58:12 crc kubenswrapper[4840]: E0311 08:58:12.697583 4840 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 11 08:58:12 crc kubenswrapper[4840]: E0311 08:58:12.697615 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-11 08:58:16.697572436 +0000 UTC m=+95.363242281 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:58:12 crc kubenswrapper[4840]: E0311 08:58:12.697691 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-11 08:58:16.697677098 +0000 UTC m=+95.363346953 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 11 08:58:12 crc kubenswrapper[4840]: E0311 08:58:12.697590 4840 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 11 08:58:12 crc kubenswrapper[4840]: E0311 08:58:12.697776 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-11 08:58:16.69775109 +0000 UTC m=+95.363420905 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 11 08:58:12 crc kubenswrapper[4840]: I0311 08:58:12.707693 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:12 crc kubenswrapper[4840]: I0311 08:58:12.707741 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:12 crc kubenswrapper[4840]: I0311 08:58:12.707758 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:12 crc kubenswrapper[4840]: I0311 08:58:12.707783 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:12 crc kubenswrapper[4840]: I0311 08:58:12.707800 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:12Z","lastTransitionTime":"2026-03-11T08:58:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:12 crc kubenswrapper[4840]: I0311 08:58:12.797966 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:58:12 crc kubenswrapper[4840]: I0311 08:58:12.798023 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 08:58:12 crc kubenswrapper[4840]: E0311 08:58:12.798171 4840 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 11 08:58:12 crc kubenswrapper[4840]: E0311 08:58:12.798216 4840 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 11 08:58:12 crc kubenswrapper[4840]: E0311 08:58:12.798230 4840 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 11 08:58:12 crc kubenswrapper[4840]: E0311 08:58:12.798293 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl 
podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-11 08:58:16.798269086 +0000 UTC m=+95.463938901 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 11 08:58:12 crc kubenswrapper[4840]: E0311 08:58:12.798173 4840 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 11 08:58:12 crc kubenswrapper[4840]: E0311 08:58:12.798363 4840 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 11 08:58:12 crc kubenswrapper[4840]: E0311 08:58:12.798382 4840 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 11 08:58:12 crc kubenswrapper[4840]: E0311 08:58:12.798446 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-11 08:58:16.79842827 +0000 UTC m=+95.464098125 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 11 08:58:12 crc kubenswrapper[4840]: I0311 08:58:12.810380 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:12 crc kubenswrapper[4840]: I0311 08:58:12.810407 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:12 crc kubenswrapper[4840]: I0311 08:58:12.810416 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:12 crc kubenswrapper[4840]: I0311 08:58:12.810428 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:12 crc kubenswrapper[4840]: I0311 08:58:12.810437 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:12Z","lastTransitionTime":"2026-03-11T08:58:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:12 crc kubenswrapper[4840]: I0311 08:58:12.912805 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:12 crc kubenswrapper[4840]: I0311 08:58:12.912913 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:12 crc kubenswrapper[4840]: I0311 08:58:12.912941 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:12 crc kubenswrapper[4840]: I0311 08:58:12.912976 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:12 crc kubenswrapper[4840]: I0311 08:58:12.913000 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:12Z","lastTransitionTime":"2026-03-11T08:58:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:13 crc kubenswrapper[4840]: I0311 08:58:13.015961 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:13 crc kubenswrapper[4840]: I0311 08:58:13.016037 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:13 crc kubenswrapper[4840]: I0311 08:58:13.016055 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:13 crc kubenswrapper[4840]: I0311 08:58:13.016107 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:13 crc kubenswrapper[4840]: I0311 08:58:13.016124 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:13Z","lastTransitionTime":"2026-03-11T08:58:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:58:13 crc kubenswrapper[4840]: I0311 08:58:13.059349 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:58:13 crc kubenswrapper[4840]: I0311 08:58:13.059393 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:58:13 crc kubenswrapper[4840]: I0311 08:58:13.059429 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 08:58:13 crc kubenswrapper[4840]: E0311 08:58:13.059515 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 11 08:58:13 crc kubenswrapper[4840]: E0311 08:58:13.059658 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 11 08:58:13 crc kubenswrapper[4840]: E0311 08:58:13.059779 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 11 08:58:13 crc kubenswrapper[4840]: I0311 08:58:13.118616 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:13 crc kubenswrapper[4840]: I0311 08:58:13.118695 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:13 crc kubenswrapper[4840]: I0311 08:58:13.118719 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:13 crc kubenswrapper[4840]: I0311 08:58:13.118753 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:13 crc kubenswrapper[4840]: I0311 08:58:13.118775 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:13Z","lastTransitionTime":"2026-03-11T08:58:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:13 crc kubenswrapper[4840]: I0311 08:58:13.221668 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:13 crc kubenswrapper[4840]: I0311 08:58:13.221709 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:13 crc kubenswrapper[4840]: I0311 08:58:13.221719 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:13 crc kubenswrapper[4840]: I0311 08:58:13.221734 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:13 crc kubenswrapper[4840]: I0311 08:58:13.221744 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:13Z","lastTransitionTime":"2026-03-11T08:58:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:13 crc kubenswrapper[4840]: I0311 08:58:13.324281 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:13 crc kubenswrapper[4840]: I0311 08:58:13.324327 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:13 crc kubenswrapper[4840]: I0311 08:58:13.324337 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:13 crc kubenswrapper[4840]: I0311 08:58:13.324353 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:13 crc kubenswrapper[4840]: I0311 08:58:13.324363 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:13Z","lastTransitionTime":"2026-03-11T08:58:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:13 crc kubenswrapper[4840]: I0311 08:58:13.426774 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:13 crc kubenswrapper[4840]: I0311 08:58:13.426822 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:13 crc kubenswrapper[4840]: I0311 08:58:13.426833 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:13 crc kubenswrapper[4840]: I0311 08:58:13.426853 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:13 crc kubenswrapper[4840]: I0311 08:58:13.426864 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:13Z","lastTransitionTime":"2026-03-11T08:58:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:13 crc kubenswrapper[4840]: I0311 08:58:13.457606 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"fb68ea79d13c044390345ef25093ba46a60cb10b989c5acd69c63cafc1c4631f"} Mar 11 08:58:13 crc kubenswrapper[4840]: I0311 08:58:13.479505 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a735ab91afbbc50a948e293cb4907a10212b9a77c9e7506e21d75a4de4c74c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"n
ame\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a498e6ab267b758bd2ba69dcbd7e6af3089b6c38bbf50dea80f1304c2190f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:13Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:13 crc kubenswrapper[4840]: I0311 08:58:13.500396 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:13Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:13 crc kubenswrapper[4840]: I0311 08:58:13.519317 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:13Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:13 crc kubenswrapper[4840]: I0311 08:58:13.530146 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:13 crc kubenswrapper[4840]: I0311 08:58:13.530192 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:13 crc 
kubenswrapper[4840]: I0311 08:58:13.530205 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:13 crc kubenswrapper[4840]: I0311 08:58:13.530222 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:13 crc kubenswrapper[4840]: I0311 08:58:13.530236 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:13Z","lastTransitionTime":"2026-03-11T08:58:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:58:13 crc kubenswrapper[4840]: I0311 08:58:13.538391 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:13Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:13 crc kubenswrapper[4840]: I0311 08:58:13.557938 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a26aaac35b1d936b4ca0b8a96fccd108317a1456b2bac1c21eb28017ac6fd32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:13Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:13 crc kubenswrapper[4840]: I0311 08:58:13.573074 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb68ea79d13c044390345ef25093ba46a60cb10b989c5acd69c63cafc1c4631f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:13Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:13 crc kubenswrapper[4840]: I0311 08:58:13.633251 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:13 crc kubenswrapper[4840]: I0311 08:58:13.633313 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:13 crc kubenswrapper[4840]: I0311 08:58:13.633332 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:13 crc kubenswrapper[4840]: I0311 08:58:13.633352 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:13 crc kubenswrapper[4840]: I0311 08:58:13.633364 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:13Z","lastTransitionTime":"2026-03-11T08:58:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:13 crc kubenswrapper[4840]: I0311 08:58:13.735670 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:13 crc kubenswrapper[4840]: I0311 08:58:13.735716 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:13 crc kubenswrapper[4840]: I0311 08:58:13.735729 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:13 crc kubenswrapper[4840]: I0311 08:58:13.735757 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:13 crc kubenswrapper[4840]: I0311 08:58:13.735771 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:13Z","lastTransitionTime":"2026-03-11T08:58:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:13 crc kubenswrapper[4840]: I0311 08:58:13.838085 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:13 crc kubenswrapper[4840]: I0311 08:58:13.838124 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:13 crc kubenswrapper[4840]: I0311 08:58:13.838135 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:13 crc kubenswrapper[4840]: I0311 08:58:13.838150 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:13 crc kubenswrapper[4840]: I0311 08:58:13.838160 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:13Z","lastTransitionTime":"2026-03-11T08:58:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:13 crc kubenswrapper[4840]: I0311 08:58:13.940644 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:13 crc kubenswrapper[4840]: I0311 08:58:13.940686 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:13 crc kubenswrapper[4840]: I0311 08:58:13.940697 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:13 crc kubenswrapper[4840]: I0311 08:58:13.940711 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:13 crc kubenswrapper[4840]: I0311 08:58:13.940721 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:13Z","lastTransitionTime":"2026-03-11T08:58:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:14 crc kubenswrapper[4840]: I0311 08:58:14.042692 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:14 crc kubenswrapper[4840]: I0311 08:58:14.042731 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:14 crc kubenswrapper[4840]: I0311 08:58:14.042740 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:14 crc kubenswrapper[4840]: I0311 08:58:14.042794 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:14 crc kubenswrapper[4840]: I0311 08:58:14.042803 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:14Z","lastTransitionTime":"2026-03-11T08:58:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:14 crc kubenswrapper[4840]: I0311 08:58:14.146086 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:14 crc kubenswrapper[4840]: I0311 08:58:14.146159 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:14 crc kubenswrapper[4840]: I0311 08:58:14.146180 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:14 crc kubenswrapper[4840]: I0311 08:58:14.146212 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:14 crc kubenswrapper[4840]: I0311 08:58:14.146233 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:14Z","lastTransitionTime":"2026-03-11T08:58:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:14 crc kubenswrapper[4840]: I0311 08:58:14.248723 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:14 crc kubenswrapper[4840]: I0311 08:58:14.248798 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:14 crc kubenswrapper[4840]: I0311 08:58:14.248817 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:14 crc kubenswrapper[4840]: I0311 08:58:14.248851 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:14 crc kubenswrapper[4840]: I0311 08:58:14.248874 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:14Z","lastTransitionTime":"2026-03-11T08:58:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:14 crc kubenswrapper[4840]: I0311 08:58:14.351628 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:14 crc kubenswrapper[4840]: I0311 08:58:14.351678 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:14 crc kubenswrapper[4840]: I0311 08:58:14.351689 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:14 crc kubenswrapper[4840]: I0311 08:58:14.351706 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:14 crc kubenswrapper[4840]: I0311 08:58:14.351721 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:14Z","lastTransitionTime":"2026-03-11T08:58:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:14 crc kubenswrapper[4840]: I0311 08:58:14.455189 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:14 crc kubenswrapper[4840]: I0311 08:58:14.455242 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:14 crc kubenswrapper[4840]: I0311 08:58:14.455254 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:14 crc kubenswrapper[4840]: I0311 08:58:14.455273 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:14 crc kubenswrapper[4840]: I0311 08:58:14.455286 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:14Z","lastTransitionTime":"2026-03-11T08:58:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:14 crc kubenswrapper[4840]: I0311 08:58:14.558744 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:14 crc kubenswrapper[4840]: I0311 08:58:14.558840 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:14 crc kubenswrapper[4840]: I0311 08:58:14.558867 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:14 crc kubenswrapper[4840]: I0311 08:58:14.558904 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:14 crc kubenswrapper[4840]: I0311 08:58:14.558929 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:14Z","lastTransitionTime":"2026-03-11T08:58:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:14 crc kubenswrapper[4840]: I0311 08:58:14.662232 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:14 crc kubenswrapper[4840]: I0311 08:58:14.662309 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:14 crc kubenswrapper[4840]: I0311 08:58:14.662321 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:14 crc kubenswrapper[4840]: I0311 08:58:14.662339 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:14 crc kubenswrapper[4840]: I0311 08:58:14.662353 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:14Z","lastTransitionTime":"2026-03-11T08:58:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:14 crc kubenswrapper[4840]: I0311 08:58:14.765432 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:14 crc kubenswrapper[4840]: I0311 08:58:14.765519 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:14 crc kubenswrapper[4840]: I0311 08:58:14.765531 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:14 crc kubenswrapper[4840]: I0311 08:58:14.765557 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:14 crc kubenswrapper[4840]: I0311 08:58:14.765570 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:14Z","lastTransitionTime":"2026-03-11T08:58:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:14 crc kubenswrapper[4840]: I0311 08:58:14.868673 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:14 crc kubenswrapper[4840]: I0311 08:58:14.868720 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:14 crc kubenswrapper[4840]: I0311 08:58:14.868732 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:14 crc kubenswrapper[4840]: I0311 08:58:14.868748 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:14 crc kubenswrapper[4840]: I0311 08:58:14.868760 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:14Z","lastTransitionTime":"2026-03-11T08:58:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:14 crc kubenswrapper[4840]: I0311 08:58:14.971178 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:14 crc kubenswrapper[4840]: I0311 08:58:14.971225 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:14 crc kubenswrapper[4840]: I0311 08:58:14.971242 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:14 crc kubenswrapper[4840]: I0311 08:58:14.971262 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:14 crc kubenswrapper[4840]: I0311 08:58:14.971274 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:14Z","lastTransitionTime":"2026-03-11T08:58:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:58:15 crc kubenswrapper[4840]: I0311 08:58:15.060032 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:58:15 crc kubenswrapper[4840]: E0311 08:58:15.060214 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 11 08:58:15 crc kubenswrapper[4840]: I0311 08:58:15.060059 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 08:58:15 crc kubenswrapper[4840]: E0311 08:58:15.060299 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 11 08:58:15 crc kubenswrapper[4840]: I0311 08:58:15.060050 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:58:15 crc kubenswrapper[4840]: E0311 08:58:15.060371 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 11 08:58:15 crc kubenswrapper[4840]: I0311 08:58:15.074112 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:15 crc kubenswrapper[4840]: I0311 08:58:15.074146 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:15 crc kubenswrapper[4840]: I0311 08:58:15.074157 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:15 crc kubenswrapper[4840]: I0311 08:58:15.074170 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:15 crc kubenswrapper[4840]: I0311 08:58:15.074179 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:15Z","lastTransitionTime":"2026-03-11T08:58:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:15 crc kubenswrapper[4840]: I0311 08:58:15.177848 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:15 crc kubenswrapper[4840]: I0311 08:58:15.177907 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:15 crc kubenswrapper[4840]: I0311 08:58:15.177926 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:15 crc kubenswrapper[4840]: I0311 08:58:15.177953 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:15 crc kubenswrapper[4840]: I0311 08:58:15.177971 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:15Z","lastTransitionTime":"2026-03-11T08:58:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:15 crc kubenswrapper[4840]: I0311 08:58:15.281934 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:15 crc kubenswrapper[4840]: I0311 08:58:15.281989 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:15 crc kubenswrapper[4840]: I0311 08:58:15.282002 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:15 crc kubenswrapper[4840]: I0311 08:58:15.282025 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:15 crc kubenswrapper[4840]: I0311 08:58:15.282041 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:15Z","lastTransitionTime":"2026-03-11T08:58:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:15 crc kubenswrapper[4840]: I0311 08:58:15.385156 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:15 crc kubenswrapper[4840]: I0311 08:58:15.385215 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:15 crc kubenswrapper[4840]: I0311 08:58:15.385237 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:15 crc kubenswrapper[4840]: I0311 08:58:15.385271 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:15 crc kubenswrapper[4840]: I0311 08:58:15.385305 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:15Z","lastTransitionTime":"2026-03-11T08:58:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:15 crc kubenswrapper[4840]: I0311 08:58:15.488764 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:15 crc kubenswrapper[4840]: I0311 08:58:15.488807 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:15 crc kubenswrapper[4840]: I0311 08:58:15.488816 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:15 crc kubenswrapper[4840]: I0311 08:58:15.488857 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:15 crc kubenswrapper[4840]: I0311 08:58:15.488872 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:15Z","lastTransitionTime":"2026-03-11T08:58:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:15 crc kubenswrapper[4840]: I0311 08:58:15.592129 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:15 crc kubenswrapper[4840]: I0311 08:58:15.592295 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:15 crc kubenswrapper[4840]: I0311 08:58:15.592319 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:15 crc kubenswrapper[4840]: I0311 08:58:15.592344 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:15 crc kubenswrapper[4840]: I0311 08:58:15.592363 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:15Z","lastTransitionTime":"2026-03-11T08:58:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:15 crc kubenswrapper[4840]: I0311 08:58:15.695779 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:15 crc kubenswrapper[4840]: I0311 08:58:15.695850 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:15 crc kubenswrapper[4840]: I0311 08:58:15.695864 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:15 crc kubenswrapper[4840]: I0311 08:58:15.695888 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:15 crc kubenswrapper[4840]: I0311 08:58:15.695904 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:15Z","lastTransitionTime":"2026-03-11T08:58:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:15 crc kubenswrapper[4840]: I0311 08:58:15.799888 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:15 crc kubenswrapper[4840]: I0311 08:58:15.799949 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:15 crc kubenswrapper[4840]: I0311 08:58:15.799966 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:15 crc kubenswrapper[4840]: I0311 08:58:15.799992 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:15 crc kubenswrapper[4840]: I0311 08:58:15.800009 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:15Z","lastTransitionTime":"2026-03-11T08:58:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:15 crc kubenswrapper[4840]: I0311 08:58:15.902883 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:15 crc kubenswrapper[4840]: I0311 08:58:15.902967 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:15 crc kubenswrapper[4840]: I0311 08:58:15.902987 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:15 crc kubenswrapper[4840]: I0311 08:58:15.903012 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:15 crc kubenswrapper[4840]: I0311 08:58:15.903030 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:15Z","lastTransitionTime":"2026-03-11T08:58:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:16 crc kubenswrapper[4840]: I0311 08:58:16.006184 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:16 crc kubenswrapper[4840]: I0311 08:58:16.006254 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:16 crc kubenswrapper[4840]: I0311 08:58:16.006276 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:16 crc kubenswrapper[4840]: I0311 08:58:16.006304 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:16 crc kubenswrapper[4840]: I0311 08:58:16.006328 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:16Z","lastTransitionTime":"2026-03-11T08:58:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:16 crc kubenswrapper[4840]: I0311 08:58:16.109579 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:16 crc kubenswrapper[4840]: I0311 08:58:16.109641 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:16 crc kubenswrapper[4840]: I0311 08:58:16.109660 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:16 crc kubenswrapper[4840]: I0311 08:58:16.109688 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:16 crc kubenswrapper[4840]: I0311 08:58:16.109742 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:16Z","lastTransitionTime":"2026-03-11T08:58:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:16 crc kubenswrapper[4840]: I0311 08:58:16.212285 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:16 crc kubenswrapper[4840]: I0311 08:58:16.212322 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:16 crc kubenswrapper[4840]: I0311 08:58:16.212334 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:16 crc kubenswrapper[4840]: I0311 08:58:16.212354 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:16 crc kubenswrapper[4840]: I0311 08:58:16.212365 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:16Z","lastTransitionTime":"2026-03-11T08:58:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:16 crc kubenswrapper[4840]: I0311 08:58:16.314557 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:16 crc kubenswrapper[4840]: I0311 08:58:16.314604 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:16 crc kubenswrapper[4840]: I0311 08:58:16.314612 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:16 crc kubenswrapper[4840]: I0311 08:58:16.314627 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:16 crc kubenswrapper[4840]: I0311 08:58:16.314638 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:16Z","lastTransitionTime":"2026-03-11T08:58:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:16 crc kubenswrapper[4840]: I0311 08:58:16.417448 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:16 crc kubenswrapper[4840]: I0311 08:58:16.417511 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:16 crc kubenswrapper[4840]: I0311 08:58:16.417521 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:16 crc kubenswrapper[4840]: I0311 08:58:16.417538 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:16 crc kubenswrapper[4840]: I0311 08:58:16.417547 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:16Z","lastTransitionTime":"2026-03-11T08:58:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:16 crc kubenswrapper[4840]: I0311 08:58:16.520638 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:16 crc kubenswrapper[4840]: I0311 08:58:16.520715 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:16 crc kubenswrapper[4840]: I0311 08:58:16.520743 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:16 crc kubenswrapper[4840]: I0311 08:58:16.520771 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:16 crc kubenswrapper[4840]: I0311 08:58:16.520794 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:16Z","lastTransitionTime":"2026-03-11T08:58:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:16 crc kubenswrapper[4840]: I0311 08:58:16.623635 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:16 crc kubenswrapper[4840]: I0311 08:58:16.623695 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:16 crc kubenswrapper[4840]: I0311 08:58:16.623712 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:16 crc kubenswrapper[4840]: I0311 08:58:16.623731 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:16 crc kubenswrapper[4840]: I0311 08:58:16.623742 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:16Z","lastTransitionTime":"2026-03-11T08:58:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:16 crc kubenswrapper[4840]: I0311 08:58:16.726339 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:16 crc kubenswrapper[4840]: I0311 08:58:16.726388 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:16 crc kubenswrapper[4840]: I0311 08:58:16.726400 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:16 crc kubenswrapper[4840]: I0311 08:58:16.726416 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:16 crc kubenswrapper[4840]: I0311 08:58:16.726426 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:16Z","lastTransitionTime":"2026-03-11T08:58:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:16 crc kubenswrapper[4840]: I0311 08:58:16.734962 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 11 08:58:16 crc kubenswrapper[4840]: I0311 08:58:16.735074 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:58:16 crc kubenswrapper[4840]: I0311 08:58:16.735135 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:58:16 crc kubenswrapper[4840]: E0311 08:58:16.735179 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-11 08:58:24.735156596 +0000 UTC m=+103.400826411 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:58:16 crc kubenswrapper[4840]: E0311 08:58:16.735272 4840 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 11 08:58:16 crc kubenswrapper[4840]: E0311 08:58:16.735309 4840 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 11 08:58:16 crc kubenswrapper[4840]: E0311 08:58:16.735359 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-11 08:58:24.73533664 +0000 UTC m=+103.401006495 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 11 08:58:16 crc kubenswrapper[4840]: E0311 08:58:16.735420 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-03-11 08:58:24.735393132 +0000 UTC m=+103.401062957 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 11 08:58:16 crc kubenswrapper[4840]: I0311 08:58:16.829236 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:16 crc kubenswrapper[4840]: I0311 08:58:16.829333 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:16 crc kubenswrapper[4840]: I0311 08:58:16.829346 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:16 crc kubenswrapper[4840]: I0311 08:58:16.829379 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:16 crc kubenswrapper[4840]: I0311 08:58:16.829390 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:16Z","lastTransitionTime":"2026-03-11T08:58:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:16 crc kubenswrapper[4840]: I0311 08:58:16.835779 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:58:16 crc kubenswrapper[4840]: I0311 08:58:16.835815 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 08:58:16 crc kubenswrapper[4840]: E0311 08:58:16.835951 4840 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 11 08:58:16 crc kubenswrapper[4840]: E0311 08:58:16.835975 4840 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 11 08:58:16 crc kubenswrapper[4840]: E0311 08:58:16.835988 4840 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 11 08:58:16 crc kubenswrapper[4840]: E0311 08:58:16.836041 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl 
podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-11 08:58:24.83602283 +0000 UTC m=+103.501692645 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 11 08:58:16 crc kubenswrapper[4840]: E0311 08:58:16.836108 4840 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 11 08:58:16 crc kubenswrapper[4840]: E0311 08:58:16.836163 4840 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 11 08:58:16 crc kubenswrapper[4840]: E0311 08:58:16.836188 4840 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 11 08:58:16 crc kubenswrapper[4840]: E0311 08:58:16.836295 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-11 08:58:24.836262856 +0000 UTC m=+103.501932831 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 11 08:58:16 crc kubenswrapper[4840]: I0311 08:58:16.932553 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:16 crc kubenswrapper[4840]: I0311 08:58:16.932630 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:16 crc kubenswrapper[4840]: I0311 08:58:16.932645 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:16 crc kubenswrapper[4840]: I0311 08:58:16.932669 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:16 crc kubenswrapper[4840]: I0311 08:58:16.932684 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:16Z","lastTransitionTime":"2026-03-11T08:58:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:17 crc kubenswrapper[4840]: I0311 08:58:17.035720 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:17 crc kubenswrapper[4840]: I0311 08:58:17.035809 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:17 crc kubenswrapper[4840]: I0311 08:58:17.035828 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:17 crc kubenswrapper[4840]: I0311 08:58:17.035881 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:17 crc kubenswrapper[4840]: I0311 08:58:17.035901 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:17Z","lastTransitionTime":"2026-03-11T08:58:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:58:17 crc kubenswrapper[4840]: I0311 08:58:17.059704 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:58:17 crc kubenswrapper[4840]: I0311 08:58:17.059758 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:58:17 crc kubenswrapper[4840]: I0311 08:58:17.059801 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 08:58:17 crc kubenswrapper[4840]: E0311 08:58:17.059856 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 11 08:58:17 crc kubenswrapper[4840]: E0311 08:58:17.059990 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 11 08:58:17 crc kubenswrapper[4840]: E0311 08:58:17.060108 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 11 08:58:17 crc kubenswrapper[4840]: I0311 08:58:17.139627 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:17 crc kubenswrapper[4840]: I0311 08:58:17.139720 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:17 crc kubenswrapper[4840]: I0311 08:58:17.139735 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:17 crc kubenswrapper[4840]: I0311 08:58:17.139757 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:17 crc kubenswrapper[4840]: I0311 08:58:17.139769 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:17Z","lastTransitionTime":"2026-03-11T08:58:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:17 crc kubenswrapper[4840]: I0311 08:58:17.243023 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:17 crc kubenswrapper[4840]: I0311 08:58:17.243107 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:17 crc kubenswrapper[4840]: I0311 08:58:17.243134 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:17 crc kubenswrapper[4840]: I0311 08:58:17.243177 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:17 crc kubenswrapper[4840]: I0311 08:58:17.243205 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:17Z","lastTransitionTime":"2026-03-11T08:58:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:17 crc kubenswrapper[4840]: I0311 08:58:17.346780 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:17 crc kubenswrapper[4840]: I0311 08:58:17.346826 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:17 crc kubenswrapper[4840]: I0311 08:58:17.346842 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:17 crc kubenswrapper[4840]: I0311 08:58:17.346861 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:17 crc kubenswrapper[4840]: I0311 08:58:17.346875 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:17Z","lastTransitionTime":"2026-03-11T08:58:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:17 crc kubenswrapper[4840]: I0311 08:58:17.450524 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:17 crc kubenswrapper[4840]: I0311 08:58:17.450637 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:17 crc kubenswrapper[4840]: I0311 08:58:17.450658 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:17 crc kubenswrapper[4840]: I0311 08:58:17.450686 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:17 crc kubenswrapper[4840]: I0311 08:58:17.450704 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:17Z","lastTransitionTime":"2026-03-11T08:58:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:17 crc kubenswrapper[4840]: I0311 08:58:17.553599 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:17 crc kubenswrapper[4840]: I0311 08:58:17.553654 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:17 crc kubenswrapper[4840]: I0311 08:58:17.553666 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:17 crc kubenswrapper[4840]: I0311 08:58:17.553684 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:17 crc kubenswrapper[4840]: I0311 08:58:17.553699 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:17Z","lastTransitionTime":"2026-03-11T08:58:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:17 crc kubenswrapper[4840]: I0311 08:58:17.658267 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:17 crc kubenswrapper[4840]: I0311 08:58:17.658380 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:17 crc kubenswrapper[4840]: I0311 08:58:17.658410 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:17 crc kubenswrapper[4840]: I0311 08:58:17.658450 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:17 crc kubenswrapper[4840]: I0311 08:58:17.658502 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:17Z","lastTransitionTime":"2026-03-11T08:58:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:17 crc kubenswrapper[4840]: I0311 08:58:17.761949 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:17 crc kubenswrapper[4840]: I0311 08:58:17.762001 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:17 crc kubenswrapper[4840]: I0311 08:58:17.762011 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:17 crc kubenswrapper[4840]: I0311 08:58:17.762025 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:17 crc kubenswrapper[4840]: I0311 08:58:17.762035 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:17Z","lastTransitionTime":"2026-03-11T08:58:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:17 crc kubenswrapper[4840]: I0311 08:58:17.864506 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:17 crc kubenswrapper[4840]: I0311 08:58:17.864543 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:17 crc kubenswrapper[4840]: I0311 08:58:17.864552 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:17 crc kubenswrapper[4840]: I0311 08:58:17.864565 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:17 crc kubenswrapper[4840]: I0311 08:58:17.864575 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:17Z","lastTransitionTime":"2026-03-11T08:58:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:17 crc kubenswrapper[4840]: I0311 08:58:17.968045 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:17 crc kubenswrapper[4840]: I0311 08:58:17.968164 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:17 crc kubenswrapper[4840]: I0311 08:58:17.968190 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:17 crc kubenswrapper[4840]: I0311 08:58:17.968224 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:17 crc kubenswrapper[4840]: I0311 08:58:17.968247 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:17Z","lastTransitionTime":"2026-03-11T08:58:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:18 crc kubenswrapper[4840]: I0311 08:58:18.071882 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:18 crc kubenswrapper[4840]: I0311 08:58:18.071954 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:18 crc kubenswrapper[4840]: I0311 08:58:18.071965 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:18 crc kubenswrapper[4840]: I0311 08:58:18.071980 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:18 crc kubenswrapper[4840]: I0311 08:58:18.071990 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:18Z","lastTransitionTime":"2026-03-11T08:58:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:18 crc kubenswrapper[4840]: I0311 08:58:18.084776 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Mar 11 08:58:18 crc kubenswrapper[4840]: I0311 08:58:18.085510 4840 scope.go:117] "RemoveContainer" containerID="58277efdbfe7c94ad785e7d31bb5ba7313d04bf930896d6ded66fd44dd6239b5" Mar 11 08:58:18 crc kubenswrapper[4840]: E0311 08:58:18.085847 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 11 08:58:18 crc kubenswrapper[4840]: I0311 08:58:18.175048 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:18 crc kubenswrapper[4840]: I0311 08:58:18.175102 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:18 crc kubenswrapper[4840]: I0311 08:58:18.175114 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:18 crc kubenswrapper[4840]: I0311 08:58:18.175132 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:18 crc kubenswrapper[4840]: I0311 08:58:18.175145 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:18Z","lastTransitionTime":"2026-03-11T08:58:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:18 crc kubenswrapper[4840]: I0311 08:58:18.277421 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:18 crc kubenswrapper[4840]: I0311 08:58:18.277499 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:18 crc kubenswrapper[4840]: I0311 08:58:18.277512 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:18 crc kubenswrapper[4840]: I0311 08:58:18.277532 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:18 crc kubenswrapper[4840]: I0311 08:58:18.277545 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:18Z","lastTransitionTime":"2026-03-11T08:58:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:18 crc kubenswrapper[4840]: I0311 08:58:18.379885 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:18 crc kubenswrapper[4840]: I0311 08:58:18.379941 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:18 crc kubenswrapper[4840]: I0311 08:58:18.379953 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:18 crc kubenswrapper[4840]: I0311 08:58:18.379971 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:18 crc kubenswrapper[4840]: I0311 08:58:18.379982 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:18Z","lastTransitionTime":"2026-03-11T08:58:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:18 crc kubenswrapper[4840]: I0311 08:58:18.469020 4840 scope.go:117] "RemoveContainer" containerID="58277efdbfe7c94ad785e7d31bb5ba7313d04bf930896d6ded66fd44dd6239b5" Mar 11 08:58:18 crc kubenswrapper[4840]: E0311 08:58:18.469163 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 11 08:58:18 crc kubenswrapper[4840]: I0311 08:58:18.482209 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:18 crc kubenswrapper[4840]: I0311 08:58:18.482263 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:18 crc kubenswrapper[4840]: I0311 08:58:18.482276 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:18 crc kubenswrapper[4840]: I0311 08:58:18.482295 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:18 crc kubenswrapper[4840]: I0311 08:58:18.482309 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:18Z","lastTransitionTime":"2026-03-11T08:58:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:18 crc kubenswrapper[4840]: I0311 08:58:18.585927 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:18 crc kubenswrapper[4840]: I0311 08:58:18.585986 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:18 crc kubenswrapper[4840]: I0311 08:58:18.585999 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:18 crc kubenswrapper[4840]: I0311 08:58:18.586017 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:18 crc kubenswrapper[4840]: I0311 08:58:18.586027 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:18Z","lastTransitionTime":"2026-03-11T08:58:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:18 crc kubenswrapper[4840]: I0311 08:58:18.688901 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:18 crc kubenswrapper[4840]: I0311 08:58:18.688953 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:18 crc kubenswrapper[4840]: I0311 08:58:18.688965 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:18 crc kubenswrapper[4840]: I0311 08:58:18.688984 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:18 crc kubenswrapper[4840]: I0311 08:58:18.688997 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:18Z","lastTransitionTime":"2026-03-11T08:58:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:18 crc kubenswrapper[4840]: I0311 08:58:18.791815 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:18 crc kubenswrapper[4840]: I0311 08:58:18.791863 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:18 crc kubenswrapper[4840]: I0311 08:58:18.791871 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:18 crc kubenswrapper[4840]: I0311 08:58:18.791887 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:18 crc kubenswrapper[4840]: I0311 08:58:18.791896 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:18Z","lastTransitionTime":"2026-03-11T08:58:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:18 crc kubenswrapper[4840]: I0311 08:58:18.894845 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:18 crc kubenswrapper[4840]: I0311 08:58:18.894903 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:18 crc kubenswrapper[4840]: I0311 08:58:18.894913 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:18 crc kubenswrapper[4840]: I0311 08:58:18.894928 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:18 crc kubenswrapper[4840]: I0311 08:58:18.894938 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:18Z","lastTransitionTime":"2026-03-11T08:58:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:18 crc kubenswrapper[4840]: I0311 08:58:18.900914 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:18 crc kubenswrapper[4840]: I0311 08:58:18.900972 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:18 crc kubenswrapper[4840]: I0311 08:58:18.901165 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:18 crc kubenswrapper[4840]: I0311 08:58:18.901184 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:18 crc kubenswrapper[4840]: I0311 08:58:18.901196 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:18Z","lastTransitionTime":"2026-03-11T08:58:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:18 crc kubenswrapper[4840]: E0311 08:58:18.915722 4840 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:58:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:58:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:58:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:58:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b40dc5ac-6e20-4fe3-8d4f-1dab2691799c\\\",\\\"systemUUID\\\":\\\"e5bb6cc6-19d8-441f-bba6-b926930273a7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:18Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:18 crc kubenswrapper[4840]: I0311 08:58:18.920146 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:18 crc kubenswrapper[4840]: I0311 08:58:18.920200 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:18 crc kubenswrapper[4840]: I0311 08:58:18.920212 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:18 crc kubenswrapper[4840]: I0311 08:58:18.920227 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:18 crc kubenswrapper[4840]: I0311 08:58:18.920238 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:18Z","lastTransitionTime":"2026-03-11T08:58:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:18 crc kubenswrapper[4840]: E0311 08:58:18.933693 4840 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:58:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:58:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:58:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:58:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b40dc5ac-6e20-4fe3-8d4f-1dab2691799c\\\",\\\"systemUUID\\\":\\\"e5bb6cc6-19d8-441f-bba6-b926930273a7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:18Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:18 crc kubenswrapper[4840]: I0311 08:58:18.937491 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:18 crc kubenswrapper[4840]: I0311 08:58:18.937560 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:18 crc kubenswrapper[4840]: I0311 08:58:18.937580 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:18 crc kubenswrapper[4840]: I0311 08:58:18.937612 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:18 crc kubenswrapper[4840]: I0311 08:58:18.937633 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:18Z","lastTransitionTime":"2026-03-11T08:58:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:18 crc kubenswrapper[4840]: E0311 08:58:18.955962 4840 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:58:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:58:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:58:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:58:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b40dc5ac-6e20-4fe3-8d4f-1dab2691799c\\\",\\\"systemUUID\\\":\\\"e5bb6cc6-19d8-441f-bba6-b926930273a7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:18Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:18 crc kubenswrapper[4840]: I0311 08:58:18.960452 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:18 crc kubenswrapper[4840]: I0311 08:58:18.960522 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:18 crc kubenswrapper[4840]: I0311 08:58:18.960534 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:18 crc kubenswrapper[4840]: I0311 08:58:18.960577 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:18 crc kubenswrapper[4840]: I0311 08:58:18.960592 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:18Z","lastTransitionTime":"2026-03-11T08:58:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:18 crc kubenswrapper[4840]: E0311 08:58:18.974634 4840 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:58:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:58:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:58:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:58:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b40dc5ac-6e20-4fe3-8d4f-1dab2691799c\\\",\\\"systemUUID\\\":\\\"e5bb6cc6-19d8-441f-bba6-b926930273a7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:18Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:18 crc kubenswrapper[4840]: I0311 08:58:18.978923 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:18 crc kubenswrapper[4840]: I0311 08:58:18.978959 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:18 crc kubenswrapper[4840]: I0311 08:58:18.978972 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:18 crc kubenswrapper[4840]: I0311 08:58:18.978990 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:18 crc kubenswrapper[4840]: I0311 08:58:18.979003 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:18Z","lastTransitionTime":"2026-03-11T08:58:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:18 crc kubenswrapper[4840]: E0311 08:58:18.992798 4840 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:58:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:58:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:58:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:58:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b40dc5ac-6e20-4fe3-8d4f-1dab2691799c\\\",\\\"systemUUID\\\":\\\"e5bb6cc6-19d8-441f-bba6-b926930273a7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:18Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:18 crc kubenswrapper[4840]: E0311 08:58:18.992947 4840 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 11 08:58:18 crc kubenswrapper[4840]: I0311 08:58:18.997581 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:18 crc kubenswrapper[4840]: I0311 08:58:18.997687 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:18 crc kubenswrapper[4840]: I0311 08:58:18.997706 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:18 crc kubenswrapper[4840]: I0311 08:58:18.997773 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:18 crc kubenswrapper[4840]: I0311 08:58:18.997795 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:18Z","lastTransitionTime":"2026-03-11T08:58:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:58:19 crc kubenswrapper[4840]: I0311 08:58:19.059210 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 08:58:19 crc kubenswrapper[4840]: I0311 08:58:19.059302 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:58:19 crc kubenswrapper[4840]: E0311 08:58:19.059355 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 11 08:58:19 crc kubenswrapper[4840]: I0311 08:58:19.059210 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:58:19 crc kubenswrapper[4840]: E0311 08:58:19.059591 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 11 08:58:19 crc kubenswrapper[4840]: E0311 08:58:19.059807 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 11 08:58:19 crc kubenswrapper[4840]: I0311 08:58:19.101130 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:19 crc kubenswrapper[4840]: I0311 08:58:19.101176 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:19 crc kubenswrapper[4840]: I0311 08:58:19.101186 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:19 crc kubenswrapper[4840]: I0311 08:58:19.101200 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:19 crc kubenswrapper[4840]: I0311 08:58:19.101211 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:19Z","lastTransitionTime":"2026-03-11T08:58:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:19 crc kubenswrapper[4840]: I0311 08:58:19.204625 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:19 crc kubenswrapper[4840]: I0311 08:58:19.204770 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:19 crc kubenswrapper[4840]: I0311 08:58:19.204827 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:19 crc kubenswrapper[4840]: I0311 08:58:19.204858 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:19 crc kubenswrapper[4840]: I0311 08:58:19.204930 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:19Z","lastTransitionTime":"2026-03-11T08:58:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:19 crc kubenswrapper[4840]: I0311 08:58:19.308044 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:19 crc kubenswrapper[4840]: I0311 08:58:19.308105 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:19 crc kubenswrapper[4840]: I0311 08:58:19.308126 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:19 crc kubenswrapper[4840]: I0311 08:58:19.308155 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:19 crc kubenswrapper[4840]: I0311 08:58:19.308174 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:19Z","lastTransitionTime":"2026-03-11T08:58:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:19 crc kubenswrapper[4840]: I0311 08:58:19.411892 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:19 crc kubenswrapper[4840]: I0311 08:58:19.412077 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:19 crc kubenswrapper[4840]: I0311 08:58:19.412098 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:19 crc kubenswrapper[4840]: I0311 08:58:19.412122 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:19 crc kubenswrapper[4840]: I0311 08:58:19.412172 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:19Z","lastTransitionTime":"2026-03-11T08:58:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:19 crc kubenswrapper[4840]: I0311 08:58:19.515042 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:19 crc kubenswrapper[4840]: I0311 08:58:19.515080 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:19 crc kubenswrapper[4840]: I0311 08:58:19.515106 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:19 crc kubenswrapper[4840]: I0311 08:58:19.515121 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:19 crc kubenswrapper[4840]: I0311 08:58:19.515131 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:19Z","lastTransitionTime":"2026-03-11T08:58:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:19 crc kubenswrapper[4840]: I0311 08:58:19.618288 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:19 crc kubenswrapper[4840]: I0311 08:58:19.618362 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:19 crc kubenswrapper[4840]: I0311 08:58:19.618382 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:19 crc kubenswrapper[4840]: I0311 08:58:19.618410 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:19 crc kubenswrapper[4840]: I0311 08:58:19.618486 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:19Z","lastTransitionTime":"2026-03-11T08:58:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:19 crc kubenswrapper[4840]: I0311 08:58:19.722042 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:19 crc kubenswrapper[4840]: I0311 08:58:19.722118 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:19 crc kubenswrapper[4840]: I0311 08:58:19.722141 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:19 crc kubenswrapper[4840]: I0311 08:58:19.722169 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:19 crc kubenswrapper[4840]: I0311 08:58:19.722190 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:19Z","lastTransitionTime":"2026-03-11T08:58:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:19 crc kubenswrapper[4840]: I0311 08:58:19.829310 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:19 crc kubenswrapper[4840]: I0311 08:58:19.830172 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:19 crc kubenswrapper[4840]: I0311 08:58:19.830215 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:19 crc kubenswrapper[4840]: I0311 08:58:19.830236 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:19 crc kubenswrapper[4840]: I0311 08:58:19.830246 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:19Z","lastTransitionTime":"2026-03-11T08:58:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:19 crc kubenswrapper[4840]: I0311 08:58:19.933765 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:19 crc kubenswrapper[4840]: I0311 08:58:19.933832 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:19 crc kubenswrapper[4840]: I0311 08:58:19.933845 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:19 crc kubenswrapper[4840]: I0311 08:58:19.933863 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:19 crc kubenswrapper[4840]: I0311 08:58:19.933875 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:19Z","lastTransitionTime":"2026-03-11T08:58:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:20 crc kubenswrapper[4840]: I0311 08:58:20.036504 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:20 crc kubenswrapper[4840]: I0311 08:58:20.036543 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:20 crc kubenswrapper[4840]: I0311 08:58:20.036553 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:20 crc kubenswrapper[4840]: I0311 08:58:20.036576 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:20 crc kubenswrapper[4840]: I0311 08:58:20.036594 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:20Z","lastTransitionTime":"2026-03-11T08:58:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:20 crc kubenswrapper[4840]: I0311 08:58:20.138780 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:20 crc kubenswrapper[4840]: I0311 08:58:20.138849 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:20 crc kubenswrapper[4840]: I0311 08:58:20.138860 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:20 crc kubenswrapper[4840]: I0311 08:58:20.138874 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:20 crc kubenswrapper[4840]: I0311 08:58:20.138884 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:20Z","lastTransitionTime":"2026-03-11T08:58:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:20 crc kubenswrapper[4840]: I0311 08:58:20.241044 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:20 crc kubenswrapper[4840]: I0311 08:58:20.241091 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:20 crc kubenswrapper[4840]: I0311 08:58:20.241100 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:20 crc kubenswrapper[4840]: I0311 08:58:20.241118 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:20 crc kubenswrapper[4840]: I0311 08:58:20.241130 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:20Z","lastTransitionTime":"2026-03-11T08:58:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:20 crc kubenswrapper[4840]: I0311 08:58:20.343704 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:20 crc kubenswrapper[4840]: I0311 08:58:20.343750 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:20 crc kubenswrapper[4840]: I0311 08:58:20.343764 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:20 crc kubenswrapper[4840]: I0311 08:58:20.343805 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:20 crc kubenswrapper[4840]: I0311 08:58:20.343818 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:20Z","lastTransitionTime":"2026-03-11T08:58:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:20 crc kubenswrapper[4840]: I0311 08:58:20.447497 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:20 crc kubenswrapper[4840]: I0311 08:58:20.447553 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:20 crc kubenswrapper[4840]: I0311 08:58:20.447569 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:20 crc kubenswrapper[4840]: I0311 08:58:20.447590 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:20 crc kubenswrapper[4840]: I0311 08:58:20.447604 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:20Z","lastTransitionTime":"2026-03-11T08:58:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:20 crc kubenswrapper[4840]: I0311 08:58:20.550054 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:20 crc kubenswrapper[4840]: I0311 08:58:20.550106 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:20 crc kubenswrapper[4840]: I0311 08:58:20.550119 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:20 crc kubenswrapper[4840]: I0311 08:58:20.550138 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:20 crc kubenswrapper[4840]: I0311 08:58:20.550148 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:20Z","lastTransitionTime":"2026-03-11T08:58:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:20 crc kubenswrapper[4840]: I0311 08:58:20.653263 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:20 crc kubenswrapper[4840]: I0311 08:58:20.653321 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:20 crc kubenswrapper[4840]: I0311 08:58:20.653335 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:20 crc kubenswrapper[4840]: I0311 08:58:20.653359 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:20 crc kubenswrapper[4840]: I0311 08:58:20.653375 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:20Z","lastTransitionTime":"2026-03-11T08:58:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:20 crc kubenswrapper[4840]: I0311 08:58:20.756154 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:20 crc kubenswrapper[4840]: I0311 08:58:20.756215 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:20 crc kubenswrapper[4840]: I0311 08:58:20.756234 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:20 crc kubenswrapper[4840]: I0311 08:58:20.756263 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:20 crc kubenswrapper[4840]: I0311 08:58:20.756283 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:20Z","lastTransitionTime":"2026-03-11T08:58:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:20 crc kubenswrapper[4840]: I0311 08:58:20.859890 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:20 crc kubenswrapper[4840]: I0311 08:58:20.859993 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:20 crc kubenswrapper[4840]: I0311 08:58:20.860025 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:20 crc kubenswrapper[4840]: I0311 08:58:20.860062 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:20 crc kubenswrapper[4840]: I0311 08:58:20.860094 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:20Z","lastTransitionTime":"2026-03-11T08:58:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:20 crc kubenswrapper[4840]: I0311 08:58:20.963159 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:20 crc kubenswrapper[4840]: I0311 08:58:20.963216 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:20 crc kubenswrapper[4840]: I0311 08:58:20.963235 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:20 crc kubenswrapper[4840]: I0311 08:58:20.963261 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:20 crc kubenswrapper[4840]: I0311 08:58:20.963279 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:20Z","lastTransitionTime":"2026-03-11T08:58:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:58:21 crc kubenswrapper[4840]: I0311 08:58:21.059429 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:58:21 crc kubenswrapper[4840]: I0311 08:58:21.059441 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 08:58:21 crc kubenswrapper[4840]: I0311 08:58:21.059857 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:58:21 crc kubenswrapper[4840]: E0311 08:58:21.060121 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 11 08:58:21 crc kubenswrapper[4840]: E0311 08:58:21.060669 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 11 08:58:21 crc kubenswrapper[4840]: E0311 08:58:21.060823 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 11 08:58:21 crc kubenswrapper[4840]: I0311 08:58:21.066031 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:21 crc kubenswrapper[4840]: I0311 08:58:21.066138 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:21 crc kubenswrapper[4840]: I0311 08:58:21.066153 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:21 crc kubenswrapper[4840]: I0311 08:58:21.066173 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:21 crc kubenswrapper[4840]: I0311 08:58:21.066186 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:21Z","lastTransitionTime":"2026-03-11T08:58:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:21 crc kubenswrapper[4840]: I0311 08:58:21.170141 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:21 crc kubenswrapper[4840]: I0311 08:58:21.170234 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:21 crc kubenswrapper[4840]: I0311 08:58:21.170258 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:21 crc kubenswrapper[4840]: I0311 08:58:21.170293 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:21 crc kubenswrapper[4840]: I0311 08:58:21.170318 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:21Z","lastTransitionTime":"2026-03-11T08:58:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:21 crc kubenswrapper[4840]: I0311 08:58:21.274323 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:21 crc kubenswrapper[4840]: I0311 08:58:21.274381 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:21 crc kubenswrapper[4840]: I0311 08:58:21.274404 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:21 crc kubenswrapper[4840]: I0311 08:58:21.274433 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:21 crc kubenswrapper[4840]: I0311 08:58:21.274453 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:21Z","lastTransitionTime":"2026-03-11T08:58:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:21 crc kubenswrapper[4840]: I0311 08:58:21.378649 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:21 crc kubenswrapper[4840]: I0311 08:58:21.378707 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:21 crc kubenswrapper[4840]: I0311 08:58:21.378731 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:21 crc kubenswrapper[4840]: I0311 08:58:21.378764 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:21 crc kubenswrapper[4840]: I0311 08:58:21.378789 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:21Z","lastTransitionTime":"2026-03-11T08:58:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:21 crc kubenswrapper[4840]: I0311 08:58:21.481656 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:21 crc kubenswrapper[4840]: I0311 08:58:21.481699 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:21 crc kubenswrapper[4840]: I0311 08:58:21.481711 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:21 crc kubenswrapper[4840]: I0311 08:58:21.481732 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:21 crc kubenswrapper[4840]: I0311 08:58:21.481755 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:21Z","lastTransitionTime":"2026-03-11T08:58:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:21 crc kubenswrapper[4840]: I0311 08:58:21.584126 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:21 crc kubenswrapper[4840]: I0311 08:58:21.584183 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:21 crc kubenswrapper[4840]: I0311 08:58:21.584193 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:21 crc kubenswrapper[4840]: I0311 08:58:21.584213 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:21 crc kubenswrapper[4840]: I0311 08:58:21.584225 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:21Z","lastTransitionTime":"2026-03-11T08:58:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:21 crc kubenswrapper[4840]: I0311 08:58:21.687065 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:21 crc kubenswrapper[4840]: I0311 08:58:21.687136 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:21 crc kubenswrapper[4840]: I0311 08:58:21.687161 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:21 crc kubenswrapper[4840]: I0311 08:58:21.687193 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:21 crc kubenswrapper[4840]: I0311 08:58:21.687218 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:21Z","lastTransitionTime":"2026-03-11T08:58:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:21 crc kubenswrapper[4840]: I0311 08:58:21.789966 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:21 crc kubenswrapper[4840]: I0311 08:58:21.790028 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:21 crc kubenswrapper[4840]: I0311 08:58:21.790039 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:21 crc kubenswrapper[4840]: I0311 08:58:21.790055 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:21 crc kubenswrapper[4840]: I0311 08:58:21.790065 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:21Z","lastTransitionTime":"2026-03-11T08:58:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:21 crc kubenswrapper[4840]: I0311 08:58:21.894163 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:21 crc kubenswrapper[4840]: I0311 08:58:21.894262 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:21 crc kubenswrapper[4840]: I0311 08:58:21.894299 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:21 crc kubenswrapper[4840]: I0311 08:58:21.894338 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:21 crc kubenswrapper[4840]: I0311 08:58:21.894366 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:21Z","lastTransitionTime":"2026-03-11T08:58:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:21 crc kubenswrapper[4840]: I0311 08:58:21.997428 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:21 crc kubenswrapper[4840]: I0311 08:58:21.997536 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:21 crc kubenswrapper[4840]: I0311 08:58:21.997560 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:21 crc kubenswrapper[4840]: I0311 08:58:21.997626 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:21 crc kubenswrapper[4840]: I0311 08:58:21.997648 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:21Z","lastTransitionTime":"2026-03-11T08:58:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:22 crc kubenswrapper[4840]: I0311 08:58:22.077812 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:22Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:22 crc kubenswrapper[4840]: I0311 08:58:22.098756 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:22Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:22 crc kubenswrapper[4840]: I0311 08:58:22.100126 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:22 crc kubenswrapper[4840]: I0311 08:58:22.100156 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:22 crc kubenswrapper[4840]: I0311 08:58:22.100168 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:22 crc 
kubenswrapper[4840]: I0311 08:58:22.100186 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:22 crc kubenswrapper[4840]: I0311 08:58:22.100200 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:22Z","lastTransitionTime":"2026-03-11T08:58:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:58:22 crc kubenswrapper[4840]: I0311 08:58:22.121442 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a735ab91afbbc50a948e293cb4907a10212b9a77c9e7506e21d75a4de4c74c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a498e6ab267b758bd2ba69dcbd7e6af3089b6c38bbf50dea80f1304c2190f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:22Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:22 crc kubenswrapper[4840]: I0311 
08:58:22.140328 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:22Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:22 crc kubenswrapper[4840]: I0311 08:58:22.161814 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c3b2bde-8421-4e22-85ab-8b651c65bc9e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ed311d75feec58f86d1d9f435c6115a463b4e7cd3003b6dff8447360271b6a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33384f01914fa6428a8c359b3de0d20963b933f5c6d47519f059e48f85c9f4c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca2279f323c0a6baf645b62c496c38d3d2ad4efc5033a0819ed1d58f4d862e10\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58277efdbfe7c94ad785e7d31bb5ba7313d04bf930896d6ded66fd44dd6239b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58277efdbfe7c94ad785e7d31bb5ba7313d04bf930896d6ded66fd44dd6239b5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-11T08:57:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW0311 08:57:44.699595 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0311 08:57:44.699747 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0311 08:57:44.700340 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2673359297/tls.crt::/tmp/serving-cert-2673359297/tls.key\\\\\\\"\\\\nI0311 08:57:44.923694 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0311 08:57:44.927612 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0311 08:57:44.927650 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0311 08:57:44.927690 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0311 08:57:44.927701 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0311 08:57:44.934515 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0311 08:57:44.934548 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0311 08:57:44.934556 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:57:44.934567 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:57:44.934574 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0311 08:57:44.934577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0311 08:57:44.934581 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0311 08:57:44.934584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0311 08:57:44.935676 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:57:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bd25fbbac425c9ba1169b1106b9ac77a80739a003bd795033d691ee273e0d3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e27a2ab44a237582284cb5f55d2651f7b5d39c199fdb62a4a65be9921e86945c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e27
a2ab44a237582284cb5f55d2651f7b5d39c199fdb62a4a65be9921e86945c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:56:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:56:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:56:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:22Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:22 crc kubenswrapper[4840]: I0311 08:58:22.182527 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a26aaac35b1d936b4ca0b8a96fccd108317a1456b2bac1c21eb28017ac6fd32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:22Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:22 crc kubenswrapper[4840]: I0311 08:58:22.196210 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb68ea79d13c044390345ef25093ba46a60cb10b989c5acd69c63cafc1c4631f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:22Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:22 crc kubenswrapper[4840]: I0311 08:58:22.202142 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:22 crc kubenswrapper[4840]: I0311 08:58:22.202191 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:22 crc kubenswrapper[4840]: I0311 08:58:22.202204 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:22 crc kubenswrapper[4840]: I0311 08:58:22.202226 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:22 crc kubenswrapper[4840]: I0311 08:58:22.202240 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:22Z","lastTransitionTime":"2026-03-11T08:58:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:22 crc kubenswrapper[4840]: I0311 08:58:22.304580 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:22 crc kubenswrapper[4840]: I0311 08:58:22.304617 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:22 crc kubenswrapper[4840]: I0311 08:58:22.304628 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:22 crc kubenswrapper[4840]: I0311 08:58:22.304646 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:22 crc kubenswrapper[4840]: I0311 08:58:22.304658 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:22Z","lastTransitionTime":"2026-03-11T08:58:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:22 crc kubenswrapper[4840]: I0311 08:58:22.407022 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:22 crc kubenswrapper[4840]: I0311 08:58:22.407085 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:22 crc kubenswrapper[4840]: I0311 08:58:22.407104 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:22 crc kubenswrapper[4840]: I0311 08:58:22.407133 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:22 crc kubenswrapper[4840]: I0311 08:58:22.407152 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:22Z","lastTransitionTime":"2026-03-11T08:58:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:22 crc kubenswrapper[4840]: I0311 08:58:22.510856 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:22 crc kubenswrapper[4840]: I0311 08:58:22.510908 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:22 crc kubenswrapper[4840]: I0311 08:58:22.510923 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:22 crc kubenswrapper[4840]: I0311 08:58:22.510945 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:22 crc kubenswrapper[4840]: I0311 08:58:22.510959 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:22Z","lastTransitionTime":"2026-03-11T08:58:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:22 crc kubenswrapper[4840]: I0311 08:58:22.614408 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:22 crc kubenswrapper[4840]: I0311 08:58:22.614461 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:22 crc kubenswrapper[4840]: I0311 08:58:22.614483 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:22 crc kubenswrapper[4840]: I0311 08:58:22.614501 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:22 crc kubenswrapper[4840]: I0311 08:58:22.614513 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:22Z","lastTransitionTime":"2026-03-11T08:58:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:22 crc kubenswrapper[4840]: I0311 08:58:22.717969 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:22 crc kubenswrapper[4840]: I0311 08:58:22.718024 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:22 crc kubenswrapper[4840]: I0311 08:58:22.718039 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:22 crc kubenswrapper[4840]: I0311 08:58:22.718062 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:22 crc kubenswrapper[4840]: I0311 08:58:22.718081 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:22Z","lastTransitionTime":"2026-03-11T08:58:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:22 crc kubenswrapper[4840]: I0311 08:58:22.821666 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:22 crc kubenswrapper[4840]: I0311 08:58:22.821731 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:22 crc kubenswrapper[4840]: I0311 08:58:22.821745 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:22 crc kubenswrapper[4840]: I0311 08:58:22.821766 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:22 crc kubenswrapper[4840]: I0311 08:58:22.821783 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:22Z","lastTransitionTime":"2026-03-11T08:58:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:22 crc kubenswrapper[4840]: I0311 08:58:22.924674 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:22 crc kubenswrapper[4840]: I0311 08:58:22.924717 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:22 crc kubenswrapper[4840]: I0311 08:58:22.924730 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:22 crc kubenswrapper[4840]: I0311 08:58:22.924748 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:22 crc kubenswrapper[4840]: I0311 08:58:22.924761 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:22Z","lastTransitionTime":"2026-03-11T08:58:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:23 crc kubenswrapper[4840]: I0311 08:58:23.027037 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:23 crc kubenswrapper[4840]: I0311 08:58:23.027075 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:23 crc kubenswrapper[4840]: I0311 08:58:23.027087 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:23 crc kubenswrapper[4840]: I0311 08:58:23.027102 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:23 crc kubenswrapper[4840]: I0311 08:58:23.027114 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:23Z","lastTransitionTime":"2026-03-11T08:58:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:58:23 crc kubenswrapper[4840]: I0311 08:58:23.059515 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 08:58:23 crc kubenswrapper[4840]: E0311 08:58:23.059639 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 11 08:58:23 crc kubenswrapper[4840]: I0311 08:58:23.059523 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:58:23 crc kubenswrapper[4840]: I0311 08:58:23.059811 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:58:23 crc kubenswrapper[4840]: E0311 08:58:23.059959 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 11 08:58:23 crc kubenswrapper[4840]: E0311 08:58:23.060074 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 11 08:58:23 crc kubenswrapper[4840]: I0311 08:58:23.130645 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:23 crc kubenswrapper[4840]: I0311 08:58:23.130719 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:23 crc kubenswrapper[4840]: I0311 08:58:23.130739 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:23 crc kubenswrapper[4840]: I0311 08:58:23.130768 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:23 crc kubenswrapper[4840]: I0311 08:58:23.130787 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:23Z","lastTransitionTime":"2026-03-11T08:58:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:23 crc kubenswrapper[4840]: I0311 08:58:23.233303 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:23 crc kubenswrapper[4840]: I0311 08:58:23.233359 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:23 crc kubenswrapper[4840]: I0311 08:58:23.233373 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:23 crc kubenswrapper[4840]: I0311 08:58:23.233397 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:23 crc kubenswrapper[4840]: I0311 08:58:23.233414 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:23Z","lastTransitionTime":"2026-03-11T08:58:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:23 crc kubenswrapper[4840]: I0311 08:58:23.337287 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:23 crc kubenswrapper[4840]: I0311 08:58:23.337343 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:23 crc kubenswrapper[4840]: I0311 08:58:23.337362 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:23 crc kubenswrapper[4840]: I0311 08:58:23.337389 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:23 crc kubenswrapper[4840]: I0311 08:58:23.337408 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:23Z","lastTransitionTime":"2026-03-11T08:58:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:23 crc kubenswrapper[4840]: I0311 08:58:23.440568 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:23 crc kubenswrapper[4840]: I0311 08:58:23.440629 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:23 crc kubenswrapper[4840]: I0311 08:58:23.440648 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:23 crc kubenswrapper[4840]: I0311 08:58:23.440672 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:23 crc kubenswrapper[4840]: I0311 08:58:23.440690 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:23Z","lastTransitionTime":"2026-03-11T08:58:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:23 crc kubenswrapper[4840]: I0311 08:58:23.543601 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:23 crc kubenswrapper[4840]: I0311 08:58:23.543673 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:23 crc kubenswrapper[4840]: I0311 08:58:23.543690 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:23 crc kubenswrapper[4840]: I0311 08:58:23.543717 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:23 crc kubenswrapper[4840]: I0311 08:58:23.543734 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:23Z","lastTransitionTime":"2026-03-11T08:58:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:23 crc kubenswrapper[4840]: I0311 08:58:23.647240 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:23 crc kubenswrapper[4840]: I0311 08:58:23.647298 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:23 crc kubenswrapper[4840]: I0311 08:58:23.647316 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:23 crc kubenswrapper[4840]: I0311 08:58:23.647341 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:23 crc kubenswrapper[4840]: I0311 08:58:23.647357 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:23Z","lastTransitionTime":"2026-03-11T08:58:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:23 crc kubenswrapper[4840]: I0311 08:58:23.750251 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:23 crc kubenswrapper[4840]: I0311 08:58:23.750335 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:23 crc kubenswrapper[4840]: I0311 08:58:23.750354 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:23 crc kubenswrapper[4840]: I0311 08:58:23.750384 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:23 crc kubenswrapper[4840]: I0311 08:58:23.750410 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:23Z","lastTransitionTime":"2026-03-11T08:58:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:23 crc kubenswrapper[4840]: I0311 08:58:23.853417 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:23 crc kubenswrapper[4840]: I0311 08:58:23.853526 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:23 crc kubenswrapper[4840]: I0311 08:58:23.853550 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:23 crc kubenswrapper[4840]: I0311 08:58:23.853580 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:23 crc kubenswrapper[4840]: I0311 08:58:23.853605 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:23Z","lastTransitionTime":"2026-03-11T08:58:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:23 crc kubenswrapper[4840]: I0311 08:58:23.956261 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:23 crc kubenswrapper[4840]: I0311 08:58:23.956315 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:23 crc kubenswrapper[4840]: I0311 08:58:23.956328 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:23 crc kubenswrapper[4840]: I0311 08:58:23.956350 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:23 crc kubenswrapper[4840]: I0311 08:58:23.956366 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:23Z","lastTransitionTime":"2026-03-11T08:58:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:24 crc kubenswrapper[4840]: I0311 08:58:24.059213 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:24 crc kubenswrapper[4840]: I0311 08:58:24.059274 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:24 crc kubenswrapper[4840]: I0311 08:58:24.059288 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:24 crc kubenswrapper[4840]: I0311 08:58:24.059321 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:24 crc kubenswrapper[4840]: I0311 08:58:24.059334 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:24Z","lastTransitionTime":"2026-03-11T08:58:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:24 crc kubenswrapper[4840]: I0311 08:58:24.162357 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:24 crc kubenswrapper[4840]: I0311 08:58:24.162556 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:24 crc kubenswrapper[4840]: I0311 08:58:24.162589 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:24 crc kubenswrapper[4840]: I0311 08:58:24.162655 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:24 crc kubenswrapper[4840]: I0311 08:58:24.162678 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:24Z","lastTransitionTime":"2026-03-11T08:58:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:24 crc kubenswrapper[4840]: I0311 08:58:24.266223 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:24 crc kubenswrapper[4840]: I0311 08:58:24.266288 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:24 crc kubenswrapper[4840]: I0311 08:58:24.266302 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:24 crc kubenswrapper[4840]: I0311 08:58:24.266323 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:24 crc kubenswrapper[4840]: I0311 08:58:24.266339 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:24Z","lastTransitionTime":"2026-03-11T08:58:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:24 crc kubenswrapper[4840]: I0311 08:58:24.369841 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:24 crc kubenswrapper[4840]: I0311 08:58:24.369914 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:24 crc kubenswrapper[4840]: I0311 08:58:24.369938 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:24 crc kubenswrapper[4840]: I0311 08:58:24.369965 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:24 crc kubenswrapper[4840]: I0311 08:58:24.369987 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:24Z","lastTransitionTime":"2026-03-11T08:58:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:24 crc kubenswrapper[4840]: I0311 08:58:24.473589 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:24 crc kubenswrapper[4840]: I0311 08:58:24.473661 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:24 crc kubenswrapper[4840]: I0311 08:58:24.473675 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:24 crc kubenswrapper[4840]: I0311 08:58:24.473722 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:24 crc kubenswrapper[4840]: I0311 08:58:24.473825 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:24Z","lastTransitionTime":"2026-03-11T08:58:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:24 crc kubenswrapper[4840]: I0311 08:58:24.576852 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:24 crc kubenswrapper[4840]: I0311 08:58:24.576931 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:24 crc kubenswrapper[4840]: I0311 08:58:24.577028 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:24 crc kubenswrapper[4840]: I0311 08:58:24.577075 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:24 crc kubenswrapper[4840]: I0311 08:58:24.577163 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:24Z","lastTransitionTime":"2026-03-11T08:58:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:24 crc kubenswrapper[4840]: I0311 08:58:24.680674 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:24 crc kubenswrapper[4840]: I0311 08:58:24.680748 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:24 crc kubenswrapper[4840]: I0311 08:58:24.680774 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:24 crc kubenswrapper[4840]: I0311 08:58:24.680811 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:24 crc kubenswrapper[4840]: I0311 08:58:24.680837 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:24Z","lastTransitionTime":"2026-03-11T08:58:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:24 crc kubenswrapper[4840]: I0311 08:58:24.784112 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:24 crc kubenswrapper[4840]: I0311 08:58:24.784186 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:24 crc kubenswrapper[4840]: I0311 08:58:24.784202 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:24 crc kubenswrapper[4840]: I0311 08:58:24.784237 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:24 crc kubenswrapper[4840]: I0311 08:58:24.784256 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:24Z","lastTransitionTime":"2026-03-11T08:58:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:24 crc kubenswrapper[4840]: I0311 08:58:24.831422 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 11 08:58:24 crc kubenswrapper[4840]: I0311 08:58:24.831551 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:58:24 crc kubenswrapper[4840]: I0311 08:58:24.831591 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:58:24 crc kubenswrapper[4840]: E0311 08:58:24.831733 4840 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 11 08:58:24 crc kubenswrapper[4840]: E0311 08:58:24.831835 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-11 08:58:40.831775931 +0000 UTC m=+119.497445786 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:58:24 crc kubenswrapper[4840]: E0311 08:58:24.831857 4840 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 11 08:58:24 crc kubenswrapper[4840]: E0311 08:58:24.831938 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-11 08:58:40.831920945 +0000 UTC m=+119.497590800 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 11 08:58:24 crc kubenswrapper[4840]: E0311 08:58:24.832030 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-11 08:58:40.831994747 +0000 UTC m=+119.497664582 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 11 08:58:24 crc kubenswrapper[4840]: I0311 08:58:24.888342 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:24 crc kubenswrapper[4840]: I0311 08:58:24.888442 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:24 crc kubenswrapper[4840]: I0311 08:58:24.888459 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:24 crc kubenswrapper[4840]: I0311 08:58:24.888503 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:24 crc kubenswrapper[4840]: I0311 08:58:24.888528 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:24Z","lastTransitionTime":"2026-03-11T08:58:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:24 crc kubenswrapper[4840]: I0311 08:58:24.933008 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:58:24 crc kubenswrapper[4840]: I0311 08:58:24.933152 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 08:58:24 crc kubenswrapper[4840]: E0311 08:58:24.933633 4840 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 11 08:58:24 crc kubenswrapper[4840]: E0311 08:58:24.933651 4840 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 11 08:58:24 crc kubenswrapper[4840]: E0311 08:58:24.933747 4840 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 11 08:58:24 crc kubenswrapper[4840]: E0311 08:58:24.933776 4840 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 11 08:58:24 crc 
kubenswrapper[4840]: E0311 08:58:24.933666 4840 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 11 08:58:24 crc kubenswrapper[4840]: E0311 08:58:24.933876 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-11 08:58:40.933844466 +0000 UTC m=+119.599514321 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 11 08:58:24 crc kubenswrapper[4840]: E0311 08:58:24.933878 4840 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 11 08:58:24 crc kubenswrapper[4840]: E0311 08:58:24.933946 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-11 08:58:40.933934038 +0000 UTC m=+119.599603883 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 11 08:58:24 crc kubenswrapper[4840]: I0311 08:58:24.992576 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:24 crc kubenswrapper[4840]: I0311 08:58:24.992618 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:24 crc kubenswrapper[4840]: I0311 08:58:24.992628 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:24 crc kubenswrapper[4840]: I0311 08:58:24.992644 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:24 crc kubenswrapper[4840]: I0311 08:58:24.992656 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:24Z","lastTransitionTime":"2026-03-11T08:58:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:58:25 crc kubenswrapper[4840]: I0311 08:58:25.059733 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:58:25 crc kubenswrapper[4840]: I0311 08:58:25.059749 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:58:25 crc kubenswrapper[4840]: E0311 08:58:25.059957 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 11 08:58:25 crc kubenswrapper[4840]: I0311 08:58:25.059769 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 08:58:25 crc kubenswrapper[4840]: E0311 08:58:25.060131 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 11 08:58:25 crc kubenswrapper[4840]: E0311 08:58:25.060310 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 11 08:58:25 crc kubenswrapper[4840]: I0311 08:58:25.095776 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:25 crc kubenswrapper[4840]: I0311 08:58:25.095818 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:25 crc kubenswrapper[4840]: I0311 08:58:25.095827 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:25 crc kubenswrapper[4840]: I0311 08:58:25.095843 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:25 crc kubenswrapper[4840]: I0311 08:58:25.095852 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:25Z","lastTransitionTime":"2026-03-11T08:58:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:25 crc kubenswrapper[4840]: I0311 08:58:25.199243 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:25 crc kubenswrapper[4840]: I0311 08:58:25.199310 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:25 crc kubenswrapper[4840]: I0311 08:58:25.199325 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:25 crc kubenswrapper[4840]: I0311 08:58:25.199642 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:25 crc kubenswrapper[4840]: I0311 08:58:25.199679 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:25Z","lastTransitionTime":"2026-03-11T08:58:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:25 crc kubenswrapper[4840]: I0311 08:58:25.303175 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:25 crc kubenswrapper[4840]: I0311 08:58:25.303227 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:25 crc kubenswrapper[4840]: I0311 08:58:25.303236 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:25 crc kubenswrapper[4840]: I0311 08:58:25.303252 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:25 crc kubenswrapper[4840]: I0311 08:58:25.303262 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:25Z","lastTransitionTime":"2026-03-11T08:58:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:25 crc kubenswrapper[4840]: I0311 08:58:25.406627 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:25 crc kubenswrapper[4840]: I0311 08:58:25.406696 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:25 crc kubenswrapper[4840]: I0311 08:58:25.406710 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:25 crc kubenswrapper[4840]: I0311 08:58:25.406732 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:25 crc kubenswrapper[4840]: I0311 08:58:25.406746 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:25Z","lastTransitionTime":"2026-03-11T08:58:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:25 crc kubenswrapper[4840]: I0311 08:58:25.511346 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:25 crc kubenswrapper[4840]: I0311 08:58:25.511401 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:25 crc kubenswrapper[4840]: I0311 08:58:25.511414 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:25 crc kubenswrapper[4840]: I0311 08:58:25.511434 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:25 crc kubenswrapper[4840]: I0311 08:58:25.511447 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:25Z","lastTransitionTime":"2026-03-11T08:58:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:25 crc kubenswrapper[4840]: I0311 08:58:25.614249 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:25 crc kubenswrapper[4840]: I0311 08:58:25.614301 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:25 crc kubenswrapper[4840]: I0311 08:58:25.614311 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:25 crc kubenswrapper[4840]: I0311 08:58:25.614329 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:25 crc kubenswrapper[4840]: I0311 08:58:25.614339 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:25Z","lastTransitionTime":"2026-03-11T08:58:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:25 crc kubenswrapper[4840]: I0311 08:58:25.718117 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:25 crc kubenswrapper[4840]: I0311 08:58:25.718399 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:25 crc kubenswrapper[4840]: I0311 08:58:25.718420 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:25 crc kubenswrapper[4840]: I0311 08:58:25.718443 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:25 crc kubenswrapper[4840]: I0311 08:58:25.718458 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:25Z","lastTransitionTime":"2026-03-11T08:58:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:25 crc kubenswrapper[4840]: I0311 08:58:25.822200 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:25 crc kubenswrapper[4840]: I0311 08:58:25.822258 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:25 crc kubenswrapper[4840]: I0311 08:58:25.822272 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:25 crc kubenswrapper[4840]: I0311 08:58:25.822292 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:25 crc kubenswrapper[4840]: I0311 08:58:25.822303 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:25Z","lastTransitionTime":"2026-03-11T08:58:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:25 crc kubenswrapper[4840]: I0311 08:58:25.926099 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:25 crc kubenswrapper[4840]: I0311 08:58:25.926173 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:25 crc kubenswrapper[4840]: I0311 08:58:25.926195 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:25 crc kubenswrapper[4840]: I0311 08:58:25.926229 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:25 crc kubenswrapper[4840]: I0311 08:58:25.926255 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:25Z","lastTransitionTime":"2026-03-11T08:58:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:26 crc kubenswrapper[4840]: I0311 08:58:26.029681 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:26 crc kubenswrapper[4840]: I0311 08:58:26.029733 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:26 crc kubenswrapper[4840]: I0311 08:58:26.029742 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:26 crc kubenswrapper[4840]: I0311 08:58:26.029759 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:26 crc kubenswrapper[4840]: I0311 08:58:26.029771 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:26Z","lastTransitionTime":"2026-03-11T08:58:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:26 crc kubenswrapper[4840]: I0311 08:58:26.133190 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:26 crc kubenswrapper[4840]: I0311 08:58:26.133266 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:26 crc kubenswrapper[4840]: I0311 08:58:26.133287 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:26 crc kubenswrapper[4840]: I0311 08:58:26.133319 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:26 crc kubenswrapper[4840]: I0311 08:58:26.133339 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:26Z","lastTransitionTime":"2026-03-11T08:58:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:26 crc kubenswrapper[4840]: I0311 08:58:26.237238 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:26 crc kubenswrapper[4840]: I0311 08:58:26.237321 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:26 crc kubenswrapper[4840]: I0311 08:58:26.237339 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:26 crc kubenswrapper[4840]: I0311 08:58:26.237373 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:26 crc kubenswrapper[4840]: I0311 08:58:26.237390 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:26Z","lastTransitionTime":"2026-03-11T08:58:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:26 crc kubenswrapper[4840]: I0311 08:58:26.340130 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:26 crc kubenswrapper[4840]: I0311 08:58:26.340191 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:26 crc kubenswrapper[4840]: I0311 08:58:26.340201 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:26 crc kubenswrapper[4840]: I0311 08:58:26.340215 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:26 crc kubenswrapper[4840]: I0311 08:58:26.340224 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:26Z","lastTransitionTime":"2026-03-11T08:58:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:26 crc kubenswrapper[4840]: I0311 08:58:26.443987 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:26 crc kubenswrapper[4840]: I0311 08:58:26.444063 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:26 crc kubenswrapper[4840]: I0311 08:58:26.444089 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:26 crc kubenswrapper[4840]: I0311 08:58:26.444124 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:26 crc kubenswrapper[4840]: I0311 08:58:26.444149 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:26Z","lastTransitionTime":"2026-03-11T08:58:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:26 crc kubenswrapper[4840]: I0311 08:58:26.546809 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:26 crc kubenswrapper[4840]: I0311 08:58:26.546882 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:26 crc kubenswrapper[4840]: I0311 08:58:26.546901 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:26 crc kubenswrapper[4840]: I0311 08:58:26.546925 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:26 crc kubenswrapper[4840]: I0311 08:58:26.546944 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:26Z","lastTransitionTime":"2026-03-11T08:58:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:26 crc kubenswrapper[4840]: I0311 08:58:26.650788 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:26 crc kubenswrapper[4840]: I0311 08:58:26.650849 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:26 crc kubenswrapper[4840]: I0311 08:58:26.650870 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:26 crc kubenswrapper[4840]: I0311 08:58:26.650894 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:26 crc kubenswrapper[4840]: I0311 08:58:26.650913 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:26Z","lastTransitionTime":"2026-03-11T08:58:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:58:26 crc kubenswrapper[4840]: I0311 08:58:26.711602 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-jlzht"] Mar 11 08:58:26 crc kubenswrapper[4840]: I0311 08:58:26.712077 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-jlzht" Mar 11 08:58:26 crc kubenswrapper[4840]: I0311 08:58:26.715991 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Mar 11 08:58:26 crc kubenswrapper[4840]: I0311 08:58:26.716046 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Mar 11 08:58:26 crc kubenswrapper[4840]: I0311 08:58:26.716977 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Mar 11 08:58:26 crc kubenswrapper[4840]: I0311 08:58:26.734066 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a26aaac35b1d936b4ca0b8a96fccd108317a1456b2bac1c21eb28017ac6fd32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:26Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:26 crc kubenswrapper[4840]: I0311 08:58:26.748037 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb68ea79d13c044390345ef25093ba46a60cb10b989c5acd69c63cafc1c4631f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-11T08:58:26Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:26 crc kubenswrapper[4840]: I0311 08:58:26.749646 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjngl\" (UniqueName: \"kubernetes.io/projected/97907402-fb5a-4fb4-80ac-5b600527c547-kube-api-access-vjngl\") pod \"node-resolver-jlzht\" (UID: \"97907402-fb5a-4fb4-80ac-5b600527c547\") " pod="openshift-dns/node-resolver-jlzht" Mar 11 08:58:26 crc kubenswrapper[4840]: I0311 08:58:26.749901 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/97907402-fb5a-4fb4-80ac-5b600527c547-hosts-file\") pod \"node-resolver-jlzht\" (UID: \"97907402-fb5a-4fb4-80ac-5b600527c547\") " pod="openshift-dns/node-resolver-jlzht" Mar 11 08:58:26 crc kubenswrapper[4840]: I0311 08:58:26.753533 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:26 crc kubenswrapper[4840]: I0311 08:58:26.753619 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:26 crc kubenswrapper[4840]: I0311 08:58:26.753646 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:26 crc kubenswrapper[4840]: I0311 08:58:26.753677 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:26 crc kubenswrapper[4840]: I0311 08:58:26.753709 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:26Z","lastTransitionTime":"2026-03-11T08:58:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration 
file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:58:26 crc kubenswrapper[4840]: I0311 08:58:26.766033 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jlzht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97907402-fb5a-4fb4-80ac-5b600527c547\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:26Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjngl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jlzht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:26Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:26 crc kubenswrapper[4840]: I0311 08:58:26.782343 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c3b2bde-8421-4e22-85ab-8b651c65bc9e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ed311d75feec58f86d1d9f435c6115a463b4e7cd3003b6dff8447360271b6a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33384f01914fa6428a8c359b3de0d20963b933f5c6d47519f059e48f85c9f4c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca2279f323c0a6baf645b62c496c38d3d2ad4efc5033a0819ed1d58f4d862e10\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58277efdbfe7c94ad785e7d31bb5ba7313d04bf930896d6ded66fd44dd6239b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58277efdbfe7c94ad785e7d31bb5ba7313d04bf930896d6ded66fd44dd6239b5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-11T08:57:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW0311 08:57:44.699595 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0311 08:57:44.699747 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0311 08:57:44.700340 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2673359297/tls.crt::/tmp/serving-cert-2673359297/tls.key\\\\\\\"\\\\nI0311 08:57:44.923694 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0311 08:57:44.927612 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0311 08:57:44.927650 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0311 08:57:44.927690 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0311 08:57:44.927701 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0311 08:57:44.934515 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0311 08:57:44.934548 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0311 08:57:44.934556 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:57:44.934567 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:57:44.934574 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0311 
08:57:44.934577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0311 08:57:44.934581 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0311 08:57:44.934584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0311 08:57:44.935676 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:57:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bd25fbbac425c9ba1169b1106b9ac77a80739a003bd795033d691ee273e0d3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e27a2ab44a237582284cb5f55d2651f7b5d39c199fdb62a4a65be9921e86945c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e27a2ab44a237582284cb5f55d2651f7b5d39c199fdb62a4a65be9921e86945c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:56:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:56:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:56:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:26Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:26 crc kubenswrapper[4840]: I0311 08:58:26.798893 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a735ab91afbbc50a948e293cb4907a10212b9a77c9e7506e21d75a4de4c74c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a498e6ab267b758bd2ba69dcbd7e6af3089b6c38bbf50dea80f1304c2190f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:26Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:26 crc kubenswrapper[4840]: I0311 08:58:26.815926 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:26Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:26 crc kubenswrapper[4840]: I0311 08:58:26.829873 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:26Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:26 crc kubenswrapper[4840]: I0311 08:58:26.844104 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:26Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:26 crc kubenswrapper[4840]: I0311 08:58:26.850701 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjngl\" (UniqueName: \"kubernetes.io/projected/97907402-fb5a-4fb4-80ac-5b600527c547-kube-api-access-vjngl\") pod \"node-resolver-jlzht\" (UID: \"97907402-fb5a-4fb4-80ac-5b600527c547\") " pod="openshift-dns/node-resolver-jlzht" Mar 11 08:58:26 crc kubenswrapper[4840]: I0311 08:58:26.850812 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" 
(UniqueName: \"kubernetes.io/host-path/97907402-fb5a-4fb4-80ac-5b600527c547-hosts-file\") pod \"node-resolver-jlzht\" (UID: \"97907402-fb5a-4fb4-80ac-5b600527c547\") " pod="openshift-dns/node-resolver-jlzht" Mar 11 08:58:26 crc kubenswrapper[4840]: I0311 08:58:26.850927 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/97907402-fb5a-4fb4-80ac-5b600527c547-hosts-file\") pod \"node-resolver-jlzht\" (UID: \"97907402-fb5a-4fb4-80ac-5b600527c547\") " pod="openshift-dns/node-resolver-jlzht" Mar 11 08:58:26 crc kubenswrapper[4840]: I0311 08:58:26.856838 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:26 crc kubenswrapper[4840]: I0311 08:58:26.856889 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:26 crc kubenswrapper[4840]: I0311 08:58:26.856905 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:26 crc kubenswrapper[4840]: I0311 08:58:26.856934 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:26 crc kubenswrapper[4840]: I0311 08:58:26.856950 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:26Z","lastTransitionTime":"2026-03-11T08:58:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:26 crc kubenswrapper[4840]: I0311 08:58:26.872755 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjngl\" (UniqueName: \"kubernetes.io/projected/97907402-fb5a-4fb4-80ac-5b600527c547-kube-api-access-vjngl\") pod \"node-resolver-jlzht\" (UID: \"97907402-fb5a-4fb4-80ac-5b600527c547\") " pod="openshift-dns/node-resolver-jlzht" Mar 11 08:58:26 crc kubenswrapper[4840]: I0311 08:58:26.960452 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:26 crc kubenswrapper[4840]: I0311 08:58:26.960556 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:26 crc kubenswrapper[4840]: I0311 08:58:26.960588 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:26 crc kubenswrapper[4840]: I0311 08:58:26.960623 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:26 crc kubenswrapper[4840]: I0311 08:58:26.960650 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:26Z","lastTransitionTime":"2026-03-11T08:58:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.030622 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-jlzht" Mar 11 08:58:27 crc kubenswrapper[4840]: W0311 08:58:27.051573 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod97907402_fb5a_4fb4_80ac_5b600527c547.slice/crio-520bf48d21f6b73cab23674dc0a63351e6d954e119708e7519b62fd620da81ad WatchSource:0}: Error finding container 520bf48d21f6b73cab23674dc0a63351e6d954e119708e7519b62fd620da81ad: Status 404 returned error can't find the container with id 520bf48d21f6b73cab23674dc0a63351e6d954e119708e7519b62fd620da81ad Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.063028 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.063062 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.063029 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:58:27 crc kubenswrapper[4840]: E0311 08:58:27.063219 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 11 08:58:27 crc kubenswrapper[4840]: E0311 08:58:27.063353 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 11 08:58:27 crc kubenswrapper[4840]: E0311 08:58:27.063497 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.068937 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.068999 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.069013 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.069034 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.069052 4840 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:27Z","lastTransitionTime":"2026-03-11T08:58:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.092556 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-xn47g"] Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.095206 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-brtht"] Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.096062 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-xn47g" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.096439 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-vcb9n"] Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.097193 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-vcb9n" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.097983 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-brtht" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.105929 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.106015 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.106357 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.106404 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.106415 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.106619 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.106787 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.106896 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.107027 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.107231 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.107364 4840 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.107766 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.124088 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:27Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.146674 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:27Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.153695 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/0c1678fd-7741-474b-9c8e-3008d3570921-host-var-lib-kubelet\") pod \"multus-vcb9n\" (UID: \"0c1678fd-7741-474b-9c8e-3008d3570921\") " pod="openshift-multus/multus-vcb9n" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.153741 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/0c1678fd-7741-474b-9c8e-3008d3570921-multus-socket-dir-parent\") pod \"multus-vcb9n\" (UID: \"0c1678fd-7741-474b-9c8e-3008d3570921\") " pod="openshift-multus/multus-vcb9n" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.153767 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/0c1678fd-7741-474b-9c8e-3008d3570921-multus-conf-dir\") pod \"multus-vcb9n\" (UID: \"0c1678fd-7741-474b-9c8e-3008d3570921\") " pod="openshift-multus/multus-vcb9n" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.153791 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d-rootfs\") pod \"machine-config-daemon-brtht\" (UID: \"8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d\") " pod="openshift-machine-config-operator/machine-config-daemon-brtht" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.153815 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0c1678fd-7741-474b-9c8e-3008d3570921-host-run-netns\") pod \"multus-vcb9n\" (UID: \"0c1678fd-7741-474b-9c8e-3008d3570921\") " pod="openshift-multus/multus-vcb9n" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.153838 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/0c1678fd-7741-474b-9c8e-3008d3570921-cni-binary-copy\") pod \"multus-vcb9n\" (UID: \"0c1678fd-7741-474b-9c8e-3008d3570921\") " pod="openshift-multus/multus-vcb9n" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.154035 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" 
(UniqueName: \"kubernetes.io/host-path/0c1678fd-7741-474b-9c8e-3008d3570921-os-release\") pod \"multus-vcb9n\" (UID: \"0c1678fd-7741-474b-9c8e-3008d3570921\") " pod="openshift-multus/multus-vcb9n" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.154094 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46dj7\" (UniqueName: \"kubernetes.io/projected/0c1678fd-7741-474b-9c8e-3008d3570921-kube-api-access-46dj7\") pod \"multus-vcb9n\" (UID: \"0c1678fd-7741-474b-9c8e-3008d3570921\") " pod="openshift-multus/multus-vcb9n" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.154130 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfcb2\" (UniqueName: \"kubernetes.io/projected/d6b1fe1a-6473-41f8-a45f-aaaa148c1412-kube-api-access-cfcb2\") pod \"multus-additional-cni-plugins-xn47g\" (UID: \"d6b1fe1a-6473-41f8-a45f-aaaa148c1412\") " pod="openshift-multus/multus-additional-cni-plugins-xn47g" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.154169 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/0c1678fd-7741-474b-9c8e-3008d3570921-host-run-k8s-cni-cncf-io\") pod \"multus-vcb9n\" (UID: \"0c1678fd-7741-474b-9c8e-3008d3570921\") " pod="openshift-multus/multus-vcb9n" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.154223 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/0c1678fd-7741-474b-9c8e-3008d3570921-multus-daemon-config\") pod \"multus-vcb9n\" (UID: \"0c1678fd-7741-474b-9c8e-3008d3570921\") " pod="openshift-multus/multus-vcb9n" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.154257 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d6b1fe1a-6473-41f8-a45f-aaaa148c1412-tuning-conf-dir\") pod \"multus-additional-cni-plugins-xn47g\" (UID: \"d6b1fe1a-6473-41f8-a45f-aaaa148c1412\") " pod="openshift-multus/multus-additional-cni-plugins-xn47g" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.154290 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0c1678fd-7741-474b-9c8e-3008d3570921-multus-cni-dir\") pod \"multus-vcb9n\" (UID: \"0c1678fd-7741-474b-9c8e-3008d3570921\") " pod="openshift-multus/multus-vcb9n" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.154366 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/d6b1fe1a-6473-41f8-a45f-aaaa148c1412-cni-binary-copy\") pod \"multus-additional-cni-plugins-xn47g\" (UID: \"d6b1fe1a-6473-41f8-a45f-aaaa148c1412\") " pod="openshift-multus/multus-additional-cni-plugins-xn47g" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.154418 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0c1678fd-7741-474b-9c8e-3008d3570921-etc-kubernetes\") pod \"multus-vcb9n\" (UID: \"0c1678fd-7741-474b-9c8e-3008d3570921\") " pod="openshift-multus/multus-vcb9n" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.154439 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d-proxy-tls\") pod \"machine-config-daemon-brtht\" (UID: \"8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d\") " pod="openshift-machine-config-operator/machine-config-daemon-brtht" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.154457 4840 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8c8dj\" (UniqueName: \"kubernetes.io/projected/8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d-kube-api-access-8c8dj\") pod \"machine-config-daemon-brtht\" (UID: \"8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d\") " pod="openshift-machine-config-operator/machine-config-daemon-brtht" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.154498 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0c1678fd-7741-474b-9c8e-3008d3570921-system-cni-dir\") pod \"multus-vcb9n\" (UID: \"0c1678fd-7741-474b-9c8e-3008d3570921\") " pod="openshift-multus/multus-vcb9n" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.154518 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/0c1678fd-7741-474b-9c8e-3008d3570921-cnibin\") pod \"multus-vcb9n\" (UID: \"0c1678fd-7741-474b-9c8e-3008d3570921\") " pod="openshift-multus/multus-vcb9n" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.154545 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/0c1678fd-7741-474b-9c8e-3008d3570921-hostroot\") pod \"multus-vcb9n\" (UID: \"0c1678fd-7741-474b-9c8e-3008d3570921\") " pod="openshift-multus/multus-vcb9n" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.154564 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/d6b1fe1a-6473-41f8-a45f-aaaa148c1412-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-xn47g\" (UID: \"d6b1fe1a-6473-41f8-a45f-aaaa148c1412\") " pod="openshift-multus/multus-additional-cni-plugins-xn47g" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.154640 4840 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0c1678fd-7741-474b-9c8e-3008d3570921-host-var-lib-cni-bin\") pod \"multus-vcb9n\" (UID: \"0c1678fd-7741-474b-9c8e-3008d3570921\") " pod="openshift-multus/multus-vcb9n" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.154666 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/0c1678fd-7741-474b-9c8e-3008d3570921-host-var-lib-cni-multus\") pod \"multus-vcb9n\" (UID: \"0c1678fd-7741-474b-9c8e-3008d3570921\") " pod="openshift-multus/multus-vcb9n" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.154689 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/d6b1fe1a-6473-41f8-a45f-aaaa148c1412-os-release\") pod \"multus-additional-cni-plugins-xn47g\" (UID: \"d6b1fe1a-6473-41f8-a45f-aaaa148c1412\") " pod="openshift-multus/multus-additional-cni-plugins-xn47g" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.154711 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d-mcd-auth-proxy-config\") pod \"machine-config-daemon-brtht\" (UID: \"8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d\") " pod="openshift-machine-config-operator/machine-config-daemon-brtht" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.154735 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d6b1fe1a-6473-41f8-a45f-aaaa148c1412-system-cni-dir\") pod \"multus-additional-cni-plugins-xn47g\" (UID: \"d6b1fe1a-6473-41f8-a45f-aaaa148c1412\") " 
pod="openshift-multus/multus-additional-cni-plugins-xn47g" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.154799 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/0c1678fd-7741-474b-9c8e-3008d3570921-host-run-multus-certs\") pod \"multus-vcb9n\" (UID: \"0c1678fd-7741-474b-9c8e-3008d3570921\") " pod="openshift-multus/multus-vcb9n" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.154827 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/d6b1fe1a-6473-41f8-a45f-aaaa148c1412-cnibin\") pod \"multus-additional-cni-plugins-xn47g\" (UID: \"d6b1fe1a-6473-41f8-a45f-aaaa148c1412\") " pod="openshift-multus/multus-additional-cni-plugins-xn47g" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.169033 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a735ab91afbbc50a948e293cb4907a10212b9a77c9e7506e21d75a4de4c74c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a498e6ab267b758bd2ba69dcbd7e6af3089b6c38bbf50dea80f1304c2190f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:27Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.173509 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.173556 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.173571 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.173591 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.173607 4840 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:27Z","lastTransitionTime":"2026-03-11T08:58:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.187199 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:27Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.203293 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb68ea79d13c044390345ef25093ba46a60cb10b989c5acd69c63cafc1c4631f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-11T08:58:27Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.223060 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jlzht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97907402-fb5a-4fb4-80ac-5b600527c547\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:26Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjngl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jlzht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:27Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.245658 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xn47g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6b1fe1a-6473-41f8-a45f-aaaa148c1412\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xn47g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:27Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.255414 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/d6b1fe1a-6473-41f8-a45f-aaaa148c1412-os-release\") pod \"multus-additional-cni-plugins-xn47g\" (UID: \"d6b1fe1a-6473-41f8-a45f-aaaa148c1412\") " pod="openshift-multus/multus-additional-cni-plugins-xn47g" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.255486 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d6b1fe1a-6473-41f8-a45f-aaaa148c1412-system-cni-dir\") pod \"multus-additional-cni-plugins-xn47g\" (UID: \"d6b1fe1a-6473-41f8-a45f-aaaa148c1412\") " pod="openshift-multus/multus-additional-cni-plugins-xn47g" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.255509 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d-mcd-auth-proxy-config\") pod \"machine-config-daemon-brtht\" (UID: \"8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d\") " pod="openshift-machine-config-operator/machine-config-daemon-brtht" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.255532 4840 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/0c1678fd-7741-474b-9c8e-3008d3570921-host-run-multus-certs\") pod \"multus-vcb9n\" (UID: \"0c1678fd-7741-474b-9c8e-3008d3570921\") " pod="openshift-multus/multus-vcb9n" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.255555 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/d6b1fe1a-6473-41f8-a45f-aaaa148c1412-cnibin\") pod \"multus-additional-cni-plugins-xn47g\" (UID: \"d6b1fe1a-6473-41f8-a45f-aaaa148c1412\") " pod="openshift-multus/multus-additional-cni-plugins-xn47g" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.255574 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/0c1678fd-7741-474b-9c8e-3008d3570921-multus-socket-dir-parent\") pod \"multus-vcb9n\" (UID: \"0c1678fd-7741-474b-9c8e-3008d3570921\") " pod="openshift-multus/multus-vcb9n" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.255592 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/0c1678fd-7741-474b-9c8e-3008d3570921-host-var-lib-kubelet\") pod \"multus-vcb9n\" (UID: \"0c1678fd-7741-474b-9c8e-3008d3570921\") " pod="openshift-multus/multus-vcb9n" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.255614 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0c1678fd-7741-474b-9c8e-3008d3570921-host-run-netns\") pod \"multus-vcb9n\" (UID: \"0c1678fd-7741-474b-9c8e-3008d3570921\") " pod="openshift-multus/multus-vcb9n" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.255634 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: 
\"kubernetes.io/host-path/0c1678fd-7741-474b-9c8e-3008d3570921-multus-conf-dir\") pod \"multus-vcb9n\" (UID: \"0c1678fd-7741-474b-9c8e-3008d3570921\") " pod="openshift-multus/multus-vcb9n" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.255652 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d-rootfs\") pod \"machine-config-daemon-brtht\" (UID: \"8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d\") " pod="openshift-machine-config-operator/machine-config-daemon-brtht" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.255673 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/0c1678fd-7741-474b-9c8e-3008d3570921-cni-binary-copy\") pod \"multus-vcb9n\" (UID: \"0c1678fd-7741-474b-9c8e-3008d3570921\") " pod="openshift-multus/multus-vcb9n" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.255695 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cfcb2\" (UniqueName: \"kubernetes.io/projected/d6b1fe1a-6473-41f8-a45f-aaaa148c1412-kube-api-access-cfcb2\") pod \"multus-additional-cni-plugins-xn47g\" (UID: \"d6b1fe1a-6473-41f8-a45f-aaaa148c1412\") " pod="openshift-multus/multus-additional-cni-plugins-xn47g" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.255741 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/0c1678fd-7741-474b-9c8e-3008d3570921-os-release\") pod \"multus-vcb9n\" (UID: \"0c1678fd-7741-474b-9c8e-3008d3570921\") " pod="openshift-multus/multus-vcb9n" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.255763 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46dj7\" (UniqueName: 
\"kubernetes.io/projected/0c1678fd-7741-474b-9c8e-3008d3570921-kube-api-access-46dj7\") pod \"multus-vcb9n\" (UID: \"0c1678fd-7741-474b-9c8e-3008d3570921\") " pod="openshift-multus/multus-vcb9n" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.255783 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0c1678fd-7741-474b-9c8e-3008d3570921-multus-cni-dir\") pod \"multus-vcb9n\" (UID: \"0c1678fd-7741-474b-9c8e-3008d3570921\") " pod="openshift-multus/multus-vcb9n" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.255801 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/0c1678fd-7741-474b-9c8e-3008d3570921-host-run-k8s-cni-cncf-io\") pod \"multus-vcb9n\" (UID: \"0c1678fd-7741-474b-9c8e-3008d3570921\") " pod="openshift-multus/multus-vcb9n" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.255825 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/0c1678fd-7741-474b-9c8e-3008d3570921-multus-daemon-config\") pod \"multus-vcb9n\" (UID: \"0c1678fd-7741-474b-9c8e-3008d3570921\") " pod="openshift-multus/multus-vcb9n" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.255848 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d6b1fe1a-6473-41f8-a45f-aaaa148c1412-tuning-conf-dir\") pod \"multus-additional-cni-plugins-xn47g\" (UID: \"d6b1fe1a-6473-41f8-a45f-aaaa148c1412\") " pod="openshift-multus/multus-additional-cni-plugins-xn47g" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.255868 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/d6b1fe1a-6473-41f8-a45f-aaaa148c1412-cni-binary-copy\") pod 
\"multus-additional-cni-plugins-xn47g\" (UID: \"d6b1fe1a-6473-41f8-a45f-aaaa148c1412\") " pod="openshift-multus/multus-additional-cni-plugins-xn47g" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.255910 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0c1678fd-7741-474b-9c8e-3008d3570921-etc-kubernetes\") pod \"multus-vcb9n\" (UID: \"0c1678fd-7741-474b-9c8e-3008d3570921\") " pod="openshift-multus/multus-vcb9n" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.255931 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d-proxy-tls\") pod \"machine-config-daemon-brtht\" (UID: \"8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d\") " pod="openshift-machine-config-operator/machine-config-daemon-brtht" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.255950 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8c8dj\" (UniqueName: \"kubernetes.io/projected/8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d-kube-api-access-8c8dj\") pod \"machine-config-daemon-brtht\" (UID: \"8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d\") " pod="openshift-machine-config-operator/machine-config-daemon-brtht" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.255973 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0c1678fd-7741-474b-9c8e-3008d3570921-system-cni-dir\") pod \"multus-vcb9n\" (UID: \"0c1678fd-7741-474b-9c8e-3008d3570921\") " pod="openshift-multus/multus-vcb9n" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.255992 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/0c1678fd-7741-474b-9c8e-3008d3570921-cnibin\") pod \"multus-vcb9n\" (UID: 
\"0c1678fd-7741-474b-9c8e-3008d3570921\") " pod="openshift-multus/multus-vcb9n" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.256011 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/0c1678fd-7741-474b-9c8e-3008d3570921-hostroot\") pod \"multus-vcb9n\" (UID: \"0c1678fd-7741-474b-9c8e-3008d3570921\") " pod="openshift-multus/multus-vcb9n" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.256031 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/d6b1fe1a-6473-41f8-a45f-aaaa148c1412-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-xn47g\" (UID: \"d6b1fe1a-6473-41f8-a45f-aaaa148c1412\") " pod="openshift-multus/multus-additional-cni-plugins-xn47g" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.256063 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0c1678fd-7741-474b-9c8e-3008d3570921-host-var-lib-cni-bin\") pod \"multus-vcb9n\" (UID: \"0c1678fd-7741-474b-9c8e-3008d3570921\") " pod="openshift-multus/multus-vcb9n" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.256083 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/0c1678fd-7741-474b-9c8e-3008d3570921-host-var-lib-cni-multus\") pod \"multus-vcb9n\" (UID: \"0c1678fd-7741-474b-9c8e-3008d3570921\") " pod="openshift-multus/multus-vcb9n" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.256158 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/0c1678fd-7741-474b-9c8e-3008d3570921-host-var-lib-cni-multus\") pod \"multus-vcb9n\" (UID: \"0c1678fd-7741-474b-9c8e-3008d3570921\") " pod="openshift-multus/multus-vcb9n" Mar 11 
08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.256243 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/d6b1fe1a-6473-41f8-a45f-aaaa148c1412-os-release\") pod \"multus-additional-cni-plugins-xn47g\" (UID: \"d6b1fe1a-6473-41f8-a45f-aaaa148c1412\") " pod="openshift-multus/multus-additional-cni-plugins-xn47g" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.256274 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d6b1fe1a-6473-41f8-a45f-aaaa148c1412-system-cni-dir\") pod \"multus-additional-cni-plugins-xn47g\" (UID: \"d6b1fe1a-6473-41f8-a45f-aaaa148c1412\") " pod="openshift-multus/multus-additional-cni-plugins-xn47g" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.257113 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d-mcd-auth-proxy-config\") pod \"machine-config-daemon-brtht\" (UID: \"8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d\") " pod="openshift-machine-config-operator/machine-config-daemon-brtht" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.257173 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/0c1678fd-7741-474b-9c8e-3008d3570921-host-run-multus-certs\") pod \"multus-vcb9n\" (UID: \"0c1678fd-7741-474b-9c8e-3008d3570921\") " pod="openshift-multus/multus-vcb9n" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.257204 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/d6b1fe1a-6473-41f8-a45f-aaaa148c1412-cnibin\") pod \"multus-additional-cni-plugins-xn47g\" (UID: \"d6b1fe1a-6473-41f8-a45f-aaaa148c1412\") " pod="openshift-multus/multus-additional-cni-plugins-xn47g" Mar 11 08:58:27 crc 
kubenswrapper[4840]: I0311 08:58:27.257248 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/0c1678fd-7741-474b-9c8e-3008d3570921-multus-socket-dir-parent\") pod \"multus-vcb9n\" (UID: \"0c1678fd-7741-474b-9c8e-3008d3570921\") " pod="openshift-multus/multus-vcb9n" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.257274 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/0c1678fd-7741-474b-9c8e-3008d3570921-host-var-lib-kubelet\") pod \"multus-vcb9n\" (UID: \"0c1678fd-7741-474b-9c8e-3008d3570921\") " pod="openshift-multus/multus-vcb9n" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.257306 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0c1678fd-7741-474b-9c8e-3008d3570921-host-run-netns\") pod \"multus-vcb9n\" (UID: \"0c1678fd-7741-474b-9c8e-3008d3570921\") " pod="openshift-multus/multus-vcb9n" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.257345 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/0c1678fd-7741-474b-9c8e-3008d3570921-multus-conf-dir\") pod \"multus-vcb9n\" (UID: \"0c1678fd-7741-474b-9c8e-3008d3570921\") " pod="openshift-multus/multus-vcb9n" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.257377 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d-rootfs\") pod \"machine-config-daemon-brtht\" (UID: \"8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d\") " pod="openshift-machine-config-operator/machine-config-daemon-brtht" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.258021 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/0c1678fd-7741-474b-9c8e-3008d3570921-cni-binary-copy\") pod \"multus-vcb9n\" (UID: \"0c1678fd-7741-474b-9c8e-3008d3570921\") " pod="openshift-multus/multus-vcb9n" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.258386 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/0c1678fd-7741-474b-9c8e-3008d3570921-os-release\") pod \"multus-vcb9n\" (UID: \"0c1678fd-7741-474b-9c8e-3008d3570921\") " pod="openshift-multus/multus-vcb9n" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.258617 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0c1678fd-7741-474b-9c8e-3008d3570921-multus-cni-dir\") pod \"multus-vcb9n\" (UID: \"0c1678fd-7741-474b-9c8e-3008d3570921\") " pod="openshift-multus/multus-vcb9n" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.258651 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/0c1678fd-7741-474b-9c8e-3008d3570921-host-run-k8s-cni-cncf-io\") pod \"multus-vcb9n\" (UID: \"0c1678fd-7741-474b-9c8e-3008d3570921\") " pod="openshift-multus/multus-vcb9n" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.259217 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/0c1678fd-7741-474b-9c8e-3008d3570921-multus-daemon-config\") pod \"multus-vcb9n\" (UID: \"0c1678fd-7741-474b-9c8e-3008d3570921\") " pod="openshift-multus/multus-vcb9n" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.259360 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/0c1678fd-7741-474b-9c8e-3008d3570921-hostroot\") pod \"multus-vcb9n\" (UID: \"0c1678fd-7741-474b-9c8e-3008d3570921\") " pod="openshift-multus/multus-vcb9n" Mar 11 08:58:27 
crc kubenswrapper[4840]: I0311 08:58:27.259452 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0c1678fd-7741-474b-9c8e-3008d3570921-host-var-lib-cni-bin\") pod \"multus-vcb9n\" (UID: \"0c1678fd-7741-474b-9c8e-3008d3570921\") " pod="openshift-multus/multus-vcb9n" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.259572 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/0c1678fd-7741-474b-9c8e-3008d3570921-cnibin\") pod \"multus-vcb9n\" (UID: \"0c1678fd-7741-474b-9c8e-3008d3570921\") " pod="openshift-multus/multus-vcb9n" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.259626 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/0c1678fd-7741-474b-9c8e-3008d3570921-system-cni-dir\") pod \"multus-vcb9n\" (UID: \"0c1678fd-7741-474b-9c8e-3008d3570921\") " pod="openshift-multus/multus-vcb9n" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.260023 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/d6b1fe1a-6473-41f8-a45f-aaaa148c1412-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-xn47g\" (UID: \"d6b1fe1a-6473-41f8-a45f-aaaa148c1412\") " pod="openshift-multus/multus-additional-cni-plugins-xn47g" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.260510 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0c1678fd-7741-474b-9c8e-3008d3570921-etc-kubernetes\") pod \"multus-vcb9n\" (UID: \"0c1678fd-7741-474b-9c8e-3008d3570921\") " pod="openshift-multus/multus-vcb9n" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.261451 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/d6b1fe1a-6473-41f8-a45f-aaaa148c1412-cni-binary-copy\") pod \"multus-additional-cni-plugins-xn47g\" (UID: \"d6b1fe1a-6473-41f8-a45f-aaaa148c1412\") " pod="openshift-multus/multus-additional-cni-plugins-xn47g" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.264140 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d6b1fe1a-6473-41f8-a45f-aaaa148c1412-tuning-conf-dir\") pod \"multus-additional-cni-plugins-xn47g\" (UID: \"d6b1fe1a-6473-41f8-a45f-aaaa148c1412\") " pod="openshift-multus/multus-additional-cni-plugins-xn47g" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.264920 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d-proxy-tls\") pod \"machine-config-daemon-brtht\" (UID: \"8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d\") " pod="openshift-machine-config-operator/machine-config-daemon-brtht" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.265973 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c3b2bde-8421-4e22-85ab-8b651c65bc9e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ed311d75feec58f86d1d9f435c6115a463b4e7cd3003b6dff8447360271b6a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33384f01914fa6428a8c359b3de0d20963b933f5c6d47519f059e48f85c9f4c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca2279f323c0a6baf645b62c496c38d3d2ad4efc5033a0819ed1d58f4d862e10\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58277efdbfe7c94ad785e7d31bb5ba7313d04bf930896d6ded66fd44dd6239b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58277efdbfe7c94ad785e7d31bb5ba7313d04bf930896d6ded66fd44dd6239b5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-11T08:57:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW0311 08:57:44.699595 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0311 08:57:44.699747 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0311 08:57:44.700340 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2673359297/tls.crt::/tmp/serving-cert-2673359297/tls.key\\\\\\\"\\\\nI0311 08:57:44.923694 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0311 08:57:44.927612 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0311 08:57:44.927650 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0311 08:57:44.927690 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0311 08:57:44.927701 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0311 08:57:44.934515 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0311 08:57:44.934548 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0311 08:57:44.934556 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:57:44.934567 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:57:44.934574 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0311 08:57:44.934577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0311 08:57:44.934581 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0311 08:57:44.934584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0311 08:57:44.935676 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:57:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bd25fbbac425c9ba1169b1106b9ac77a80739a003bd795033d691ee273e0d3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e27a2ab44a237582284cb5f55d2651f7b5d39c199fdb62a4a65be9921e86945c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e27
a2ab44a237582284cb5f55d2651f7b5d39c199fdb62a4a65be9921e86945c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:56:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:56:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:56:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:27Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.276869 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.276913 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.276923 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.276942 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.276955 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:27Z","lastTransitionTime":"2026-03-11T08:58:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.283116 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8c8dj\" (UniqueName: \"kubernetes.io/projected/8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d-kube-api-access-8c8dj\") pod \"machine-config-daemon-brtht\" (UID: \"8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d\") " pod="openshift-machine-config-operator/machine-config-daemon-brtht" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.283615 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a26aaac35b1d936b4ca0b8a96fccd108317a1456b2bac1c21eb28017ac6fd32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"
mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:27Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.285674 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46dj7\" (UniqueName: \"kubernetes.io/projected/0c1678fd-7741-474b-9c8e-3008d3570921-kube-api-access-46dj7\") pod \"multus-vcb9n\" (UID: \"0c1678fd-7741-474b-9c8e-3008d3570921\") " pod="openshift-multus/multus-vcb9n" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.288196 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cfcb2\" (UniqueName: \"kubernetes.io/projected/d6b1fe1a-6473-41f8-a45f-aaaa148c1412-kube-api-access-cfcb2\") pod \"multus-additional-cni-plugins-xn47g\" (UID: \"d6b1fe1a-6473-41f8-a45f-aaaa148c1412\") " pod="openshift-multus/multus-additional-cni-plugins-xn47g" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.299315 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:27Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.315613 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a735ab91afbbc50a948e293cb4907a10212b9a77c9e7506e21d75a4de4c74c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a498e6ab267b758bd2ba69dcbd7e6af3089b6c38bbf50dea80f1304c2190f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:27Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.330871 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c3b2bde-8421-4e22-85ab-8b651c65bc9e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ed311d75feec58f86d1d9f435c6115a463b4e7cd3003b6dff8447360271b6a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33384f01914fa6428a8c359b3de0d20963b933f5c6d47519f059e48f85c9f4c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca2279f323c0a6baf645b62c496c38d3d2ad4efc5033a0819ed1d58f4d862e10\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58277efdbfe7c94ad785e7d31bb5ba7313d04bf930896d6ded66fd44dd6239b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58277efdbfe7c94ad785e7d31bb5ba7313d04bf930896d6ded66fd44dd6239b5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-11T08:57:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW0311 08:57:44.699595 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0311 08:57:44.699747 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0311 08:57:44.700340 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2673359297/tls.crt::/tmp/serving-cert-2673359297/tls.key\\\\\\\"\\\\nI0311 08:57:44.923694 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0311 08:57:44.927612 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0311 08:57:44.927650 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0311 08:57:44.927690 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0311 08:57:44.927701 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0311 08:57:44.934515 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0311 08:57:44.934548 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0311 08:57:44.934556 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:57:44.934567 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:57:44.934574 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0311 08:57:44.934577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0311 08:57:44.934581 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0311 08:57:44.934584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0311 08:57:44.935676 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:57:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bd25fbbac425c9ba1169b1106b9ac77a80739a003bd795033d691ee273e0d3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e27a2ab44a237582284cb5f55d2651f7b5d39c199fdb62a4a65be9921e86945c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e27
a2ab44a237582284cb5f55d2651f7b5d39c199fdb62a4a65be9921e86945c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:56:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:56:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:56:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:27Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.345014 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a26aaac35b1d936b4ca0b8a96fccd108317a1456b2bac1c21eb28017ac6fd32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:27Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.356138 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jlzht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97907402-fb5a-4fb4-80ac-5b600527c547\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:26Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjngl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jlzht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:27Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.374107 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xn47g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6b1fe1a-6473-41f8-a45f-aaaa148c1412\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xn47g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:27Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.379251 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.379298 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.379311 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.379332 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.379343 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:27Z","lastTransitionTime":"2026-03-11T08:58:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.389205 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vcb9n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c1678fd-7741-474b-9c8e-3008d3570921\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-46dj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vcb9n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:27Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.401953 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-brtht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8c8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8c8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2026-03-11T08:58:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-brtht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:27Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.418116 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:27Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.423939 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-xn47g" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.432379 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-vcb9n" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.437032 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:27Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.444851 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-brtht" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.453015 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb68ea79d13c044390345ef25093ba46a60cb10b989c5acd69c63cafc1c4631f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}
]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:27Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.477809 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-7c2zl"] Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.484727 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.488240 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.488290 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.488307 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.488329 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.488346 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:27Z","lastTransitionTime":"2026-03-11T08:58:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.488662 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.491372 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.491400 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.491485 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.491520 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.491639 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.491813 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.504304 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-vcb9n" event={"ID":"0c1678fd-7741-474b-9c8e-3008d3570921","Type":"ContainerStarted","Data":"7bc824ca567b4d93f37b0e405d96372f46749627ba263147d236f82b1d75d1ed"} Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.505005 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:27Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.507662 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-jlzht" event={"ID":"97907402-fb5a-4fb4-80ac-5b600527c547","Type":"ContainerStarted","Data":"2004ebf76b86d594e220fd1f85f945d24094ff26400ba17e286e95b2e3a8d7a0"} Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.507696 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-jlzht" event={"ID":"97907402-fb5a-4fb4-80ac-5b600527c547","Type":"ContainerStarted","Data":"520bf48d21f6b73cab23674dc0a63351e6d954e119708e7519b62fd620da81ad"} Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.508920 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xn47g" event={"ID":"d6b1fe1a-6473-41f8-a45f-aaaa148c1412","Type":"ContainerStarted","Data":"cb32cff4cdb3635dce9d431f1348a85363f9cc5b68f2db43b2754376575efa1c"} Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.509935 4840 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-machine-config-operator/machine-config-daemon-brtht" event={"ID":"8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d","Type":"ContainerStarted","Data":"677c83c029db98a8b780cb009d7cf69bf0965a3c00abdbccea451b32a2f4ca20"} Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.518658 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a735ab91afbbc50a948e293cb4907a10212b9a77c9e7506e21d75a4de4c74c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-ap
i-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a498e6ab267b758bd2ba69dcbd7e6af3089b6c38bbf50dea80f1304c2190f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:27Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.534519 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vcb9n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c1678fd-7741-474b-9c8e-3008d3570921\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-46dj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vcb9n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:27Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.545675 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-brtht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8c8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8c8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2026-03-11T08:58:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-brtht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:27Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.559643 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c3b2bde-8421-4e22-85ab-8b651c65bc9e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ed311d75feec58f86d1d9f435c6115a463b4e7cd3003b6dff8447360271b6a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33384f01914fa6428a8c359b3de0d20963b933f5c6d47519f059e48f85c9f4c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ca2279f323c0a6baf645b62c496c38d3d2ad4efc5033a0819ed1d58f4d862e10\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58277efdbfe7c94ad785e7d31bb5ba7313d04bf930896d6ded66fd44dd6239b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58277efdbfe7c94ad785e7d31bb5ba7313d04bf930896d6ded66fd44dd6239b5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-11T08:57:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW0311 08:57:44.699595 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0311 08:57:44.699747 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0311 08:57:44.700340 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2673359297/tls.crt::/tmp/serving-cert-2673359297/tls.key\\\\\\\"\\\\nI0311 08:57:44.923694 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0311 08:57:44.927612 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0311 08:57:44.927650 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0311 08:57:44.927690 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0311 08:57:44.927701 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0311 08:57:44.934515 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0311 08:57:44.934548 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0311 08:57:44.934556 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:57:44.934567 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:57:44.934574 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0311 08:57:44.934577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0311 08:57:44.934581 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0311 08:57:44.934584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0311 08:57:44.935676 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:57:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bd25fbbac425c9ba1169b1106b9ac77a80739a003bd795033d691ee273e0d3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e27a2ab44a237582284cb5f55d2651f7b5d39c199fdb62a4a65be9921e86945c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e27a2ab44a237582284cb5f55d2651f7b5d39c199fdb62a4a65be9921e86945c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:56:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:56:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:56:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:27Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.560094 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/935336e2-294b-4982-83f9-718806d14e5c-host-run-netns\") pod \"ovnkube-node-7c2zl\" (UID: \"935336e2-294b-4982-83f9-718806d14e5c\") " pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.560155 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/935336e2-294b-4982-83f9-718806d14e5c-host-run-ovn-kubernetes\") pod \"ovnkube-node-7c2zl\" (UID: \"935336e2-294b-4982-83f9-718806d14e5c\") " pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.560184 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/935336e2-294b-4982-83f9-718806d14e5c-systemd-units\") pod \"ovnkube-node-7c2zl\" (UID: \"935336e2-294b-4982-83f9-718806d14e5c\") " pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.560277 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: 
\"kubernetes.io/host-path/935336e2-294b-4982-83f9-718806d14e5c-log-socket\") pod \"ovnkube-node-7c2zl\" (UID: \"935336e2-294b-4982-83f9-718806d14e5c\") " pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.560302 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/935336e2-294b-4982-83f9-718806d14e5c-host-kubelet\") pod \"ovnkube-node-7c2zl\" (UID: \"935336e2-294b-4982-83f9-718806d14e5c\") " pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.560323 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/935336e2-294b-4982-83f9-718806d14e5c-run-systemd\") pod \"ovnkube-node-7c2zl\" (UID: \"935336e2-294b-4982-83f9-718806d14e5c\") " pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.560343 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/935336e2-294b-4982-83f9-718806d14e5c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-7c2zl\" (UID: \"935336e2-294b-4982-83f9-718806d14e5c\") " pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.560636 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/935336e2-294b-4982-83f9-718806d14e5c-var-lib-openvswitch\") pod \"ovnkube-node-7c2zl\" (UID: \"935336e2-294b-4982-83f9-718806d14e5c\") " pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.560682 4840 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/935336e2-294b-4982-83f9-718806d14e5c-host-cni-bin\") pod \"ovnkube-node-7c2zl\" (UID: \"935336e2-294b-4982-83f9-718806d14e5c\") " pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.560721 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/935336e2-294b-4982-83f9-718806d14e5c-host-cni-netd\") pod \"ovnkube-node-7c2zl\" (UID: \"935336e2-294b-4982-83f9-718806d14e5c\") " pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.560795 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/935336e2-294b-4982-83f9-718806d14e5c-host-slash\") pod \"ovnkube-node-7c2zl\" (UID: \"935336e2-294b-4982-83f9-718806d14e5c\") " pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.560823 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/935336e2-294b-4982-83f9-718806d14e5c-etc-openvswitch\") pod \"ovnkube-node-7c2zl\" (UID: \"935336e2-294b-4982-83f9-718806d14e5c\") " pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.560844 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/935336e2-294b-4982-83f9-718806d14e5c-node-log\") pod \"ovnkube-node-7c2zl\" (UID: \"935336e2-294b-4982-83f9-718806d14e5c\") " pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.560884 4840 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/935336e2-294b-4982-83f9-718806d14e5c-ovnkube-script-lib\") pod \"ovnkube-node-7c2zl\" (UID: \"935336e2-294b-4982-83f9-718806d14e5c\") " pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.560929 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/935336e2-294b-4982-83f9-718806d14e5c-run-openvswitch\") pod \"ovnkube-node-7c2zl\" (UID: \"935336e2-294b-4982-83f9-718806d14e5c\") " pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.561241 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/935336e2-294b-4982-83f9-718806d14e5c-run-ovn\") pod \"ovnkube-node-7c2zl\" (UID: \"935336e2-294b-4982-83f9-718806d14e5c\") " pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.561841 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/935336e2-294b-4982-83f9-718806d14e5c-env-overrides\") pod \"ovnkube-node-7c2zl\" (UID: \"935336e2-294b-4982-83f9-718806d14e5c\") " pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.561910 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/935336e2-294b-4982-83f9-718806d14e5c-ovnkube-config\") pod \"ovnkube-node-7c2zl\" (UID: \"935336e2-294b-4982-83f9-718806d14e5c\") " pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.561944 4840 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/935336e2-294b-4982-83f9-718806d14e5c-ovn-node-metrics-cert\") pod \"ovnkube-node-7c2zl\" (UID: \"935336e2-294b-4982-83f9-718806d14e5c\") " pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.561964 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrc2z\" (UniqueName: \"kubernetes.io/projected/935336e2-294b-4982-83f9-718806d14e5c-kube-api-access-lrc2z\") pod \"ovnkube-node-7c2zl\" (UID: \"935336e2-294b-4982-83f9-718806d14e5c\") " pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.575706 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a26aaac35b1d936b4ca0b8a96fccd108317a1456b2bac1c21eb28017ac6fd32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd890
9e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:27Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.590921 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jlzht" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97907402-fb5a-4fb4-80ac-5b600527c547\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:26Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjngl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jlzht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:27Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.591417 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.591454 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.591467 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.591501 4840 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.591515 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:27Z","lastTransitionTime":"2026-03-11T08:58:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.607292 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xn47g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6b1fe1a-6473-41f8-a45f-aaaa148c1412\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{
\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xn47g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:27Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.629048 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"935336e2-294b-4982-83f9-718806d14e5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7c2zl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:27Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.646288 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:27Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.661340 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:27Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.662659 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/935336e2-294b-4982-83f9-718806d14e5c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-7c2zl\" (UID: \"935336e2-294b-4982-83f9-718806d14e5c\") " pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.662704 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/935336e2-294b-4982-83f9-718806d14e5c-host-kubelet\") pod \"ovnkube-node-7c2zl\" (UID: \"935336e2-294b-4982-83f9-718806d14e5c\") " pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.662742 4840 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/935336e2-294b-4982-83f9-718806d14e5c-run-systemd\") pod \"ovnkube-node-7c2zl\" (UID: \"935336e2-294b-4982-83f9-718806d14e5c\") " pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.662763 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/935336e2-294b-4982-83f9-718806d14e5c-var-lib-openvswitch\") pod \"ovnkube-node-7c2zl\" (UID: \"935336e2-294b-4982-83f9-718806d14e5c\") " pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.662782 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/935336e2-294b-4982-83f9-718806d14e5c-host-cni-bin\") pod \"ovnkube-node-7c2zl\" (UID: \"935336e2-294b-4982-83f9-718806d14e5c\") " pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.662800 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/935336e2-294b-4982-83f9-718806d14e5c-host-cni-netd\") pod \"ovnkube-node-7c2zl\" (UID: \"935336e2-294b-4982-83f9-718806d14e5c\") " pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.662820 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/935336e2-294b-4982-83f9-718806d14e5c-host-slash\") pod \"ovnkube-node-7c2zl\" (UID: \"935336e2-294b-4982-83f9-718806d14e5c\") " pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.662845 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/935336e2-294b-4982-83f9-718806d14e5c-etc-openvswitch\") pod \"ovnkube-node-7c2zl\" (UID: \"935336e2-294b-4982-83f9-718806d14e5c\") " pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.662862 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/935336e2-294b-4982-83f9-718806d14e5c-node-log\") pod \"ovnkube-node-7c2zl\" (UID: \"935336e2-294b-4982-83f9-718806d14e5c\") " pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.662877 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/935336e2-294b-4982-83f9-718806d14e5c-ovnkube-script-lib\") pod \"ovnkube-node-7c2zl\" (UID: \"935336e2-294b-4982-83f9-718806d14e5c\") " pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.662895 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/935336e2-294b-4982-83f9-718806d14e5c-run-openvswitch\") pod \"ovnkube-node-7c2zl\" (UID: \"935336e2-294b-4982-83f9-718806d14e5c\") " pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.662935 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/935336e2-294b-4982-83f9-718806d14e5c-run-ovn\") pod \"ovnkube-node-7c2zl\" (UID: \"935336e2-294b-4982-83f9-718806d14e5c\") " pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.662952 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/935336e2-294b-4982-83f9-718806d14e5c-env-overrides\") pod 
\"ovnkube-node-7c2zl\" (UID: \"935336e2-294b-4982-83f9-718806d14e5c\") " pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.662973 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/935336e2-294b-4982-83f9-718806d14e5c-ovnkube-config\") pod \"ovnkube-node-7c2zl\" (UID: \"935336e2-294b-4982-83f9-718806d14e5c\") " pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.662991 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrc2z\" (UniqueName: \"kubernetes.io/projected/935336e2-294b-4982-83f9-718806d14e5c-kube-api-access-lrc2z\") pod \"ovnkube-node-7c2zl\" (UID: \"935336e2-294b-4982-83f9-718806d14e5c\") " pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.663017 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/935336e2-294b-4982-83f9-718806d14e5c-ovn-node-metrics-cert\") pod \"ovnkube-node-7c2zl\" (UID: \"935336e2-294b-4982-83f9-718806d14e5c\") " pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.663036 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/935336e2-294b-4982-83f9-718806d14e5c-host-run-netns\") pod \"ovnkube-node-7c2zl\" (UID: \"935336e2-294b-4982-83f9-718806d14e5c\") " pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.663029 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/935336e2-294b-4982-83f9-718806d14e5c-host-kubelet\") pod \"ovnkube-node-7c2zl\" (UID: 
\"935336e2-294b-4982-83f9-718806d14e5c\") " pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.663081 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/935336e2-294b-4982-83f9-718806d14e5c-run-systemd\") pod \"ovnkube-node-7c2zl\" (UID: \"935336e2-294b-4982-83f9-718806d14e5c\") " pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.663029 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/935336e2-294b-4982-83f9-718806d14e5c-host-cni-bin\") pod \"ovnkube-node-7c2zl\" (UID: \"935336e2-294b-4982-83f9-718806d14e5c\") " pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.663129 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/935336e2-294b-4982-83f9-718806d14e5c-host-cni-netd\") pod \"ovnkube-node-7c2zl\" (UID: \"935336e2-294b-4982-83f9-718806d14e5c\") " pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.663134 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/935336e2-294b-4982-83f9-718806d14e5c-host-run-ovn-kubernetes\") pod \"ovnkube-node-7c2zl\" (UID: \"935336e2-294b-4982-83f9-718806d14e5c\") " pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.663151 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/935336e2-294b-4982-83f9-718806d14e5c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-7c2zl\" (UID: \"935336e2-294b-4982-83f9-718806d14e5c\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.663174 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/935336e2-294b-4982-83f9-718806d14e5c-var-lib-openvswitch\") pod \"ovnkube-node-7c2zl\" (UID: \"935336e2-294b-4982-83f9-718806d14e5c\") " pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.663186 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/935336e2-294b-4982-83f9-718806d14e5c-host-slash\") pod \"ovnkube-node-7c2zl\" (UID: \"935336e2-294b-4982-83f9-718806d14e5c\") " pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.663202 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/935336e2-294b-4982-83f9-718806d14e5c-run-ovn\") pod \"ovnkube-node-7c2zl\" (UID: \"935336e2-294b-4982-83f9-718806d14e5c\") " pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.663216 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/935336e2-294b-4982-83f9-718806d14e5c-etc-openvswitch\") pod \"ovnkube-node-7c2zl\" (UID: \"935336e2-294b-4982-83f9-718806d14e5c\") " pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.663230 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/935336e2-294b-4982-83f9-718806d14e5c-node-log\") pod \"ovnkube-node-7c2zl\" (UID: \"935336e2-294b-4982-83f9-718806d14e5c\") " pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.663057 4840 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/935336e2-294b-4982-83f9-718806d14e5c-host-run-ovn-kubernetes\") pod \"ovnkube-node-7c2zl\" (UID: \"935336e2-294b-4982-83f9-718806d14e5c\") " pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.663312 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/935336e2-294b-4982-83f9-718806d14e5c-systemd-units\") pod \"ovnkube-node-7c2zl\" (UID: \"935336e2-294b-4982-83f9-718806d14e5c\") " pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.663350 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/935336e2-294b-4982-83f9-718806d14e5c-log-socket\") pod \"ovnkube-node-7c2zl\" (UID: \"935336e2-294b-4982-83f9-718806d14e5c\") " pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.663519 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/935336e2-294b-4982-83f9-718806d14e5c-log-socket\") pod \"ovnkube-node-7c2zl\" (UID: \"935336e2-294b-4982-83f9-718806d14e5c\") " pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.663988 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/935336e2-294b-4982-83f9-718806d14e5c-env-overrides\") pod \"ovnkube-node-7c2zl\" (UID: \"935336e2-294b-4982-83f9-718806d14e5c\") " pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.664035 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" 
(UniqueName: \"kubernetes.io/host-path/935336e2-294b-4982-83f9-718806d14e5c-host-run-netns\") pod \"ovnkube-node-7c2zl\" (UID: \"935336e2-294b-4982-83f9-718806d14e5c\") " pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.664070 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/935336e2-294b-4982-83f9-718806d14e5c-run-openvswitch\") pod \"ovnkube-node-7c2zl\" (UID: \"935336e2-294b-4982-83f9-718806d14e5c\") " pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.664097 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/935336e2-294b-4982-83f9-718806d14e5c-systemd-units\") pod \"ovnkube-node-7c2zl\" (UID: \"935336e2-294b-4982-83f9-718806d14e5c\") " pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.664289 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/935336e2-294b-4982-83f9-718806d14e5c-ovnkube-script-lib\") pod \"ovnkube-node-7c2zl\" (UID: \"935336e2-294b-4982-83f9-718806d14e5c\") " pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.664314 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/935336e2-294b-4982-83f9-718806d14e5c-ovnkube-config\") pod \"ovnkube-node-7c2zl\" (UID: \"935336e2-294b-4982-83f9-718806d14e5c\") " pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.667202 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/935336e2-294b-4982-83f9-718806d14e5c-ovn-node-metrics-cert\") pod 
\"ovnkube-node-7c2zl\" (UID: \"935336e2-294b-4982-83f9-718806d14e5c\") " pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.677038 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb68ea79d13c044390345ef25093ba46a60cb10b989c5acd69c63cafc1c4631f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveRe
adOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:27Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.680728 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrc2z\" (UniqueName: \"kubernetes.io/projected/935336e2-294b-4982-83f9-718806d14e5c-kube-api-access-lrc2z\") pod \"ovnkube-node-7c2zl\" (UID: \"935336e2-294b-4982-83f9-718806d14e5c\") " pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.693852 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:27Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.694255 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.694298 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.694307 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:27 crc 
kubenswrapper[4840]: I0311 08:58:27.694357 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.694372 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:27Z","lastTransitionTime":"2026-03-11T08:58:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.713286 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a735ab91afbbc50a948e293cb4907a10212b9a77c9e7506e21d75a4de4c74c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a498e6ab267b758bd2ba69dcbd7e6af3089b6c38bbf50dea80f1304c2190f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:27Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 
08:58:27.732141 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c3b2bde-8421-4e22-85ab-8b651c65bc9e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ed311d75feec58f86d1d9f435c6115a463b4e7cd3003b6dff8447360271b6a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"reso
urce-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33384f01914fa6428a8c359b3de0d20963b933f5c6d47519f059e48f85c9f4c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca2279f323c0a6baf645b62c496c38d3d2ad4efc5033a0819ed1d58f4d862e10\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58277efdbfe7c94ad785e7d31bb5ba7313d04bf930896d6ded66fd44dd6239b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e2775
3fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58277efdbfe7c94ad785e7d31bb5ba7313d04bf930896d6ded66fd44dd6239b5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-11T08:57:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW0311 08:57:44.699595 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0311 08:57:44.699747 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0311 08:57:44.700340 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2673359297/tls.crt::/tmp/serving-cert-2673359297/tls.key\\\\\\\"\\\\nI0311 08:57:44.923694 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0311 08:57:44.927612 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0311 08:57:44.927650 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0311 08:57:44.927690 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0311 08:57:44.927701 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0311 08:57:44.934515 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0311 08:57:44.934548 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0311 08:57:44.934556 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:57:44.934567 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:57:44.934574 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0311 08:57:44.934577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0311 08:57:44.934581 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0311 08:57:44.934584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0311 08:57:44.935676 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:57:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bd25fbbac425c9ba1169b1106b9ac77a80739a003bd795033d691ee273e0d3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\
"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e27a2ab44a237582284cb5f55d2651f7b5d39c199fdb62a4a65be9921e86945c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e27a2ab44a237582284cb5f55d2651f7b5d39c199fdb62a4a65be9921e86945c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:56:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:56:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:56:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:27Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.753350 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a26aaac35b1d936b4ca0b8a96fccd108317a1456b2bac1c21eb28017ac6fd32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:27Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.765667 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jlzht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97907402-fb5a-4fb4-80ac-5b600527c547\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2004ebf76b86d594e220fd1f85f945d24094ff26400ba17e286e95b2e3a8d7a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjngl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jlzht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:27Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.784213 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xn47g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6b1fe1a-6473-41f8-a45f-aaaa148c1412\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{
\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xn47g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:27Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.796885 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.796926 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.796935 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.796950 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.796960 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:27Z","lastTransitionTime":"2026-03-11T08:58:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.799254 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.802272 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vcb9n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c1678fd-7741-474b-9c8e-3008d3570921\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-46dj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vcb9n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:27Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:27 crc kubenswrapper[4840]: W0311 08:58:27.813778 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod935336e2_294b_4982_83f9_718806d14e5c.slice/crio-5183fda9fba4bfb5b137d3d27accc77f5c252a73c89e1c6560c5ff2ea00e7c18 WatchSource:0}: Error finding container 5183fda9fba4bfb5b137d3d27accc77f5c252a73c89e1c6560c5ff2ea00e7c18: Status 404 returned error can't find the container with id 5183fda9fba4bfb5b137d3d27accc77f5c252a73c89e1c6560c5ff2ea00e7c18 Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.816010 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-brtht" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8c8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8c8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-brtht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:27Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.834137 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"935336e2-294b-4982-83f9-718806d14e5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7c2zl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:27Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.846936 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:27Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.869032 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:27Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.888099 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb68ea79d13c044390345ef25093ba46a60cb10b989c5acd69c63cafc1c4631f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-11T08:58:27Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.899716 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.899748 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.899757 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.899772 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:27 crc kubenswrapper[4840]: I0311 08:58:27.899781 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:27Z","lastTransitionTime":"2026-03-11T08:58:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:28 crc kubenswrapper[4840]: I0311 08:58:28.002344 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:28 crc kubenswrapper[4840]: I0311 08:58:28.002397 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:28 crc kubenswrapper[4840]: I0311 08:58:28.002409 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:28 crc kubenswrapper[4840]: I0311 08:58:28.002426 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:28 crc kubenswrapper[4840]: I0311 08:58:28.002438 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:28Z","lastTransitionTime":"2026-03-11T08:58:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:28 crc kubenswrapper[4840]: I0311 08:58:28.104633 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:28 crc kubenswrapper[4840]: I0311 08:58:28.104670 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:28 crc kubenswrapper[4840]: I0311 08:58:28.104681 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:28 crc kubenswrapper[4840]: I0311 08:58:28.104698 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:28 crc kubenswrapper[4840]: I0311 08:58:28.104709 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:28Z","lastTransitionTime":"2026-03-11T08:58:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:28 crc kubenswrapper[4840]: I0311 08:58:28.207035 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:28 crc kubenswrapper[4840]: I0311 08:58:28.207083 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:28 crc kubenswrapper[4840]: I0311 08:58:28.207093 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:28 crc kubenswrapper[4840]: I0311 08:58:28.207108 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:28 crc kubenswrapper[4840]: I0311 08:58:28.207117 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:28Z","lastTransitionTime":"2026-03-11T08:58:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:28 crc kubenswrapper[4840]: I0311 08:58:28.310094 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:28 crc kubenswrapper[4840]: I0311 08:58:28.310202 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:28 crc kubenswrapper[4840]: I0311 08:58:28.310222 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:28 crc kubenswrapper[4840]: I0311 08:58:28.310249 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:28 crc kubenswrapper[4840]: I0311 08:58:28.310268 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:28Z","lastTransitionTime":"2026-03-11T08:58:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:28 crc kubenswrapper[4840]: I0311 08:58:28.413356 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:28 crc kubenswrapper[4840]: I0311 08:58:28.413400 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:28 crc kubenswrapper[4840]: I0311 08:58:28.413410 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:28 crc kubenswrapper[4840]: I0311 08:58:28.413425 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:28 crc kubenswrapper[4840]: I0311 08:58:28.413435 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:28Z","lastTransitionTime":"2026-03-11T08:58:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:28 crc kubenswrapper[4840]: I0311 08:58:28.515622 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:28 crc kubenswrapper[4840]: I0311 08:58:28.516262 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:28 crc kubenswrapper[4840]: I0311 08:58:28.516282 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:28 crc kubenswrapper[4840]: I0311 08:58:28.516317 4840 generic.go:334] "Generic (PLEG): container finished" podID="d6b1fe1a-6473-41f8-a45f-aaaa148c1412" containerID="a889eccbb6a0cbc75c8ed8ddeffb1713f30c667182a4ba135863c46bd6b81a10" exitCode=0 Mar 11 08:58:28 crc kubenswrapper[4840]: I0311 08:58:28.516441 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xn47g" event={"ID":"d6b1fe1a-6473-41f8-a45f-aaaa148c1412","Type":"ContainerDied","Data":"a889eccbb6a0cbc75c8ed8ddeffb1713f30c667182a4ba135863c46bd6b81a10"} Mar 11 08:58:28 crc kubenswrapper[4840]: I0311 08:58:28.516627 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:28 crc kubenswrapper[4840]: I0311 08:58:28.516879 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:28Z","lastTransitionTime":"2026-03-11T08:58:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:28 crc kubenswrapper[4840]: I0311 08:58:28.518572 4840 generic.go:334] "Generic (PLEG): container finished" podID="935336e2-294b-4982-83f9-718806d14e5c" containerID="c63877eb44dae815dbd71053c89313ba836c2fdd90cc3d6d299526c027887e19" exitCode=0 Mar 11 08:58:28 crc kubenswrapper[4840]: I0311 08:58:28.518635 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" event={"ID":"935336e2-294b-4982-83f9-718806d14e5c","Type":"ContainerDied","Data":"c63877eb44dae815dbd71053c89313ba836c2fdd90cc3d6d299526c027887e19"} Mar 11 08:58:28 crc kubenswrapper[4840]: I0311 08:58:28.518656 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" event={"ID":"935336e2-294b-4982-83f9-718806d14e5c","Type":"ContainerStarted","Data":"5183fda9fba4bfb5b137d3d27accc77f5c252a73c89e1c6560c5ff2ea00e7c18"} Mar 11 08:58:28 crc kubenswrapper[4840]: I0311 08:58:28.521249 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-brtht" event={"ID":"8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d","Type":"ContainerStarted","Data":"153456b9c54a907a1dada25ba9e84b3fe9f9b44a6e6b8e96d512944fb56884a2"} Mar 11 08:58:28 crc kubenswrapper[4840]: I0311 08:58:28.521299 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-brtht" event={"ID":"8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d","Type":"ContainerStarted","Data":"b0602373fe91019f9b20e701e042782a4eb5878ae2df86375738bc605412a803"} Mar 11 08:58:28 crc kubenswrapper[4840]: I0311 08:58:28.523881 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-vcb9n" event={"ID":"0c1678fd-7741-474b-9c8e-3008d3570921","Type":"ContainerStarted","Data":"affb76bb8b08157d415301fc93f48fe8c675137a1f6087bcade89dc117748638"} Mar 11 08:58:28 crc kubenswrapper[4840]: I0311 08:58:28.545531 4840 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:28Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:28 crc kubenswrapper[4840]: I0311 08:58:28.565169 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a735ab91afbbc50a948e293cb4907a10212b9a77c9e7506e21d75a4de4c74c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a498e6ab267b758bd2ba69dcbd7e6af3089b6c38bbf50dea80f1304c2190f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:28Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:28 crc kubenswrapper[4840]: I0311 08:58:28.581149 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jlzht" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97907402-fb5a-4fb4-80ac-5b600527c547\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2004ebf76b86d594e220fd1f85f945d24094ff26400ba17e286e95b2e3a8d7a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjngl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jlzht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:28Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:28 crc kubenswrapper[4840]: I0311 08:58:28.608752 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xn47g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6b1fe1a-6473-41f8-a45f-aaaa148c1412\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a889eccbb6a0cbc75c8ed8ddeffb1713f30c667182a4ba135863c46bd6b81a10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a889eccbb6a0cbc75c8ed8ddeffb1713f30c667182a4ba135863c46bd6b81a10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xn47g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:28Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:28 crc kubenswrapper[4840]: I0311 08:58:28.624750 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:28 crc kubenswrapper[4840]: I0311 08:58:28.624799 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:28 crc kubenswrapper[4840]: I0311 08:58:28.624813 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:28 crc kubenswrapper[4840]: I0311 08:58:28.624836 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:28 crc kubenswrapper[4840]: I0311 08:58:28.624853 4840 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:28Z","lastTransitionTime":"2026-03-11T08:58:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:58:28 crc kubenswrapper[4840]: I0311 08:58:28.624999 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vcb9n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c1678fd-7741-474b-9c8e-3008d3570921\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-46dj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vcb9n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:28Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:28 crc kubenswrapper[4840]: I0311 08:58:28.643637 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-brtht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8c8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8c8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2026-03-11T08:58:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-brtht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:28Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:28 crc kubenswrapper[4840]: I0311 08:58:28.661546 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c3b2bde-8421-4e22-85ab-8b651c65bc9e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ed311d75feec58f86d1d9f435c6115a463b4e7cd3003b6dff8447360271b6a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33384f01914fa6428a8c359b3de0d20963b933f5c6d47519f059e48f85c9f4c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ca2279f323c0a6baf645b62c496c38d3d2ad4efc5033a0819ed1d58f4d862e10\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58277efdbfe7c94ad785e7d31bb5ba7313d04bf930896d6ded66fd44dd6239b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58277efdbfe7c94ad785e7d31bb5ba7313d04bf930896d6ded66fd44dd6239b5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-11T08:57:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW0311 08:57:44.699595 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0311 08:57:44.699747 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0311 08:57:44.700340 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2673359297/tls.crt::/tmp/serving-cert-2673359297/tls.key\\\\\\\"\\\\nI0311 08:57:44.923694 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0311 08:57:44.927612 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0311 08:57:44.927650 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0311 08:57:44.927690 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0311 08:57:44.927701 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0311 08:57:44.934515 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0311 08:57:44.934548 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0311 08:57:44.934556 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:57:44.934567 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:57:44.934574 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0311 08:57:44.934577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0311 08:57:44.934581 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0311 08:57:44.934584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0311 08:57:44.935676 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:57:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bd25fbbac425c9ba1169b1106b9ac77a80739a003bd795033d691ee273e0d3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e27a2ab44a237582284cb5f55d2651f7b5d39c199fdb62a4a65be9921e86945c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e27a2ab44a237582284cb5f55d2651f7b5d39c199fdb62a4a65be9921e86945c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:56:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:56:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:56:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:28Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:28 crc kubenswrapper[4840]: I0311 08:58:28.677008 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a26aaac35b1d936b4ca0b8a96fccd108317a1456b2bac1c21eb28017ac6fd32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\
":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:28Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:28 crc kubenswrapper[4840]: I0311 08:58:28.697961 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"935336e2-294b-4982-83f9-718806d14e5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"message\\\":\\\"containers 
with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\
\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\
"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"
name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",
\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7c2zl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:28Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:28 crc kubenswrapper[4840]: I0311 08:58:28.713592 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:28Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:28 crc kubenswrapper[4840]: I0311 08:58:28.727031 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:28Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:28 crc kubenswrapper[4840]: I0311 08:58:28.728208 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:28 crc kubenswrapper[4840]: I0311 08:58:28.728258 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:28 crc kubenswrapper[4840]: I0311 08:58:28.728268 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:28 crc kubenswrapper[4840]: I0311 08:58:28.728288 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:28 crc kubenswrapper[4840]: I0311 08:58:28.728299 4840 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:28Z","lastTransitionTime":"2026-03-11T08:58:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:58:28 crc kubenswrapper[4840]: I0311 08:58:28.740581 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb68ea79d13c044390345ef25093ba46a60cb10b989c5acd69c63cafc1c4631f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:28Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:28 crc kubenswrapper[4840]: I0311 08:58:28.756574 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xn47g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6b1fe1a-6473-41f8-a45f-aaaa148c1412\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a889eccbb6a0cbc75c8ed8ddeffb1713f30c667182a4ba135863c46bd6b81a10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a889eccbb6a0cbc75c8ed8ddeffb1713f30c667182a4ba135863c46bd6b81a10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/b
in\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\
"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xn47g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:28Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:28 crc kubenswrapper[4840]: I0311 08:58:28.769862 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vcb9n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c1678fd-7741-474b-9c8e-3008d3570921\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://affb76bb8b08157d415301fc93f48fe8c675137a1f6087bcade89dc117748638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-46dj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vcb9n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:28Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:28 crc kubenswrapper[4840]: I0311 08:58:28.784173 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-brtht" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://153456b9c54a907a1dada25ba9e84b3fe9f9b44a6e6b8e96d512944fb56884a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8c8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0602373fe91
019f9b20e701e042782a4eb5878ae2df86375738bc605412a803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8c8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-brtht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:28Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:28 crc kubenswrapper[4840]: I0311 08:58:28.797854 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c3b2bde-8421-4e22-85ab-8b651c65bc9e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ed311d75feec58f86d1d9f435c6115a463b4e7cd3003b6dff8447360271b6a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33384f01914fa6428a8c359b3de0d20963b933f5c6d47519f059e48f85c9f4c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca2279f323c0a6baf645b62c496c38d3d2ad4efc5033a0819ed1d58f4d862e10\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58277efdbfe7c94ad785e7d31bb5ba7313d04bf930896d6ded66fd44dd6239b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58277efdbfe7c94ad785e7d31bb5ba7313d04bf930896d6ded66fd44dd6239b5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-11T08:57:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW0311 08:57:44.699595 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0311 08:57:44.699747 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0311 08:57:44.700340 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2673359297/tls.crt::/tmp/serving-cert-2673359297/tls.key\\\\\\\"\\\\nI0311 08:57:44.923694 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0311 08:57:44.927612 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0311 08:57:44.927650 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0311 08:57:44.927690 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0311 08:57:44.927701 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0311 08:57:44.934515 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0311 08:57:44.934548 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0311 08:57:44.934556 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:57:44.934567 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:57:44.934574 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0311 
08:57:44.934577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0311 08:57:44.934581 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0311 08:57:44.934584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0311 08:57:44.935676 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:57:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bd25fbbac425c9ba1169b1106b9ac77a80739a003bd795033d691ee273e0d3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e27a2ab44a237582284cb5f55d2651f7b5d39c199fdb62a4a65be9921e86945c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e27a2ab44a237582284cb5f55d2651f7b5d39c199fdb62a4a65be9921e86945c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:56:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:56:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:56:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:28Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:28 crc kubenswrapper[4840]: I0311 08:58:28.812851 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a26aaac35b1d936b4ca0b8a96fccd108317a1456b2bac1c21eb28017ac6fd32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:28Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:28 crc kubenswrapper[4840]: I0311 08:58:28.824250 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jlzht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97907402-fb5a-4fb4-80ac-5b600527c547\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2004ebf76b86d594e220fd1f85f945d24094ff26400ba17e286e95b2e3a8d7a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjngl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jlzht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:28Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:28 crc kubenswrapper[4840]: I0311 08:58:28.830916 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:28 crc kubenswrapper[4840]: I0311 08:58:28.831002 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:28 crc kubenswrapper[4840]: I0311 08:58:28.831016 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:28 crc kubenswrapper[4840]: I0311 08:58:28.831037 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:28 crc kubenswrapper[4840]: I0311 08:58:28.831069 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:28Z","lastTransitionTime":"2026-03-11T08:58:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:28 crc kubenswrapper[4840]: I0311 08:58:28.849921 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"935336e2-294b-4982-83f9-718806d14e5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63877eb44dae815dbd71053c89313ba836c2fdd90cc3d6d299526c027887e19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63877eb44dae815dbd71053c89313ba836c2fdd90cc3d6d299526c027887e19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7c2zl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:28Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:28 crc kubenswrapper[4840]: I0311 08:58:28.872595 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:28Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:28 crc kubenswrapper[4840]: I0311 08:58:28.888690 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:28Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:28 crc kubenswrapper[4840]: I0311 08:58:28.903833 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb68ea79d13c044390345ef25093ba46a60cb10b989c5acd69c63cafc1c4631f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-11T08:58:28Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:28 crc kubenswrapper[4840]: I0311 08:58:28.918201 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:28Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:28 crc kubenswrapper[4840]: I0311 08:58:28.931621 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a735ab91afbbc50a948e293cb4907a10212b9a77c9e7506e21d75a4de4c74c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a498e6ab267b758bd2ba69dcbd7e6af3089b6c38bbf50dea80f1304c2190f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:28Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:28 crc kubenswrapper[4840]: I0311 08:58:28.933496 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:28 crc kubenswrapper[4840]: I0311 08:58:28.933550 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:28 crc kubenswrapper[4840]: I0311 08:58:28.933565 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:28 crc kubenswrapper[4840]: I0311 08:58:28.933587 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:28 crc kubenswrapper[4840]: I0311 08:58:28.933604 4840 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:28Z","lastTransitionTime":"2026-03-11T08:58:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:58:29 crc kubenswrapper[4840]: I0311 08:58:29.036622 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:29 crc kubenswrapper[4840]: I0311 08:58:29.036658 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:29 crc kubenswrapper[4840]: I0311 08:58:29.036666 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:29 crc kubenswrapper[4840]: I0311 08:58:29.036684 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:29 crc kubenswrapper[4840]: I0311 08:58:29.036695 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:29Z","lastTransitionTime":"2026-03-11T08:58:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:58:29 crc kubenswrapper[4840]: I0311 08:58:29.059419 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:58:29 crc kubenswrapper[4840]: I0311 08:58:29.059487 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 08:58:29 crc kubenswrapper[4840]: I0311 08:58:29.059626 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:58:29 crc kubenswrapper[4840]: E0311 08:58:29.059613 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 11 08:58:29 crc kubenswrapper[4840]: E0311 08:58:29.059859 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 11 08:58:29 crc kubenswrapper[4840]: E0311 08:58:29.059986 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 11 08:58:29 crc kubenswrapper[4840]: I0311 08:58:29.139437 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:29 crc kubenswrapper[4840]: I0311 08:58:29.139496 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:29 crc kubenswrapper[4840]: I0311 08:58:29.139506 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:29 crc kubenswrapper[4840]: I0311 08:58:29.139527 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:29 crc kubenswrapper[4840]: I0311 08:58:29.139555 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:29Z","lastTransitionTime":"2026-03-11T08:58:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:29 crc kubenswrapper[4840]: I0311 08:58:29.240914 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:29 crc kubenswrapper[4840]: I0311 08:58:29.240958 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:29 crc kubenswrapper[4840]: I0311 08:58:29.240970 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:29 crc kubenswrapper[4840]: I0311 08:58:29.240991 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:29 crc kubenswrapper[4840]: I0311 08:58:29.241005 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:29Z","lastTransitionTime":"2026-03-11T08:58:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:29 crc kubenswrapper[4840]: E0311 08:58:29.256764 4840 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:58:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:58:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:58:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:58:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b40dc5ac-6e20-4fe3-8d4f-1dab2691799c\\\",\\\"systemUUID\\\":\\\"e5bb6cc6-19d8-441f-bba6-b926930273a7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:29Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:29 crc kubenswrapper[4840]: I0311 08:58:29.261286 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:29 crc kubenswrapper[4840]: I0311 08:58:29.261327 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:29 crc kubenswrapper[4840]: I0311 08:58:29.261348 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:29 crc kubenswrapper[4840]: I0311 08:58:29.261369 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:29 crc kubenswrapper[4840]: I0311 08:58:29.261383 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:29Z","lastTransitionTime":"2026-03-11T08:58:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:29 crc kubenswrapper[4840]: E0311 08:58:29.274649 4840 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:58:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:58:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:58:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:58:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b40dc5ac-6e20-4fe3-8d4f-1dab2691799c\\\",\\\"systemUUID\\\":\\\"e5bb6cc6-19d8-441f-bba6-b926930273a7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:29Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:29 crc kubenswrapper[4840]: I0311 08:58:29.278328 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:29 crc kubenswrapper[4840]: I0311 08:58:29.278391 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:29 crc kubenswrapper[4840]: I0311 08:58:29.278404 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:29 crc kubenswrapper[4840]: I0311 08:58:29.278426 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:29 crc kubenswrapper[4840]: I0311 08:58:29.278439 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:29Z","lastTransitionTime":"2026-03-11T08:58:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:29 crc kubenswrapper[4840]: E0311 08:58:29.295016 4840 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:58:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:58:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:58:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:58:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b40dc5ac-6e20-4fe3-8d4f-1dab2691799c\\\",\\\"systemUUID\\\":\\\"e5bb6cc6-19d8-441f-bba6-b926930273a7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:29Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:29 crc kubenswrapper[4840]: I0311 08:58:29.300719 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:29 crc kubenswrapper[4840]: I0311 08:58:29.300770 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:29 crc kubenswrapper[4840]: I0311 08:58:29.300785 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:29 crc kubenswrapper[4840]: I0311 08:58:29.300810 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:29 crc kubenswrapper[4840]: I0311 08:58:29.300831 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:29Z","lastTransitionTime":"2026-03-11T08:58:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:29 crc kubenswrapper[4840]: E0311 08:58:29.314446 4840 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:58:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:58:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:58:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:58:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b40dc5ac-6e20-4fe3-8d4f-1dab2691799c\\\",\\\"systemUUID\\\":\\\"e5bb6cc6-19d8-441f-bba6-b926930273a7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:29Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:29 crc kubenswrapper[4840]: I0311 08:58:29.320439 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:29 crc kubenswrapper[4840]: I0311 08:58:29.320687 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:29 crc kubenswrapper[4840]: I0311 08:58:29.320703 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:29 crc kubenswrapper[4840]: I0311 08:58:29.320726 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:29 crc kubenswrapper[4840]: I0311 08:58:29.320740 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:29Z","lastTransitionTime":"2026-03-11T08:58:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:29 crc kubenswrapper[4840]: E0311 08:58:29.336985 4840 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:58:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:58:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:58:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:58:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b40dc5ac-6e20-4fe3-8d4f-1dab2691799c\\\",\\\"systemUUID\\\":\\\"e5bb6cc6-19d8-441f-bba6-b926930273a7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:29Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:29 crc kubenswrapper[4840]: E0311 08:58:29.337109 4840 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 11 08:58:29 crc kubenswrapper[4840]: I0311 08:58:29.339270 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:29 crc kubenswrapper[4840]: I0311 08:58:29.339324 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:29 crc kubenswrapper[4840]: I0311 08:58:29.339337 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:29 crc kubenswrapper[4840]: I0311 08:58:29.339362 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:29 crc kubenswrapper[4840]: I0311 08:58:29.339375 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:29Z","lastTransitionTime":"2026-03-11T08:58:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:29 crc kubenswrapper[4840]: I0311 08:58:29.444163 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:29 crc kubenswrapper[4840]: I0311 08:58:29.444772 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:29 crc kubenswrapper[4840]: I0311 08:58:29.444788 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:29 crc kubenswrapper[4840]: I0311 08:58:29.444812 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:29 crc kubenswrapper[4840]: I0311 08:58:29.444825 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:29Z","lastTransitionTime":"2026-03-11T08:58:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:29 crc kubenswrapper[4840]: I0311 08:58:29.530652 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xn47g" event={"ID":"d6b1fe1a-6473-41f8-a45f-aaaa148c1412","Type":"ContainerStarted","Data":"13c2ce0ba741f2aad38db697e3bf19c9e912296fd737657feace60a93382e2f3"} Mar 11 08:58:29 crc kubenswrapper[4840]: I0311 08:58:29.535440 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" event={"ID":"935336e2-294b-4982-83f9-718806d14e5c","Type":"ContainerStarted","Data":"910e2e8b981e2ab6212cd615b1c5134fde5d4cf4f85220c2613e3e301f99293c"} Mar 11 08:58:29 crc kubenswrapper[4840]: I0311 08:58:29.535502 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" event={"ID":"935336e2-294b-4982-83f9-718806d14e5c","Type":"ContainerStarted","Data":"047c371ede152b2bfc450a373d2c3668e92dfed75022ebf12644c116db589373"} Mar 11 08:58:29 crc kubenswrapper[4840]: I0311 08:58:29.535514 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" event={"ID":"935336e2-294b-4982-83f9-718806d14e5c","Type":"ContainerStarted","Data":"430dc74cf3f3dcf9a87782869390e477899521c6ff0e704ba6272a017b90081d"} Mar 11 08:58:29 crc kubenswrapper[4840]: I0311 08:58:29.535525 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" event={"ID":"935336e2-294b-4982-83f9-718806d14e5c","Type":"ContainerStarted","Data":"eaf42bd09d527f5ce4ce8b0619c9b61f56ad6a2a5edf92cac68897a2ada84b24"} Mar 11 08:58:29 crc kubenswrapper[4840]: I0311 08:58:29.547199 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb68ea79d13c044390345ef25093ba46a60cb10b989c5acd69c63cafc1c4631f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-11T08:58:29Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:29 crc kubenswrapper[4840]: I0311 08:58:29.547415 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:29 crc kubenswrapper[4840]: I0311 08:58:29.547437 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:29 crc kubenswrapper[4840]: I0311 08:58:29.547448 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:29 crc kubenswrapper[4840]: I0311 08:58:29.547483 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:29 crc kubenswrapper[4840]: I0311 08:58:29.547495 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:29Z","lastTransitionTime":"2026-03-11T08:58:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:29 crc kubenswrapper[4840]: I0311 08:58:29.561522 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:29Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:29 crc kubenswrapper[4840]: I0311 08:58:29.577174 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a735ab91afbbc50a948e293cb4907a10212b9a77c9e7506e21d75a4de4c74c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a498e6ab267b758bd2ba69dcbd7e6af3089b6c38bbf50dea80f1304c2190f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:29Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:29 crc kubenswrapper[4840]: I0311 08:58:29.599281 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vcb9n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c1678fd-7741-474b-9c8e-3008d3570921\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://affb76bb8b08157d415301fc93f48fe8c675137a1f6087bcade89dc117748638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-46dj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vcb9n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:29Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:29 crc kubenswrapper[4840]: I0311 08:58:29.617953 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-brtht" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://153456b9c54a907a1dada25ba9e84b3fe9f9b44a6e6b8e96d512944fb56884a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8c8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0602373fe91
019f9b20e701e042782a4eb5878ae2df86375738bc605412a803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8c8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-brtht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:29Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:29 crc kubenswrapper[4840]: I0311 08:58:29.637358 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c3b2bde-8421-4e22-85ab-8b651c65bc9e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ed311d75feec58f86d1d9f435c6115a463b4e7cd3003b6dff8447360271b6a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33384f01914fa6428a8c359b3de0d20963b933f5c6d47519f059e48f85c9f4c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca2279f323c0a6baf645b62c496c38d3d2ad4efc5033a0819ed1d58f4d862e10\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58277efdbfe7c94ad785e7d31bb5ba7313d04bf930896d6ded66fd44dd6239b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58277efdbfe7c94ad785e7d31bb5ba7313d04bf930896d6ded66fd44dd6239b5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-11T08:57:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW0311 08:57:44.699595 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0311 08:57:44.699747 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0311 08:57:44.700340 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2673359297/tls.crt::/tmp/serving-cert-2673359297/tls.key\\\\\\\"\\\\nI0311 08:57:44.923694 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0311 08:57:44.927612 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0311 08:57:44.927650 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0311 08:57:44.927690 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0311 08:57:44.927701 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0311 08:57:44.934515 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0311 08:57:44.934548 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0311 08:57:44.934556 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:57:44.934567 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:57:44.934574 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0311 
08:57:44.934577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0311 08:57:44.934581 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0311 08:57:44.934584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0311 08:57:44.935676 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:57:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bd25fbbac425c9ba1169b1106b9ac77a80739a003bd795033d691ee273e0d3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e27a2ab44a237582284cb5f55d2651f7b5d39c199fdb62a4a65be9921e86945c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e27a2ab44a237582284cb5f55d2651f7b5d39c199fdb62a4a65be9921e86945c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:56:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:56:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:56:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:29Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:29 crc kubenswrapper[4840]: I0311 08:58:29.650721 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:29 crc kubenswrapper[4840]: I0311 08:58:29.650769 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:29 crc kubenswrapper[4840]: I0311 08:58:29.650783 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:29 crc kubenswrapper[4840]: I0311 08:58:29.650807 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:29 crc 
kubenswrapper[4840]: I0311 08:58:29.650819 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:29Z","lastTransitionTime":"2026-03-11T08:58:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:58:29 crc kubenswrapper[4840]: I0311 08:58:29.656559 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a26aaac35b1d936b4ca0b8a96fccd108317a1456b2bac1c21eb28017ac6fd32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\
\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:29Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:29 crc kubenswrapper[4840]: I0311 08:58:29.671749 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jlzht" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97907402-fb5a-4fb4-80ac-5b600527c547\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2004ebf76b86d594e220fd1f85f945d24094ff26400ba17e286e95b2e3a8d7a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjngl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jlzht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:29Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:29 crc kubenswrapper[4840]: I0311 08:58:29.688641 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xn47g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6b1fe1a-6473-41f8-a45f-aaaa148c1412\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a889eccbb6a0cbc75c8ed8ddeffb1713f30c667182a4ba135863c46bd6b81a10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a889eccbb6a0cbc75c8ed8ddeffb1713f30c667182a4ba135863c46bd6b81a10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c2ce0ba741f2aad38db697e3bf19c9e912296fd737657feace60a93382e2f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath
\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xn47g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:29Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:29 crc kubenswrapper[4840]: I0311 08:58:29.712427 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"935336e2-294b-4982-83f9-718806d14e5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63877eb44dae815dbd71053c89313ba836c2fdd90cc3d6d299526c027887e19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63877eb44dae815dbd71053c89313ba836c2fdd90cc3d6d299526c027887e19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7c2zl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:29Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:29 crc kubenswrapper[4840]: I0311 08:58:29.726884 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:29Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:29 crc kubenswrapper[4840]: I0311 08:58:29.748895 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:29Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:29 crc kubenswrapper[4840]: I0311 08:58:29.754773 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:29 crc kubenswrapper[4840]: I0311 08:58:29.754845 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:29 crc kubenswrapper[4840]: I0311 08:58:29.754860 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:29 crc kubenswrapper[4840]: I0311 08:58:29.754889 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:29 crc kubenswrapper[4840]: I0311 08:58:29.754905 4840 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:29Z","lastTransitionTime":"2026-03-11T08:58:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:58:29 crc kubenswrapper[4840]: I0311 08:58:29.858170 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:29 crc kubenswrapper[4840]: I0311 08:58:29.858252 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:29 crc kubenswrapper[4840]: I0311 08:58:29.858275 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:29 crc kubenswrapper[4840]: I0311 08:58:29.858307 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:29 crc kubenswrapper[4840]: I0311 08:58:29.858330 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:29Z","lastTransitionTime":"2026-03-11T08:58:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:29 crc kubenswrapper[4840]: I0311 08:58:29.962270 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:29 crc kubenswrapper[4840]: I0311 08:58:29.962324 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:29 crc kubenswrapper[4840]: I0311 08:58:29.962334 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:29 crc kubenswrapper[4840]: I0311 08:58:29.962352 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:29 crc kubenswrapper[4840]: I0311 08:58:29.962362 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:29Z","lastTransitionTime":"2026-03-11T08:58:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:30 crc kubenswrapper[4840]: I0311 08:58:30.064421 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:30 crc kubenswrapper[4840]: I0311 08:58:30.064459 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:30 crc kubenswrapper[4840]: I0311 08:58:30.064485 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:30 crc kubenswrapper[4840]: I0311 08:58:30.064500 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:30 crc kubenswrapper[4840]: I0311 08:58:30.064511 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:30Z","lastTransitionTime":"2026-03-11T08:58:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:30 crc kubenswrapper[4840]: I0311 08:58:30.166851 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:30 crc kubenswrapper[4840]: I0311 08:58:30.166916 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:30 crc kubenswrapper[4840]: I0311 08:58:30.166933 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:30 crc kubenswrapper[4840]: I0311 08:58:30.166956 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:30 crc kubenswrapper[4840]: I0311 08:58:30.166974 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:30Z","lastTransitionTime":"2026-03-11T08:58:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:30 crc kubenswrapper[4840]: I0311 08:58:30.268819 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:30 crc kubenswrapper[4840]: I0311 08:58:30.268876 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:30 crc kubenswrapper[4840]: I0311 08:58:30.268890 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:30 crc kubenswrapper[4840]: I0311 08:58:30.268904 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:30 crc kubenswrapper[4840]: I0311 08:58:30.268913 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:30Z","lastTransitionTime":"2026-03-11T08:58:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:30 crc kubenswrapper[4840]: I0311 08:58:30.372133 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:30 crc kubenswrapper[4840]: I0311 08:58:30.372198 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:30 crc kubenswrapper[4840]: I0311 08:58:30.372215 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:30 crc kubenswrapper[4840]: I0311 08:58:30.372237 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:30 crc kubenswrapper[4840]: I0311 08:58:30.372254 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:30Z","lastTransitionTime":"2026-03-11T08:58:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:30 crc kubenswrapper[4840]: I0311 08:58:30.475092 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:30 crc kubenswrapper[4840]: I0311 08:58:30.475133 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:30 crc kubenswrapper[4840]: I0311 08:58:30.475143 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:30 crc kubenswrapper[4840]: I0311 08:58:30.475158 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:30 crc kubenswrapper[4840]: I0311 08:58:30.475170 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:30Z","lastTransitionTime":"2026-03-11T08:58:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:30 crc kubenswrapper[4840]: I0311 08:58:30.541822 4840 generic.go:334] "Generic (PLEG): container finished" podID="d6b1fe1a-6473-41f8-a45f-aaaa148c1412" containerID="13c2ce0ba741f2aad38db697e3bf19c9e912296fd737657feace60a93382e2f3" exitCode=0 Mar 11 08:58:30 crc kubenswrapper[4840]: I0311 08:58:30.541947 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xn47g" event={"ID":"d6b1fe1a-6473-41f8-a45f-aaaa148c1412","Type":"ContainerDied","Data":"13c2ce0ba741f2aad38db697e3bf19c9e912296fd737657feace60a93382e2f3"} Mar 11 08:58:30 crc kubenswrapper[4840]: I0311 08:58:30.549310 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" event={"ID":"935336e2-294b-4982-83f9-718806d14e5c","Type":"ContainerStarted","Data":"aa38a6bbbd74dad77c64f6f59df35d12881619da7319501839cf0f1eb44c65cd"} Mar 11 08:58:30 crc kubenswrapper[4840]: I0311 08:58:30.549413 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" event={"ID":"935336e2-294b-4982-83f9-718806d14e5c","Type":"ContainerStarted","Data":"4a22746f21587fd8fd3d4a7350442c72a3131d5a9f5e661ff05d78377c5a00cb"} Mar 11 08:58:30 crc kubenswrapper[4840]: I0311 08:58:30.563204 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:30Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:30 crc kubenswrapper[4840]: I0311 08:58:30.578291 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:30 crc kubenswrapper[4840]: I0311 08:58:30.578357 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Mar 11 08:58:30 crc kubenswrapper[4840]: I0311 08:58:30.578371 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:30 crc kubenswrapper[4840]: I0311 08:58:30.578396 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:30 crc kubenswrapper[4840]: I0311 08:58:30.578412 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:30Z","lastTransitionTime":"2026-03-11T08:58:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:58:30 crc kubenswrapper[4840]: I0311 08:58:30.583963 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a735ab91afbbc50a948e293cb4907a10212b9a77c9e7506e21d75a4de4c74c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a498e6ab267b758bd2ba69dcbd7e6af3089b6c38bbf50dea80f1304c2190f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:30Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:30 crc kubenswrapper[4840]: I0311 08:58:30.602689 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a26aaac35b1d936b4ca0b8a96fccd108317a1456b2bac1c21eb28017ac6fd32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:30Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:30 crc kubenswrapper[4840]: I0311 08:58:30.617154 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jlzht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97907402-fb5a-4fb4-80ac-5b600527c547\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2004ebf76b86d594e220fd1f85f945d24094ff26400ba17e286e95b2e3a8d7a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjngl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jlzht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:30Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:30 crc kubenswrapper[4840]: I0311 08:58:30.635084 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xn47g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6b1fe1a-6473-41f8-a45f-aaaa148c1412\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a889eccbb6a0cbc75c8ed8ddeffb1713f30c667182a4ba135863c46bd6b81a10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a889eccbb6a0cbc75c8ed8ddeffb1713f30c667182a4ba135863c46bd6b81a10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c2ce0ba741f2aad38db697e3bf19c9e912296fd737657feace60a93382e2f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c2ce0ba741f2aad38db697e3bf19c9e912296fd737657feace60a93382e2f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"re
ason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xn47g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:30Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:30 crc kubenswrapper[4840]: I0311 08:58:30.652289 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vcb9n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c1678fd-7741-474b-9c8e-3008d3570921\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://affb76bb8b08157d415301fc93f48fe8c675137a1f6087bcade89dc117748638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-46dj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vcb9n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:30Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:30 crc kubenswrapper[4840]: I0311 08:58:30.667528 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-brtht" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://153456b9c54a907a1dada25ba9e84b3fe9f9b44a6e6b8e96d512944fb56884a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8c8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0602373fe91
019f9b20e701e042782a4eb5878ae2df86375738bc605412a803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8c8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-brtht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:30Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:30 crc kubenswrapper[4840]: I0311 08:58:30.682323 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:30 crc kubenswrapper[4840]: I0311 08:58:30.682391 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:30 crc kubenswrapper[4840]: I0311 08:58:30.682653 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 
11 08:58:30 crc kubenswrapper[4840]: I0311 08:58:30.682696 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:30 crc kubenswrapper[4840]: I0311 08:58:30.682896 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:30Z","lastTransitionTime":"2026-03-11T08:58:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:58:30 crc kubenswrapper[4840]: I0311 08:58:30.684420 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c3b2bde-8421-4e22-85ab-8b651c65bc9e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ed311d75feec58f86d1d9f435c6115a463b4e7cd3003b6dff8447360271b6a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33384f01914fa6428a8c359b3de0d20963b933f5c6d47519f059e48f85c9f4c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ca2279f323c0a6baf645b62c496c38d3d2ad4efc5033a0819ed1d58f4d862e10\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58277efdbfe7c94ad785e7d31bb5ba7313d04bf930896d6ded66fd44dd6239b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58277efdbfe7c94ad785e7d31bb5ba7313d04bf930896d6ded66fd44dd6239b5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-11T08:57:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW0311 08:57:44.699595 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0311 08:57:44.699747 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0311 08:57:44.700340 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2673359297/tls.crt::/tmp/serving-cert-2673359297/tls.key\\\\\\\"\\\\nI0311 08:57:44.923694 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0311 08:57:44.927612 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0311 08:57:44.927650 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0311 08:57:44.927690 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0311 08:57:44.927701 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0311 08:57:44.934515 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0311 08:57:44.934548 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0311 08:57:44.934556 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:57:44.934567 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:57:44.934574 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0311 08:57:44.934577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0311 08:57:44.934581 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0311 08:57:44.934584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0311 08:57:44.935676 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:57:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bd25fbbac425c9ba1169b1106b9ac77a80739a003bd795033d691ee273e0d3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e27a2ab44a237582284cb5f55d2651f7b5d39c199fdb62a4a65be9921e86945c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e27a2ab44a237582284cb5f55d2651f7b5d39c199fdb62a4a65be9921e86945c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:56:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:56:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:56:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:30Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:30 crc kubenswrapper[4840]: I0311 08:58:30.703996 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"935336e2-294b-4982-83f9-718806d14e5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID
\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wai
ting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":
0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63877eb44dae815dbd71053c89313ba836c2fdd90cc3d6d299526c027887e19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63877eb44dae815dbd71053c89313ba836c2fdd90cc3d6d299526c027887e19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\
\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7c2zl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:30Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:30 crc kubenswrapper[4840]: I0311 08:58:30.719720 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:30Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:30 crc kubenswrapper[4840]: I0311 08:58:30.735266 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:30Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:30 crc kubenswrapper[4840]: I0311 08:58:30.751044 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb68ea79d13c044390345ef25093ba46a60cb10b989c5acd69c63cafc1c4631f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-11T08:58:30Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:30 crc kubenswrapper[4840]: I0311 08:58:30.787316 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:30 crc kubenswrapper[4840]: I0311 08:58:30.787365 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:30 crc kubenswrapper[4840]: I0311 08:58:30.787389 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:30 crc kubenswrapper[4840]: I0311 08:58:30.787411 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:30 crc kubenswrapper[4840]: I0311 08:58:30.787427 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:30Z","lastTransitionTime":"2026-03-11T08:58:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:30 crc kubenswrapper[4840]: I0311 08:58:30.891551 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:30 crc kubenswrapper[4840]: I0311 08:58:30.891601 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:30 crc kubenswrapper[4840]: I0311 08:58:30.891614 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:30 crc kubenswrapper[4840]: I0311 08:58:30.891634 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:30 crc kubenswrapper[4840]: I0311 08:58:30.891648 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:30Z","lastTransitionTime":"2026-03-11T08:58:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:30 crc kubenswrapper[4840]: I0311 08:58:30.995085 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:30 crc kubenswrapper[4840]: I0311 08:58:30.995135 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:30 crc kubenswrapper[4840]: I0311 08:58:30.995147 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:30 crc kubenswrapper[4840]: I0311 08:58:30.995164 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:30 crc kubenswrapper[4840]: I0311 08:58:30.995175 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:30Z","lastTransitionTime":"2026-03-11T08:58:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:58:31 crc kubenswrapper[4840]: I0311 08:58:31.059760 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:58:31 crc kubenswrapper[4840]: I0311 08:58:31.059841 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 08:58:31 crc kubenswrapper[4840]: E0311 08:58:31.059922 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 11 08:58:31 crc kubenswrapper[4840]: I0311 08:58:31.059947 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:58:31 crc kubenswrapper[4840]: E0311 08:58:31.060057 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 11 08:58:31 crc kubenswrapper[4840]: E0311 08:58:31.060138 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 11 08:58:31 crc kubenswrapper[4840]: I0311 08:58:31.099119 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:31 crc kubenswrapper[4840]: I0311 08:58:31.099176 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:31 crc kubenswrapper[4840]: I0311 08:58:31.099190 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:31 crc kubenswrapper[4840]: I0311 08:58:31.099211 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:31 crc kubenswrapper[4840]: I0311 08:58:31.099224 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:31Z","lastTransitionTime":"2026-03-11T08:58:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:31 crc kubenswrapper[4840]: I0311 08:58:31.202427 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:31 crc kubenswrapper[4840]: I0311 08:58:31.202548 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:31 crc kubenswrapper[4840]: I0311 08:58:31.202575 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:31 crc kubenswrapper[4840]: I0311 08:58:31.202609 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:31 crc kubenswrapper[4840]: I0311 08:58:31.202637 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:31Z","lastTransitionTime":"2026-03-11T08:58:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:31 crc kubenswrapper[4840]: I0311 08:58:31.306289 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:31 crc kubenswrapper[4840]: I0311 08:58:31.306356 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:31 crc kubenswrapper[4840]: I0311 08:58:31.306374 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:31 crc kubenswrapper[4840]: I0311 08:58:31.306401 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:31 crc kubenswrapper[4840]: I0311 08:58:31.306418 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:31Z","lastTransitionTime":"2026-03-11T08:58:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:31 crc kubenswrapper[4840]: I0311 08:58:31.410658 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:31 crc kubenswrapper[4840]: I0311 08:58:31.410807 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:31 crc kubenswrapper[4840]: I0311 08:58:31.410848 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:31 crc kubenswrapper[4840]: I0311 08:58:31.410887 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:31 crc kubenswrapper[4840]: I0311 08:58:31.410914 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:31Z","lastTransitionTime":"2026-03-11T08:58:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:31 crc kubenswrapper[4840]: I0311 08:58:31.514321 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:31 crc kubenswrapper[4840]: I0311 08:58:31.514418 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:31 crc kubenswrapper[4840]: I0311 08:58:31.514431 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:31 crc kubenswrapper[4840]: I0311 08:58:31.514461 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:31 crc kubenswrapper[4840]: I0311 08:58:31.514498 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:31Z","lastTransitionTime":"2026-03-11T08:58:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:31 crc kubenswrapper[4840]: I0311 08:58:31.556981 4840 generic.go:334] "Generic (PLEG): container finished" podID="d6b1fe1a-6473-41f8-a45f-aaaa148c1412" containerID="98d87bbbdfbd9708e04ae69bfe137afd132af84db3b4328625bca81dfc385d92" exitCode=0 Mar 11 08:58:31 crc kubenswrapper[4840]: I0311 08:58:31.557084 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xn47g" event={"ID":"d6b1fe1a-6473-41f8-a45f-aaaa148c1412","Type":"ContainerDied","Data":"98d87bbbdfbd9708e04ae69bfe137afd132af84db3b4328625bca81dfc385d92"} Mar 11 08:58:31 crc kubenswrapper[4840]: I0311 08:58:31.577532 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a735ab91afbbc50a948e293cb4907a10212b9a77c9e7506e21d75a4de4c74c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\
\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a498e6ab267b758bd2ba69dcbd7e6af3089b6c38bbf50dea80f1304c2190f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:31Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:31 crc kubenswrapper[4840]: I0311 08:58:31.603500 4840 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:31Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:31 crc kubenswrapper[4840]: I0311 08:58:31.619614 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:31 crc kubenswrapper[4840]: I0311 08:58:31.619661 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:31 crc kubenswrapper[4840]: I0311 08:58:31.619671 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:31 crc kubenswrapper[4840]: I0311 08:58:31.619689 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:31 crc kubenswrapper[4840]: I0311 08:58:31.619699 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:31Z","lastTransitionTime":"2026-03-11T08:58:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:58:31 crc kubenswrapper[4840]: I0311 08:58:31.623121 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c3b2bde-8421-4e22-85ab-8b651c65bc9e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ed311d75feec58f86d1d9f435c6115a463b4e7cd3003b6dff8447360271b6a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33384f01914fa6428a8c359b3de0d20963b933f5c6d47519f059e48f85c9f4c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ca2279f323c0a6baf645b62c496c38d3d2ad4efc5033a0819ed1d58f4d862e10\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58277efdbfe7c94ad785e7d31bb5ba7313d04bf930896d6ded66fd44dd6239b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58277efdbfe7c94ad785e7d31bb5ba7313d04bf930896d6ded66fd44dd6239b5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-11T08:57:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW0311 08:57:44.699595 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0311 08:57:44.699747 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0311 08:57:44.700340 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2673359297/tls.crt::/tmp/serving-cert-2673359297/tls.key\\\\\\\"\\\\nI0311 08:57:44.923694 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0311 08:57:44.927612 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0311 08:57:44.927650 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0311 08:57:44.927690 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0311 08:57:44.927701 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0311 08:57:44.934515 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0311 08:57:44.934548 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0311 08:57:44.934556 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:57:44.934567 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:57:44.934574 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0311 08:57:44.934577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0311 08:57:44.934581 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0311 08:57:44.934584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0311 08:57:44.935676 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:57:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bd25fbbac425c9ba1169b1106b9ac77a80739a003bd795033d691ee273e0d3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e27a2ab44a237582284cb5f55d2651f7b5d39c199fdb62a4a65be9921e86945c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e27a2ab44a237582284cb5f55d2651f7b5d39c199fdb62a4a65be9921e86945c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:56:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:56:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:56:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:31Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:31 crc kubenswrapper[4840]: I0311 08:58:31.644408 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a26aaac35b1d936b4ca0b8a96fccd108317a1456b2bac1c21eb28017ac6fd32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\
":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:31Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:31 crc kubenswrapper[4840]: I0311 08:58:31.660102 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jlzht" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97907402-fb5a-4fb4-80ac-5b600527c547\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2004ebf76b86d594e220fd1f85f945d24094ff26400ba17e286e95b2e3a8d7a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjngl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jlzht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:31Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:31 crc kubenswrapper[4840]: I0311 08:58:31.683692 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xn47g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6b1fe1a-6473-41f8-a45f-aaaa148c1412\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a889eccbb6a0cbc75c8ed8ddeffb1713f30c667182a4ba135863c46bd6b81a10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a889eccbb6a0cbc75c8ed8ddeffb1713f30c667182a4ba135863c46bd6b81a10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c2ce0ba741f2aad38db697e3bf19c9e912296fd737657feace60a93382e2f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c2ce0ba741f2aad38db697e3bf19c9e912296fd737657feace60a93382e2f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98d87bbbdfbd9708e04ae69bfe137afd132af84db3b4328625bca81dfc385d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98d87bbbdfbd9708e04ae69bfe137afd132af84db3b4328625bca81dfc385d92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xn47g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:31Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:31 crc kubenswrapper[4840]: I0311 
08:58:31.712179 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vcb9n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c1678fd-7741-474b-9c8e-3008d3570921\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://affb76bb8b08157d415301fc93f48fe8c675137a1f6087bcade89dc117748638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/n
et.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-46dj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vcb9n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:31Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:31 crc kubenswrapper[4840]: I0311 
08:58:31.722852 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:31 crc kubenswrapper[4840]: I0311 08:58:31.722923 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:31 crc kubenswrapper[4840]: I0311 08:58:31.722965 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:31 crc kubenswrapper[4840]: I0311 08:58:31.722990 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:31 crc kubenswrapper[4840]: I0311 08:58:31.723008 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:31Z","lastTransitionTime":"2026-03-11T08:58:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:31 crc kubenswrapper[4840]: I0311 08:58:31.728197 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-brtht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://153456b9c54a907a1dada25ba9e84b3fe9f9b44a6e6b8e96d512944fb56884a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8c8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0602373fe91019f9b20e701e042782a4eb5878ae2df86375738bc605412a803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8c8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-brtht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:31Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:31 crc kubenswrapper[4840]: I0311 08:58:31.758730 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"935336e2-294b-4982-83f9-718806d14e5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63877eb44dae815dbd71053c89313ba836c2fdd90cc3d6d299526c027887e19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63877eb44dae815dbd71053c89313ba836c2fdd90cc3d6d299526c027887e19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7c2zl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:31Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:31 crc kubenswrapper[4840]: I0311 08:58:31.772298 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:31Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:31 crc kubenswrapper[4840]: I0311 08:58:31.785504 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:31Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:31 crc kubenswrapper[4840]: I0311 08:58:31.798634 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb68ea79d13c044390345ef25093ba46a60cb10b989c5acd69c63cafc1c4631f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-11T08:58:31Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:31 crc kubenswrapper[4840]: I0311 08:58:31.825497 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:31 crc kubenswrapper[4840]: I0311 08:58:31.825537 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:31 crc kubenswrapper[4840]: I0311 08:58:31.825548 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:31 crc kubenswrapper[4840]: I0311 08:58:31.825570 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:31 crc kubenswrapper[4840]: I0311 08:58:31.825584 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:31Z","lastTransitionTime":"2026-03-11T08:58:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:31 crc kubenswrapper[4840]: I0311 08:58:31.928859 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:31 crc kubenswrapper[4840]: I0311 08:58:31.928909 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:31 crc kubenswrapper[4840]: I0311 08:58:31.928926 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:31 crc kubenswrapper[4840]: I0311 08:58:31.928947 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:31 crc kubenswrapper[4840]: I0311 08:58:31.928963 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:31Z","lastTransitionTime":"2026-03-11T08:58:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:32 crc kubenswrapper[4840]: I0311 08:58:32.031996 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:32 crc kubenswrapper[4840]: I0311 08:58:32.032044 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:32 crc kubenswrapper[4840]: I0311 08:58:32.032056 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:32 crc kubenswrapper[4840]: I0311 08:58:32.032077 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:32 crc kubenswrapper[4840]: I0311 08:58:32.032091 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:32Z","lastTransitionTime":"2026-03-11T08:58:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:32 crc kubenswrapper[4840]: I0311 08:58:32.074976 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jlzht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97907402-fb5a-4fb4-80ac-5b600527c547\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2004ebf76b86d594e220fd1f85f945d24094ff26400ba17e286e95b2e3a8d7a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjngl\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jlzht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:32Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:32 crc kubenswrapper[4840]: I0311 08:58:32.091192 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xn47g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6b1fe1a-6473-41f8-a45f-aaaa148c1412\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a889eccbb6a0cbc75c8ed8ddeffb1713f30c667182a4ba135863c46bd6b81a10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a889eccbb6a0cbc75c8ed8ddeffb1713f30c667182a4ba135863c46bd6b81a10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c2ce0ba741f2aad38db697e3bf19c9e912296fd737657feace60a93382e2f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c2ce0ba741f2aad38db697e3bf19c9e912296fd737657feace60a93382e2f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98d87bbbdfbd9708e04ae69bfe137afd132af
84db3b4328625bca81dfc385d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98d87bbbdfbd9708e04ae69bfe137afd132af84db3b4328625bca81dfc385d92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"
kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xn47g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:32Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:32 crc kubenswrapper[4840]: I0311 08:58:32.105865 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vcb9n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c1678fd-7741-474b-9c8e-3008d3570921\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://affb76bb8b08157d415301fc93f48fe8c675137a1f6087bcade89dc117748638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-46dj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vcb9n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed 
to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:32Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:32 crc kubenswrapper[4840]: I0311 08:58:32.118686 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-brtht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://153456b9c54a907a1dada25ba9e84b3fe9f9b44a6e6b8e96d512944fb56884a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volum
eMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8c8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0602373fe91019f9b20e701e042782a4eb5878ae2df86375738bc605412a803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8c8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-brtht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:32Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:32 crc kubenswrapper[4840]: I0311 08:58:32.134008 4840 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:32 crc kubenswrapper[4840]: I0311 08:58:32.134041 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:32 crc kubenswrapper[4840]: I0311 08:58:32.134050 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:32 crc kubenswrapper[4840]: I0311 08:58:32.134069 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:32 crc kubenswrapper[4840]: I0311 08:58:32.134081 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:32Z","lastTransitionTime":"2026-03-11T08:58:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:32 crc kubenswrapper[4840]: I0311 08:58:32.134481 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c3b2bde-8421-4e22-85ab-8b651c65bc9e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ed311d75feec58f86d1d9f435c6115a463b4e7cd3003b6dff8447360271b6a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33384f01914fa6428a8c359b3de0d20963b933f5c6d47519f059e48f85c9f4c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ca2279f323c0a6baf645b62c496c38d3d2ad4efc5033a0819ed1d58f4d862e10\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58277efdbfe7c94ad785e7d31bb5ba7313d04bf930896d6ded66fd44dd6239b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58277efdbfe7c94ad785e7d31bb5ba7313d04bf930896d6ded66fd44dd6239b5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-11T08:57:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW0311 08:57:44.699595 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0311 08:57:44.699747 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0311 08:57:44.700340 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2673359297/tls.crt::/tmp/serving-cert-2673359297/tls.key\\\\\\\"\\\\nI0311 08:57:44.923694 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0311 08:57:44.927612 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0311 08:57:44.927650 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0311 08:57:44.927690 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0311 08:57:44.927701 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0311 08:57:44.934515 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0311 08:57:44.934548 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0311 08:57:44.934556 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:57:44.934567 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:57:44.934574 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0311 08:57:44.934577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0311 08:57:44.934581 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0311 08:57:44.934584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0311 08:57:44.935676 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:57:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bd25fbbac425c9ba1169b1106b9ac77a80739a003bd795033d691ee273e0d3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e27a2ab44a237582284cb5f55d2651f7b5d39c199fdb62a4a65be9921e86945c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e27a2ab44a237582284cb5f55d2651f7b5d39c199fdb62a4a65be9921e86945c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:56:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:56:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:56:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:32Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:32 crc kubenswrapper[4840]: I0311 08:58:32.152231 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a26aaac35b1d936b4ca0b8a96fccd108317a1456b2bac1c21eb28017ac6fd32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\
":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:32Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:32 crc kubenswrapper[4840]: I0311 08:58:32.171254 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"935336e2-294b-4982-83f9-718806d14e5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63877eb44dae815dbd71053c89313ba836c2fdd90cc3d6d299526c027887e19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://c63877eb44dae815dbd71053c89313ba836c2fdd90cc3d6d299526c027887e19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7c2zl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:32Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:32 crc kubenswrapper[4840]: I0311 08:58:32.184259 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"message\\\":\\\"containers with unready 
status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:32Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:32 crc kubenswrapper[4840]: I0311 08:58:32.197013 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:32Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:32 crc kubenswrapper[4840]: I0311 08:58:32.210148 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb68ea79d13c044390345ef25093ba46a60cb10b989c5acd69c63cafc1c4631f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-11T08:58:32Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:32 crc kubenswrapper[4840]: I0311 08:58:32.227845 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:32Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:32 crc kubenswrapper[4840]: I0311 08:58:32.237248 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:32 crc kubenswrapper[4840]: I0311 08:58:32.237323 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:32 crc kubenswrapper[4840]: I0311 08:58:32.237343 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:32 crc kubenswrapper[4840]: I0311 08:58:32.237358 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:32 crc kubenswrapper[4840]: I0311 08:58:32.237376 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:32Z","lastTransitionTime":"2026-03-11T08:58:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:58:32 crc kubenswrapper[4840]: I0311 08:58:32.242906 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a735ab91afbbc50a948e293cb4907a10212b9a77c9e7506e21d75a4de4c74c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://a8a498e6ab267b758bd2ba69dcbd7e6af3089b6c38bbf50dea80f1304c2190f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:32Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:32 crc kubenswrapper[4840]: I0311 08:58:32.339899 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:32 crc kubenswrapper[4840]: I0311 08:58:32.339952 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:32 crc kubenswrapper[4840]: I0311 08:58:32.339968 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:32 crc kubenswrapper[4840]: I0311 
08:58:32.339986 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:32 crc kubenswrapper[4840]: I0311 08:58:32.339999 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:32Z","lastTransitionTime":"2026-03-11T08:58:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:58:32 crc kubenswrapper[4840]: I0311 08:58:32.444038 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:32 crc kubenswrapper[4840]: I0311 08:58:32.444126 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:32 crc kubenswrapper[4840]: I0311 08:58:32.444148 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:32 crc kubenswrapper[4840]: I0311 08:58:32.444189 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:32 crc kubenswrapper[4840]: I0311 08:58:32.444210 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:32Z","lastTransitionTime":"2026-03-11T08:58:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:32 crc kubenswrapper[4840]: I0311 08:58:32.551571 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:32 crc kubenswrapper[4840]: I0311 08:58:32.551660 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:32 crc kubenswrapper[4840]: I0311 08:58:32.551683 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:32 crc kubenswrapper[4840]: I0311 08:58:32.551718 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:32 crc kubenswrapper[4840]: I0311 08:58:32.551753 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:32Z","lastTransitionTime":"2026-03-11T08:58:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:32 crc kubenswrapper[4840]: I0311 08:58:32.569313 4840 generic.go:334] "Generic (PLEG): container finished" podID="d6b1fe1a-6473-41f8-a45f-aaaa148c1412" containerID="5b25179d0a842def4bdb185e250d96cabd12909f5bc849f186d714721ba20a11" exitCode=0 Mar 11 08:58:32 crc kubenswrapper[4840]: I0311 08:58:32.569401 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xn47g" event={"ID":"d6b1fe1a-6473-41f8-a45f-aaaa148c1412","Type":"ContainerDied","Data":"5b25179d0a842def4bdb185e250d96cabd12909f5bc849f186d714721ba20a11"} Mar 11 08:58:32 crc kubenswrapper[4840]: I0311 08:58:32.580912 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" event={"ID":"935336e2-294b-4982-83f9-718806d14e5c","Type":"ContainerStarted","Data":"1338b0d6cdb9527554429be2e6fdec5c0b98075978344d168fd6e363eb12c879"} Mar 11 08:58:32 crc kubenswrapper[4840]: I0311 08:58:32.595267 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb68ea79d13c044390345ef25093ba46a60cb10b989c5acd69c63cafc1c4631f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-11T08:58:32Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:32 crc kubenswrapper[4840]: I0311 08:58:32.623262 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:32Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:32 crc kubenswrapper[4840]: I0311 08:58:32.638298 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a735ab91afbbc50a948e293cb4907a10212b9a77c9e7506e21d75a4de4c74c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a498e6ab267b758bd2ba69dcbd7e6af3089b6c38bbf50dea80f1304c2190f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:32Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:32 crc kubenswrapper[4840]: I0311 08:58:32.650236 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jlzht" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97907402-fb5a-4fb4-80ac-5b600527c547\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2004ebf76b86d594e220fd1f85f945d24094ff26400ba17e286e95b2e3a8d7a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjngl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jlzht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:32Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:32 crc kubenswrapper[4840]: I0311 08:58:32.657558 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:32 crc kubenswrapper[4840]: I0311 08:58:32.657604 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:32 crc kubenswrapper[4840]: I0311 08:58:32.657621 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:32 crc kubenswrapper[4840]: I0311 08:58:32.657646 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:32 crc kubenswrapper[4840]: I0311 08:58:32.657661 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:32Z","lastTransitionTime":"2026-03-11T08:58:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:32 crc kubenswrapper[4840]: I0311 08:58:32.669047 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xn47g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6b1fe1a-6473-41f8-a45f-aaaa148c1412\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a889eccbb6a0cbc75c8ed8ddeffb1713f30c667182a4ba135863c46bd6b81a10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a889eccbb6a0cbc75c8ed8ddeffb1713f30c667182a4ba135863c46bd6b81a10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c2ce0ba741f2aad38db697e3bf19c9e912296fd737657feace60a93382e2f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c2ce0ba741f2aad38db697e3bf19c9e912296fd737657feace60a93382e2f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98d87bbbdfbd9708e04ae69bfe137afd132af84db3b4328625bca81dfc385d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98d87bbbdfbd9708e04ae69bfe137afd132af84db3b4328625bca81dfc385d92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b25179d0a842def4bdb185e250d96cabd12909f5bc849f186d714721ba20a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b25179d0a842def4bdb185e250d96cabd12909f5bc849f186d714721ba20a11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:27Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-xn47g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:32Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:32 crc kubenswrapper[4840]: I0311 08:58:32.683005 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vcb9n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c1678fd-7741-474b-9c8e-3008d3570921\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://affb76bb8b08157d415301fc93f48fe8c675137a1f6087bcade89dc117748638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"resta
rtCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-46dj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:27Z\
\\"}}\" for pod \"openshift-multus\"/\"multus-vcb9n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:32Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:32 crc kubenswrapper[4840]: I0311 08:58:32.695670 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-brtht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://153456b9c54a907a1dada25ba9e84b3fe9f9b44a6e6b8e96d512944fb56884a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8c8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0602373fe91019f9b20e701e042782a4eb5878ae2df86375738bc605412a803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8c8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-brtht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-03-11T08:58:32Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:32 crc kubenswrapper[4840]: I0311 08:58:32.709961 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c3b2bde-8421-4e22-85ab-8b651c65bc9e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ed311d75feec58f86d1d9f435c6115a463b4e7cd3003b6dff8447360271b6a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33384f01914fa6428a8c359b3de0d20963b933f5c6d47519f059e48f85c9f4c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ca2279f323c0a6baf645b62c496c38d3d2ad4efc5033a0819ed1d58f4d862e10\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58277efdbfe7c94ad785e7d31bb5ba7313d04bf930896d6ded66fd44dd6239b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58277efdbfe7c94ad785e7d31bb5ba7313d04bf930896d6ded66fd44dd6239b5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-11T08:57:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW0311 08:57:44.699595 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0311 08:57:44.699747 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0311 08:57:44.700340 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2673359297/tls.crt::/tmp/serving-cert-2673359297/tls.key\\\\\\\"\\\\nI0311 08:57:44.923694 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0311 08:57:44.927612 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0311 08:57:44.927650 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0311 08:57:44.927690 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0311 08:57:44.927701 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0311 08:57:44.934515 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0311 08:57:44.934548 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0311 08:57:44.934556 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:57:44.934567 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:57:44.934574 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0311 08:57:44.934577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0311 08:57:44.934581 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0311 08:57:44.934584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0311 08:57:44.935676 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:57:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bd25fbbac425c9ba1169b1106b9ac77a80739a003bd795033d691ee273e0d3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e27a2ab44a237582284cb5f55d2651f7b5d39c199fdb62a4a65be9921e86945c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e27a2ab44a237582284cb5f55d2651f7b5d39c199fdb62a4a65be9921e86945c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:56:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:56:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:56:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:32Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:32 crc kubenswrapper[4840]: I0311 08:58:32.726601 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a26aaac35b1d936b4ca0b8a96fccd108317a1456b2bac1c21eb28017ac6fd32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\
":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:32Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:32 crc kubenswrapper[4840]: I0311 08:58:32.749097 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"935336e2-294b-4982-83f9-718806d14e5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63877eb44dae815dbd71053c89313ba836c2fdd90cc3d6d299526c027887e19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://c63877eb44dae815dbd71053c89313ba836c2fdd90cc3d6d299526c027887e19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7c2zl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:32Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:32 crc kubenswrapper[4840]: I0311 08:58:32.762987 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"message\\\":\\\"containers with unready 
status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:32Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:32 crc kubenswrapper[4840]: I0311 08:58:32.765412 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:32 crc kubenswrapper[4840]: I0311 08:58:32.765444 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:32 crc kubenswrapper[4840]: I0311 08:58:32.765457 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:32 crc kubenswrapper[4840]: I0311 
08:58:32.765527 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:32 crc kubenswrapper[4840]: I0311 08:58:32.765729 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:32Z","lastTransitionTime":"2026-03-11T08:58:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:58:32 crc kubenswrapper[4840]: I0311 08:58:32.777370 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:32Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:32 crc kubenswrapper[4840]: I0311 08:58:32.869203 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:32 crc kubenswrapper[4840]: I0311 08:58:32.869668 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:32 crc kubenswrapper[4840]: I0311 08:58:32.869681 4840 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:32 crc kubenswrapper[4840]: I0311 08:58:32.869702 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:32 crc kubenswrapper[4840]: I0311 08:58:32.869715 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:32Z","lastTransitionTime":"2026-03-11T08:58:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:58:32 crc kubenswrapper[4840]: I0311 08:58:32.975451 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:32 crc kubenswrapper[4840]: I0311 08:58:32.975572 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:32 crc kubenswrapper[4840]: I0311 08:58:32.975601 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:32 crc kubenswrapper[4840]: I0311 08:58:32.975639 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:32 crc kubenswrapper[4840]: I0311 08:58:32.975665 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:32Z","lastTransitionTime":"2026-03-11T08:58:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:33 crc kubenswrapper[4840]: I0311 08:58:33.059622 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:58:33 crc kubenswrapper[4840]: I0311 08:58:33.059714 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:58:33 crc kubenswrapper[4840]: I0311 08:58:33.059810 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 08:58:33 crc kubenswrapper[4840]: E0311 08:58:33.060029 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 11 08:58:33 crc kubenswrapper[4840]: E0311 08:58:33.060275 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 11 08:58:33 crc kubenswrapper[4840]: E0311 08:58:33.060786 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 11 08:58:33 crc kubenswrapper[4840]: I0311 08:58:33.061062 4840 scope.go:117] "RemoveContainer" containerID="58277efdbfe7c94ad785e7d31bb5ba7313d04bf930896d6ded66fd44dd6239b5" Mar 11 08:58:33 crc kubenswrapper[4840]: I0311 08:58:33.079326 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:33 crc kubenswrapper[4840]: I0311 08:58:33.079403 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:33 crc kubenswrapper[4840]: I0311 08:58:33.079430 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:33 crc kubenswrapper[4840]: I0311 08:58:33.079496 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:33 crc kubenswrapper[4840]: I0311 08:58:33.079525 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:33Z","lastTransitionTime":"2026-03-11T08:58:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:33 crc kubenswrapper[4840]: I0311 08:58:33.182972 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:33 crc kubenswrapper[4840]: I0311 08:58:33.183053 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:33 crc kubenswrapper[4840]: I0311 08:58:33.183077 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:33 crc kubenswrapper[4840]: I0311 08:58:33.183115 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:33 crc kubenswrapper[4840]: I0311 08:58:33.183137 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:33Z","lastTransitionTime":"2026-03-11T08:58:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:33 crc kubenswrapper[4840]: I0311 08:58:33.286400 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:33 crc kubenswrapper[4840]: I0311 08:58:33.286441 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:33 crc kubenswrapper[4840]: I0311 08:58:33.286453 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:33 crc kubenswrapper[4840]: I0311 08:58:33.286489 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:33 crc kubenswrapper[4840]: I0311 08:58:33.286506 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:33Z","lastTransitionTime":"2026-03-11T08:58:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:33 crc kubenswrapper[4840]: I0311 08:58:33.389365 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:33 crc kubenswrapper[4840]: I0311 08:58:33.389417 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:33 crc kubenswrapper[4840]: I0311 08:58:33.389431 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:33 crc kubenswrapper[4840]: I0311 08:58:33.389450 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:33 crc kubenswrapper[4840]: I0311 08:58:33.389482 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:33Z","lastTransitionTime":"2026-03-11T08:58:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:33 crc kubenswrapper[4840]: I0311 08:58:33.492299 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:33 crc kubenswrapper[4840]: I0311 08:58:33.492352 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:33 crc kubenswrapper[4840]: I0311 08:58:33.492369 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:33 crc kubenswrapper[4840]: I0311 08:58:33.492396 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:33 crc kubenswrapper[4840]: I0311 08:58:33.492414 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:33Z","lastTransitionTime":"2026-03-11T08:58:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:58:33 crc kubenswrapper[4840]: I0311 08:58:33.509484 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-4tjtn"] Mar 11 08:58:33 crc kubenswrapper[4840]: I0311 08:58:33.510039 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-4tjtn" Mar 11 08:58:33 crc kubenswrapper[4840]: I0311 08:58:33.513239 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Mar 11 08:58:33 crc kubenswrapper[4840]: I0311 08:58:33.513314 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Mar 11 08:58:33 crc kubenswrapper[4840]: I0311 08:58:33.513397 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Mar 11 08:58:33 crc kubenswrapper[4840]: I0311 08:58:33.513540 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Mar 11 08:58:33 crc kubenswrapper[4840]: I0311 08:58:33.531662 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:33Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:33 crc kubenswrapper[4840]: I0311 08:58:33.552280 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a735ab91afbbc50a948e293cb4907a10212b9a77c9e7506e21d75a4de4c74c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a498e6ab267b758bd2ba69dcbd7e6af3089b6c38bbf50dea80f1304c2190f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:33Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:33 crc kubenswrapper[4840]: I0311 08:58:33.575901 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a26aaac35b1d936b4ca0b8a96fccd108317a1456b2bac1c21eb28017ac6fd32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:33Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:33 crc kubenswrapper[4840]: I0311 08:58:33.587852 4840 generic.go:334] "Generic (PLEG): container finished" podID="d6b1fe1a-6473-41f8-a45f-aaaa148c1412" containerID="156a271cc5e6bd9b7ea68a0a1486ba9460f2d31812bb42a1c06a9e79439e1c16" exitCode=0 Mar 11 08:58:33 crc kubenswrapper[4840]: I0311 08:58:33.587920 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xn47g" event={"ID":"d6b1fe1a-6473-41f8-a45f-aaaa148c1412","Type":"ContainerDied","Data":"156a271cc5e6bd9b7ea68a0a1486ba9460f2d31812bb42a1c06a9e79439e1c16"} Mar 11 08:58:33 crc kubenswrapper[4840]: I0311 08:58:33.589708 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Mar 11 08:58:33 crc kubenswrapper[4840]: I0311 08:58:33.591256 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"fe2e8e966f7348e19e40d9ac00cabea3da1a67e4d7b149c094505622bf5d6cbc"} Mar 11 08:58:33 crc kubenswrapper[4840]: I0311 08:58:33.591544 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 11 08:58:33 crc kubenswrapper[4840]: I0311 08:58:33.594014 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:33 crc kubenswrapper[4840]: I0311 08:58:33.594082 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:33 crc kubenswrapper[4840]: I0311 08:58:33.594094 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:33 crc kubenswrapper[4840]: 
I0311 08:58:33.594111 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:33 crc kubenswrapper[4840]: I0311 08:58:33.594125 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:33Z","lastTransitionTime":"2026-03-11T08:58:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:58:33 crc kubenswrapper[4840]: I0311 08:58:33.594996 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jlzht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97907402-fb5a-4fb4-80ac-5b600527c547\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2004ebf76b86d594e220fd1f85f945d24094ff26400ba17e286e95b2e3a8d7a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b
19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjngl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jlzht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:33Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:33 crc kubenswrapper[4840]: I0311 08:58:33.627020 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xn47g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6b1fe1a-6473-41f8-a45f-aaaa148c1412\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a889eccbb6a0cbc75c8ed8ddeffb1713f30c667182a4ba135863c46bd6b81a10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a889eccbb6a0cbc75c8ed8ddeffb1713f30c667182a4ba135863c46bd6b81a10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c2ce0ba741f2aad38db697e3bf19c9e912296fd737657feace60a93382e2f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c2ce0ba741f2aad38db697e3bf19c9e912296fd737657feace60a93382e2f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98d87bbbdfbd9708e04ae69bfe137afd132af84db3b4328625bca81dfc385d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98d87bbbdfbd9708e04ae69bfe137afd132af84db3b4328625bca81dfc385d92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b25179d0a842def4bdb185e250d96cabd12909f5bc849f186d714721ba20a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b25179d0a842def4bdb185e250d96cabd12909f5bc849f186d714721ba20a11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:27Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-xn47g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:33Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:33 crc kubenswrapper[4840]: I0311 08:58:33.647382 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vcb9n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c1678fd-7741-474b-9c8e-3008d3570921\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://affb76bb8b08157d415301fc93f48fe8c675137a1f6087bcade89dc117748638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"resta
rtCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-46dj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:27Z\
\\"}}\" for pod \"openshift-multus\"/\"multus-vcb9n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:33Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:33 crc kubenswrapper[4840]: I0311 08:58:33.647550 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0c3b7839-5a3d-42f4-a871-1baa77d4f6a3-host\") pod \"node-ca-4tjtn\" (UID: \"0c3b7839-5a3d-42f4-a871-1baa77d4f6a3\") " pod="openshift-image-registry/node-ca-4tjtn" Mar 11 08:58:33 crc kubenswrapper[4840]: I0311 08:58:33.647664 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/0c3b7839-5a3d-42f4-a871-1baa77d4f6a3-serviceca\") pod \"node-ca-4tjtn\" (UID: \"0c3b7839-5a3d-42f4-a871-1baa77d4f6a3\") " pod="openshift-image-registry/node-ca-4tjtn" Mar 11 08:58:33 crc kubenswrapper[4840]: I0311 08:58:33.647792 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5b8p\" (UniqueName: \"kubernetes.io/projected/0c3b7839-5a3d-42f4-a871-1baa77d4f6a3-kube-api-access-q5b8p\") pod \"node-ca-4tjtn\" (UID: \"0c3b7839-5a3d-42f4-a871-1baa77d4f6a3\") " pod="openshift-image-registry/node-ca-4tjtn" Mar 11 08:58:33 crc kubenswrapper[4840]: I0311 08:58:33.661181 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-brtht" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://153456b9c54a907a1dada25ba9e84b3fe9f9b44a6e6b8e96d512944fb56884a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8c8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0602373fe91019f9b20e701e042782a4eb5878a
e2df86375738bc605412a803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8c8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-brtht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:33Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:33 crc kubenswrapper[4840]: I0311 08:58:33.678214 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c3b2bde-8421-4e22-85ab-8b651c65bc9e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ed311d75feec58f86d1d9f435c6115a463b4e7cd3003b6dff8447360271b6a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33384f01914fa6428a8c359b3de0d20963b933f5c6d47519f059e48f85c9f4c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca2279f323c0a6baf645b62c496c38d3d2ad4efc5033a0819ed1d58f4d862e10\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58277efdbfe7c94ad785e7d31bb5ba7313d04bf930896d6ded66fd44dd6239b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58277efdbfe7c94ad785e7d31bb5ba7313d04bf930896d6ded66fd44dd6239b5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-11T08:57:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW0311 08:57:44.699595 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0311 08:57:44.699747 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0311 08:57:44.700340 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2673359297/tls.crt::/tmp/serving-cert-2673359297/tls.key\\\\\\\"\\\\nI0311 08:57:44.923694 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0311 08:57:44.927612 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0311 08:57:44.927650 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0311 08:57:44.927690 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0311 08:57:44.927701 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0311 08:57:44.934515 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0311 08:57:44.934548 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0311 08:57:44.934556 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:57:44.934567 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:57:44.934574 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0311 
08:57:44.934577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0311 08:57:44.934581 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0311 08:57:44.934584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0311 08:57:44.935676 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:57:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bd25fbbac425c9ba1169b1106b9ac77a80739a003bd795033d691ee273e0d3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e27a2ab44a237582284cb5f55d2651f7b5d39c199fdb62a4a65be9921e86945c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e27a2ab44a237582284cb5f55d2651f7b5d39c199fdb62a4a65be9921e86945c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:56:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:56:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:56:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:33Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:33 crc kubenswrapper[4840]: I0311 08:58:33.699612 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"935336e2-294b-4982-83f9-718806d14e5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63877eb44dae815dbd71053c89313ba836c2fdd90cc3d6d299526c027887e19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63877eb44dae815dbd71053c89313ba836c2fdd90cc3d6d299526c027887e19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7c2zl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:33Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:33 crc kubenswrapper[4840]: I0311 08:58:33.700325 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:33 crc kubenswrapper[4840]: I0311 08:58:33.700374 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:33 crc kubenswrapper[4840]: I0311 08:58:33.700415 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:33 crc kubenswrapper[4840]: I0311 08:58:33.700432 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:33 crc kubenswrapper[4840]: I0311 08:58:33.700442 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:33Z","lastTransitionTime":"2026-03-11T08:58:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:33 crc kubenswrapper[4840]: I0311 08:58:33.713809 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:33Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:33 crc kubenswrapper[4840]: I0311 08:58:33.726417 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:33Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:33 crc kubenswrapper[4840]: I0311 08:58:33.745283 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4tjtn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c3b7839-5a3d-42f4-a871-1baa77d4f6a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:33Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5b8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4tjtn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:33Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:33 crc kubenswrapper[4840]: I0311 08:58:33.748320 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/0c3b7839-5a3d-42f4-a871-1baa77d4f6a3-serviceca\") pod \"node-ca-4tjtn\" (UID: \"0c3b7839-5a3d-42f4-a871-1baa77d4f6a3\") " pod="openshift-image-registry/node-ca-4tjtn" Mar 11 08:58:33 crc kubenswrapper[4840]: I0311 08:58:33.748367 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-q5b8p\" (UniqueName: \"kubernetes.io/projected/0c3b7839-5a3d-42f4-a871-1baa77d4f6a3-kube-api-access-q5b8p\") pod \"node-ca-4tjtn\" (UID: \"0c3b7839-5a3d-42f4-a871-1baa77d4f6a3\") " pod="openshift-image-registry/node-ca-4tjtn" Mar 11 08:58:33 crc kubenswrapper[4840]: I0311 08:58:33.748403 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0c3b7839-5a3d-42f4-a871-1baa77d4f6a3-host\") pod \"node-ca-4tjtn\" (UID: \"0c3b7839-5a3d-42f4-a871-1baa77d4f6a3\") " pod="openshift-image-registry/node-ca-4tjtn" Mar 11 08:58:33 crc kubenswrapper[4840]: I0311 08:58:33.748505 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0c3b7839-5a3d-42f4-a871-1baa77d4f6a3-host\") pod \"node-ca-4tjtn\" (UID: \"0c3b7839-5a3d-42f4-a871-1baa77d4f6a3\") " pod="openshift-image-registry/node-ca-4tjtn" Mar 11 08:58:33 crc kubenswrapper[4840]: I0311 08:58:33.750201 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/0c3b7839-5a3d-42f4-a871-1baa77d4f6a3-serviceca\") pod \"node-ca-4tjtn\" (UID: \"0c3b7839-5a3d-42f4-a871-1baa77d4f6a3\") " pod="openshift-image-registry/node-ca-4tjtn" Mar 11 08:58:33 crc kubenswrapper[4840]: I0311 08:58:33.762452 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb68ea79d13c044390345ef25093ba46a60cb10b989c5acd69c63cafc1c4631f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-11T08:58:33Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:33 crc kubenswrapper[4840]: I0311 08:58:33.774350 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5b8p\" (UniqueName: \"kubernetes.io/projected/0c3b7839-5a3d-42f4-a871-1baa77d4f6a3-kube-api-access-q5b8p\") pod \"node-ca-4tjtn\" (UID: \"0c3b7839-5a3d-42f4-a871-1baa77d4f6a3\") " pod="openshift-image-registry/node-ca-4tjtn" Mar 11 08:58:33 crc kubenswrapper[4840]: I0311 08:58:33.777655 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a735ab91afbbc50a948e293cb4907a10212b9a77c9e7506e21d75a4de4c74c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\
"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a498e6ab267b758bd2ba69dcbd7e6af3089b6c38bbf50dea80f1304c2190f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:33Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:33 crc kubenswrapper[4840]: I0311 08:58:33.791902 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:33Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:33 crc kubenswrapper[4840]: I0311 08:58:33.808000 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:33 crc kubenswrapper[4840]: I0311 08:58:33.808580 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:33 crc kubenswrapper[4840]: I0311 08:58:33.808592 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:33 crc kubenswrapper[4840]: I0311 08:58:33.808613 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:33 crc kubenswrapper[4840]: I0311 08:58:33.808630 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:33Z","lastTransitionTime":"2026-03-11T08:58:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:58:33 crc kubenswrapper[4840]: I0311 08:58:33.808746 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c3b2bde-8421-4e22-85ab-8b651c65bc9e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ed311d75feec58f86d1d9f435c6115a463b4e7cd3003b6dff8447360271b6a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33384f01914fa6428a8c359b3de0d20963b933f5c6d47519f059e48f85c9f4c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ca2279f323c0a6baf645b62c496c38d3d2ad4efc5033a0819ed1d58f4d862e10\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe2e8e966f7348e19e40d9ac00cabea3da1a67e4d7b149c094505622bf5d6cbc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58277efdbfe7c94ad785e7d31bb5ba7313d04bf930896d6ded66fd44dd6239b5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-11T08:57:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW0311 08:57:44.699595 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0311 08:57:44.699747 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0311 08:57:44.700340 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2673359297/tls.crt::/tmp/serving-cert-2673359297/tls.key\\\\\\\"\\\\nI0311 08:57:44.923694 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0311 08:57:44.927612 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0311 08:57:44.927650 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0311 08:57:44.927690 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0311 08:57:44.927701 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0311 08:57:44.934515 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0311 08:57:44.934548 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0311 08:57:44.934556 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:57:44.934567 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:57:44.934574 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0311 08:57:44.934577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0311 08:57:44.934581 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0311 08:57:44.934584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0311 08:57:44.935676 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:57:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bd25fbbac425c9ba1169b1106b9ac77a80739a003bd795033d691ee273e0d3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e27a2ab44a237582284cb5f55d2651f7b5d39c199fdb62a4a65be9921e86945c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e27a2ab44a237582284cb5f55d2651f7b5d39c199fdb62a4a65be9921e86945c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:56:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-03-11T08:56:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:56:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:33Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:33 crc kubenswrapper[4840]: I0311 08:58:33.824821 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-4tjtn" Mar 11 08:58:33 crc kubenswrapper[4840]: I0311 08:58:33.829901 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a26aaac35b1d936b4ca0b8a96fccd108317a1456b2bac1c21eb28017ac6fd32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:33Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:33 crc kubenswrapper[4840]: I0311 08:58:33.842778 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jlzht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97907402-fb5a-4fb4-80ac-5b600527c547\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2004ebf76b86d594e220fd1f85f945d24094ff26400ba17e286e95b2e3a8d7a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjngl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jlzht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:33Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:33 crc kubenswrapper[4840]: I0311 08:58:33.867212 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xn47g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6b1fe1a-6473-41f8-a45f-aaaa148c1412\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a889eccbb6a0cbc75c8ed8ddeffb1713f30c667182a4ba135863c46bd6b81a10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a889eccbb6a0cbc75c8ed8ddeffb1713f30c667182a4ba135863c46bd6b81a10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c2ce0ba741f2aad38db697e3bf19c9e912296fd737657feace60a93382e2f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c2ce0ba741f2aad38db697e3bf19c9e912296fd737657feace60a93382e2f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98d87bbbdfbd9708e04ae69bfe137afd132af
84db3b4328625bca81dfc385d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98d87bbbdfbd9708e04ae69bfe137afd132af84db3b4328625bca81dfc385d92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b25179d0a842def4bdb185e250d96cabd12909f5bc849f186d714721ba20a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b25179d0a842def4bdb185e250d96cabd12909f5bc849f186d714721ba20a11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-0
3-11T08:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://156a271cc5e6bd9b7ea68a0a1486ba9460f2d31812bb42a1c06a9e79439e1c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://156a271cc5e6bd9b7ea68a0a1486ba9460f2d31812bb42a1c06a9e79439e1c16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xn47g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:33Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:33 crc kubenswrapper[4840]: I0311 08:58:33.884502 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vcb9n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c1678fd-7741-474b-9c8e-3008d3570921\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://affb76bb8b08157d415301fc93f48fe8c675137a1f6087bcade89dc117748638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-46dj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vcb9n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:33Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:33 crc kubenswrapper[4840]: I0311 08:58:33.905608 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-brtht" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://153456b9c54a907a1dada25ba9e84b3fe9f9b44a6e6b8e96d512944fb56884a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8c8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0602373fe91
019f9b20e701e042782a4eb5878ae2df86375738bc605412a803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8c8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-brtht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:33Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:33 crc kubenswrapper[4840]: I0311 08:58:33.912750 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:33 crc kubenswrapper[4840]: I0311 08:58:33.912810 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:33 crc kubenswrapper[4840]: I0311 08:58:33.912824 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 
11 08:58:33 crc kubenswrapper[4840]: I0311 08:58:33.912842 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:33 crc kubenswrapper[4840]: I0311 08:58:33.912853 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:33Z","lastTransitionTime":"2026-03-11T08:58:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:58:33 crc kubenswrapper[4840]: I0311 08:58:33.928615 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"935336e2-294b-4982-83f9-718806d14e5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"re
startCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn
kube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\"
:\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63877eb44dae815dbd71053c89313ba836c2fdd90cc3d6d299526c027887e19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63877eb44dae815dbd71053c89313ba836c2fdd90cc3d6d299526c027887e19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}}
,\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7c2zl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:33Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:33 crc kubenswrapper[4840]: I0311 08:58:33.945371 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4tjtn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c3b7839-5a3d-42f4-a871-1baa77d4f6a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:33Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5b8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4tjtn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:33Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:33 crc kubenswrapper[4840]: I0311 08:58:33.965855 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:33Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:33 crc kubenswrapper[4840]: I0311 08:58:33.980572 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:33Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:33 crc kubenswrapper[4840]: I0311 08:58:33.994242 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb68ea79d13c044390345ef25093ba46a60cb10b989c5acd69c63cafc1c4631f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-11T08:58:33Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:34 crc kubenswrapper[4840]: I0311 08:58:34.015188 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:34 crc kubenswrapper[4840]: I0311 08:58:34.015243 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:34 crc kubenswrapper[4840]: I0311 08:58:34.015254 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:34 crc kubenswrapper[4840]: I0311 08:58:34.015272 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:34 crc kubenswrapper[4840]: I0311 08:58:34.015282 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:34Z","lastTransitionTime":"2026-03-11T08:58:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:34 crc kubenswrapper[4840]: I0311 08:58:34.118021 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:34 crc kubenswrapper[4840]: I0311 08:58:34.118067 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:34 crc kubenswrapper[4840]: I0311 08:58:34.118080 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:34 crc kubenswrapper[4840]: I0311 08:58:34.118098 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:34 crc kubenswrapper[4840]: I0311 08:58:34.118111 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:34Z","lastTransitionTime":"2026-03-11T08:58:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:34 crc kubenswrapper[4840]: I0311 08:58:34.220889 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:34 crc kubenswrapper[4840]: I0311 08:58:34.220938 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:34 crc kubenswrapper[4840]: I0311 08:58:34.220953 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:34 crc kubenswrapper[4840]: I0311 08:58:34.220977 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:34 crc kubenswrapper[4840]: I0311 08:58:34.220990 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:34Z","lastTransitionTime":"2026-03-11T08:58:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:34 crc kubenswrapper[4840]: I0311 08:58:34.324207 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:34 crc kubenswrapper[4840]: I0311 08:58:34.324259 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:34 crc kubenswrapper[4840]: I0311 08:58:34.324271 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:34 crc kubenswrapper[4840]: I0311 08:58:34.324292 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:34 crc kubenswrapper[4840]: I0311 08:58:34.324312 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:34Z","lastTransitionTime":"2026-03-11T08:58:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:34 crc kubenswrapper[4840]: I0311 08:58:34.426983 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:34 crc kubenswrapper[4840]: I0311 08:58:34.427031 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:34 crc kubenswrapper[4840]: I0311 08:58:34.427040 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:34 crc kubenswrapper[4840]: I0311 08:58:34.427060 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:34 crc kubenswrapper[4840]: I0311 08:58:34.427069 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:34Z","lastTransitionTime":"2026-03-11T08:58:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:34 crc kubenswrapper[4840]: I0311 08:58:34.529764 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:34 crc kubenswrapper[4840]: I0311 08:58:34.529806 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:34 crc kubenswrapper[4840]: I0311 08:58:34.529819 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:34 crc kubenswrapper[4840]: I0311 08:58:34.529836 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:34 crc kubenswrapper[4840]: I0311 08:58:34.529849 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:34Z","lastTransitionTime":"2026-03-11T08:58:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:34 crc kubenswrapper[4840]: I0311 08:58:34.601385 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-4tjtn" event={"ID":"0c3b7839-5a3d-42f4-a871-1baa77d4f6a3","Type":"ContainerStarted","Data":"3703388f56cc4d0f73dcdc31a5fae2dd6cc275b3a86ae8ca3733190a08bac57a"} Mar 11 08:58:34 crc kubenswrapper[4840]: I0311 08:58:34.601494 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-4tjtn" event={"ID":"0c3b7839-5a3d-42f4-a871-1baa77d4f6a3","Type":"ContainerStarted","Data":"7415e848110d5aaf1c9bd79278f408018ea488da41d2309a97ecaceefa1853ba"} Mar 11 08:58:34 crc kubenswrapper[4840]: I0311 08:58:34.616365 4840 generic.go:334] "Generic (PLEG): container finished" podID="d6b1fe1a-6473-41f8-a45f-aaaa148c1412" containerID="62a49366d891284607145968679e0c211b99a48d955857bdf0577792e0fc1f18" exitCode=0 Mar 11 08:58:34 crc kubenswrapper[4840]: I0311 08:58:34.616580 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xn47g" event={"ID":"d6b1fe1a-6473-41f8-a45f-aaaa148c1412","Type":"ContainerDied","Data":"62a49366d891284607145968679e0c211b99a48d955857bdf0577792e0fc1f18"} Mar 11 08:58:34 crc kubenswrapper[4840]: I0311 08:58:34.626742 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" event={"ID":"935336e2-294b-4982-83f9-718806d14e5c","Type":"ContainerStarted","Data":"5372c48a41c475f20ba6b46f1b3e1cd5e8b16dba66380a60259987ba30c13fa5"} Mar 11 08:58:34 crc kubenswrapper[4840]: I0311 08:58:34.626820 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" Mar 11 08:58:34 crc kubenswrapper[4840]: I0311 08:58:34.626886 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" Mar 11 08:58:34 crc kubenswrapper[4840]: I0311 08:58:34.627168 
4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" Mar 11 08:58:34 crc kubenswrapper[4840]: I0311 08:58:34.627730 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:34Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:34 crc kubenswrapper[4840]: I0311 08:58:34.634782 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:34 crc kubenswrapper[4840]: I0311 08:58:34.634843 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:34 crc kubenswrapper[4840]: I0311 08:58:34.634863 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:34 crc kubenswrapper[4840]: I0311 08:58:34.634891 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:34 crc kubenswrapper[4840]: I0311 08:58:34.634912 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:34Z","lastTransitionTime":"2026-03-11T08:58:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:58:34 crc kubenswrapper[4840]: I0311 08:58:34.645115 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a735ab91afbbc50a948e293cb4907a10212b9a77c9e7506e21d75a4de4c74c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://a8a498e6ab267b758bd2ba69dcbd7e6af3089b6c38bbf50dea80f1304c2190f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:34Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:34 crc kubenswrapper[4840]: I0311 08:58:34.666166 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vcb9n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c1678fd-7741-474b-9c8e-3008d3570921\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://affb76bb8b08157d415301fc93f48fe8c675137a1f6087bcade89dc117748638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-46dj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vcb9n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:34Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:34 crc kubenswrapper[4840]: I0311 08:58:34.671374 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" Mar 11 
08:58:34 crc kubenswrapper[4840]: I0311 08:58:34.677068 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" Mar 11 08:58:34 crc kubenswrapper[4840]: I0311 08:58:34.682225 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-brtht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://153456b9c54a907a1dada25ba9e84b3fe9f9b44a6e6b8e96d512944fb56884a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8c8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0602373fe91019f9b20e701e042782a4eb5878ae2df86375738bc605412a803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8c8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-brtht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:34Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:34 crc kubenswrapper[4840]: I0311 08:58:34.698660 4840 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c3b2bde-8421-4e22-85ab-8b651c65bc9e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ed311d75feec58f86d1d9f435c6115a463b4e7cd3003b6dff8447360271b6a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"
name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33384f01914fa6428a8c359b3de0d20963b933f5c6d47519f059e48f85c9f4c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca2279f323c0a6baf645b62c496c38d3d2ad4efc5033a0819ed1d58f4d862e10\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe2e8e966f7348e19e40d9ac00cabea3da1a67e4d7b149c094505622bf5d6cbc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"qua
y.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58277efdbfe7c94ad785e7d31bb5ba7313d04bf930896d6ded66fd44dd6239b5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-11T08:57:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW0311 08:57:44.699595 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0311 08:57:44.699747 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0311 08:57:44.700340 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2673359297/tls.crt::/tmp/serving-cert-2673359297/tls.key\\\\\\\"\\\\nI0311 08:57:44.923694 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0311 08:57:44.927612 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0311 08:57:44.927650 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0311 08:57:44.927690 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0311 08:57:44.927701 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0311 08:57:44.934515 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0311 08:57:44.934548 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0311 08:57:44.934556 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:57:44.934567 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:57:44.934574 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0311 08:57:44.934577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0311 08:57:44.934581 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0311 08:57:44.934584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0311 08:57:44.935676 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:57:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bd25fbbac425c9ba1169b1106b9ac77a80739a003bd795033d691ee273e0d3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e27a2ab44a237582284cb5f55d2651f7b5d39c199fdb62a4a65be9921e86945c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e27a2ab44a237582284cb5f55d2651f7b5d39c199fdb62a4a65be9921e86945c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:56:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:56:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:56:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:34Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:34 crc kubenswrapper[4840]: I0311 08:58:34.713088 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a26aaac35b1d936b4ca0b8a96fccd108317a1456b2bac1c21eb28017ac6fd32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:34Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:34 crc kubenswrapper[4840]: I0311 08:58:34.726072 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jlzht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97907402-fb5a-4fb4-80ac-5b600527c547\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2004ebf76b86d594e220fd1f85f945d24094ff26400ba17e286e95b2e3a8d7a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjngl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jlzht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:34Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:34 crc kubenswrapper[4840]: I0311 08:58:34.738282 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:34 crc kubenswrapper[4840]: I0311 08:58:34.738325 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:34 crc kubenswrapper[4840]: I0311 08:58:34.738334 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:34 crc kubenswrapper[4840]: I0311 08:58:34.738351 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:34 crc kubenswrapper[4840]: I0311 08:58:34.738362 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:34Z","lastTransitionTime":"2026-03-11T08:58:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:34 crc kubenswrapper[4840]: I0311 08:58:34.744189 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xn47g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6b1fe1a-6473-41f8-a45f-aaaa148c1412\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a889eccbb6a0cbc75c8ed8ddeffb1713f30c667182a4ba135863c46bd6b81a10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a889eccbb6a0cbc75c8ed8ddeffb1713f30c667182a4ba135863c46bd6b81a10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c2ce0ba741f2aad38db697e3bf19c9e912296fd737657feace60a93382e2f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c2ce0ba741f2aad38db697e3bf19c9e912296fd737657feace60a93382e2f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98d87bbbdfbd9708e04ae69bfe137afd132af84db3b4328625bca81dfc385d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98d87bbbdfbd9708e04ae69bfe137afd132af84db3b4328625bca81dfc385d92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b25179d0a842def4bdb185e250d96cabd12909f5bc849f186d714721ba20a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b25179d0a842def4bdb185e250d96cabd12909f5bc849f186d714721ba20a11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://156a271cc5e6bd9b7ea68a0a1486ba9460f2d31812bb42a1c06a9e79439e1c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://156a271cc5e6bd9b7ea68a0a1486ba9460f2d31812bb42a1c06a9e79439e1c16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xn47g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:34Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:34 crc kubenswrapper[4840]: I0311 08:58:34.762595 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"935336e2-294b-4982-83f9-718806d14e5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63877eb44dae815dbd71053c89313ba836c2fdd90cc3d6d299526c027887e19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://c63877eb44dae815dbd71053c89313ba836c2fdd90cc3d6d299526c027887e19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7c2zl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:34Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:34 crc kubenswrapper[4840]: I0311 08:58:34.780962 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"message\\\":\\\"containers with unready 
status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:34Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:34 crc kubenswrapper[4840]: I0311 08:58:34.793973 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:34Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:34 crc kubenswrapper[4840]: I0311 08:58:34.808292 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4tjtn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c3b7839-5a3d-42f4-a871-1baa77d4f6a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3703388f56cc4d0f73dcdc31a5fae2dd6cc275b3a86ae8ca3733190a08bac57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5b8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4tjtn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:34Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:34 crc kubenswrapper[4840]: I0311 08:58:34.821516 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb68ea79d13c044390345ef25093ba46a60cb10b989c5acd69c63cafc1c4631f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T0
8:58:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:34Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:34 crc kubenswrapper[4840]: I0311 08:58:34.833684 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb68ea79d13c044390345ef25093ba46a60cb10b989c5acd69c63cafc1c4631f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-11T08:58:34Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:34 crc kubenswrapper[4840]: I0311 08:58:34.842617 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:34 crc kubenswrapper[4840]: I0311 08:58:34.842661 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:34 crc kubenswrapper[4840]: I0311 08:58:34.842673 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:34 crc kubenswrapper[4840]: I0311 08:58:34.842694 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:34 crc kubenswrapper[4840]: I0311 08:58:34.842705 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:34Z","lastTransitionTime":"2026-03-11T08:58:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:34 crc kubenswrapper[4840]: I0311 08:58:34.851095 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:34Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:34 crc kubenswrapper[4840]: I0311 08:58:34.864978 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a735ab91afbbc50a948e293cb4907a10212b9a77c9e7506e21d75a4de4c74c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a498e6ab267b758bd2ba69dcbd7e6af3089b6c38bbf50dea80f1304c2190f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:34Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:34 crc kubenswrapper[4840]: I0311 08:58:34.881506 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vcb9n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c1678fd-7741-474b-9c8e-3008d3570921\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://affb76bb8b08157d415301fc93f48fe8c675137a1f6087bcade89dc117748638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-46dj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vcb9n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:34Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:34 crc kubenswrapper[4840]: I0311 08:58:34.896774 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-brtht" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://153456b9c54a907a1dada25ba9e84b3fe9f9b44a6e6b8e96d512944fb56884a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8c8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0602373fe91
019f9b20e701e042782a4eb5878ae2df86375738bc605412a803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8c8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-brtht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:34Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:34 crc kubenswrapper[4840]: I0311 08:58:34.913371 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c3b2bde-8421-4e22-85ab-8b651c65bc9e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ed311d75feec58f86d1d9f435c6115a463b4e7cd3003b6dff8447360271b6a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33384f01914fa6428a8c359b3de0d20963b933f5c6d47519f059e48f85c9f4c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca2279f323c0a6baf645b62c496c38d3d2ad4efc5033a0819ed1d58f4d862e10\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe2e8e966f7348e19e40d9ac00cabea3da1a67e4d7b149c094505622bf5d6cbc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58277efdbfe7c94ad785e7d31bb5ba7313d04bf930896d6ded66fd44dd6239b5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-11T08:57:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW0311 08:57:44.699595 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0311 08:57:44.699747 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0311 08:57:44.700340 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2673359297/tls.crt::/tmp/serving-cert-2673359297/tls.key\\\\\\\"\\\\nI0311 08:57:44.923694 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0311 08:57:44.927612 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0311 08:57:44.927650 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0311 08:57:44.927690 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0311 08:57:44.927701 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0311 08:57:44.934515 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0311 08:57:44.934548 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0311 08:57:44.934556 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:57:44.934567 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:57:44.934574 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0311 
08:57:44.934577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0311 08:57:44.934581 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0311 08:57:44.934584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0311 08:57:44.935676 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:57:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bd25fbbac425c9ba1169b1106b9ac77a80739a003bd795033d691ee273e0d3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e27a2ab44a237582284cb5f55d2651f7b5d39c199fdb62a4a65be9921e86945c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e27a2ab44a237582284cb5f55d2651f7b5d39c199fdb62a4a65be9921e86945c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:56:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:56:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:56:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:34Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:34 crc kubenswrapper[4840]: I0311 08:58:34.931746 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a26aaac35b1d936b4ca0b8a96fccd108317a1456b2bac1c21eb28017ac6fd32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:34Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:34 crc kubenswrapper[4840]: I0311 08:58:34.945662 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:34 crc kubenswrapper[4840]: I0311 08:58:34.945708 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:34 crc kubenswrapper[4840]: I0311 08:58:34.945721 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:34 crc kubenswrapper[4840]: I0311 08:58:34.945741 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:34 crc kubenswrapper[4840]: I0311 08:58:34.945754 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:34Z","lastTransitionTime":"2026-03-11T08:58:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:34 crc kubenswrapper[4840]: I0311 08:58:34.947676 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jlzht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97907402-fb5a-4fb4-80ac-5b600527c547\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2004ebf76b86d594e220fd1f85f945d24094ff26400ba17e286e95b2e3a8d7a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjngl\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jlzht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:34Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:34 crc kubenswrapper[4840]: I0311 08:58:34.967610 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xn47g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6b1fe1a-6473-41f8-a45f-aaaa148c1412\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a889eccbb6a0cbc75c8ed8ddeffb1713f30c667182a4ba135863c46bd6b81a10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a889eccbb6a0cbc75c8ed8ddeffb1713f30c667182a4ba135863c46bd6b81a10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c2ce0ba741f2aad38db697e3bf19c9e912296fd737657feace60a93382e2f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c2ce0ba741f2aad38db697e3bf19c9e912296fd737657feace60a93382e2f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98d87bbbdfbd9708e04ae69bfe137afd132af84db3b4328625bca81dfc385d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98d87bbbdfbd9708e04ae69bfe137afd132af84db3b4328625bca81dfc385d92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b25179d0a842def4bdb185e250d96cabd12909f5bc849f186d714721ba20a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b25179d0a842def4bdb185e250d96cabd12909f5bc849f186d714721ba20a11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://156a271cc5e6bd9b7ea68a0a1486ba9460f2d31812bb42a1c06a9e79439e1c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://156a271cc5e6bd9b7ea68a0a1486ba9460f2d31812bb42a1c06a9e79439e1c16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62a49366d891284607145968679e0c211b99a48d955857bdf0577792e0fc1f18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62a49366d891284607145968679e0c211b99a48d955857bdf0577792e0fc1f18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xn47g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:34Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:34 crc kubenswrapper[4840]: I0311 08:58:34.991997 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"935336e2-294b-4982-83f9-718806d14e5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://047c371ede152b2bfc450a373d2c3668e92dfed75022ebf12644c116db589373\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://910e2e8b981e2ab6212cd615b1c5134fde5d4cf4f85220c2613e3e301f99293c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa38a6bbbd74dad77c64f6f59df35d12881619da7319501839cf0f1eb44c65cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a22746f21587fd8fd3d4a7350442c72a3131d5a9f5e661ff05d78377c5a00cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://430dc74cf3f3dcf9a87782869390e477899521c6ff0e704ba6272a017b90081d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaf42bd09d527f5ce4ce8b0619c9b61f56ad6a2a5edf92cac68897a2ada84b24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5372c48a41c475f20ba6b46f1b3e1cd5e8b16dba66380a60259987ba30c13fa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1338b0d6cdb9527554429be2e6fdec5c0b98075978344d168fd6e363eb12c879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63877eb44dae815dbd71053c89313ba836c2fdd90cc3d6d299526c027887e19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63877eb44dae815dbd71053c89313ba836c2fdd90cc3d6d299526c027887e19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7c2zl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:34Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:35 crc kubenswrapper[4840]: I0311 08:58:35.008495 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:35Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:35 crc kubenswrapper[4840]: I0311 08:58:35.028748 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:35Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:35 crc kubenswrapper[4840]: I0311 08:58:35.042178 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4tjtn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c3b7839-5a3d-42f4-a871-1baa77d4f6a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3703388f56cc4d0f73dcdc31a5fae2dd6cc275b3a86ae8ca3733190a08bac57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5b8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4tjtn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:35Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:35 crc kubenswrapper[4840]: I0311 08:58:35.048601 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:35 crc kubenswrapper[4840]: I0311 08:58:35.048653 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:35 crc kubenswrapper[4840]: I0311 08:58:35.048665 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:35 crc kubenswrapper[4840]: I0311 08:58:35.048685 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:35 crc kubenswrapper[4840]: I0311 08:58:35.048699 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:35Z","lastTransitionTime":"2026-03-11T08:58:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:58:35 crc kubenswrapper[4840]: I0311 08:58:35.060122 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:58:35 crc kubenswrapper[4840]: I0311 08:58:35.060119 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 08:58:35 crc kubenswrapper[4840]: E0311 08:58:35.060324 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 11 08:58:35 crc kubenswrapper[4840]: I0311 08:58:35.060139 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:58:35 crc kubenswrapper[4840]: E0311 08:58:35.060431 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 11 08:58:35 crc kubenswrapper[4840]: E0311 08:58:35.060598 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 11 08:58:35 crc kubenswrapper[4840]: I0311 08:58:35.151691 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:35 crc kubenswrapper[4840]: I0311 08:58:35.151735 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:35 crc kubenswrapper[4840]: I0311 08:58:35.151747 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:35 crc kubenswrapper[4840]: I0311 08:58:35.151767 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:35 crc kubenswrapper[4840]: I0311 08:58:35.151780 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:35Z","lastTransitionTime":"2026-03-11T08:58:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:35 crc kubenswrapper[4840]: I0311 08:58:35.255698 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:35 crc kubenswrapper[4840]: I0311 08:58:35.255770 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:35 crc kubenswrapper[4840]: I0311 08:58:35.255789 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:35 crc kubenswrapper[4840]: I0311 08:58:35.255818 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:35 crc kubenswrapper[4840]: I0311 08:58:35.255841 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:35Z","lastTransitionTime":"2026-03-11T08:58:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:35 crc kubenswrapper[4840]: I0311 08:58:35.359914 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:35 crc kubenswrapper[4840]: I0311 08:58:35.360316 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:35 crc kubenswrapper[4840]: I0311 08:58:35.360392 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:35 crc kubenswrapper[4840]: I0311 08:58:35.360493 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:35 crc kubenswrapper[4840]: I0311 08:58:35.360562 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:35Z","lastTransitionTime":"2026-03-11T08:58:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:35 crc kubenswrapper[4840]: I0311 08:58:35.464567 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:35 crc kubenswrapper[4840]: I0311 08:58:35.464645 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:35 crc kubenswrapper[4840]: I0311 08:58:35.464670 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:35 crc kubenswrapper[4840]: I0311 08:58:35.464886 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:35 crc kubenswrapper[4840]: I0311 08:58:35.464903 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:35Z","lastTransitionTime":"2026-03-11T08:58:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:35 crc kubenswrapper[4840]: I0311 08:58:35.568593 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:35 crc kubenswrapper[4840]: I0311 08:58:35.568984 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:35 crc kubenswrapper[4840]: I0311 08:58:35.569107 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:35 crc kubenswrapper[4840]: I0311 08:58:35.569212 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:35 crc kubenswrapper[4840]: I0311 08:58:35.569310 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:35Z","lastTransitionTime":"2026-03-11T08:58:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:35 crc kubenswrapper[4840]: I0311 08:58:35.633999 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xn47g" event={"ID":"d6b1fe1a-6473-41f8-a45f-aaaa148c1412","Type":"ContainerStarted","Data":"572fd2c3055a64a29be3fef394c48782227519d0d445ca2dc38a6f9093f7698b"} Mar 11 08:58:35 crc kubenswrapper[4840]: I0311 08:58:35.652415 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jlzht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97907402-fb5a-4fb4-80ac-5b600527c547\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2004ebf76b86d594e220fd1f85f945d24094ff26400ba17e286e95b2e3a8d7a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restart
Count\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjngl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jlzht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:35Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:35 crc kubenswrapper[4840]: I0311 08:58:35.669040 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xn47g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6b1fe1a-6473-41f8-a45f-aaaa148c1412\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fd2c3055a64a29be3fef394c48782227519d0d445ca2dc38a6f9093f7698b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a889eccbb6a0cbc75c8ed8ddeffb1713f30c667182a4ba135863c46bd6b81a10\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a889eccbb6a0cbc75c8ed8ddeffb1713f30c667182a4ba135863c46bd6b81a10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c2ce0ba741f2aad38db697e3bf19c9e912296fd737657feace60a93382e2f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c2ce0ba741f2aad38db697e3bf19c9e912296fd737657feace60a93382e2f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98d87bbbdfbd9708e04ae69bfe137afd132af84db3b4328625bca81dfc385d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98d87bbbdfbd9708e04ae69bfe137afd132af84db3b4328625bca81dfc385d92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b251
79d0a842def4bdb185e250d96cabd12909f5bc849f186d714721ba20a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b25179d0a842def4bdb185e250d96cabd12909f5bc849f186d714721ba20a11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://156a271cc5e6bd9b7ea68a0a1486ba9460f2d31812bb42a1c06a9e79439e1c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://156a271cc5e6bd9b7ea68a0a1486ba9460f2d31812bb42a1c06a9e79439e1c16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:33Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62a49366d891284607145968679e0c211b99a48d955857bdf0577792e0fc1f18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62a49366d891284607145968679e0c211b99a48d955857bdf0577792e0fc1f18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xn47g\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:35Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:35 crc kubenswrapper[4840]: I0311 08:58:35.671843 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:35 crc kubenswrapper[4840]: I0311 08:58:35.671884 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:35 crc kubenswrapper[4840]: I0311 08:58:35.671897 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:35 crc kubenswrapper[4840]: I0311 08:58:35.671915 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:35 crc kubenswrapper[4840]: I0311 08:58:35.671927 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:35Z","lastTransitionTime":"2026-03-11T08:58:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:35 crc kubenswrapper[4840]: I0311 08:58:35.684928 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vcb9n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c1678fd-7741-474b-9c8e-3008d3570921\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://affb76bb8b08157d415301fc93f48fe8c675137a1f6087bcade89dc117748638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-46dj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vcb9n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:35Z 
is after 2025-08-24T17:21:41Z" Mar 11 08:58:35 crc kubenswrapper[4840]: I0311 08:58:35.700890 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-brtht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://153456b9c54a907a1dada25ba9e84b3fe9f9b44a6e6b8e96d512944fb56884a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"
mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8c8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0602373fe91019f9b20e701e042782a4eb5878ae2df86375738bc605412a803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8c8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-brtht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:35Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:35 crc kubenswrapper[4840]: I0311 08:58:35.716037 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c3b2bde-8421-4e22-85ab-8b651c65bc9e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ed311d75feec58f86d1d9f435c6115a463b4e7cd3003b6dff8447360271b6a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33384f01914fa6428a8c359b3de0d20963b933f5c6d47519f059e48f85c9f4c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca2279f323c0a6baf645b62c496c38d3d2ad4efc5033a0819ed1d58f4d862e10\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe2e8e966f7348e19e40d9ac00cabea3da1a67e4d7b149c094505622bf5d6cbc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58277efdbfe7c94ad785e7d31bb5ba7313d04bf930896d6ded66fd44dd6239b5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-11T08:57:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW0311 08:57:44.699595 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0311 08:57:44.699747 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0311 08:57:44.700340 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2673359297/tls.crt::/tmp/serving-cert-2673359297/tls.key\\\\\\\"\\\\nI0311 08:57:44.923694 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0311 08:57:44.927612 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0311 08:57:44.927650 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0311 08:57:44.927690 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0311 08:57:44.927701 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0311 08:57:44.934515 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0311 08:57:44.934548 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0311 08:57:44.934556 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:57:44.934567 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:57:44.934574 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0311 
08:57:44.934577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0311 08:57:44.934581 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0311 08:57:44.934584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0311 08:57:44.935676 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:57:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bd25fbbac425c9ba1169b1106b9ac77a80739a003bd795033d691ee273e0d3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e27a2ab44a237582284cb5f55d2651f7b5d39c199fdb62a4a65be9921e86945c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e27a2ab44a237582284cb5f55d2651f7b5d39c199fdb62a4a65be9921e86945c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:56:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:56:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:56:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:35Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:35 crc kubenswrapper[4840]: I0311 08:58:35.730137 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a26aaac35b1d936b4ca0b8a96fccd108317a1456b2bac1c21eb28017ac6fd32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:35Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:35 crc kubenswrapper[4840]: I0311 08:58:35.760895 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"935336e2-294b-4982-83f9-718806d14e5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://047c371ede152b2bfc450a373d2c3668e92dfed75022ebf12644c116db589373\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://910e2e8b981e2ab6212cd615b1c5134fde5d4cf4f85220c2613e3e301f99293c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa38a6bbbd74dad77c64f6f59df35d12881619da7319501839cf0f1eb44c65cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a22746f21587fd8fd3d4a7350442c72a3131d5a9f5e661ff05d78377c5a00cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://430dc74cf3f3dcf9a87782869390e477899521c6ff0e704ba6272a017b90081d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaf42bd09d527f5ce4ce8b0619c9b61f56ad6a2a5edf92cac68897a2ada84b24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5372c48a41c475f20ba6b46f1b3e1cd5e8b16dba66380a60259987ba30c13fa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1338b0d6cdb9527554429be2e6fdec5c0b98075978344d168fd6e363eb12c879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63877eb44dae815dbd71053c89313ba836c2fdd90cc3d6d299526c027887e19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63877eb44dae815dbd71053c89313ba836c2fdd90cc3d6d299526c027887e19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7c2zl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:35Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:35 crc kubenswrapper[4840]: I0311 08:58:35.775503 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:35 crc kubenswrapper[4840]: I0311 08:58:35.775534 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:35 crc kubenswrapper[4840]: I0311 08:58:35.775548 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:35 crc kubenswrapper[4840]: I0311 08:58:35.775593 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:35 crc kubenswrapper[4840]: I0311 08:58:35.775612 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:35Z","lastTransitionTime":"2026-03-11T08:58:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:35 crc kubenswrapper[4840]: I0311 08:58:35.781362 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:35Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:35 crc kubenswrapper[4840]: I0311 08:58:35.796239 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:35Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:35 crc kubenswrapper[4840]: I0311 08:58:35.816068 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4tjtn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c3b7839-5a3d-42f4-a871-1baa77d4f6a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3703388f56cc4d0f73dcdc31a5fae2dd6cc275b3a86ae8ca3733190a08bac57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5b8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4tjtn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:35Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:35 crc kubenswrapper[4840]: I0311 08:58:35.833851 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb68ea79d13c044390345ef25093ba46a60cb10b989c5acd69c63cafc1c4631f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T0
8:58:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:35Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:35 crc kubenswrapper[4840]: I0311 08:58:35.852870 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:35Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:35 crc kubenswrapper[4840]: I0311 08:58:35.876642 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a735ab91afbbc50a948e293cb4907a10212b9a77c9e7506e21d75a4de4c74c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a498e6ab267b758bd2ba69dcbd7e6af3089b6c38bbf50dea80f1304c2190f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:35Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:35 crc kubenswrapper[4840]: I0311 08:58:35.878899 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:35 crc kubenswrapper[4840]: I0311 08:58:35.878989 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:35 crc kubenswrapper[4840]: I0311 08:58:35.879016 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:35 crc kubenswrapper[4840]: I0311 08:58:35.879058 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:35 crc kubenswrapper[4840]: I0311 08:58:35.879081 4840 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:35Z","lastTransitionTime":"2026-03-11T08:58:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:58:35 crc kubenswrapper[4840]: I0311 08:58:35.981791 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:35 crc kubenswrapper[4840]: I0311 08:58:35.981849 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:35 crc kubenswrapper[4840]: I0311 08:58:35.981867 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:35 crc kubenswrapper[4840]: I0311 08:58:35.981891 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:35 crc kubenswrapper[4840]: I0311 08:58:35.981907 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:35Z","lastTransitionTime":"2026-03-11T08:58:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:37 crc kubenswrapper[4840]: I0311 08:58:37.015638 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:37 crc kubenswrapper[4840]: I0311 08:58:37.015699 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:37 crc kubenswrapper[4840]: I0311 08:58:37.015714 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:37 crc kubenswrapper[4840]: I0311 08:58:37.015737 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:37 crc kubenswrapper[4840]: I0311 08:58:37.015752 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:37Z","lastTransitionTime":"2026-03-11T08:58:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:58:37 crc kubenswrapper[4840]: I0311 08:58:37.059561 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 08:58:37 crc kubenswrapper[4840]: I0311 08:58:37.059611 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:58:37 crc kubenswrapper[4840]: I0311 08:58:37.059710 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:58:37 crc kubenswrapper[4840]: E0311 08:58:37.059799 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 11 08:58:37 crc kubenswrapper[4840]: E0311 08:58:37.059963 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 11 08:58:37 crc kubenswrapper[4840]: E0311 08:58:37.060171 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 11 08:58:37 crc kubenswrapper[4840]: I0311 08:58:37.120387 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:37 crc kubenswrapper[4840]: I0311 08:58:37.120531 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:37 crc kubenswrapper[4840]: I0311 08:58:37.120550 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:37 crc kubenswrapper[4840]: I0311 08:58:37.120606 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:37 crc kubenswrapper[4840]: I0311 08:58:37.120626 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:37Z","lastTransitionTime":"2026-03-11T08:58:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:37 crc kubenswrapper[4840]: I0311 08:58:37.644198 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7c2zl_935336e2-294b-4982-83f9-718806d14e5c/ovnkube-controller/0.log" Mar 11 08:58:37 crc kubenswrapper[4840]: I0311 08:58:37.648122 4840 generic.go:334] "Generic (PLEG): container finished" podID="935336e2-294b-4982-83f9-718806d14e5c" containerID="5372c48a41c475f20ba6b46f1b3e1cd5e8b16dba66380a60259987ba30c13fa5" exitCode=1 Mar 11 08:58:37 crc kubenswrapper[4840]: I0311 08:58:37.648183 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" event={"ID":"935336e2-294b-4982-83f9-718806d14e5c","Type":"ContainerDied","Data":"5372c48a41c475f20ba6b46f1b3e1cd5e8b16dba66380a60259987ba30c13fa5"} Mar 11 08:58:37 crc kubenswrapper[4840]: I0311 08:58:37.649531 4840 scope.go:117] "RemoveContainer" containerID="5372c48a41c475f20ba6b46f1b3e1cd5e8b16dba66380a60259987ba30c13fa5" Mar 11 08:58:37 crc kubenswrapper[4840]: I0311 08:58:37.681927 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:37Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:37 crc kubenswrapper[4840]: I0311 08:58:37.707766 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:37Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:37 crc kubenswrapper[4840]: I0311 08:58:37.722280 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4tjtn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c3b7839-5a3d-42f4-a871-1baa77d4f6a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3703388f56cc4d0f73dcdc31a5fae2dd6cc275b3a86ae8ca3733190a08bac57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5b8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4tjtn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:37Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:37 crc kubenswrapper[4840]: I0311 08:58:37.740381 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb68ea79d13c044390345ef25093ba46a60cb10b989c5acd69c63cafc1c4631f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T0
8:58:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:37Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:37 crc kubenswrapper[4840]: I0311 08:58:37.743392 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:37 crc kubenswrapper[4840]: I0311 08:58:37.743433 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:37 crc kubenswrapper[4840]: I0311 08:58:37.743448 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:37 crc kubenswrapper[4840]: I0311 08:58:37.743486 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:37 crc kubenswrapper[4840]: I0311 08:58:37.743500 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:37Z","lastTransitionTime":"2026-03-11T08:58:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:37 crc kubenswrapper[4840]: I0311 08:58:37.762281 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:37Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:37 crc kubenswrapper[4840]: I0311 08:58:37.785959 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a735ab91afbbc50a948e293cb4907a10212b9a77c9e7506e21d75a4de4c74c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a498e6ab267b758bd2ba69dcbd7e6af3089b6c38bbf50dea80f1304c2190f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:37Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:37 crc kubenswrapper[4840]: I0311 08:58:37.803683 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vcb9n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c1678fd-7741-474b-9c8e-3008d3570921\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://affb76bb8b08157d415301fc93f48fe8c675137a1f6087bcade89dc117748638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-46dj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vcb9n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:37Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:37 crc kubenswrapper[4840]: I0311 08:58:37.818498 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-brtht" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://153456b9c54a907a1dada25ba9e84b3fe9f9b44a6e6b8e96d512944fb56884a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8c8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0602373fe91
019f9b20e701e042782a4eb5878ae2df86375738bc605412a803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8c8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-brtht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:37Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:37 crc kubenswrapper[4840]: I0311 08:58:37.837919 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c3b2bde-8421-4e22-85ab-8b651c65bc9e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ed311d75feec58f86d1d9f435c6115a463b4e7cd3003b6dff8447360271b6a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33384f01914fa6428a8c359b3de0d20963b933f5c6d47519f059e48f85c9f4c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca2279f323c0a6baf645b62c496c38d3d2ad4efc5033a0819ed1d58f4d862e10\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe2e8e966f7348e19e40d9ac00cabea3da1a67e4d7b149c094505622bf5d6cbc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58277efdbfe7c94ad785e7d31bb5ba7313d04bf930896d6ded66fd44dd6239b5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-11T08:57:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW0311 08:57:44.699595 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0311 08:57:44.699747 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0311 08:57:44.700340 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2673359297/tls.crt::/tmp/serving-cert-2673359297/tls.key\\\\\\\"\\\\nI0311 08:57:44.923694 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0311 08:57:44.927612 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0311 08:57:44.927650 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0311 08:57:44.927690 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0311 08:57:44.927701 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0311 08:57:44.934515 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0311 08:57:44.934548 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0311 08:57:44.934556 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:57:44.934567 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:57:44.934574 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0311 
08:57:44.934577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0311 08:57:44.934581 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0311 08:57:44.934584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0311 08:57:44.935676 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:57:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bd25fbbac425c9ba1169b1106b9ac77a80739a003bd795033d691ee273e0d3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e27a2ab44a237582284cb5f55d2651f7b5d39c199fdb62a4a65be9921e86945c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e27a2ab44a237582284cb5f55d2651f7b5d39c199fdb62a4a65be9921e86945c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:56:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:56:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:56:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:37Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:37 crc kubenswrapper[4840]: I0311 08:58:37.846062 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:37 crc kubenswrapper[4840]: I0311 08:58:37.846129 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:37 crc kubenswrapper[4840]: I0311 08:58:37.846148 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:37 crc kubenswrapper[4840]: I0311 08:58:37.846180 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:37 crc kubenswrapper[4840]: I0311 08:58:37.846199 4840 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:37Z","lastTransitionTime":"2026-03-11T08:58:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:58:37 crc kubenswrapper[4840]: I0311 08:58:37.856095 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a26aaac35b1d936b4ca0b8a96fccd108317a1456b2bac1c21eb28017ac6fd32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:37Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:37 crc kubenswrapper[4840]: I0311 08:58:37.870665 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jlzht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97907402-fb5a-4fb4-80ac-5b600527c547\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2004ebf76b86d594e220fd1f85f945d24094ff26400ba17e286e95b2e3a8d7a0\\\",\\\"image\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjngl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jlzht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:37Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:37 crc kubenswrapper[4840]: I0311 08:58:37.892149 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xn47g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6b1fe1a-6473-41f8-a45f-aaaa148c1412\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fd2c3055a64a29be3fef394c48782227519d0d445ca2dc38a6f9093f7698b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a889eccbb6a0cbc75c8ed8ddeffb1713f30c667182a4ba135863c46bd6b81a10\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a889eccbb6a0cbc75c8ed8ddeffb1713f30c667182a4ba135863c46bd6b81a10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c2ce0ba741f2aad38db697e3bf19c9e912296fd737657feace60a93382e2f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c2ce0ba741f2aad38db697e3bf19c9e912296fd737657feace60a93382e2f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98d87bbbdfbd9708e04ae69bfe137afd132af84db3b4328625bca81dfc385d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98d87bbbdfbd9708e04ae69bfe137afd132af84db3b4328625bca81dfc385d92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b251
79d0a842def4bdb185e250d96cabd12909f5bc849f186d714721ba20a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b25179d0a842def4bdb185e250d96cabd12909f5bc849f186d714721ba20a11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://156a271cc5e6bd9b7ea68a0a1486ba9460f2d31812bb42a1c06a9e79439e1c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://156a271cc5e6bd9b7ea68a0a1486ba9460f2d31812bb42a1c06a9e79439e1c16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:33Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62a49366d891284607145968679e0c211b99a48d955857bdf0577792e0fc1f18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62a49366d891284607145968679e0c211b99a48d955857bdf0577792e0fc1f18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xn47g\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:37Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:37 crc kubenswrapper[4840]: I0311 08:58:37.916755 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"935336e2-294b-4982-83f9-718806d14e5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://047c371ede152b2bfc450a373d2c3668e92dfed75022ebf12644c116db589373\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://910e2e8b981e2ab6212cd615b1c5134fde5d4cf4f85220c2613e3e301f99293c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa38a6bbbd74dad77c64f6f59df35d12881619da7319501839cf0f1eb44c65cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a22746f21587fd8fd3d4a7350442c72a3131d5a9f5e661ff05d78377c5a00cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://430dc74cf3f3dcf9a87782869390e477899521c6ff0e704ba6272a017b90081d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaf42bd09d527f5ce4ce8b0619c9b61f56ad6a2a5edf92cac68897a2ada84b24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5372c48a41c475f20ba6b46f1b3e1cd5e8b16dba66380a60259987ba30c13fa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5372c48a41c475f20ba6b46f1b3e1cd5e8b16dba66380a60259987ba30c13fa5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-11T08:58:37Z\\\",\\\"message\\\":\\\"reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0311 08:58:36.991387 6713 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI0311 08:58:36.993163 6713 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0311 08:58:36.993268 6713 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0311 08:58:36.993333 6713 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0311 08:58:36.993364 6713 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0311 08:58:36.993379 6713 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0311 08:58:36.993435 6713 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0311 08:58:36.993455 6713 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0311 08:58:36.993531 6713 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0311 08:58:36.993587 6713 handler.go:208] Removed *v1.Node event handler 7\\\\nI0311 08:58:36.993635 6713 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0311 08:58:36.993567 6713 factory.go:656] Stopping watch factory\\\\nI0311 08:58:36.993729 6713 ovnkube.go:599] Stopped ovnkube\\\\nI0311 08:58:36.993666 6713 handler.go:208] Removed *v1.Node event handler 2\\\\nI0311 
08\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1338b0d6cdb9527554429be2e6fdec5c0b98075978344d168fd6e363eb12c879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63877eb44dae815dbd71053c89313ba836c2fdd90cc3d6d299526c027887e19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63877eb44dae815dbd71053c89313ba836c2fdd90cc3d6d299526c027887e19\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7c2zl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:37Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:37 crc kubenswrapper[4840]: I0311 08:58:37.949769 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:37 crc kubenswrapper[4840]: I0311 08:58:37.949811 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:37 crc kubenswrapper[4840]: I0311 08:58:37.949821 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:37 crc kubenswrapper[4840]: I0311 08:58:37.949839 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:37 crc kubenswrapper[4840]: I0311 08:58:37.949853 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:37Z","lastTransitionTime":"2026-03-11T08:58:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:58:38 crc kubenswrapper[4840]: I0311 08:58:38.054329 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:38 crc kubenswrapper[4840]: I0311 08:58:38.054371 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:38 crc kubenswrapper[4840]: I0311 08:58:38.054381 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:38 crc kubenswrapper[4840]: I0311 08:58:38.054398 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:38 crc kubenswrapper[4840]: I0311 08:58:38.054408 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:38Z","lastTransitionTime":"2026-03-11T08:58:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:38 crc kubenswrapper[4840]: I0311 08:58:38.158760 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:38 crc kubenswrapper[4840]: I0311 08:58:38.158838 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:38 crc kubenswrapper[4840]: I0311 08:58:38.158866 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:38 crc kubenswrapper[4840]: I0311 08:58:38.158900 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:38 crc kubenswrapper[4840]: I0311 08:58:38.158923 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:38Z","lastTransitionTime":"2026-03-11T08:58:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:38 crc kubenswrapper[4840]: I0311 08:58:38.262197 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:38 crc kubenswrapper[4840]: I0311 08:58:38.262243 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:38 crc kubenswrapper[4840]: I0311 08:58:38.262255 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:38 crc kubenswrapper[4840]: I0311 08:58:38.262277 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:38 crc kubenswrapper[4840]: I0311 08:58:38.262291 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:38Z","lastTransitionTime":"2026-03-11T08:58:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:38 crc kubenswrapper[4840]: I0311 08:58:38.365636 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:38 crc kubenswrapper[4840]: I0311 08:58:38.365696 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:38 crc kubenswrapper[4840]: I0311 08:58:38.365709 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:38 crc kubenswrapper[4840]: I0311 08:58:38.365728 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:38 crc kubenswrapper[4840]: I0311 08:58:38.365746 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:38Z","lastTransitionTime":"2026-03-11T08:58:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:38 crc kubenswrapper[4840]: I0311 08:58:38.468731 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:38 crc kubenswrapper[4840]: I0311 08:58:38.468780 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:38 crc kubenswrapper[4840]: I0311 08:58:38.468792 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:38 crc kubenswrapper[4840]: I0311 08:58:38.468812 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:38 crc kubenswrapper[4840]: I0311 08:58:38.468826 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:38Z","lastTransitionTime":"2026-03-11T08:58:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:38 crc kubenswrapper[4840]: I0311 08:58:38.571545 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:38 crc kubenswrapper[4840]: I0311 08:58:38.571602 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:38 crc kubenswrapper[4840]: I0311 08:58:38.571615 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:38 crc kubenswrapper[4840]: I0311 08:58:38.571633 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:38 crc kubenswrapper[4840]: I0311 08:58:38.571645 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:38Z","lastTransitionTime":"2026-03-11T08:58:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:38 crc kubenswrapper[4840]: I0311 08:58:38.654571 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7c2zl_935336e2-294b-4982-83f9-718806d14e5c/ovnkube-controller/0.log" Mar 11 08:58:38 crc kubenswrapper[4840]: I0311 08:58:38.658033 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" event={"ID":"935336e2-294b-4982-83f9-718806d14e5c","Type":"ContainerStarted","Data":"fa44cbb9a24ea530f3aca75e40aa270291e88352a947597d4997175ebd38f746"} Mar 11 08:58:38 crc kubenswrapper[4840]: I0311 08:58:38.658538 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" Mar 11 08:58:38 crc kubenswrapper[4840]: I0311 08:58:38.674080 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:38 crc kubenswrapper[4840]: I0311 08:58:38.674132 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:38 crc kubenswrapper[4840]: I0311 08:58:38.674147 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:38 crc kubenswrapper[4840]: I0311 08:58:38.674173 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:38 crc kubenswrapper[4840]: I0311 08:58:38.674188 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:38Z","lastTransitionTime":"2026-03-11T08:58:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:38 crc kubenswrapper[4840]: I0311 08:58:38.687666 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:38Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:38 crc kubenswrapper[4840]: I0311 08:58:38.705007 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4tjtn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c3b7839-5a3d-42f4-a871-1baa77d4f6a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3703388f56cc4d0f73dcdc31a5fae2dd6cc275b3a86ae8ca3733190a08bac57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5b8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4tjtn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:38Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:38 crc kubenswrapper[4840]: I0311 08:58:38.718682 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:38Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:38 crc kubenswrapper[4840]: I0311 08:58:38.729186 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb68ea79d13c044390345ef25093ba46a60cb10b989c5acd69c63cafc1c4631f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-11T08:58:38Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:38 crc kubenswrapper[4840]: I0311 08:58:38.740073 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:38Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:38 crc kubenswrapper[4840]: I0311 08:58:38.751775 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a735ab91afbbc50a948e293cb4907a10212b9a77c9e7506e21d75a4de4c74c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a498e6ab267b758bd2ba69dcbd7e6af3089b6c38bbf50dea80f1304c2190f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:38Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:38 crc kubenswrapper[4840]: I0311 08:58:38.766401 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c3b2bde-8421-4e22-85ab-8b651c65bc9e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ed311d75feec58f86d1d9f435c6115a463b4e7cd3003b6dff8447360271b6a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33384f01914fa6428a8c359b3de0d20963b933f5c6d47519f059e48f85c9f4c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca2279f323c0a6baf645b62c496c38d3d2ad4efc5033a0819ed1d58f4d862e10\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe2e8e966f7348e19e40d9ac00cabea3da1a67e4d7b149c094505622bf5d6cbc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58277efdbfe7c94ad785e7d31bb5ba7313d04bf930896d6ded66fd44dd6239b5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-11T08:57:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW0311 08:57:44.699595 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0311 08:57:44.699747 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0311 08:57:44.700340 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2673359297/tls.crt::/tmp/serving-cert-2673359297/tls.key\\\\\\\"\\\\nI0311 08:57:44.923694 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0311 08:57:44.927612 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0311 08:57:44.927650 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0311 08:57:44.927690 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0311 08:57:44.927701 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0311 08:57:44.934515 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0311 08:57:44.934548 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0311 08:57:44.934556 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:57:44.934567 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:57:44.934574 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0311 08:57:44.934577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0311 08:57:44.934581 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0311 08:57:44.934584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0311 08:57:44.935676 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:57:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bd25fbbac425c9ba1169b1106b9ac77a80739a003bd795033d691ee273e0d3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e27a2ab44a237582284cb5f55d2651f7b5d39c199fdb62a4a65be9921e86945c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e27a2ab44a237582284cb5f55d2651f7b5d39c199fdb62a4a65be9921e86945c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:56:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-03-11T08:56:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:56:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:38Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:38 crc kubenswrapper[4840]: I0311 08:58:38.778180 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:38 crc kubenswrapper[4840]: I0311 08:58:38.778267 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:38 crc kubenswrapper[4840]: I0311 08:58:38.778280 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:38 crc kubenswrapper[4840]: I0311 08:58:38.778303 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:38 crc kubenswrapper[4840]: I0311 08:58:38.778318 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:38Z","lastTransitionTime":"2026-03-11T08:58:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:38 crc kubenswrapper[4840]: I0311 08:58:38.783354 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a26aaac35b1d936b4ca0b8a96fccd108317a1456b2bac1c21eb28017ac6fd32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:38Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:38 crc kubenswrapper[4840]: I0311 08:58:38.798808 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jlzht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97907402-fb5a-4fb4-80ac-5b600527c547\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2004ebf76b86d594e220fd1f85f945d24094ff26400ba17e286e95b2e3a8d7a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjngl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jlzht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:38Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:38 crc kubenswrapper[4840]: I0311 08:58:38.817384 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xn47g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6b1fe1a-6473-41f8-a45f-aaaa148c1412\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fd2c3055a64a29be3fef394c48782227519d0d445ca2dc38a6f9093f7698b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a889eccbb6a0cbc75c8ed8ddeffb1713f30c667182a4ba135863c46bd6b81a10\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a889eccbb6a0cbc75c8ed8ddeffb1713f30c667182a4ba135863c46bd6b81a10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c2ce0ba741f2aad38db697e3bf19c9e912296fd737657feace60a93382e2f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c2ce0ba741f2aad38db697e3bf19c9e912296fd737657feace60a93382e2f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98d87bbbdfbd9708e04ae69bfe137afd132af84db3b4328625bca81dfc385d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98d87bbbdfbd9708e04ae69bfe137afd132af84db3b4328625bca81dfc385d92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b251
79d0a842def4bdb185e250d96cabd12909f5bc849f186d714721ba20a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b25179d0a842def4bdb185e250d96cabd12909f5bc849f186d714721ba20a11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://156a271cc5e6bd9b7ea68a0a1486ba9460f2d31812bb42a1c06a9e79439e1c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://156a271cc5e6bd9b7ea68a0a1486ba9460f2d31812bb42a1c06a9e79439e1c16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:33Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62a49366d891284607145968679e0c211b99a48d955857bdf0577792e0fc1f18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62a49366d891284607145968679e0c211b99a48d955857bdf0577792e0fc1f18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xn47g\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:38Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:38 crc kubenswrapper[4840]: I0311 08:58:38.830985 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vcb9n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c1678fd-7741-474b-9c8e-3008d3570921\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://affb76bb8b08157d415301fc93f48fe8c675137a1f6087bcade89dc117748638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-
03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-46dj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vcb9n\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:38Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:38 crc kubenswrapper[4840]: I0311 08:58:38.845963 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-brtht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://153456b9c54a907a1dada25ba9e84b3fe9f9b44a6e6b8e96d512944fb56884a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8c8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0602373fe91019f9b20e701e042782a4eb5878ae2df86375738bc605412a803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8c8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-brtht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:38Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:38 crc kubenswrapper[4840]: 
I0311 08:58:38.870307 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"935336e2-294b-4982-83f9-718806d14e5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://047c371ede152b2bfc450a373d2c3668e92dfed75022ebf12644c116db589373\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://910e2e8b981e2ab6212cd615b1c5134fde5d4cf4f85220c2613e3e301f99293c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa38a6bbbd74dad77c64f6f59df35d12881619da7319501839cf0f1eb44c65cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a22746f21587fd8fd3d4a7350442c72a3131d5a9f5e661ff05d78377c5a00cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://430dc74cf3f3dcf9a87782869390e477899521c6ff0e704ba6272a017b90081d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaf42bd09d527f5ce4ce8b0619c9b61f56ad6a2a5edf92cac68897a2ada84b24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa44cbb9a24ea530f3aca75e40aa270291e88352a947597d4997175ebd38f746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5372c48a41c475f20ba6b46f1b3e1cd5e8b16dba66380a60259987ba30c13fa5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-11T08:58:37Z\\\",\\\"message\\\":\\\"reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0311 08:58:36.991387 6713 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0311 08:58:36.993163 6713 handler.go:190] Sending *v1.EgressIP event handler 8 for 
removal\\\\nI0311 08:58:36.993268 6713 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0311 08:58:36.993333 6713 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0311 08:58:36.993364 6713 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0311 08:58:36.993379 6713 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0311 08:58:36.993435 6713 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0311 08:58:36.993455 6713 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0311 08:58:36.993531 6713 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0311 08:58:36.993587 6713 handler.go:208] Removed *v1.Node event handler 7\\\\nI0311 08:58:36.993635 6713 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0311 08:58:36.993567 6713 factory.go:656] Stopping watch factory\\\\nI0311 08:58:36.993729 6713 ovnkube.go:599] Stopped ovnkube\\\\nI0311 08:58:36.993666 6713 handler.go:208] Removed *v1.Node event handler 2\\\\nI0311 
08\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1338b0d6cdb9527554429be2e6fdec5c0b98075978344d168fd6e363eb12c879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63877eb44dae815dbd71053c89313ba836c2fdd90cc3d6d299526c027887e19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63877eb44dae815dbd71053c89313ba836c2fdd90cc3d6d299526c027887e19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7c2zl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:38Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:38 crc kubenswrapper[4840]: I0311 08:58:38.880954 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:38 crc kubenswrapper[4840]: I0311 08:58:38.880992 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:38 crc kubenswrapper[4840]: I0311 08:58:38.881003 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:38 crc kubenswrapper[4840]: I0311 08:58:38.881022 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:38 crc kubenswrapper[4840]: I0311 08:58:38.881034 4840 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:38Z","lastTransitionTime":"2026-03-11T08:58:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:58:38 crc kubenswrapper[4840]: I0311 08:58:38.984211 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:38 crc kubenswrapper[4840]: I0311 08:58:38.984263 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:38 crc kubenswrapper[4840]: I0311 08:58:38.984278 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:38 crc kubenswrapper[4840]: I0311 08:58:38.984300 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:38 crc kubenswrapper[4840]: I0311 08:58:38.984316 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:38Z","lastTransitionTime":"2026-03-11T08:58:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:58:39 crc kubenswrapper[4840]: I0311 08:58:39.059228 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:58:39 crc kubenswrapper[4840]: I0311 08:58:39.059276 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:58:39 crc kubenswrapper[4840]: E0311 08:58:39.059396 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 11 08:58:39 crc kubenswrapper[4840]: I0311 08:58:39.059384 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 08:58:39 crc kubenswrapper[4840]: E0311 08:58:39.059601 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 11 08:58:39 crc kubenswrapper[4840]: E0311 08:58:39.059867 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 11 08:58:39 crc kubenswrapper[4840]: I0311 08:58:39.087878 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:39 crc kubenswrapper[4840]: I0311 08:58:39.087983 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:39 crc kubenswrapper[4840]: I0311 08:58:39.088039 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:39 crc kubenswrapper[4840]: I0311 08:58:39.088072 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:39 crc kubenswrapper[4840]: I0311 08:58:39.088120 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:39Z","lastTransitionTime":"2026-03-11T08:58:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:39 crc kubenswrapper[4840]: I0311 08:58:39.192439 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:39 crc kubenswrapper[4840]: I0311 08:58:39.192521 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:39 crc kubenswrapper[4840]: I0311 08:58:39.192541 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:39 crc kubenswrapper[4840]: I0311 08:58:39.192567 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:39 crc kubenswrapper[4840]: I0311 08:58:39.192586 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:39Z","lastTransitionTime":"2026-03-11T08:58:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:39 crc kubenswrapper[4840]: I0311 08:58:39.296255 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:39 crc kubenswrapper[4840]: I0311 08:58:39.296323 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:39 crc kubenswrapper[4840]: I0311 08:58:39.296346 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:39 crc kubenswrapper[4840]: I0311 08:58:39.296372 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:39 crc kubenswrapper[4840]: I0311 08:58:39.296390 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:39Z","lastTransitionTime":"2026-03-11T08:58:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:39 crc kubenswrapper[4840]: I0311 08:58:39.399253 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:39 crc kubenswrapper[4840]: I0311 08:58:39.399297 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:39 crc kubenswrapper[4840]: I0311 08:58:39.399306 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:39 crc kubenswrapper[4840]: I0311 08:58:39.399321 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:39 crc kubenswrapper[4840]: I0311 08:58:39.399330 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:39Z","lastTransitionTime":"2026-03-11T08:58:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:39 crc kubenswrapper[4840]: I0311 08:58:39.503024 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:39 crc kubenswrapper[4840]: I0311 08:58:39.503394 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:39 crc kubenswrapper[4840]: I0311 08:58:39.503517 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:39 crc kubenswrapper[4840]: I0311 08:58:39.503640 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:39 crc kubenswrapper[4840]: I0311 08:58:39.503994 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:39Z","lastTransitionTime":"2026-03-11T08:58:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:58:39 crc kubenswrapper[4840]: I0311 08:58:39.558928 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9stqc"] Mar 11 08:58:39 crc kubenswrapper[4840]: I0311 08:58:39.560399 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9stqc" Mar 11 08:58:39 crc kubenswrapper[4840]: I0311 08:58:39.562035 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Mar 11 08:58:39 crc kubenswrapper[4840]: I0311 08:58:39.563799 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Mar 11 08:58:39 crc kubenswrapper[4840]: I0311 08:58:39.578629 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:39Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:39 crc kubenswrapper[4840]: I0311 08:58:39.582087 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:39 crc kubenswrapper[4840]: I0311 08:58:39.582203 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:39 crc kubenswrapper[4840]: I0311 08:58:39.582317 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:39 crc kubenswrapper[4840]: I0311 08:58:39.582417 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:39 crc kubenswrapper[4840]: I0311 08:58:39.582534 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:39Z","lastTransitionTime":"2026-03-11T08:58:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:58:39 crc kubenswrapper[4840]: I0311 08:58:39.594090 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:39Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:39 crc kubenswrapper[4840]: E0311 08:58:39.596118 4840 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:58:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:58:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:58:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:58:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b40dc5ac-6e20-4fe3-8d4f-1dab2691799c\\\",\\\"systemUUID\\\":\\\"e5bb6cc6-19d8-441f-bba6-b926930273a7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:39Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:39 crc kubenswrapper[4840]: I0311 08:58:39.601333 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:39 crc kubenswrapper[4840]: I0311 08:58:39.601489 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:39 crc kubenswrapper[4840]: I0311 08:58:39.601563 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:39 crc kubenswrapper[4840]: I0311 08:58:39.601628 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:39 crc kubenswrapper[4840]: I0311 08:58:39.601709 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:39Z","lastTransitionTime":"2026-03-11T08:58:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:39 crc kubenswrapper[4840]: I0311 08:58:39.606853 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4tjtn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c3b7839-5a3d-42f4-a871-1baa77d4f6a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3703388f56cc4d0f73dcdc31a5fae2dd6cc275b3a86ae8ca3733190a08bac57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5b8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4tjtn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:39Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:39 crc kubenswrapper[4840]: E0311 08:58:39.613878 4840 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:58:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:58:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:39Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:58:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:58:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b40dc5ac-6e20-4fe3-8d4f-1dab2691799c\\\",\\\"systemUUID\\\":\\\"e5bb6cc6-19d8-441f-bba6-b926930273a7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:39Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:39 crc kubenswrapper[4840]: I0311 08:58:39.618853 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb68ea79d13c044390345ef25093ba46a60cb10b989c5acd69c63cafc1c4631f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-11T08:58:39Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:39 crc kubenswrapper[4840]: I0311 08:58:39.620660 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:39 crc kubenswrapper[4840]: I0311 08:58:39.620716 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:39 crc kubenswrapper[4840]: I0311 08:58:39.620730 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:39 crc kubenswrapper[4840]: I0311 08:58:39.620750 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:39 crc kubenswrapper[4840]: I0311 08:58:39.620762 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:39Z","lastTransitionTime":"2026-03-11T08:58:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:39 crc kubenswrapper[4840]: I0311 08:58:39.633579 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:39Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:39 crc kubenswrapper[4840]: E0311 08:58:39.634698 4840 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:58:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:58:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:58:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:58:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b40dc5ac-6e20-4fe3-8d4f-1dab2691799c\\\",\\\"systemUUID\\\":\\\"e5bb6cc6-19d8-441f-bba6-b926930273a7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:39Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:39 crc kubenswrapper[4840]: I0311 08:58:39.638633 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:39 crc kubenswrapper[4840]: I0311 08:58:39.638744 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:39 crc kubenswrapper[4840]: I0311 08:58:39.638762 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:39 crc kubenswrapper[4840]: I0311 08:58:39.638789 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:39 crc kubenswrapper[4840]: I0311 08:58:39.638831 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:39Z","lastTransitionTime":"2026-03-11T08:58:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:39 crc kubenswrapper[4840]: I0311 08:58:39.647395 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a735ab91afbbc50a948e293cb4907a10212b9a77c9e7506e21d75a4de4c74c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a498e6ab267b758bd2ba69dcbd7e6af3089b6c38bbf50dea80f1304c2190f7\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:39Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:39 crc kubenswrapper[4840]: E0311 08:58:39.653315 4840 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:58:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:58:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:58:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:58:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b40dc5ac-6e20-4fe3-8d4f-1dab2691799c\\\",\\\"systemUUID\\\":\\\"e5bb6cc6-19d8-441f-bba6-b926930273a7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:39Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:39 crc kubenswrapper[4840]: I0311 08:58:39.657259 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:39 crc kubenswrapper[4840]: I0311 08:58:39.657342 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:39 crc kubenswrapper[4840]: I0311 08:58:39.657361 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:39 crc kubenswrapper[4840]: I0311 08:58:39.657387 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:39 crc kubenswrapper[4840]: I0311 08:58:39.657420 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:39Z","lastTransitionTime":"2026-03-11T08:58:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:39 crc kubenswrapper[4840]: I0311 08:58:39.664250 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7c2zl_935336e2-294b-4982-83f9-718806d14e5c/ovnkube-controller/1.log" Mar 11 08:58:39 crc kubenswrapper[4840]: I0311 08:58:39.664227 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a26aaac35b1d936b4ca0b8a96fccd108317a1456b2bac1c21eb28017ac6fd32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\
\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:39Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:39 crc kubenswrapper[4840]: I0311 08:58:39.664900 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7c2zl_935336e2-294b-4982-83f9-718806d14e5c/ovnkube-controller/0.log" Mar 11 08:58:39 crc kubenswrapper[4840]: I0311 08:58:39.668298 4840 generic.go:334] "Generic (PLEG): container finished" podID="935336e2-294b-4982-83f9-718806d14e5c" containerID="fa44cbb9a24ea530f3aca75e40aa270291e88352a947597d4997175ebd38f746" exitCode=1 Mar 11 08:58:39 crc kubenswrapper[4840]: I0311 08:58:39.668351 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" event={"ID":"935336e2-294b-4982-83f9-718806d14e5c","Type":"ContainerDied","Data":"fa44cbb9a24ea530f3aca75e40aa270291e88352a947597d4997175ebd38f746"} Mar 11 08:58:39 crc kubenswrapper[4840]: I0311 08:58:39.668391 4840 scope.go:117] "RemoveContainer" containerID="5372c48a41c475f20ba6b46f1b3e1cd5e8b16dba66380a60259987ba30c13fa5" Mar 11 08:58:39 crc kubenswrapper[4840]: I0311 08:58:39.669373 4840 scope.go:117] "RemoveContainer" containerID="fa44cbb9a24ea530f3aca75e40aa270291e88352a947597d4997175ebd38f746" Mar 11 08:58:39 crc kubenswrapper[4840]: E0311 08:58:39.669597 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting 
failed container=ovnkube-controller pod=ovnkube-node-7c2zl_openshift-ovn-kubernetes(935336e2-294b-4982-83f9-718806d14e5c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" podUID="935336e2-294b-4982-83f9-718806d14e5c" Mar 11 08:58:39 crc kubenswrapper[4840]: E0311 08:58:39.677115 4840 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:58:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:58:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:58:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:58:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-m
arketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc
0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\
\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/opens
hift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b40dc5ac-6e20-4fe3-8d4f-1dab2691799c\\\",\\\"systemUUID\\\":\\\"e5bb6cc6-19d8-441f-bba6-b926
930273a7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:39Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:39 crc kubenswrapper[4840]: E0311 08:58:39.677229 4840 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 11 08:58:39 crc kubenswrapper[4840]: I0311 08:58:39.679137 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jlzht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97907402-fb5a-4fb4-80ac-5b600527c547\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2004ebf76b86d594e220fd1f85f945d24094ff26400ba17e286e95b2e3a8d7a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256
:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjngl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jlzht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:39Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:39 crc kubenswrapper[4840]: I0311 08:58:39.682564 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:39 crc kubenswrapper[4840]: I0311 08:58:39.682614 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:39 crc kubenswrapper[4840]: I0311 08:58:39.682630 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:39 crc kubenswrapper[4840]: I0311 08:58:39.682989 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:39 crc kubenswrapper[4840]: I0311 08:58:39.683055 4840 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:39Z","lastTransitionTime":"2026-03-11T08:58:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:58:39 crc kubenswrapper[4840]: I0311 08:58:39.700640 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xn47g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6b1fe1a-6473-41f8-a45f-aaaa148c1412\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fd2c3055a64a29be3fef394c48782227519d0d445ca2dc38a6f9093f7698b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a889eccbb6a0cbc75c8ed8ddeffb1713f30c667182a4ba135863c46bd6b81a10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a889eccbb6a0cbc75c8ed8ddeffb1713f30c667182a4ba135863c46bd6b81a10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c2ce0ba741f2aad38db697e3bf19c9e912296fd737657feace60a93382e2f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c2ce0ba741f2aad38db697e3bf19c9e912296fd737657feace60a93382e2f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98d87bbbdfbd9708e04ae69bfe137afd132af84db3b4328625bca81dfc385d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98d87bbbdfbd9708e04ae69bfe137afd132af84db3b4328625bca81dfc385d92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:30Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b25179d0a842def4bdb185e250d96cabd12909f5bc849f186d714721ba20a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b25179d0a842def4bdb185e250d96cabd12909f5bc849f186d714721ba20a11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://156a271cc5e6bd9b7ea68a0a1486ba9460f2d31812bb42a1c06a9e79439e1c16\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://156a271cc5e6bd9b7ea68a0a1486ba9460f2d31812bb42a1c06a9e79439e1c16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62a49366d891284607145968679e0c211b99a48d955857bdf0577792e0fc1f18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62a49366d891284607145968679e0c211b99a48d955857bdf0577792e0fc1f18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xn47g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:39Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:39 crc kubenswrapper[4840]: I0311 08:58:39.715027 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vcb9n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c1678fd-7741-474b-9c8e-3008d3570921\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://affb76bb8b08157d415301fc93f48fe8c675137a1f6087bcade89dc117748638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-46dj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vcb9n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:39Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:39 crc kubenswrapper[4840]: I0311 08:58:39.717644 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/1a97da03-4135-4850-9393-640dffab4289-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-9stqc\" (UID: \"1a97da03-4135-4850-9393-640dffab4289\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9stqc" Mar 11 08:58:39 crc kubenswrapper[4840]: I0311 08:58:39.717689 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1a97da03-4135-4850-9393-640dffab4289-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-9stqc\" (UID: \"1a97da03-4135-4850-9393-640dffab4289\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9stqc" Mar 11 08:58:39 crc kubenswrapper[4840]: I0311 08:58:39.717734 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7j7fb\" (UniqueName: \"kubernetes.io/projected/1a97da03-4135-4850-9393-640dffab4289-kube-api-access-7j7fb\") pod \"ovnkube-control-plane-749d76644c-9stqc\" (UID: \"1a97da03-4135-4850-9393-640dffab4289\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9stqc" Mar 11 08:58:39 crc kubenswrapper[4840]: I0311 08:58:39.717765 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1a97da03-4135-4850-9393-640dffab4289-env-overrides\") pod \"ovnkube-control-plane-749d76644c-9stqc\" (UID: \"1a97da03-4135-4850-9393-640dffab4289\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9stqc" Mar 11 08:58:39 crc kubenswrapper[4840]: I0311 08:58:39.729239 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-brtht" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://153456b9c54a907a1dada25ba9e84b3fe9f9b44a6e6b8e96d512944fb56884a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8c8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0602373fe91019f9b20e701e042782a4eb5878a
e2df86375738bc605412a803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8c8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-brtht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:39Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:39 crc kubenswrapper[4840]: I0311 08:58:39.743983 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c3b2bde-8421-4e22-85ab-8b651c65bc9e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ed311d75feec58f86d1d9f435c6115a463b4e7cd3003b6dff8447360271b6a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33384f01914fa6428a8c359b3de0d20963b933f5c6d47519f059e48f85c9f4c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca2279f323c0a6baf645b62c496c38d3d2ad4efc5033a0819ed1d58f4d862e10\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe2e8e966f7348e19e40d9ac00cabea3da1a67e4d7b149c094505622bf5d6cbc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58277efdbfe7c94ad785e7d31bb5ba7313d04bf930896d6ded66fd44dd6239b5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-11T08:57:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW0311 08:57:44.699595 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0311 08:57:44.699747 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0311 08:57:44.700340 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2673359297/tls.crt::/tmp/serving-cert-2673359297/tls.key\\\\\\\"\\\\nI0311 08:57:44.923694 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0311 08:57:44.927612 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0311 08:57:44.927650 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0311 08:57:44.927690 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0311 08:57:44.927701 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0311 08:57:44.934515 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0311 08:57:44.934548 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0311 08:57:44.934556 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:57:44.934567 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:57:44.934574 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0311 
08:57:44.934577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0311 08:57:44.934581 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0311 08:57:44.934584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0311 08:57:44.935676 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:57:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bd25fbbac425c9ba1169b1106b9ac77a80739a003bd795033d691ee273e0d3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e27a2ab44a237582284cb5f55d2651f7b5d39c199fdb62a4a65be9921e86945c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e27a2ab44a237582284cb5f55d2651f7b5d39c199fdb62a4a65be9921e86945c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:56:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:56:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:56:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:39Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:39 crc kubenswrapper[4840]: I0311 08:58:39.763742 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"935336e2-294b-4982-83f9-718806d14e5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://047c371ede152b2bfc450a373d2c3668e92dfed75022ebf12644c116db589373\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://910e2e8b981e2ab6212cd615b1c5134fde5d4cf4f85220c2613e3e301f99293c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa38a6bbbd74dad77c64f6f59df35d12881619da7319501839cf0f1eb44c65cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a22746f21587fd8fd3d4a7350442c72a3131d5a9f5e661ff05d78377c5a00cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://430dc74cf3f3dcf9a87782869390e477899521c6ff0e704ba6272a017b90081d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaf42bd09d527f5ce4ce8b0619c9b61f56ad6a2a5edf92cac68897a2ada84b24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa44cbb9a24ea530f3aca75e40aa270291e88352a947597d4997175ebd38f746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5372c48a41c475f20ba6b46f1b3e1cd5e8b16dba66380a60259987ba30c13fa5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-11T08:58:37Z\\\",\\\"message\\\":\\\"reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0311 08:58:36.991387 6713 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0311 08:58:36.993163 6713 handler.go:190] Sending *v1.EgressIP event handler 8 for 
removal\\\\nI0311 08:58:36.993268 6713 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0311 08:58:36.993333 6713 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0311 08:58:36.993364 6713 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0311 08:58:36.993379 6713 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0311 08:58:36.993435 6713 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0311 08:58:36.993455 6713 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0311 08:58:36.993531 6713 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0311 08:58:36.993587 6713 handler.go:208] Removed *v1.Node event handler 7\\\\nI0311 08:58:36.993635 6713 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0311 08:58:36.993567 6713 factory.go:656] Stopping watch factory\\\\nI0311 08:58:36.993729 6713 ovnkube.go:599] Stopped ovnkube\\\\nI0311 08:58:36.993666 6713 handler.go:208] Removed *v1.Node event handler 2\\\\nI0311 
08\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1338b0d6cdb9527554429be2e6fdec5c0b98075978344d168fd6e363eb12c879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63877eb44dae815dbd71053c89313ba836c2fdd90cc3d6d299526c027887e19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63877eb44dae815dbd71053c89313ba836c2fdd90cc3d6d299526c027887e19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7c2zl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:39Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:39 crc kubenswrapper[4840]: I0311 08:58:39.778990 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9stqc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a97da03-4135-4850-9393-640dffab4289\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7j7fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7j7fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9stqc\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:39Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:39 crc kubenswrapper[4840]: I0311 08:58:39.786036 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:39 crc kubenswrapper[4840]: I0311 08:58:39.786076 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:39 crc kubenswrapper[4840]: I0311 08:58:39.786087 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:39 crc kubenswrapper[4840]: I0311 08:58:39.786101 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:39 crc kubenswrapper[4840]: I0311 08:58:39.786110 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:39Z","lastTransitionTime":"2026-03-11T08:58:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:39 crc kubenswrapper[4840]: I0311 08:58:39.793354 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:39Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:39 crc kubenswrapper[4840]: I0311 08:58:39.809943 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a735ab91afbbc50a948e293cb4907a10212b9a77c9e7506e21d75a4de4c74c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a498e6ab267b758bd2ba69dcbd7e6af3089b6c38bbf50dea80f1304c2190f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:39Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:39 crc kubenswrapper[4840]: I0311 08:58:39.818675 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1a97da03-4135-4850-9393-640dffab4289-env-overrides\") pod \"ovnkube-control-plane-749d76644c-9stqc\" (UID: \"1a97da03-4135-4850-9393-640dffab4289\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9stqc" Mar 11 08:58:39 crc kubenswrapper[4840]: I0311 08:58:39.818811 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1a97da03-4135-4850-9393-640dffab4289-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-9stqc\" (UID: \"1a97da03-4135-4850-9393-640dffab4289\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9stqc" Mar 11 08:58:39 crc kubenswrapper[4840]: I0311 08:58:39.818841 4840 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1a97da03-4135-4850-9393-640dffab4289-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-9stqc\" (UID: \"1a97da03-4135-4850-9393-640dffab4289\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9stqc" Mar 11 08:58:39 crc kubenswrapper[4840]: I0311 08:58:39.818876 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7j7fb\" (UniqueName: \"kubernetes.io/projected/1a97da03-4135-4850-9393-640dffab4289-kube-api-access-7j7fb\") pod \"ovnkube-control-plane-749d76644c-9stqc\" (UID: \"1a97da03-4135-4850-9393-640dffab4289\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9stqc" Mar 11 08:58:39 crc kubenswrapper[4840]: I0311 08:58:39.820178 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1a97da03-4135-4850-9393-640dffab4289-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-9stqc\" (UID: \"1a97da03-4135-4850-9393-640dffab4289\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9stqc" Mar 11 08:58:39 crc kubenswrapper[4840]: I0311 08:58:39.821443 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1a97da03-4135-4850-9393-640dffab4289-env-overrides\") pod \"ovnkube-control-plane-749d76644c-9stqc\" (UID: \"1a97da03-4135-4850-9393-640dffab4289\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9stqc" Mar 11 08:58:39 crc kubenswrapper[4840]: I0311 08:58:39.825941 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vcb9n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c1678fd-7741-474b-9c8e-3008d3570921\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://affb76bb8b08157d415301fc93f48fe8c675137a1f6087bcade89dc117748638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-46dj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vcb9n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:39Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:39 crc kubenswrapper[4840]: I0311 08:58:39.826213 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/1a97da03-4135-4850-9393-640dffab4289-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-9stqc\" (UID: \"1a97da03-4135-4850-9393-640dffab4289\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9stqc" Mar 11 08:58:39 crc kubenswrapper[4840]: I0311 08:58:39.838137 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-brtht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://153456b9c54a907a1dada25ba9e84b3fe9f9b44a6e6b8e96d512944fb56884a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\
\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8c8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0602373fe91019f9b20e701e042782a4eb5878ae2df86375738bc605412a803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8c8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-brtht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:39Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:39 crc kubenswrapper[4840]: I0311 08:58:39.838847 4840 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7j7fb\" (UniqueName: \"kubernetes.io/projected/1a97da03-4135-4850-9393-640dffab4289-kube-api-access-7j7fb\") pod \"ovnkube-control-plane-749d76644c-9stqc\" (UID: \"1a97da03-4135-4850-9393-640dffab4289\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9stqc" Mar 11 08:58:39 crc kubenswrapper[4840]: I0311 08:58:39.850872 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c3b2bde-8421-4e22-85ab-8b651c65bc9e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ed311d75feec58f86d1d9f435c6115a463b4e7cd3003b6dff8447360271b6a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33384f01914fa6428a8c359b3de0d20963b933f5c6d47519f059e48f85c9f4c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ca2279f323c0a6baf645b62c496c38d3d2ad4efc5033a0819ed1d58f4d862e10\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe2e8e966f7348e19e40d9ac00cabea3da1a67e4d7b149c094505622bf5d6cbc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58277efdbfe7c94ad785e7d31bb5ba7313d04bf930896d6ded66fd44dd6239b5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-11T08:57:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW0311 08:57:44.699595 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0311 08:57:44.699747 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0311 08:57:44.700340 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2673359297/tls.crt::/tmp/serving-cert-2673359297/tls.key\\\\\\\"\\\\nI0311 08:57:44.923694 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0311 08:57:44.927612 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0311 08:57:44.927650 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0311 08:57:44.927690 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0311 08:57:44.927701 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0311 08:57:44.934515 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0311 08:57:44.934548 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0311 08:57:44.934556 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:57:44.934567 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:57:44.934574 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0311 08:57:44.934577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0311 08:57:44.934581 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0311 08:57:44.934584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0311 08:57:44.935676 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:57:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bd25fbbac425c9ba1169b1106b9ac77a80739a003bd795033d691ee273e0d3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e27a2ab44a237582284cb5f55d2651f7b5d39c199fdb62a4a65be9921e86945c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e27a2ab44a237582284cb5f55d2651f7b5d39c199fdb62a4a65be9921e86945c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:56:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-03-11T08:56:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:56:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:39Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:39 crc kubenswrapper[4840]: I0311 08:58:39.863341 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a26aaac35b1d936b4ca0b8a96fccd108317a1456b2bac1c21eb28017ac6fd32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:39Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:39 crc kubenswrapper[4840]: I0311 08:58:39.873073 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jlzht" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"97907402-fb5a-4fb4-80ac-5b600527c547\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2004ebf76b86d594e220fd1f85f945d24094ff26400ba17e286e95b2e3a8d7a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjngl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jlzht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:39Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:39 crc kubenswrapper[4840]: I0311 08:58:39.875823 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9stqc" Mar 11 08:58:39 crc kubenswrapper[4840]: I0311 08:58:39.886785 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xn47g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6b1fe1a-6473-41f8-a45f-aaaa148c1412\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fd2c3055a64a29be3fef394c48782227519d0d445ca2dc38a6f9093f7698b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a889eccbb6a0cbc75c8ed8ddeffb1713f30c667182a4ba135863c46bd6b81a10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a889eccbb6a0cbc75c8ed8ddeffb1713f30c667182a4ba135863c46bd6b81a10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"ku
be-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c2ce0ba741f2aad38db697e3bf19c9e912296fd737657feace60a93382e2f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c2ce0ba741f2aad38db697e3bf19c9e912296fd737657feace60a93382e2f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98d87bbbdfbd9708e04ae69bfe137afd132af84db3b4328625bca81dfc385d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98d87bbbdfbd9708e04ae69bfe137afd132af84db3b4328625bca81dfc385d92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b25179d0a842def4bdb185e250d96cabd12909f5bc849f186d714721ba20a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b25179d0a842def4bdb185e250d96cabd12909f5bc849f186d714721ba20a11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://156a271cc5e6bd9b7ea68a0a1486ba9460f2d31812bb42a1c06a9e79439e1c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://156a271cc5e6bd9b7ea68a0a1486ba9460f2d31812bb42a1c06a9e79439e1c16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62a49366d891284607145968679e0c211b99a48d955857bdf0577792e0fc1f18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://62a49366d891284607145968679e0c211b99a48d955857bdf0577792e0fc1f18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xn47g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:39Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:39 crc kubenswrapper[4840]: I0311 08:58:39.888086 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:39 crc kubenswrapper[4840]: I0311 08:58:39.888146 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:39 crc kubenswrapper[4840]: I0311 08:58:39.888159 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:39 crc kubenswrapper[4840]: I0311 08:58:39.888180 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:39 crc kubenswrapper[4840]: I0311 08:58:39.888194 4840 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:39Z","lastTransitionTime":"2026-03-11T08:58:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:58:39 crc kubenswrapper[4840]: W0311 08:58:39.892535 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1a97da03_4135_4850_9393_640dffab4289.slice/crio-b83174efa6c8cf4c520394c1527dabd8328540b7fa2ce99fd6d5601c9f30abe8 WatchSource:0}: Error finding container b83174efa6c8cf4c520394c1527dabd8328540b7fa2ce99fd6d5601c9f30abe8: Status 404 returned error can't find the container with id b83174efa6c8cf4c520394c1527dabd8328540b7fa2ce99fd6d5601c9f30abe8 Mar 11 08:58:39 crc kubenswrapper[4840]: I0311 08:58:39.908446 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"935336e2-294b-4982-83f9-718806d14e5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://047c371ede152b2bfc450a373d2c3668e92dfed75022ebf12644c116db589373\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://910e2e8b981e2ab6212cd615b1c5134fde5d4cf4f85220c2613e3e301f99293c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa38a6bbbd74dad77c64f6f59df35d12881619da7319501839cf0f1eb44c65cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a22746f21587fd8fd3d4a7350442c72a3131d5a9f5e661ff05d78377c5a00cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://430dc74cf3f3dcf9a87782869390e477899521c6ff0e704ba6272a017b90081d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaf42bd09d527f5ce4ce8b0619c9b61f56ad6a2a5edf92cac68897a2ada84b24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa44cbb9a24ea530f3aca75e40aa270291e88352a947597d4997175ebd38f746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5372c48a41c475f20ba6b46f1b3e1cd5e8b16dba66380a60259987ba30c13fa5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-11T08:58:37Z\\\",\\\"message\\\":\\\"reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI0311 08:58:36.991387 6713 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0311 08:58:36.993163 6713 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0311 08:58:36.993268 6713 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0311 08:58:36.993333 6713 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0311 08:58:36.993364 6713 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0311 08:58:36.993379 6713 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0311 08:58:36.993435 6713 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0311 08:58:36.993455 6713 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0311 08:58:36.993531 6713 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0311 08:58:36.993587 6713 handler.go:208] Removed *v1.Node event handler 7\\\\nI0311 08:58:36.993635 6713 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0311 08:58:36.993567 6713 factory.go:656] Stopping watch factory\\\\nI0311 08:58:36.993729 6713 ovnkube.go:599] Stopped ovnkube\\\\nI0311 08:58:36.993666 6713 handler.go:208] Removed *v1.Node event handler 2\\\\nI0311 
08\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa44cbb9a24ea530f3aca75e40aa270291e88352a947597d4997175ebd38f746\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-11T08:58:39Z\\\",\\\"message\\\":\\\"milyPolicy:*SingleStack,ClusterIPs:[10.217.5.253],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0311 08:58:38.705363 6867 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:38Z is after 2025-08-24T17:21:41Z]\\\\nI0311 08:58:38.705374 6867 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]} 
options:{GoMap:map[iface-id-ver:5fe485a1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1338b0d6cdb9527554429be2e6fdec5c0b98075978344d168fd6e363eb12c879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63877eb44dae815dbd71053c89313ba836c2fdd90cc3d6d299526c027887e19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63877eb44dae815dbd71053c89
313ba836c2fdd90cc3d6d299526c027887e19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7c2zl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:39Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:39 crc kubenswrapper[4840]: I0311 08:58:39.919931 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9stqc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a97da03-4135-4850-9393-640dffab4289\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7j7fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7j7fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9stqc\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:39Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:39 crc kubenswrapper[4840]: I0311 08:58:39.935289 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:39Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:39 crc kubenswrapper[4840]: I0311 08:58:39.948975 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:39Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:39 crc kubenswrapper[4840]: I0311 08:58:39.962174 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4tjtn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c3b7839-5a3d-42f4-a871-1baa77d4f6a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3703388f56cc4d0f73dcdc31a5fae2dd6cc275b3a86ae8ca3733190a08bac57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5b8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4tjtn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:39Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:39 crc kubenswrapper[4840]: I0311 08:58:39.977596 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb68ea79d13c044390345ef25093ba46a60cb10b989c5acd69c63cafc1c4631f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T0
8:58:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:39Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:39 crc kubenswrapper[4840]: I0311 08:58:39.991378 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:39 crc kubenswrapper[4840]: I0311 08:58:39.991436 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:39 crc kubenswrapper[4840]: I0311 08:58:39.991450 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:39 crc kubenswrapper[4840]: I0311 08:58:39.991484 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:39 crc kubenswrapper[4840]: I0311 08:58:39.991498 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:39Z","lastTransitionTime":"2026-03-11T08:58:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:40 crc kubenswrapper[4840]: I0311 08:58:40.094017 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:40 crc kubenswrapper[4840]: I0311 08:58:40.094067 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:40 crc kubenswrapper[4840]: I0311 08:58:40.094076 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:40 crc kubenswrapper[4840]: I0311 08:58:40.094092 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:40 crc kubenswrapper[4840]: I0311 08:58:40.094105 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:40Z","lastTransitionTime":"2026-03-11T08:58:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:40 crc kubenswrapper[4840]: I0311 08:58:40.196717 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:40 crc kubenswrapper[4840]: I0311 08:58:40.196754 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:40 crc kubenswrapper[4840]: I0311 08:58:40.196763 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:40 crc kubenswrapper[4840]: I0311 08:58:40.196779 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:40 crc kubenswrapper[4840]: I0311 08:58:40.196790 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:40Z","lastTransitionTime":"2026-03-11T08:58:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:58:40 crc kubenswrapper[4840]: I0311 08:58:40.275149 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-gjgkz"] Mar 11 08:58:40 crc kubenswrapper[4840]: I0311 08:58:40.275592 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gjgkz" Mar 11 08:58:40 crc kubenswrapper[4840]: E0311 08:58:40.275642 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gjgkz" podUID="6e87442b-4d54-472c-bad6-e2086c95df50" Mar 11 08:58:40 crc kubenswrapper[4840]: I0311 08:58:40.292071 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:40Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:40 crc kubenswrapper[4840]: I0311 08:58:40.303203 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:40 crc kubenswrapper[4840]: I0311 08:58:40.303271 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:40 crc kubenswrapper[4840]: I0311 08:58:40.303291 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:40 crc kubenswrapper[4840]: I0311 08:58:40.303318 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:40 crc kubenswrapper[4840]: I0311 08:58:40.303334 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:40Z","lastTransitionTime":"2026-03-11T08:58:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:58:40 crc kubenswrapper[4840]: I0311 08:58:40.320796 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a735ab91afbbc50a948e293cb4907a10212b9a77c9e7506e21d75a4de4c74c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://a8a498e6ab267b758bd2ba69dcbd7e6af3089b6c38bbf50dea80f1304c2190f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:40Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:40 crc kubenswrapper[4840]: I0311 08:58:40.337304 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vcb9n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c1678fd-7741-474b-9c8e-3008d3570921\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://affb76bb8b08157d415301fc93f48fe8c675137a1f6087bcade89dc117748638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-46dj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vcb9n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:40Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:40 crc kubenswrapper[4840]: I0311 08:58:40.350731 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-brtht" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://153456b9c54a907a1dada25ba9e84b3fe9f9b44a6e6b8e96d512944fb56884a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8c8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0602373fe91
019f9b20e701e042782a4eb5878ae2df86375738bc605412a803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8c8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-brtht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:40Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:40 crc kubenswrapper[4840]: I0311 08:58:40.373908 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c3b2bde-8421-4e22-85ab-8b651c65bc9e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ed311d75feec58f86d1d9f435c6115a463b4e7cd3003b6dff8447360271b6a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33384f01914fa6428a8c359b3de0d20963b933f5c6d47519f059e48f85c9f4c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca2279f323c0a6baf645b62c496c38d3d2ad4efc5033a0819ed1d58f4d862e10\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe2e8e966f7348e19e40d9ac00cabea3da1a67e4d7b149c094505622bf5d6cbc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58277efdbfe7c94ad785e7d31bb5ba7313d04bf930896d6ded66fd44dd6239b5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-11T08:57:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW0311 08:57:44.699595 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0311 08:57:44.699747 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0311 08:57:44.700340 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2673359297/tls.crt::/tmp/serving-cert-2673359297/tls.key\\\\\\\"\\\\nI0311 08:57:44.923694 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0311 08:57:44.927612 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0311 08:57:44.927650 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0311 08:57:44.927690 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0311 08:57:44.927701 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0311 08:57:44.934515 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0311 08:57:44.934548 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0311 08:57:44.934556 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:57:44.934567 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:57:44.934574 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0311 
08:57:44.934577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0311 08:57:44.934581 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0311 08:57:44.934584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0311 08:57:44.935676 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:57:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bd25fbbac425c9ba1169b1106b9ac77a80739a003bd795033d691ee273e0d3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e27a2ab44a237582284cb5f55d2651f7b5d39c199fdb62a4a65be9921e86945c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e27a2ab44a237582284cb5f55d2651f7b5d39c199fdb62a4a65be9921e86945c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:56:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:56:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:56:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:40Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:40 crc kubenswrapper[4840]: I0311 08:58:40.395753 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a26aaac35b1d936b4ca0b8a96fccd108317a1456b2bac1c21eb28017ac6fd32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:40Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:40 crc kubenswrapper[4840]: I0311 08:58:40.407654 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jlzht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97907402-fb5a-4fb4-80ac-5b600527c547\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2004ebf76b86d594e220fd1f85f945d24094ff26400ba17e286e95b2e3a8d7a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjngl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jlzht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:40Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:40 crc kubenswrapper[4840]: I0311 08:58:40.407785 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:40 crc kubenswrapper[4840]: I0311 08:58:40.407822 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:40 crc kubenswrapper[4840]: I0311 08:58:40.407835 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:40 crc kubenswrapper[4840]: I0311 08:58:40.407855 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:40 crc kubenswrapper[4840]: I0311 08:58:40.407868 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:40Z","lastTransitionTime":"2026-03-11T08:58:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:40 crc kubenswrapper[4840]: I0311 08:58:40.423878 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xn47g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6b1fe1a-6473-41f8-a45f-aaaa148c1412\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fd2c3055a64a29be3fef394c48782227519d0d445ca2dc38a6f9093f7698b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a889eccbb6a0cbc75c8ed8ddeffb1713f30c667182a4ba135863c46bd6b81a10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a889eccbb6a0cbc75c8ed8ddeffb1713f30c667182a4ba135863c46bd6b81a10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c2ce0ba741f2aad38db697e3bf19c9e912296fd737657feace60a93382e2f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://13c2ce0ba741f2aad38db697e3bf19c9e912296fd737657feace60a93382e2f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98d87bbbdfbd9708e04ae69bfe137afd132af84db3b4328625bca81dfc385d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98d87bbbdfbd9708e04ae69bfe137afd132af84db3b4328625bca81dfc385d92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b25179d0a842def4bdb185e250d96cabd12909f5bc849f186d714721ba20a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b25179d0a842def4bdb185e250d96cabd12909f5bc849f186d714721ba20a11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://156a271cc5e6bd9b7ea68a0a1486ba9460f2d31812bb42a1c06a9e79439e1c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://156a271cc5e6bd9b7ea68a0a1486ba9460f2d31812bb42a1c06a9e79439e1c16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62a49366d891284607145968679e0c211b99a48d955857bdf0577792e0fc1f18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62a49366d891284607145968679e0c211b99a48d955857bdf0577792e0fc1f18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xn47g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:40Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:40 crc kubenswrapper[4840]: I0311 08:58:40.425672 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4h65t\" (UniqueName: \"kubernetes.io/projected/6e87442b-4d54-472c-bad6-e2086c95df50-kube-api-access-4h65t\") pod \"network-metrics-daemon-gjgkz\" (UID: \"6e87442b-4d54-472c-bad6-e2086c95df50\") " pod="openshift-multus/network-metrics-daemon-gjgkz" Mar 11 08:58:40 crc kubenswrapper[4840]: I0311 08:58:40.425730 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6e87442b-4d54-472c-bad6-e2086c95df50-metrics-certs\") pod \"network-metrics-daemon-gjgkz\" (UID: \"6e87442b-4d54-472c-bad6-e2086c95df50\") " pod="openshift-multus/network-metrics-daemon-gjgkz" Mar 11 08:58:40 crc kubenswrapper[4840]: I0311 08:58:40.444606 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"935336e2-294b-4982-83f9-718806d14e5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://047c371ede152b2bfc450a373d2c3668e92dfed75022ebf12644c116db589373\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://910e2e8b981e2ab6212cd615b1c5134fde5d4cf4f85220c2613e3e301f99293c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa38a6bbbd74dad77c64f6f59df35d12881619da7319501839cf0f1eb44c65cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a22746f21587fd8fd3d4a7350442c72a3131d5a9f5e661ff05d78377c5a00cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://430dc74cf3f3dcf9a87782869390e477899521c6ff0e704ba6272a017b90081d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaf42bd09d527f5ce4ce8b0619c9b61f56ad6a2a5edf92cac68897a2ada84b24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa44cbb9a24ea530f3aca75e40aa270291e88352a947597d4997175ebd38f746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5372c48a41c475f20ba6b46f1b3e1cd5e8b16dba66380a60259987ba30c13fa5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-11T08:58:37Z\\\",\\\"message\\\":\\\"reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0311 08:58:36.991387 6713 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0311 08:58:36.993163 6713 handler.go:190] Sending *v1.EgressIP event handler 8 for 
removal\\\\nI0311 08:58:36.993268 6713 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0311 08:58:36.993333 6713 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0311 08:58:36.993364 6713 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0311 08:58:36.993379 6713 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0311 08:58:36.993435 6713 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0311 08:58:36.993455 6713 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0311 08:58:36.993531 6713 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0311 08:58:36.993587 6713 handler.go:208] Removed *v1.Node event handler 7\\\\nI0311 08:58:36.993635 6713 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0311 08:58:36.993567 6713 factory.go:656] Stopping watch factory\\\\nI0311 08:58:36.993729 6713 ovnkube.go:599] Stopped ovnkube\\\\nI0311 08:58:36.993666 6713 handler.go:208] Removed *v1.Node event handler 2\\\\nI0311 08\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa44cbb9a24ea530f3aca75e40aa270291e88352a947597d4997175ebd38f746\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-11T08:58:39Z\\\",\\\"message\\\":\\\"milyPolicy:*SingleStack,ClusterIPs:[10.217.5.253],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0311 08:58:38.705363 6867 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for 
anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:38Z is after 2025-08-24T17:21:41Z]\\\\nI0311 08:58:38.705374 6867 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]} options:{GoMap:map[iface-id-ver:5fe485a1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\
"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1338b0d6cdb9527554429be2e6fdec5c0b98075978344d168fd6e363eb12c879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\
\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63877eb44dae815dbd71053c89313ba836c2fdd90cc3d6d299526c027887e19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63877eb44dae815dbd71053c89313ba836c2fdd90cc3d6d299526c027887e19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7c2zl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:40Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:40 crc kubenswrapper[4840]: I0311 08:58:40.461519 4840 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9stqc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a97da03-4135-4850-9393-640dffab4289\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7j7fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7j7fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9stqc\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:40Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:40 crc kubenswrapper[4840]: I0311 08:58:40.474285 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gjgkz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e87442b-4d54-472c-bad6-e2086c95df50\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4h65t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4h65t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gjgkz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:40Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:40 crc 
kubenswrapper[4840]: I0311 08:58:40.495042 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:40Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:40 crc kubenswrapper[4840]: I0311 08:58:40.510449 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:40 crc kubenswrapper[4840]: I0311 08:58:40.510512 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:40 crc kubenswrapper[4840]: I0311 08:58:40.510525 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:40 crc kubenswrapper[4840]: I0311 08:58:40.510540 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:40 crc kubenswrapper[4840]: I0311 08:58:40.510553 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:40Z","lastTransitionTime":"2026-03-11T08:58:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:58:40 crc kubenswrapper[4840]: I0311 08:58:40.512566 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:40Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:40 crc kubenswrapper[4840]: I0311 08:58:40.526950 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4h65t\" (UniqueName: \"kubernetes.io/projected/6e87442b-4d54-472c-bad6-e2086c95df50-kube-api-access-4h65t\") pod \"network-metrics-daemon-gjgkz\" (UID: \"6e87442b-4d54-472c-bad6-e2086c95df50\") " pod="openshift-multus/network-metrics-daemon-gjgkz" Mar 11 08:58:40 crc kubenswrapper[4840]: I0311 08:58:40.527000 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6e87442b-4d54-472c-bad6-e2086c95df50-metrics-certs\") pod \"network-metrics-daemon-gjgkz\" (UID: \"6e87442b-4d54-472c-bad6-e2086c95df50\") " pod="openshift-multus/network-metrics-daemon-gjgkz" Mar 11 08:58:40 crc kubenswrapper[4840]: E0311 08:58:40.527112 4840 secret.go:188] Couldn't get secret 
openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 11 08:58:40 crc kubenswrapper[4840]: E0311 08:58:40.527164 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6e87442b-4d54-472c-bad6-e2086c95df50-metrics-certs podName:6e87442b-4d54-472c-bad6-e2086c95df50 nodeName:}" failed. No retries permitted until 2026-03-11 08:58:41.027146505 +0000 UTC m=+119.692816320 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6e87442b-4d54-472c-bad6-e2086c95df50-metrics-certs") pod "network-metrics-daemon-gjgkz" (UID: "6e87442b-4d54-472c-bad6-e2086c95df50") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 11 08:58:40 crc kubenswrapper[4840]: I0311 08:58:40.529681 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4tjtn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c3b7839-5a3d-42f4-a871-1baa77d4f6a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3703388f56cc4d0f73dcdc31a5f
ae2dd6cc275b3a86ae8ca3733190a08bac57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5b8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4tjtn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:40Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:40 crc kubenswrapper[4840]: I0311 08:58:40.546358 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4h65t\" (UniqueName: \"kubernetes.io/projected/6e87442b-4d54-472c-bad6-e2086c95df50-kube-api-access-4h65t\") pod \"network-metrics-daemon-gjgkz\" (UID: \"6e87442b-4d54-472c-bad6-e2086c95df50\") " pod="openshift-multus/network-metrics-daemon-gjgkz" Mar 11 08:58:40 crc kubenswrapper[4840]: I0311 08:58:40.547509 4840 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb68ea79d13c044390345ef25093ba46a60cb10b989c5acd69c63cafc1c4631f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:40Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:40 crc kubenswrapper[4840]: I0311 08:58:40.613771 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:40 crc kubenswrapper[4840]: I0311 08:58:40.615450 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:40 crc kubenswrapper[4840]: I0311 08:58:40.615549 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:40 crc kubenswrapper[4840]: I0311 08:58:40.615624 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:40 crc kubenswrapper[4840]: I0311 08:58:40.615701 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:40Z","lastTransitionTime":"2026-03-11T08:58:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:40 crc kubenswrapper[4840]: I0311 08:58:40.675684 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7c2zl_935336e2-294b-4982-83f9-718806d14e5c/ovnkube-controller/1.log" Mar 11 08:58:40 crc kubenswrapper[4840]: I0311 08:58:40.679330 4840 scope.go:117] "RemoveContainer" containerID="fa44cbb9a24ea530f3aca75e40aa270291e88352a947597d4997175ebd38f746" Mar 11 08:58:40 crc kubenswrapper[4840]: E0311 08:58:40.679542 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-7c2zl_openshift-ovn-kubernetes(935336e2-294b-4982-83f9-718806d14e5c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" podUID="935336e2-294b-4982-83f9-718806d14e5c" Mar 11 08:58:40 crc kubenswrapper[4840]: I0311 08:58:40.684997 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9stqc" event={"ID":"1a97da03-4135-4850-9393-640dffab4289","Type":"ContainerStarted","Data":"bf5b701d5975fea1d305cf5ef501224f312a6a3afd03977a691910a5acaf9ce7"} Mar 11 08:58:40 crc kubenswrapper[4840]: I0311 08:58:40.685086 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9stqc" event={"ID":"1a97da03-4135-4850-9393-640dffab4289","Type":"ContainerStarted","Data":"57ffc84765682190c64926aa41837eb62aed5f155016eda6cec84aca86e1ac31"} Mar 11 08:58:40 crc kubenswrapper[4840]: I0311 08:58:40.685108 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9stqc" event={"ID":"1a97da03-4135-4850-9393-640dffab4289","Type":"ContainerStarted","Data":"b83174efa6c8cf4c520394c1527dabd8328540b7fa2ce99fd6d5601c9f30abe8"} Mar 11 08:58:40 crc kubenswrapper[4840]: I0311 08:58:40.695534 4840 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:40Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:40 crc kubenswrapper[4840]: I0311 08:58:40.712418 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a735ab91afbbc50a948e293cb4907a10212b9a77c9e7506e21d75a4de4c74c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a498e6ab267b758bd2ba69dcbd7e6af3089b6c38bbf50dea80f1304c2190f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:40Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:40 crc kubenswrapper[4840]: I0311 08:58:40.717745 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:40 crc kubenswrapper[4840]: I0311 08:58:40.717781 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:40 crc kubenswrapper[4840]: I0311 08:58:40.717789 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:40 crc kubenswrapper[4840]: I0311 08:58:40.717804 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:40 crc kubenswrapper[4840]: I0311 08:58:40.717814 4840 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:40Z","lastTransitionTime":"2026-03-11T08:58:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:58:40 crc kubenswrapper[4840]: I0311 08:58:40.726232 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a26aaac35b1d936b4ca0b8a96fccd108317a1456b2bac1c21eb28017ac6fd32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:40Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:40 crc kubenswrapper[4840]: I0311 08:58:40.738902 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jlzht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97907402-fb5a-4fb4-80ac-5b600527c547\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2004ebf76b86d594e220fd1f85f945d24094ff26400ba17e286e95b2e3a8d7a0\\\",\\\"image\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjngl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jlzht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:40Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:40 crc kubenswrapper[4840]: I0311 08:58:40.756688 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xn47g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6b1fe1a-6473-41f8-a45f-aaaa148c1412\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fd2c3055a64a29be3fef394c48782227519d0d445ca2dc38a6f9093f7698b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a889eccbb6a0cbc75c8ed8ddeffb1713f30c667182a4ba135863c46bd6b81a10\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a889eccbb6a0cbc75c8ed8ddeffb1713f30c667182a4ba135863c46bd6b81a10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c2ce0ba741f2aad38db697e3bf19c9e912296fd737657feace60a93382e2f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c2ce0ba741f2aad38db697e3bf19c9e912296fd737657feace60a93382e2f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98d87bbbdfbd9708e04ae69bfe137afd132af84db3b4328625bca81dfc385d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98d87bbbdfbd9708e04ae69bfe137afd132af84db3b4328625bca81dfc385d92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b251
79d0a842def4bdb185e250d96cabd12909f5bc849f186d714721ba20a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b25179d0a842def4bdb185e250d96cabd12909f5bc849f186d714721ba20a11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://156a271cc5e6bd9b7ea68a0a1486ba9460f2d31812bb42a1c06a9e79439e1c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://156a271cc5e6bd9b7ea68a0a1486ba9460f2d31812bb42a1c06a9e79439e1c16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:33Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62a49366d891284607145968679e0c211b99a48d955857bdf0577792e0fc1f18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62a49366d891284607145968679e0c211b99a48d955857bdf0577792e0fc1f18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xn47g\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:40Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:40 crc kubenswrapper[4840]: I0311 08:58:40.786374 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vcb9n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c1678fd-7741-474b-9c8e-3008d3570921\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://affb76bb8b08157d415301fc93f48fe8c675137a1f6087bcade89dc117748638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-
03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-46dj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vcb9n\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:40Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:40 crc kubenswrapper[4840]: I0311 08:58:40.820520 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:40 crc kubenswrapper[4840]: I0311 08:58:40.820875 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:40 crc kubenswrapper[4840]: I0311 08:58:40.821017 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:40 crc kubenswrapper[4840]: I0311 08:58:40.821120 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:40 crc kubenswrapper[4840]: I0311 08:58:40.821216 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:40Z","lastTransitionTime":"2026-03-11T08:58:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:40 crc kubenswrapper[4840]: I0311 08:58:40.822156 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-brtht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://153456b9c54a907a1dada25ba9e84b3fe9f9b44a6e6b8e96d512944fb56884a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8c8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0602373fe91019f9b20e701e042782a4eb5878ae2df86375738bc605412a803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8c8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-brtht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:40Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:40 crc kubenswrapper[4840]: I0311 08:58:40.847892 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c3b2bde-8421-4e22-85ab-8b651c65bc9e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ed311d75feec58f86d1d9f435c6115a463b4e7cd3003b6dff8447360271b6a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33384f01914fa6428a8c359b3de0d20963b933f5c6d47519f059e48f85c9f4c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca2279f323c0a6baf645b62c496c38d3d2ad4efc5033a0819ed1d58f4d862e10\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe2e8e966f7348e19e40d9ac00cabea3da1a67e4d7b149c094505622bf5d6cbc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58277efdbfe7c94ad785e7d31bb5ba7313d04bf930896d6ded66fd44dd6239b5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-11T08:57:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW0311 08:57:44.699595 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0311 08:57:44.699747 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0311 08:57:44.700340 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2673359297/tls.crt::/tmp/serving-cert-2673359297/tls.key\\\\\\\"\\\\nI0311 08:57:44.923694 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0311 08:57:44.927612 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0311 08:57:44.927650 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0311 08:57:44.927690 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0311 08:57:44.927701 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0311 08:57:44.934515 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0311 08:57:44.934548 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0311 08:57:44.934556 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:57:44.934567 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:57:44.934574 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0311 
08:57:44.934577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0311 08:57:44.934581 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0311 08:57:44.934584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0311 08:57:44.935676 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:57:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bd25fbbac425c9ba1169b1106b9ac77a80739a003bd795033d691ee273e0d3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e27a2ab44a237582284cb5f55d2651f7b5d39c199fdb62a4a65be9921e86945c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e27a2ab44a237582284cb5f55d2651f7b5d39c199fdb62a4a65be9921e86945c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:56:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:56:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:56:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:40Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:40 crc kubenswrapper[4840]: I0311 08:58:40.871658 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"935336e2-294b-4982-83f9-718806d14e5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://047c371ede152b2bfc450a373d2c3668e92dfed75022ebf12644c116db589373\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://910e2e8b981e2ab6212cd615b1c5134fde5d4cf4f85220c2613e3e301f99293c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa38a6bbbd74dad77c64f6f59df35d12881619da7319501839cf0f1eb44c65cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a22746f21587fd8fd3d4a7350442c72a3131d5a9f5e661ff05d78377c5a00cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://430dc74cf3f3dcf9a87782869390e477899521c6ff0e704ba6272a017b90081d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaf42bd09d527f5ce4ce8b0619c9b61f56ad6a2a5edf92cac68897a2ada84b24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa44cbb9a24ea530f3aca75e40aa270291e88352a947597d4997175ebd38f746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa44cbb9a24ea530f3aca75e40aa270291e88352a947597d4997175ebd38f746\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-11T08:58:39Z\\\",\\\"message\\\":\\\"milyPolicy:*SingleStack,ClusterIPs:[10.217.5.253],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0311 08:58:38.705363 6867 ovnkube.go:137] 
failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:38Z is after 2025-08-24T17:21:41Z]\\\\nI0311 08:58:38.705374 6867 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]} options:{GoMap:map[iface-id-ver:5fe485a1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-7c2zl_openshift-ovn-kubernetes(935336e2-294b-4982-83f9-718806d14e5c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1338b0d6cdb9527554429be2e6fdec5c0b98075978344d168fd6e363eb12c879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63877eb44dae815dbd71053c89313ba836c2fdd90cc3d6d299526c027887e19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63877eb44dae815db
d71053c89313ba836c2fdd90cc3d6d299526c027887e19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7c2zl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:40Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:40 crc kubenswrapper[4840]: I0311 08:58:40.886833 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9stqc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a97da03-4135-4850-9393-640dffab4289\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7j7fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7j7fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9stqc\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:40Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:40 crc kubenswrapper[4840]: I0311 08:58:40.901099 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gjgkz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e87442b-4d54-472c-bad6-e2086c95df50\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4h65t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4h65t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gjgkz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:40Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:40 crc 
kubenswrapper[4840]: I0311 08:58:40.915148 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:40Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:40 crc kubenswrapper[4840]: I0311 08:58:40.924288 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:40 crc kubenswrapper[4840]: I0311 08:58:40.924392 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:40 crc kubenswrapper[4840]: I0311 08:58:40.924453 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:40 crc kubenswrapper[4840]: I0311 08:58:40.924562 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:40 crc kubenswrapper[4840]: I0311 08:58:40.924634 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:40Z","lastTransitionTime":"2026-03-11T08:58:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:58:40 crc kubenswrapper[4840]: I0311 08:58:40.932003 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 11 08:58:40 crc kubenswrapper[4840]: I0311 08:58:40.932170 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:58:40 crc kubenswrapper[4840]: E0311 08:58:40.932257 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-11 08:59:12.932213231 +0000 UTC m=+151.597883086 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:58:40 crc kubenswrapper[4840]: E0311 08:58:40.932322 4840 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 11 08:58:40 crc kubenswrapper[4840]: E0311 08:58:40.932418 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-11 08:59:12.932393166 +0000 UTC m=+151.598063171 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 11 08:58:40 crc kubenswrapper[4840]: I0311 08:58:40.932506 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:58:40 crc kubenswrapper[4840]: E0311 08:58:40.932733 4840 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 11 08:58:40 crc kubenswrapper[4840]: I0311 08:58:40.932811 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:40Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:40 crc kubenswrapper[4840]: E0311 08:58:40.932852 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-11 08:59:12.932828067 +0000 UTC m=+151.598497892 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 11 08:58:40 crc kubenswrapper[4840]: I0311 08:58:40.945844 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4tjtn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c3b7839-5a3d-42f4-a871-1baa77d4f6a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3703388f56cc4d0f73dcdc31a5fae2dd6cc275b3a86ae8ca3733190a08bac57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5b8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4tjtn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:40Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:40 crc kubenswrapper[4840]: I0311 08:58:40.961548 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb68ea79d13c044390345ef25093ba46a60cb10b989c5acd69c63cafc1c4631f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-11T08:58:40Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:40 crc kubenswrapper[4840]: I0311 08:58:40.977272 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:40Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:40 crc kubenswrapper[4840]: I0311 08:58:40.991364 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a735ab91afbbc50a948e293cb4907a10212b9a77c9e7506e21d75a4de4c74c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a498e6ab267b758bd2ba69dcbd7e6af3089b6c38bbf50dea80f1304c2190f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:40Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:41 crc kubenswrapper[4840]: I0311 08:58:41.006139 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c3b2bde-8421-4e22-85ab-8b651c65bc9e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ed311d75feec58f86d1d9f435c6115a463b4e7cd3003b6dff8447360271b6a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33384f01914fa6428a8c359b3de0d20963b933f5c6d47519f059e48f85c9f4c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca2279f323c0a6baf645b62c496c38d3d2ad4efc5033a0819ed1d58f4d862e10\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe2e8e966f7348e19e40d9ac00cabea3da1a67e4d7b149c094505622bf5d6cbc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58277efdbfe7c94ad785e7d31bb5ba7313d04bf930896d6ded66fd44dd6239b5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-11T08:57:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW0311 08:57:44.699595 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0311 08:57:44.699747 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0311 08:57:44.700340 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2673359297/tls.crt::/tmp/serving-cert-2673359297/tls.key\\\\\\\"\\\\nI0311 08:57:44.923694 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0311 08:57:44.927612 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0311 08:57:44.927650 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0311 08:57:44.927690 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0311 08:57:44.927701 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0311 08:57:44.934515 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0311 08:57:44.934548 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0311 08:57:44.934556 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:57:44.934567 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:57:44.934574 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0311 08:57:44.934577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0311 08:57:44.934581 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0311 08:57:44.934584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0311 08:57:44.935676 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:57:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bd25fbbac425c9ba1169b1106b9ac77a80739a003bd795033d691ee273e0d3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e27a2ab44a237582284cb5f55d2651f7b5d39c199fdb62a4a65be9921e86945c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e27a2ab44a237582284cb5f55d2651f7b5d39c199fdb62a4a65be9921e86945c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:56:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-03-11T08:56:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:56:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:41Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:41 crc kubenswrapper[4840]: I0311 08:58:41.026432 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a26aaac35b1d936b4ca0b8a96fccd108317a1456b2bac1c21eb28017ac6fd32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:41Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:41 crc kubenswrapper[4840]: I0311 08:58:41.030260 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:41 crc kubenswrapper[4840]: I0311 08:58:41.030311 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:41 crc kubenswrapper[4840]: I0311 08:58:41.030333 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:41 crc kubenswrapper[4840]: I0311 08:58:41.030349 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:41 crc kubenswrapper[4840]: I0311 08:58:41.030362 4840 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:41Z","lastTransitionTime":"2026-03-11T08:58:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:58:41 crc kubenswrapper[4840]: I0311 08:58:41.033309 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 08:58:41 crc kubenswrapper[4840]: I0311 08:58:41.033352 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:58:41 crc kubenswrapper[4840]: I0311 08:58:41.033407 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6e87442b-4d54-472c-bad6-e2086c95df50-metrics-certs\") pod \"network-metrics-daemon-gjgkz\" (UID: \"6e87442b-4d54-472c-bad6-e2086c95df50\") " pod="openshift-multus/network-metrics-daemon-gjgkz" Mar 11 08:58:41 crc kubenswrapper[4840]: E0311 08:58:41.033537 4840 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 11 08:58:41 crc kubenswrapper[4840]: E0311 08:58:41.033563 4840 projected.go:288] Couldn't get configMap 
openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 11 08:58:41 crc kubenswrapper[4840]: E0311 08:58:41.033585 4840 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 11 08:58:41 crc kubenswrapper[4840]: E0311 08:58:41.033600 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6e87442b-4d54-472c-bad6-e2086c95df50-metrics-certs podName:6e87442b-4d54-472c-bad6-e2086c95df50 nodeName:}" failed. No retries permitted until 2026-03-11 08:58:42.033582538 +0000 UTC m=+120.699252363 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6e87442b-4d54-472c-bad6-e2086c95df50-metrics-certs") pod "network-metrics-daemon-gjgkz" (UID: "6e87442b-4d54-472c-bad6-e2086c95df50") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 11 08:58:41 crc kubenswrapper[4840]: E0311 08:58:41.033601 4840 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 11 08:58:41 crc kubenswrapper[4840]: E0311 08:58:41.033537 4840 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 11 08:58:41 crc kubenswrapper[4840]: E0311 08:58:41.033674 4840 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 11 08:58:41 crc kubenswrapper[4840]: E0311 08:58:41.033690 4840 projected.go:194] Error preparing data for 
projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 11 08:58:41 crc kubenswrapper[4840]: E0311 08:58:41.033645 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-11 08:59:13.033632739 +0000 UTC m=+151.699302554 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 11 08:58:41 crc kubenswrapper[4840]: E0311 08:58:41.033777 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-11 08:59:13.033743832 +0000 UTC m=+151.699413647 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 11 08:58:41 crc kubenswrapper[4840]: I0311 08:58:41.042342 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jlzht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97907402-fb5a-4fb4-80ac-5b600527c547\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2004ebf76b86d594e220fd1f85f945d24094ff26400ba17e286e95b2e3a8d7a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns
-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjngl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jlzht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:41Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:41 crc kubenswrapper[4840]: I0311 08:58:41.059318 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 08:58:41 crc kubenswrapper[4840]: I0311 08:58:41.059347 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:58:41 crc kubenswrapper[4840]: I0311 08:58:41.059299 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xn47g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6b1fe1a-6473-41f8-a45f-aaaa148c1412\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fd2c3055a64a29be3fef394c48782227519d0d445ca2dc38a6f9093f7698b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-
cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a889eccbb6a0cbc75c8ed8ddeffb1713f30c667182a4ba135863c46bd6b81a10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a889eccbb6a0cbc75c8ed8ddeffb1713f30c667182a4ba135863c46bd6b81a10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c2ce0ba741f2aad38db697e3bf19c9e912296fd737657feace60a93382e2f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c2ce0ba741f2aad38db697e3bf19c9e912296fd737657feace60a93382e2f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98d87bbbdfbd9708e04ae69bfe137afd132af84db3b4328625bca81dfc385d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98d87bbbdfbd9708e04ae69bfe137afd132af84db3b4328625bca81dfc385d92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b25179d0a842def4bdb185e250d96cabd12909f5bc849f186d714721ba20a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b25179d0a842def4bdb185e250d96cabd12909f5bc849f186d714721ba20a11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://156a271cc5e6bd9b7ea68a0a1486ba9460f2d31812bb42a1c06a9e79439e1c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-
bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://156a271cc5e6bd9b7ea68a0a1486ba9460f2d31812bb42a1c06a9e79439e1c16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62a49366d891284607145968679e0c211b99a48d955857bdf0577792e0fc1f18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62a49366d891284607145968679e0c211b99a48d955857bdf0577792e0fc1f18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xn47g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:41Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:41 crc kubenswrapper[4840]: E0311 08:58:41.059427 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 11 08:58:41 crc kubenswrapper[4840]: I0311 08:58:41.059331 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:58:41 crc kubenswrapper[4840]: E0311 08:58:41.059583 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 11 08:58:41 crc kubenswrapper[4840]: E0311 08:58:41.059655 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 11 08:58:41 crc kubenswrapper[4840]: I0311 08:58:41.074343 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vcb9n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c1678fd-7741-474b-9c8e-3008d3570921\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://affb76bb8b08157d415301fc93f48fe8c675137a1f6087bcade89dc117748638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-46dj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\
"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vcb9n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:41Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:41 crc kubenswrapper[4840]: I0311 08:58:41.088716 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-brtht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://153456b9c54a907a1dada25ba9e84b3fe9f9b44a6e6b8e96d512944fb56884a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d0
6c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8c8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0602373fe91019f9b20e701e042782a4eb5878ae2df86375738bc605412a803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8c8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-brtht\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:41Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:41 crc kubenswrapper[4840]: I0311 08:58:41.110342 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"935336e2-294b-4982-83f9-718806d14e5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://047c371ede152b2bfc450a373d2c3668e92dfed75022ebf12644c116db589373\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://910e2e8b981e2ab6212cd615b1c5134fde5d4cf4f85220c2613e3e301f99293c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa38a6bbbd74dad77c64f6f59df35d12881619da7319501839cf0f1eb44c65cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a22746f21587fd8fd3d4a7350442c72a3131d5a9f5e661ff05d78377c5a00cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://430dc74cf3f3dcf9a87782869390e477899521c6ff0e704ba6272a017b90081d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaf42bd09d527f5ce4ce8b0619c9b61f56ad6a2a5edf92cac68897a2ada84b24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa44cbb9a24ea530f3aca75e40aa270291e88352a947597d4997175ebd38f746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa44cbb9a24ea530f3aca75e40aa270291e88352a947597d4997175ebd38f746\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-11T08:58:39Z\\\",\\\"message\\\":\\\"milyPolicy:*SingleStack,ClusterIPs:[10.217.5.253],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0311 08:58:38.705363 6867 ovnkube.go:137] 
failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:38Z is after 2025-08-24T17:21:41Z]\\\\nI0311 08:58:38.705374 6867 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]} options:{GoMap:map[iface-id-ver:5fe485a1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-7c2zl_openshift-ovn-kubernetes(935336e2-294b-4982-83f9-718806d14e5c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1338b0d6cdb9527554429be2e6fdec5c0b98075978344d168fd6e363eb12c879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63877eb44dae815dbd71053c89313ba836c2fdd90cc3d6d299526c027887e19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63877eb44dae815db
d71053c89313ba836c2fdd90cc3d6d299526c027887e19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7c2zl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:41Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:41 crc kubenswrapper[4840]: I0311 08:58:41.123562 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9stqc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a97da03-4135-4850-9393-640dffab4289\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57ffc84765682190c64926aa41837eb62aed5f155016eda6cec84aca86e1ac31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7j7fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf5b701d5975fea1d305cf5ef501224f312a6
a3afd03977a691910a5acaf9ce7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7j7fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9stqc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:41Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:41 crc kubenswrapper[4840]: I0311 08:58:41.133340 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:41 crc kubenswrapper[4840]: I0311 08:58:41.133377 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:41 crc kubenswrapper[4840]: I0311 08:58:41.133387 4840 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:41 crc kubenswrapper[4840]: I0311 08:58:41.133403 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:41 crc kubenswrapper[4840]: I0311 08:58:41.133416 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:41Z","lastTransitionTime":"2026-03-11T08:58:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:58:41 crc kubenswrapper[4840]: I0311 08:58:41.136300 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gjgkz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e87442b-4d54-472c-bad6-e2086c95df50\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4h65t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4h65t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gjgkz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:41Z is after 2025-08-24T17:21:41Z" Mar 
11 08:58:41 crc kubenswrapper[4840]: I0311 08:58:41.150455 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:41Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:41 crc kubenswrapper[4840]: I0311 08:58:41.167089 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:41Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:41 crc kubenswrapper[4840]: I0311 08:58:41.178064 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4tjtn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c3b7839-5a3d-42f4-a871-1baa77d4f6a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3703388f56cc4d0f73dcdc31a5fae2dd6cc275b3a86ae8ca3733190a08bac57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5b8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4tjtn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:41Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:41 crc kubenswrapper[4840]: I0311 08:58:41.191419 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb68ea79d13c044390345ef25093ba46a60cb10b989c5acd69c63cafc1c4631f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T0
8:58:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:41Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:41 crc kubenswrapper[4840]: I0311 08:58:41.236374 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:41 crc kubenswrapper[4840]: I0311 08:58:41.236430 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:41 crc kubenswrapper[4840]: I0311 08:58:41.236441 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:41 crc kubenswrapper[4840]: I0311 08:58:41.236460 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:41 crc kubenswrapper[4840]: I0311 08:58:41.236490 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:41Z","lastTransitionTime":"2026-03-11T08:58:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:41 crc kubenswrapper[4840]: I0311 08:58:41.339712 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:41 crc kubenswrapper[4840]: I0311 08:58:41.339806 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:41 crc kubenswrapper[4840]: I0311 08:58:41.339828 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:41 crc kubenswrapper[4840]: I0311 08:58:41.339877 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:41 crc kubenswrapper[4840]: I0311 08:58:41.339906 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:41Z","lastTransitionTime":"2026-03-11T08:58:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:41 crc kubenswrapper[4840]: I0311 08:58:41.443020 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:41 crc kubenswrapper[4840]: I0311 08:58:41.443090 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:41 crc kubenswrapper[4840]: I0311 08:58:41.443115 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:41 crc kubenswrapper[4840]: I0311 08:58:41.443148 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:41 crc kubenswrapper[4840]: I0311 08:58:41.443168 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:41Z","lastTransitionTime":"2026-03-11T08:58:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:41 crc kubenswrapper[4840]: I0311 08:58:41.547263 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:41 crc kubenswrapper[4840]: I0311 08:58:41.547333 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:41 crc kubenswrapper[4840]: I0311 08:58:41.547358 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:41 crc kubenswrapper[4840]: I0311 08:58:41.547390 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:41 crc kubenswrapper[4840]: I0311 08:58:41.547414 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:41Z","lastTransitionTime":"2026-03-11T08:58:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:41 crc kubenswrapper[4840]: I0311 08:58:41.650983 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:41 crc kubenswrapper[4840]: I0311 08:58:41.651051 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:41 crc kubenswrapper[4840]: I0311 08:58:41.651073 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:41 crc kubenswrapper[4840]: I0311 08:58:41.651100 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:41 crc kubenswrapper[4840]: I0311 08:58:41.651119 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:41Z","lastTransitionTime":"2026-03-11T08:58:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:41 crc kubenswrapper[4840]: I0311 08:58:41.754140 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:41 crc kubenswrapper[4840]: I0311 08:58:41.754208 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:41 crc kubenswrapper[4840]: I0311 08:58:41.754234 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:41 crc kubenswrapper[4840]: I0311 08:58:41.754267 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:41 crc kubenswrapper[4840]: I0311 08:58:41.754291 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:41Z","lastTransitionTime":"2026-03-11T08:58:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:41 crc kubenswrapper[4840]: I0311 08:58:41.857763 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:41 crc kubenswrapper[4840]: I0311 08:58:41.857814 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:41 crc kubenswrapper[4840]: I0311 08:58:41.857826 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:41 crc kubenswrapper[4840]: I0311 08:58:41.857845 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:41 crc kubenswrapper[4840]: I0311 08:58:41.857859 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:41Z","lastTransitionTime":"2026-03-11T08:58:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:41 crc kubenswrapper[4840]: I0311 08:58:41.961229 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:41 crc kubenswrapper[4840]: I0311 08:58:41.961289 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:41 crc kubenswrapper[4840]: I0311 08:58:41.961302 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:41 crc kubenswrapper[4840]: I0311 08:58:41.961334 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:41 crc kubenswrapper[4840]: I0311 08:58:41.961356 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:41Z","lastTransitionTime":"2026-03-11T08:58:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:42 crc kubenswrapper[4840]: I0311 08:58:42.065072 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6e87442b-4d54-472c-bad6-e2086c95df50-metrics-certs\") pod \"network-metrics-daemon-gjgkz\" (UID: \"6e87442b-4d54-472c-bad6-e2086c95df50\") " pod="openshift-multus/network-metrics-daemon-gjgkz" Mar 11 08:58:42 crc kubenswrapper[4840]: E0311 08:58:42.071952 4840 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 11 08:58:42 crc kubenswrapper[4840]: E0311 08:58:42.072074 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6e87442b-4d54-472c-bad6-e2086c95df50-metrics-certs podName:6e87442b-4d54-472c-bad6-e2086c95df50 nodeName:}" failed. No retries permitted until 2026-03-11 08:58:44.072041654 +0000 UTC m=+122.737711479 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6e87442b-4d54-472c-bad6-e2086c95df50-metrics-certs") pod "network-metrics-daemon-gjgkz" (UID: "6e87442b-4d54-472c-bad6-e2086c95df50") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 11 08:58:42 crc kubenswrapper[4840]: I0311 08:58:42.072973 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gjgkz" Mar 11 08:58:42 crc kubenswrapper[4840]: E0311 08:58:42.073230 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gjgkz" podUID="6e87442b-4d54-472c-bad6-e2086c95df50" Mar 11 08:58:42 crc kubenswrapper[4840]: E0311 08:58:42.074378 4840 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Mar 11 08:58:42 crc kubenswrapper[4840]: I0311 08:58:42.097091 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-brtht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://153456b9c54a907a1dada25ba9e84b3fe9f9b44a6e6b8e96d512944fb56884a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T
08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8c8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0602373fe91019f9b20e701e042782a4eb5878ae2df86375738bc605412a803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8c8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-brtht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:42Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:42 crc kubenswrapper[4840]: I0311 08:58:42.113743 4840 
status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c3b2bde-8421-4e22-85ab-8b651c65bc9e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ed311d75feec58f86d1d9f435c6115a463b4e7cd3003b6dff8447360271b6a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mo
untPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33384f01914fa6428a8c359b3de0d20963b933f5c6d47519f059e48f85c9f4c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca2279f323c0a6baf645b62c496c38d3d2ad4efc5033a0819ed1d58f4d862e10\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe2e8e966f7348e19e40d9ac00cabea3da1a67e4d7b149c094505622bf5d6cbc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f89
45c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58277efdbfe7c94ad785e7d31bb5ba7313d04bf930896d6ded66fd44dd6239b5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-11T08:57:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW0311 08:57:44.699595 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0311 08:57:44.699747 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0311 08:57:44.700340 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2673359297/tls.crt::/tmp/serving-cert-2673359297/tls.key\\\\\\\"\\\\nI0311 08:57:44.923694 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0311 08:57:44.927612 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0311 08:57:44.927650 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0311 08:57:44.927690 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0311 08:57:44.927701 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0311 08:57:44.934515 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0311 08:57:44.934548 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0311 08:57:44.934556 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:57:44.934567 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' 
detected.\\\\nW0311 08:57:44.934574 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0311 08:57:44.934577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0311 08:57:44.934581 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0311 08:57:44.934584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0311 08:57:44.935676 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:57:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bd25fbbac425c9ba1169b1106b9ac77a80739a003bd795033d691ee273e0d3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e27a2ab44a237582284cb5f55d2651f7b5d39c199fdb62a4a65be9921e86945c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc358257
71aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e27a2ab44a237582284cb5f55d2651f7b5d39c199fdb62a4a65be9921e86945c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:56:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:56:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:56:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:42Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:42 crc kubenswrapper[4840]: I0311 08:58:42.129302 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a26aaac35b1d936b4ca0b8a96fccd108317a1456b2bac1c21eb28017ac6fd32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:42Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:42 crc kubenswrapper[4840]: E0311 08:58:42.135918 4840 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 11 08:58:42 crc kubenswrapper[4840]: I0311 08:58:42.144757 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jlzht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97907402-fb5a-4fb4-80ac-5b600527c547\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2004ebf76b86d594e220fd1f85f945d24094ff26400ba17e286e95b2e3a8d7a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjngl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jlzht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:42Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:42 crc kubenswrapper[4840]: I0311 08:58:42.166045 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xn47g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6b1fe1a-6473-41f8-a45f-aaaa148c1412\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fd2c3055a64a29be3fef394c48782227519d0d445ca2dc38a6f9093f7698b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a889eccbb6a0cbc75c8ed8ddeffb1713f30c667182a4ba135863c46bd6b81a10\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a889eccbb6a0cbc75c8ed8ddeffb1713f30c667182a4ba135863c46bd6b81a10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c2ce0ba741f2aad38db697e3bf19c9e912296fd737657feace60a93382e2f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c2ce0ba741f2aad38db697e3bf19c9e912296fd737657feace60a93382e2f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98d87bbbdfbd9708e04ae69bfe137afd132af84db3b4328625bca81dfc385d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98d87bbbdfbd9708e04ae69bfe137afd132af84db3b4328625bca81dfc385d92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b251
79d0a842def4bdb185e250d96cabd12909f5bc849f186d714721ba20a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b25179d0a842def4bdb185e250d96cabd12909f5bc849f186d714721ba20a11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://156a271cc5e6bd9b7ea68a0a1486ba9460f2d31812bb42a1c06a9e79439e1c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://156a271cc5e6bd9b7ea68a0a1486ba9460f2d31812bb42a1c06a9e79439e1c16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:33Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62a49366d891284607145968679e0c211b99a48d955857bdf0577792e0fc1f18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62a49366d891284607145968679e0c211b99a48d955857bdf0577792e0fc1f18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xn47g\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:42Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:42 crc kubenswrapper[4840]: I0311 08:58:42.182482 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vcb9n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c1678fd-7741-474b-9c8e-3008d3570921\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://affb76bb8b08157d415301fc93f48fe8c675137a1f6087bcade89dc117748638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-
03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-46dj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vcb9n\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:42Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:42 crc kubenswrapper[4840]: I0311 08:58:42.206262 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"935336e2-294b-4982-83f9-718806d14e5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://047c371ede152b2bfc450a373d2c3668e92dfed75022ebf12644c116db589373\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://910e2e8b981e2ab6212cd615b1c5134fde5d4cf4f85220c2613e3e301f99293c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa38a6bbbd74dad77c64f6f59df35d12881619da7319501839cf0f1eb44c65cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a22746f21587fd8fd3d4a7350442c72a3131d5a9f5e661ff05d78377c5a00cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://430dc74cf3f3dcf9a87782869390e477899521c6ff0e704ba6272a017b90081d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaf42bd09d527f5ce4ce8b0619c9b61f56ad6a2a5edf92cac68897a2ada84b24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa44cbb9a24ea530f3aca75e40aa270291e88352a947597d4997175ebd38f746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa44cbb9a24ea530f3aca75e40aa270291e88352a947597d4997175ebd38f746\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-11T08:58:39Z\\\",\\\"message\\\":\\\"milyPolicy:*SingleStack,ClusterIPs:[10.217.5.253],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0311 08:58:38.705363 6867 ovnkube.go:137] 
failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:38Z is after 2025-08-24T17:21:41Z]\\\\nI0311 08:58:38.705374 6867 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]} options:{GoMap:map[iface-id-ver:5fe485a1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-7c2zl_openshift-ovn-kubernetes(935336e2-294b-4982-83f9-718806d14e5c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1338b0d6cdb9527554429be2e6fdec5c0b98075978344d168fd6e363eb12c879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63877eb44dae815dbd71053c89313ba836c2fdd90cc3d6d299526c027887e19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63877eb44dae815db
d71053c89313ba836c2fdd90cc3d6d299526c027887e19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7c2zl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:42Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:42 crc kubenswrapper[4840]: I0311 08:58:42.220428 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9stqc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a97da03-4135-4850-9393-640dffab4289\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57ffc84765682190c64926aa41837eb62aed5f155016eda6cec84aca86e1ac31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7j7fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf5b701d5975fea1d305cf5ef501224f312a6
a3afd03977a691910a5acaf9ce7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7j7fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9stqc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:42Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:42 crc kubenswrapper[4840]: I0311 08:58:42.231868 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gjgkz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e87442b-4d54-472c-bad6-e2086c95df50\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4h65t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4h65t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gjgkz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:42Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:42 crc 
kubenswrapper[4840]: I0311 08:58:42.244732 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:42Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:42 crc kubenswrapper[4840]: I0311 08:58:42.256890 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:42Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:42 crc kubenswrapper[4840]: I0311 08:58:42.270344 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4tjtn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c3b7839-5a3d-42f4-a871-1baa77d4f6a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3703388f56cc4d0f73dcdc31a5fae2dd6cc275b3a86ae8ca3733190a08bac57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5b8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4tjtn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:42Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:42 crc kubenswrapper[4840]: I0311 08:58:42.284235 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb68ea79d13c044390345ef25093ba46a60cb10b989c5acd69c63cafc1c4631f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T0
8:58:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:42Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:42 crc kubenswrapper[4840]: I0311 08:58:42.296948 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:42Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:42 crc kubenswrapper[4840]: I0311 08:58:42.307911 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a735ab91afbbc50a948e293cb4907a10212b9a77c9e7506e21d75a4de4c74c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a498e6ab267b758bd2ba69dcbd7e6af3089b6c38bbf50dea80f1304c2190f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:42Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:43 crc kubenswrapper[4840]: I0311 08:58:43.060137 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:58:43 crc kubenswrapper[4840]: I0311 08:58:43.060256 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:58:43 crc kubenswrapper[4840]: I0311 08:58:43.060337 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 08:58:43 crc kubenswrapper[4840]: E0311 08:58:43.060445 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 11 08:58:43 crc kubenswrapper[4840]: E0311 08:58:43.061363 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 11 08:58:43 crc kubenswrapper[4840]: E0311 08:58:43.061498 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 11 08:58:44 crc kubenswrapper[4840]: I0311 08:58:44.060248 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-gjgkz" Mar 11 08:58:44 crc kubenswrapper[4840]: E0311 08:58:44.060457 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gjgkz" podUID="6e87442b-4d54-472c-bad6-e2086c95df50" Mar 11 08:58:44 crc kubenswrapper[4840]: I0311 08:58:44.088169 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6e87442b-4d54-472c-bad6-e2086c95df50-metrics-certs\") pod \"network-metrics-daemon-gjgkz\" (UID: \"6e87442b-4d54-472c-bad6-e2086c95df50\") " pod="openshift-multus/network-metrics-daemon-gjgkz" Mar 11 08:58:44 crc kubenswrapper[4840]: E0311 08:58:44.088438 4840 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 11 08:58:44 crc kubenswrapper[4840]: E0311 08:58:44.088577 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6e87442b-4d54-472c-bad6-e2086c95df50-metrics-certs podName:6e87442b-4d54-472c-bad6-e2086c95df50 nodeName:}" failed. No retries permitted until 2026-03-11 08:58:48.088545431 +0000 UTC m=+126.754215426 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6e87442b-4d54-472c-bad6-e2086c95df50-metrics-certs") pod "network-metrics-daemon-gjgkz" (UID: "6e87442b-4d54-472c-bad6-e2086c95df50") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 11 08:58:45 crc kubenswrapper[4840]: I0311 08:58:45.059553 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 08:58:45 crc kubenswrapper[4840]: I0311 08:58:45.059605 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:58:45 crc kubenswrapper[4840]: E0311 08:58:45.059761 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 11 08:58:45 crc kubenswrapper[4840]: E0311 08:58:45.059921 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 11 08:58:45 crc kubenswrapper[4840]: I0311 08:58:45.060449 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:58:45 crc kubenswrapper[4840]: E0311 08:58:45.061740 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 11 08:58:45 crc kubenswrapper[4840]: I0311 08:58:45.621427 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 11 08:58:45 crc kubenswrapper[4840]: I0311 08:58:45.635984 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:45Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:45 crc kubenswrapper[4840]: I0311 08:58:45.653046 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a735ab91afbbc50a948e293cb4907a10212b9a77c9e7506e21d75a4de4c74c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a498e6ab267b758bd2ba69dcbd7e6af3089b6c38bbf50dea80f1304c2190f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:45Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:45 crc kubenswrapper[4840]: I0311 08:58:45.666907 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xn47g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6b1fe1a-6473-41f8-a45f-aaaa148c1412\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fd2c3055a64a29be3fef394c48782227519d0d445ca2dc38a6f9093f7698b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a889eccbb6a0cbc75c8ed8ddeffb1713f30c667182a4ba135863c46bd6b81a10\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a889eccbb6a0cbc75c8ed8ddeffb1713f30c667182a4ba135863c46bd6b81a10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c2ce0ba741f2aad38db697e3bf19c9e912296fd737657feace60a93382e2f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c2ce0ba741f2aad38db697e3bf19c9e912296fd737657feace60a93382e2f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98d87bbbdfbd9708e04ae69bfe137afd132af84db3b4328625bca81dfc385d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98d87bbbdfbd9708e04ae69bfe137afd132af84db3b4328625bca81dfc385d92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b251
79d0a842def4bdb185e250d96cabd12909f5bc849f186d714721ba20a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b25179d0a842def4bdb185e250d96cabd12909f5bc849f186d714721ba20a11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://156a271cc5e6bd9b7ea68a0a1486ba9460f2d31812bb42a1c06a9e79439e1c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://156a271cc5e6bd9b7ea68a0a1486ba9460f2d31812bb42a1c06a9e79439e1c16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:33Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62a49366d891284607145968679e0c211b99a48d955857bdf0577792e0fc1f18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62a49366d891284607145968679e0c211b99a48d955857bdf0577792e0fc1f18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xn47g\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:45Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:45 crc kubenswrapper[4840]: I0311 08:58:45.680411 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vcb9n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c1678fd-7741-474b-9c8e-3008d3570921\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://affb76bb8b08157d415301fc93f48fe8c675137a1f6087bcade89dc117748638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-
03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-46dj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vcb9n\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:45Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:45 crc kubenswrapper[4840]: I0311 08:58:45.693454 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-brtht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://153456b9c54a907a1dada25ba9e84b3fe9f9b44a6e6b8e96d512944fb56884a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8c8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0602373fe91019f9b20e701e042782a4eb5878ae2df86375738bc605412a803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8c8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-brtht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:45Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:45 crc kubenswrapper[4840]: 
I0311 08:58:45.711555 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c3b2bde-8421-4e22-85ab-8b651c65bc9e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ed311d75feec58f86d1d9f435c6115a463b4e7cd3003b6dff8447360271b6a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33384f01914fa6428a8c359b3de0d20963b
933f5c6d47519f059e48f85c9f4c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca2279f323c0a6baf645b62c496c38d3d2ad4efc5033a0819ed1d58f4d862e10\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe2e8e966f7348e19e40d9ac00cabea3da1a67e4d7b149c094505622bf5d6cbc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"con
tainerID\\\":\\\"cri-o://58277efdbfe7c94ad785e7d31bb5ba7313d04bf930896d6ded66fd44dd6239b5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-11T08:57:44Z\\\",\\\"message\\\":\\\"le observer\\\\nW0311 08:57:44.699595 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0311 08:57:44.699747 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0311 08:57:44.700340 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2673359297/tls.crt::/tmp/serving-cert-2673359297/tls.key\\\\\\\"\\\\nI0311 08:57:44.923694 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0311 08:57:44.927612 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0311 08:57:44.927650 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0311 08:57:44.927690 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0311 08:57:44.927701 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0311 08:57:44.934515 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0311 08:57:44.934548 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0311 08:57:44.934556 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:57:44.934567 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:57:44.934574 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0311 08:57:44.934577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' 
detected.\\\\nW0311 08:57:44.934581 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0311 08:57:44.934584 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0311 08:57:44.935676 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:57:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bd25fbbac425c9ba1169b1106b9ac77a80739a003bd795033d691ee273e0d3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e27a2ab44a237582284cb5f55d2651f7b5d39c199fdb62a4a65be9921e86945c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\"
:\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e27a2ab44a237582284cb5f55d2651f7b5d39c199fdb62a4a65be9921e86945c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:56:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:56:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:56:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:45Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:45 crc kubenswrapper[4840]: I0311 08:58:45.724598 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a26aaac35b1d936b4ca0b8a96fccd108317a1456b2bac1c21eb28017ac6fd32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:45Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:45 crc kubenswrapper[4840]: I0311 08:58:45.738540 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jlzht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97907402-fb5a-4fb4-80ac-5b600527c547\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2004ebf76b86d594e220fd1f85f945d24094ff26400ba17e286e95b2e3a8d7a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjngl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jlzht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:45Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:45 crc kubenswrapper[4840]: I0311 08:58:45.762058 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"935336e2-294b-4982-83f9-718806d14e5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"message\\\":\\\"containers 
with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://047c371ede152b2bfc450a373d2c3668e92dfed75022ebf12644c116db589373\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://910e2e8b981e2ab6212cd615b1c5134fde5d4cf4f85220c2613e3e301f99293c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ov
n-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa38a6bbbd74dad77c64f6f59df35d12881619da7319501839cf0f1eb44c65cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a22746f21587fd8fd3d4a7350442c72a3131d5a9f5e661ff05d78377c5a00cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://430dc74cf3f3dcf9a87782869390e477899521c6ff0e704ba6272a017b90081d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaf42bd09d527f5ce4ce8b0619c9b61f56ad6a2a5edf92cac68897a2ada84b24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\
\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa44cbb9a24ea530f3aca75e40aa270291e88352a947597d4997175ebd38f746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa44cbb9a24ea530f3aca75e40aa270291e88352a947597d4997175ebd38f746\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-11T08:58:39Z\\\",\\\"message\\\":\\\"milyPolicy:*SingleStack,ClusterIPs:[10.217.5.253],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0311 08:58:38.705363 6867 
ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:38Z is after 2025-08-24T17:21:41Z]\\\\nI0311 08:58:38.705374 6867 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]} options:{GoMap:map[iface-id-ver:5fe485a1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-7c2zl_openshift-ovn-kubernetes(935336e2-294b-4982-83f9-718806d14e5c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1338b0d6cdb9527554429be2e6fdec5c0b98075978344d168fd6e363eb12c879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63877eb44dae815dbd71053c89313ba836c2fdd90cc3d6d299526c027887e19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63877eb44dae815db
d71053c89313ba836c2fdd90cc3d6d299526c027887e19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7c2zl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:45Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:45 crc kubenswrapper[4840]: I0311 08:58:45.776940 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9stqc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a97da03-4135-4850-9393-640dffab4289\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57ffc84765682190c64926aa41837eb62aed5f155016eda6cec84aca86e1ac31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7j7fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf5b701d5975fea1d305cf5ef501224f312a6
a3afd03977a691910a5acaf9ce7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7j7fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9stqc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:45Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:45 crc kubenswrapper[4840]: I0311 08:58:45.790586 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gjgkz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e87442b-4d54-472c-bad6-e2086c95df50\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4h65t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4h65t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gjgkz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:45Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:45 crc 
kubenswrapper[4840]: I0311 08:58:45.808641 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:45Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:45 crc kubenswrapper[4840]: I0311 08:58:45.815528 4840 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Mar 11 08:58:45 crc kubenswrapper[4840]: I0311 08:58:45.826326 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:45Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:45 crc kubenswrapper[4840]: I0311 08:58:45.838824 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4tjtn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c3b7839-5a3d-42f4-a871-1baa77d4f6a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3703388f56cc4d0f73dcdc31a5fae2dd6cc275b3a86ae8ca3733190a08bac57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5b8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4tjtn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:45Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:45 crc kubenswrapper[4840]: I0311 08:58:45.851911 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb68ea79d13c044390345ef25093ba46a60cb10b989c5acd69c63cafc1c4631f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T0
8:58:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:45Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:46 crc kubenswrapper[4840]: I0311 08:58:46.059682 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gjgkz" Mar 11 08:58:46 crc kubenswrapper[4840]: E0311 08:58:46.059954 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gjgkz" podUID="6e87442b-4d54-472c-bad6-e2086c95df50" Mar 11 08:58:47 crc kubenswrapper[4840]: I0311 08:58:47.059431 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:58:47 crc kubenswrapper[4840]: I0311 08:58:47.059431 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 08:58:47 crc kubenswrapper[4840]: I0311 08:58:47.059595 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:58:47 crc kubenswrapper[4840]: E0311 08:58:47.059694 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 11 08:58:47 crc kubenswrapper[4840]: E0311 08:58:47.059800 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 11 08:58:47 crc kubenswrapper[4840]: E0311 08:58:47.059963 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 11 08:58:47 crc kubenswrapper[4840]: E0311 08:58:47.138927 4840 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 11 08:58:48 crc kubenswrapper[4840]: I0311 08:58:48.059623 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gjgkz" Mar 11 08:58:48 crc kubenswrapper[4840]: E0311 08:58:48.059860 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gjgkz" podUID="6e87442b-4d54-472c-bad6-e2086c95df50" Mar 11 08:58:48 crc kubenswrapper[4840]: I0311 08:58:48.133273 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6e87442b-4d54-472c-bad6-e2086c95df50-metrics-certs\") pod \"network-metrics-daemon-gjgkz\" (UID: \"6e87442b-4d54-472c-bad6-e2086c95df50\") " pod="openshift-multus/network-metrics-daemon-gjgkz" Mar 11 08:58:48 crc kubenswrapper[4840]: E0311 08:58:48.133437 4840 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 11 08:58:48 crc kubenswrapper[4840]: E0311 08:58:48.134052 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6e87442b-4d54-472c-bad6-e2086c95df50-metrics-certs podName:6e87442b-4d54-472c-bad6-e2086c95df50 nodeName:}" failed. 
No retries permitted until 2026-03-11 08:58:56.133959589 +0000 UTC m=+134.799629454 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6e87442b-4d54-472c-bad6-e2086c95df50-metrics-certs") pod "network-metrics-daemon-gjgkz" (UID: "6e87442b-4d54-472c-bad6-e2086c95df50") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 11 08:58:49 crc kubenswrapper[4840]: I0311 08:58:49.059680 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:58:49 crc kubenswrapper[4840]: E0311 08:58:49.059959 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 11 08:58:49 crc kubenswrapper[4840]: I0311 08:58:49.059714 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:58:49 crc kubenswrapper[4840]: I0311 08:58:49.059680 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 08:58:49 crc kubenswrapper[4840]: E0311 08:58:49.060092 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 11 08:58:49 crc kubenswrapper[4840]: E0311 08:58:49.060612 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 11 08:58:50 crc kubenswrapper[4840]: I0311 08:58:50.029810 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:50 crc kubenswrapper[4840]: I0311 08:58:50.029851 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:50 crc kubenswrapper[4840]: I0311 08:58:50.029861 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:50 crc kubenswrapper[4840]: I0311 08:58:50.029875 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:50 crc kubenswrapper[4840]: I0311 08:58:50.029883 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:50Z","lastTransitionTime":"2026-03-11T08:58:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:50 crc kubenswrapper[4840]: E0311 08:58:50.043341 4840 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:58:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:58:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:58:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:58:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b40dc5ac-6e20-4fe3-8d4f-1dab2691799c\\\",\\\"systemUUID\\\":\\\"e5bb6cc6-19d8-441f-bba6-b926930273a7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:50Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:50 crc kubenswrapper[4840]: I0311 08:58:50.047635 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:50 crc kubenswrapper[4840]: I0311 08:58:50.047682 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:50 crc kubenswrapper[4840]: I0311 08:58:50.047693 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:50 crc kubenswrapper[4840]: I0311 08:58:50.047712 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:50 crc kubenswrapper[4840]: I0311 08:58:50.047722 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:50Z","lastTransitionTime":"2026-03-11T08:58:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:58:50 crc kubenswrapper[4840]: I0311 08:58:50.059122 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-gjgkz" Mar 11 08:58:50 crc kubenswrapper[4840]: E0311 08:58:50.059269 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gjgkz" podUID="6e87442b-4d54-472c-bad6-e2086c95df50" Mar 11 08:58:50 crc kubenswrapper[4840]: E0311 08:58:50.062075 4840 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:58:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:58:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:58:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:58:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b40dc5ac-6e20-4fe3-8d4f-1dab2691799c\\\",\\\"systemUUID\\\":\\\"e5bb6cc6-19d8-441f-bba6-b926930273a7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:50Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:50 crc kubenswrapper[4840]: I0311 08:58:50.066698 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:50 crc kubenswrapper[4840]: I0311 08:58:50.066742 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:50 crc kubenswrapper[4840]: I0311 08:58:50.066759 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:50 crc kubenswrapper[4840]: I0311 08:58:50.066776 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:50 crc kubenswrapper[4840]: I0311 08:58:50.066791 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:50Z","lastTransitionTime":"2026-03-11T08:58:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:50 crc kubenswrapper[4840]: E0311 08:58:50.080223 4840 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:58:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:58:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:58:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:58:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b40dc5ac-6e20-4fe3-8d4f-1dab2691799c\\\",\\\"systemUUID\\\":\\\"e5bb6cc6-19d8-441f-bba6-b926930273a7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:50Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:50 crc kubenswrapper[4840]: I0311 08:58:50.085457 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:50 crc kubenswrapper[4840]: I0311 08:58:50.085506 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:50 crc kubenswrapper[4840]: I0311 08:58:50.085516 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:50 crc kubenswrapper[4840]: I0311 08:58:50.085533 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:50 crc kubenswrapper[4840]: I0311 08:58:50.085545 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:50Z","lastTransitionTime":"2026-03-11T08:58:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:50 crc kubenswrapper[4840]: E0311 08:58:50.099311 4840 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:58:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:58:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:58:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:58:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b40dc5ac-6e20-4fe3-8d4f-1dab2691799c\\\",\\\"systemUUID\\\":\\\"e5bb6cc6-19d8-441f-bba6-b926930273a7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:50Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:50 crc kubenswrapper[4840]: I0311 08:58:50.103163 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:58:50 crc kubenswrapper[4840]: I0311 08:58:50.103208 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:58:50 crc kubenswrapper[4840]: I0311 08:58:50.103225 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:58:50 crc kubenswrapper[4840]: I0311 08:58:50.103249 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:58:50 crc kubenswrapper[4840]: I0311 08:58:50.103267 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:58:50Z","lastTransitionTime":"2026-03-11T08:58:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:58:50 crc kubenswrapper[4840]: E0311 08:58:50.117665 4840 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:58:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:58:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:58:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:58:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b40dc5ac-6e20-4fe3-8d4f-1dab2691799c\\\",\\\"systemUUID\\\":\\\"e5bb6cc6-19d8-441f-bba6-b926930273a7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:50Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:50 crc kubenswrapper[4840]: E0311 08:58:50.118259 4840 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 11 08:58:51 crc kubenswrapper[4840]: I0311 08:58:51.059552 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 08:58:51 crc kubenswrapper[4840]: I0311 08:58:51.059628 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:58:51 crc kubenswrapper[4840]: I0311 08:58:51.059776 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:58:51 crc kubenswrapper[4840]: E0311 08:58:51.059855 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 11 08:58:51 crc kubenswrapper[4840]: E0311 08:58:51.060013 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 11 08:58:51 crc kubenswrapper[4840]: E0311 08:58:51.060569 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 11 08:58:51 crc kubenswrapper[4840]: I0311 08:58:51.070771 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Mar 11 08:58:52 crc kubenswrapper[4840]: I0311 08:58:52.060515 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gjgkz" Mar 11 08:58:52 crc kubenswrapper[4840]: E0311 08:58:52.060798 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gjgkz" podUID="6e87442b-4d54-472c-bad6-e2086c95df50" Mar 11 08:58:52 crc kubenswrapper[4840]: I0311 08:58:52.082311 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:52Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:52 crc kubenswrapper[4840]: I0311 08:58:52.101037 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:52Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:52 crc kubenswrapper[4840]: I0311 08:58:52.118952 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4tjtn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c3b7839-5a3d-42f4-a871-1baa77d4f6a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3703388f56cc4d0f73dcdc31a5fae2dd6cc275b3a86ae8ca3733190a08bac57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5b8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4tjtn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:52Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:52 crc kubenswrapper[4840]: E0311 08:58:52.139608 4840 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 11 08:58:52 crc kubenswrapper[4840]: I0311 08:58:52.146849 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb68ea79d13c044390345ef25093ba46a60cb10b989c5acd69c63cafc1c4631f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-11T08:58:52Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:52 crc kubenswrapper[4840]: I0311 08:58:52.169912 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccc0ba5b-4f84-48fc-a334-73db829e45a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68fa952dc486db4733cdc74294744871c4e8e27f3a797c3bb641c93bc1ba7549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"c
ontainerID\\\":\\\"cri-o://191f6779c704bc04c6db18ab7604aa56472fa73c65a49ab54f8213a17dfc89dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b8b376bd2621e314b7488c7f19023eb2927deea932f4764289fa82d1309f6d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://656c404aff0b9e454e59bceb73d79f5d045ac4c05de3dee6f0f38019bf78fac4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"i
mageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://656c404aff0b9e454e59bceb73d79f5d045ac4c05de3dee6f0f38019bf78fac4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:56:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:56:43Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:56:42Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:52Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:52 crc kubenswrapper[4840]: I0311 08:58:52.191372 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:52Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:52 crc kubenswrapper[4840]: I0311 08:58:52.209675 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a735ab91afbbc50a948e293cb4907a10212b9a77c9e7506e21d75a4de4c74c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a498e6ab267b758bd2ba69dcbd7e6af3089b6c38bbf50dea80f1304c2190f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:52Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:52 crc kubenswrapper[4840]: I0311 08:58:52.228410 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a26aaac35b1d936b4ca0b8a96fccd108317a1456b2bac1c21eb28017ac6fd32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:52Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:52 crc kubenswrapper[4840]: I0311 08:58:52.244884 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jlzht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97907402-fb5a-4fb4-80ac-5b600527c547\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2004ebf76b86d594e220fd1f85f945d24094ff26400ba17e286e95b2e3a8d7a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjngl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jlzht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:52Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:52 crc kubenswrapper[4840]: I0311 08:58:52.265631 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xn47g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6b1fe1a-6473-41f8-a45f-aaaa148c1412\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fd2c3055a64a29b
e3fef394c48782227519d0d445ca2dc38a6f9093f7698b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a889eccbb6a0cbc75c8ed8ddeffb1713f30c667182a4ba135863c46bd6b81a10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a889eccbb6a0cbc75c8ed8ddeffb1713f30c667182a4ba135863c46bd6b81a10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c2ce0ba741f2aad38db697e3bf19c9e912296fd737657feace60a93382e2f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c2ce0ba741f2aad38db697e3bf19c9e912296fd737657feace60a93382e2f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98d87bbbdfbd9708e04ae69bfe137afd132af84db3b4328625bca81dfc385d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b
64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98d87bbbdfbd9708e04ae69bfe137afd132af84db3b4328625bca81dfc385d92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b25179d0a842def4bdb185e250d96cabd12909f5bc849f186d714721ba20a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b25179d0a842def4bdb185e250d96cabd12909f5bc849f186d714721ba20a11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://156a271cc5e6bd9b7ea68a0a1486ba9460f2d31812bb42a1c06a9e79439e1c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://156a271cc5e6bd9b7ea68a0a1486ba9460f2d31812bb42a1c06a9e79439e1c16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62a49366d891284607145968679e0c211b99a48d955857bdf0577792e0fc1f18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"read
y\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62a49366d891284607145968679e0c211b99a48d955857bdf0577792e0fc1f18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xn47g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:52Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:52 crc kubenswrapper[4840]: I0311 08:58:52.286224 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vcb9n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c1678fd-7741-474b-9c8e-3008d3570921\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://affb76bb8b08157d415301fc93f48fe8c675137a1f6087bcade89dc117748638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-46dj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vcb9n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:52Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:52 crc kubenswrapper[4840]: I0311 08:58:52.303628 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-brtht" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://153456b9c54a907a1dada25ba9e84b3fe9f9b44a6e6b8e96d512944fb56884a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8c8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0602373fe91
019f9b20e701e042782a4eb5878ae2df86375738bc605412a803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8c8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-brtht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:52Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:52 crc kubenswrapper[4840]: I0311 08:58:52.322169 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c3b2bde-8421-4e22-85ab-8b651c65bc9e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ed311d75feec58f86d1d9f435c6115a463b4e7cd3003b6dff8447360271b6a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33384f01914fa6428a8c359b3de0d20963b933f5c6d47519f059e48f85c9f4c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca2279f323c0a6baf645b62c496c38d3d2ad4efc5033a0819ed1d58f4d862e10\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe2e8e966f7348e19e40d9ac00cabea3da1a67e4d7b149c094505622bf5d6cbc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58277efdbfe7c94ad785e7d31bb5ba7313d04bf930896d6ded66fd44dd6239b5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-11T08:57:44Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0311 08:57:44.699595 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0311 08:57:44.699747 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0311 08:57:44.700340 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2673359297/tls.crt::/tmp/serving-cert-2673359297/tls.key\\\\\\\"\\\\nI0311 08:57:44.923694 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0311 08:57:44.927612 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0311 08:57:44.927650 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0311 08:57:44.927690 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0311 08:57:44.927701 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0311 08:57:44.934515 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0311 08:57:44.934548 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0311 08:57:44.934556 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:57:44.934567 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:57:44.934574 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0311 08:57:44.934577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0311 08:57:44.934581 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0311 08:57:44.934584 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0311 08:57:44.935676 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:57:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bd25fbbac425c9ba1169b1106b9ac77a80739a003bd795033d691ee273e0d3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e27a2ab44a237582284cb5f55d2651f7b5d39c199fdb62a4a65be9921e86945c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e27a2ab44a237582284cb5f55d2651f7b5d
39c199fdb62a4a65be9921e86945c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:56:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:56:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:56:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:52Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:52 crc kubenswrapper[4840]: I0311 08:58:52.349111 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"935336e2-294b-4982-83f9-718806d14e5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://047c371ede152b2bfc450a373d2c3668e92dfed75022ebf12644c116db589373\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://910e2e8b981e2ab6212cd615b1c5134fde5d4cf4f85220c2613e3e301f99293c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa38a6bbbd74dad77c64f6f59df35d12881619da7319501839cf0f1eb44c65cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a22746f21587fd8fd3d4a7350442c72a3131d5a9f5e661ff05d78377c5a00cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://430dc74cf3f3dcf9a87782869390e477899521c6ff0e704ba6272a017b90081d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaf42bd09d527f5ce4ce8b0619c9b61f56ad6a2a5edf92cac68897a2ada84b24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa44cbb9a24ea530f3aca75e40aa270291e88352a947597d4997175ebd38f746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa44cbb9a24ea530f3aca75e40aa270291e88352a947597d4997175ebd38f746\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-11T08:58:39Z\\\",\\\"message\\\":\\\"milyPolicy:*SingleStack,ClusterIPs:[10.217.5.253],IPFamilies:[IPv4],AllocateLoadBalancerNo
dePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0311 08:58:38.705363 6867 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:38Z is after 2025-08-24T17:21:41Z]\\\\nI0311 08:58:38.705374 6867 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]} options:{GoMap:map[iface-id-ver:5fe485a1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-7c2zl_openshift-ovn-kubernetes(935336e2-294b-4982-83f9-718806d14e5c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1338b0d6cdb9527554429be2e6fdec5c0b98075978344d168fd6e363eb12c879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63877eb44dae815dbd71053c89313ba836c2fdd90cc3d6d299526c027887e19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63877eb44dae815db
d71053c89313ba836c2fdd90cc3d6d299526c027887e19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7c2zl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:52Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:52 crc kubenswrapper[4840]: I0311 08:58:52.367144 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9stqc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a97da03-4135-4850-9393-640dffab4289\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57ffc84765682190c64926aa41837eb62aed5f155016eda6cec84aca86e1ac31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7j7fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf5b701d5975fea1d305cf5ef501224f312a6
a3afd03977a691910a5acaf9ce7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7j7fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9stqc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:52Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:52 crc kubenswrapper[4840]: I0311 08:58:52.386978 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gjgkz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e87442b-4d54-472c-bad6-e2086c95df50\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4h65t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4h65t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gjgkz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:52Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:53 crc 
kubenswrapper[4840]: I0311 08:58:53.059708 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:58:53 crc kubenswrapper[4840]: I0311 08:58:53.059791 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:58:53 crc kubenswrapper[4840]: I0311 08:58:53.059804 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 08:58:53 crc kubenswrapper[4840]: E0311 08:58:53.059911 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 11 08:58:53 crc kubenswrapper[4840]: E0311 08:58:53.060052 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 11 08:58:53 crc kubenswrapper[4840]: E0311 08:58:53.060230 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 11 08:58:54 crc kubenswrapper[4840]: I0311 08:58:54.059300 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gjgkz" Mar 11 08:58:54 crc kubenswrapper[4840]: E0311 08:58:54.059495 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gjgkz" podUID="6e87442b-4d54-472c-bad6-e2086c95df50" Mar 11 08:58:55 crc kubenswrapper[4840]: I0311 08:58:55.059380 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 08:58:55 crc kubenswrapper[4840]: I0311 08:58:55.059443 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:58:55 crc kubenswrapper[4840]: E0311 08:58:55.059682 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 11 08:58:55 crc kubenswrapper[4840]: I0311 08:58:55.059767 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:58:55 crc kubenswrapper[4840]: E0311 08:58:55.059991 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 11 08:58:55 crc kubenswrapper[4840]: E0311 08:58:55.060115 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 11 08:58:56 crc kubenswrapper[4840]: I0311 08:58:56.059565 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gjgkz" Mar 11 08:58:56 crc kubenswrapper[4840]: E0311 08:58:56.059888 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gjgkz" podUID="6e87442b-4d54-472c-bad6-e2086c95df50" Mar 11 08:58:56 crc kubenswrapper[4840]: I0311 08:58:56.061207 4840 scope.go:117] "RemoveContainer" containerID="fa44cbb9a24ea530f3aca75e40aa270291e88352a947597d4997175ebd38f746" Mar 11 08:58:56 crc kubenswrapper[4840]: I0311 08:58:56.234072 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6e87442b-4d54-472c-bad6-e2086c95df50-metrics-certs\") pod \"network-metrics-daemon-gjgkz\" (UID: \"6e87442b-4d54-472c-bad6-e2086c95df50\") " pod="openshift-multus/network-metrics-daemon-gjgkz" Mar 11 08:58:56 crc kubenswrapper[4840]: E0311 08:58:56.234366 4840 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 11 08:58:56 crc kubenswrapper[4840]: E0311 08:58:56.234928 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6e87442b-4d54-472c-bad6-e2086c95df50-metrics-certs podName:6e87442b-4d54-472c-bad6-e2086c95df50 nodeName:}" failed. No retries permitted until 2026-03-11 08:59:12.234896 +0000 UTC m=+150.900565845 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6e87442b-4d54-472c-bad6-e2086c95df50-metrics-certs") pod "network-metrics-daemon-gjgkz" (UID: "6e87442b-4d54-472c-bad6-e2086c95df50") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 11 08:58:56 crc kubenswrapper[4840]: I0311 08:58:56.748585 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7c2zl_935336e2-294b-4982-83f9-718806d14e5c/ovnkube-controller/1.log" Mar 11 08:58:56 crc kubenswrapper[4840]: I0311 08:58:56.751636 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" event={"ID":"935336e2-294b-4982-83f9-718806d14e5c","Type":"ContainerStarted","Data":"0413db34ab0410c5d6e2822520410d9db275b1c4bd9ba1f7343a0c31befc9b0b"} Mar 11 08:58:56 crc kubenswrapper[4840]: I0311 08:58:56.752184 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" Mar 11 08:58:56 crc kubenswrapper[4840]: I0311 08:58:56.770703 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb68ea79d13c044390345ef25093ba46a60cb10b989c5acd69c63cafc1c4631f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-11T08:58:56Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:56 crc kubenswrapper[4840]: I0311 08:58:56.783756 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccc0ba5b-4f84-48fc-a334-73db829e45a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68fa952dc486db4733cdc74294744871c4e8e27f3a797c3bb641c93bc1ba7549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"c
ontainerID\\\":\\\"cri-o://191f6779c704bc04c6db18ab7604aa56472fa73c65a49ab54f8213a17dfc89dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b8b376bd2621e314b7488c7f19023eb2927deea932f4764289fa82d1309f6d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://656c404aff0b9e454e59bceb73d79f5d045ac4c05de3dee6f0f38019bf78fac4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"i
mageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://656c404aff0b9e454e59bceb73d79f5d045ac4c05de3dee6f0f38019bf78fac4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:56:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:56:43Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:56:42Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:56Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:56 crc kubenswrapper[4840]: I0311 08:58:56.797698 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:56Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:56 crc kubenswrapper[4840]: I0311 08:58:56.810885 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a735ab91afbbc50a948e293cb4907a10212b9a77c9e7506e21d75a4de4c74c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a498e6ab267b758bd2ba69dcbd7e6af3089b6c38bbf50dea80f1304c2190f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:56Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:56 crc kubenswrapper[4840]: I0311 08:58:56.843115 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c3b2bde-8421-4e22-85ab-8b651c65bc9e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ed311d75feec58f86d1d9f435c6115a463b4e7cd3003b6dff8447360271b6a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33384f01914fa6428a8c359b3de0d20963b933f5c6d47519f059e48f85c9f4c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca2279f323c0a6baf645b62c496c38d3d2ad4efc5033a0819ed1d58f4d862e10\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe2e8e966f7348e19e40d9ac00cabea3da1a67e4d7b149c094505622bf5d6cbc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58277efdbfe7c94ad785e7d31bb5ba7313d04bf930896d6ded66fd44dd6239b5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-11T08:57:44Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0311 08:57:44.699595 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0311 08:57:44.699747 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0311 08:57:44.700340 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2673359297/tls.crt::/tmp/serving-cert-2673359297/tls.key\\\\\\\"\\\\nI0311 08:57:44.923694 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0311 08:57:44.927612 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0311 08:57:44.927650 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0311 08:57:44.927690 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0311 08:57:44.927701 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0311 08:57:44.934515 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0311 08:57:44.934548 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0311 08:57:44.934556 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:57:44.934567 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:57:44.934574 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0311 08:57:44.934577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0311 08:57:44.934581 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0311 08:57:44.934584 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0311 08:57:44.935676 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:57:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bd25fbbac425c9ba1169b1106b9ac77a80739a003bd795033d691ee273e0d3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e27a2ab44a237582284cb5f55d2651f7b5d39c199fdb62a4a65be9921e86945c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e27a2ab44a237582284cb5f55d2651f7b5d
39c199fdb62a4a65be9921e86945c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:56:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:56:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:56:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:56Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:56 crc kubenswrapper[4840]: I0311 08:58:56.864600 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a26aaac35b1d936b4ca0b8a96fccd108317a1456b2bac1c21eb28017ac6fd32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:56Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:56 crc kubenswrapper[4840]: I0311 08:58:56.880591 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jlzht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97907402-fb5a-4fb4-80ac-5b600527c547\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2004ebf76b86d594e220fd1f85f945d24094ff26400ba17e286e95b2e3a8d7a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjngl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jlzht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:56Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:56 crc kubenswrapper[4840]: I0311 08:58:56.895791 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xn47g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6b1fe1a-6473-41f8-a45f-aaaa148c1412\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fd2c3055a64a29b
e3fef394c48782227519d0d445ca2dc38a6f9093f7698b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a889eccbb6a0cbc75c8ed8ddeffb1713f30c667182a4ba135863c46bd6b81a10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a889eccbb6a0cbc75c8ed8ddeffb1713f30c667182a4ba135863c46bd6b81a10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c2ce0ba741f2aad38db697e3bf19c9e912296fd737657feace60a93382e2f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c2ce0ba741f2aad38db697e3bf19c9e912296fd737657feace60a93382e2f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98d87bbbdfbd9708e04ae69bfe137afd132af84db3b4328625bca81dfc385d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b
64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98d87bbbdfbd9708e04ae69bfe137afd132af84db3b4328625bca81dfc385d92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b25179d0a842def4bdb185e250d96cabd12909f5bc849f186d714721ba20a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b25179d0a842def4bdb185e250d96cabd12909f5bc849f186d714721ba20a11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://156a271cc5e6bd9b7ea68a0a1486ba9460f2d31812bb42a1c06a9e79439e1c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://156a271cc5e6bd9b7ea68a0a1486ba9460f2d31812bb42a1c06a9e79439e1c16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62a49366d891284607145968679e0c211b99a48d955857bdf0577792e0fc1f18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"read
y\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62a49366d891284607145968679e0c211b99a48d955857bdf0577792e0fc1f18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xn47g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:56Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:56 crc kubenswrapper[4840]: I0311 08:58:56.910355 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vcb9n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c1678fd-7741-474b-9c8e-3008d3570921\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://affb76bb8b08157d415301fc93f48fe8c675137a1f6087bcade89dc117748638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-46dj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vcb9n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:56Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:56 crc kubenswrapper[4840]: I0311 08:58:56.922181 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-brtht" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://153456b9c54a907a1dada25ba9e84b3fe9f9b44a6e6b8e96d512944fb56884a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8c8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0602373fe91
019f9b20e701e042782a4eb5878ae2df86375738bc605412a803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8c8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-brtht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:56Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:56 crc kubenswrapper[4840]: I0311 08:58:56.939247 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"935336e2-294b-4982-83f9-718806d14e5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://047c371ede152b2bfc450a373d2c3668e92dfed75022ebf12644c116db589373\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://910e2e8b981e2ab6212cd615b1c5134fde5d4cf4f85220c2613e3e301f99293c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa38a6bbbd74dad77c64f6f59df35d12881619da7319501839cf0f1eb44c65cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a22746f21587fd8fd3d4a7350442c72a3131d5a9f5e661ff05d78377c5a00cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://430dc74cf3f3dcf9a87782869390e477899521c6ff0e704ba6272a017b90081d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaf42bd09d527f5ce4ce8b0619c9b61f56ad6a2a5edf92cac68897a2ada84b24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0413db34ab0410c5d6e2822520410d9db275b1c4bd9ba1f7343a0c31befc9b0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa44cbb9a24ea530f3aca75e40aa270291e88352a947597d4997175ebd38f746\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-11T08:58:39Z\\\",\\\"message\\\":\\\"milyPolicy:*SingleStack,ClusterIPs:[10.217.5.253],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0311 08:58:38.705363 6867 ovnkube.go:137] 
failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:38Z is after 2025-08-24T17:21:41Z]\\\\nI0311 08:58:38.705374 6867 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]} 
options:{GoMap:map[iface-id-ver:5fe485a1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\
\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1338b0d6cdb9527554429be2e6fdec5c0b98075978344d168fd6e363eb12c879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63877eb44dae815dbd71053c89313ba836c2fdd90cc3d6d299526c027887e19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name
\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63877eb44dae815dbd71053c89313ba836c2fdd90cc3d6d299526c027887e19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7c2zl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:56Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:56 crc kubenswrapper[4840]: I0311 08:58:56.950820 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9stqc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a97da03-4135-4850-9393-640dffab4289\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57ffc84765682190c64926aa41837eb62aed5f155016eda6cec84aca86e1ac31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7j7fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf5b701d5975fea1d305cf5ef501224f312a6
a3afd03977a691910a5acaf9ce7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7j7fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9stqc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:56Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:56 crc kubenswrapper[4840]: I0311 08:58:56.962308 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gjgkz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e87442b-4d54-472c-bad6-e2086c95df50\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4h65t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4h65t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gjgkz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:56Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:56 crc 
kubenswrapper[4840]: I0311 08:58:56.973340 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:56Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:56 crc kubenswrapper[4840]: I0311 08:58:56.985307 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:56Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:57 crc kubenswrapper[4840]: I0311 08:58:57.015714 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4tjtn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c3b7839-5a3d-42f4-a871-1baa77d4f6a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3703388f56cc4d0f73dcdc31a5fae2dd6cc275b3a86ae8ca3733190a08bac57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5b8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4tjtn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:57Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:57 crc kubenswrapper[4840]: I0311 08:58:57.060147 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 08:58:57 crc kubenswrapper[4840]: I0311 08:58:57.060208 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:58:57 crc kubenswrapper[4840]: I0311 08:58:57.060328 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:58:57 crc kubenswrapper[4840]: E0311 08:58:57.060502 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 11 08:58:57 crc kubenswrapper[4840]: E0311 08:58:57.060658 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 11 08:58:57 crc kubenswrapper[4840]: E0311 08:58:57.060790 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 11 08:58:57 crc kubenswrapper[4840]: E0311 08:58:57.141600 4840 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 11 08:58:57 crc kubenswrapper[4840]: I0311 08:58:57.760668 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7c2zl_935336e2-294b-4982-83f9-718806d14e5c/ovnkube-controller/2.log" Mar 11 08:58:57 crc kubenswrapper[4840]: I0311 08:58:57.761525 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7c2zl_935336e2-294b-4982-83f9-718806d14e5c/ovnkube-controller/1.log" Mar 11 08:58:57 crc kubenswrapper[4840]: I0311 08:58:57.765969 4840 generic.go:334] "Generic (PLEG): container finished" podID="935336e2-294b-4982-83f9-718806d14e5c" containerID="0413db34ab0410c5d6e2822520410d9db275b1c4bd9ba1f7343a0c31befc9b0b" exitCode=1 Mar 11 08:58:57 crc kubenswrapper[4840]: I0311 08:58:57.766056 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" event={"ID":"935336e2-294b-4982-83f9-718806d14e5c","Type":"ContainerDied","Data":"0413db34ab0410c5d6e2822520410d9db275b1c4bd9ba1f7343a0c31befc9b0b"} Mar 11 
08:58:57 crc kubenswrapper[4840]: I0311 08:58:57.766129 4840 scope.go:117] "RemoveContainer" containerID="fa44cbb9a24ea530f3aca75e40aa270291e88352a947597d4997175ebd38f746" Mar 11 08:58:57 crc kubenswrapper[4840]: I0311 08:58:57.767313 4840 scope.go:117] "RemoveContainer" containerID="0413db34ab0410c5d6e2822520410d9db275b1c4bd9ba1f7343a0c31befc9b0b" Mar 11 08:58:57 crc kubenswrapper[4840]: E0311 08:58:57.767671 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-7c2zl_openshift-ovn-kubernetes(935336e2-294b-4982-83f9-718806d14e5c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" podUID="935336e2-294b-4982-83f9-718806d14e5c" Mar 11 08:58:57 crc kubenswrapper[4840]: I0311 08:58:57.784630 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:57Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:57 crc kubenswrapper[4840]: I0311 08:58:57.799379 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:57Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:57 crc kubenswrapper[4840]: I0311 08:58:57.811189 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4tjtn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c3b7839-5a3d-42f4-a871-1baa77d4f6a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3703388f56cc4d0f73dcdc31a5fae2dd6cc275b3a86ae8ca3733190a08bac57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5b8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4tjtn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:57Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:57 crc kubenswrapper[4840]: I0311 08:58:57.825906 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb68ea79d13c044390345ef25093ba46a60cb10b989c5acd69c63cafc1c4631f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T0
8:58:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:57Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:57 crc kubenswrapper[4840]: I0311 08:58:57.843169 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccc0ba5b-4f84-48fc-a334-73db829e45a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68fa952dc486db4733cdc74294744871c4e8e27f3a797c3bb641c93bc1ba7549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://191f6779c704bc04c6db18ab7604aa56472fa73c65a49ab54f8213a17dfc89dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b8b376bd2621e314b7488c7f19023eb2927deea932f4764289fa82d1309f6d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://656c404aff0b9e454e59bceb73d79f5d045ac4c05de3dee6f0f38019bf78fac4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://656c404aff0b9e454e59bceb73d79f5d045ac4c05de3dee6f0f38019bf78fac4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:56:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:56:43Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:56:42Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:57Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:57 crc kubenswrapper[4840]: I0311 08:58:57.858771 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:57Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:57 crc kubenswrapper[4840]: I0311 08:58:57.873643 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a735ab91afbbc50a948e293cb4907a10212b9a77c9e7506e21d75a4de4c74c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a498e6ab267b758bd2ba69dcbd7e6af3089b6c38bbf50dea80f1304c2190f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:57Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:57 crc kubenswrapper[4840]: I0311 08:58:57.889243 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c3b2bde-8421-4e22-85ab-8b651c65bc9e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ed311d75feec58f86d1d9f435c6115a463b4e7cd3003b6dff8447360271b6a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33384f01914fa6428a8c359b3de0d20963b933f5c6d47519f059e48f85c9f4c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca2279f323c0a6baf645b62c496c38d3d2ad4efc5033a0819ed1d58f4d862e10\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe2e8e966f7348e19e40d9ac00cabea3da1a67e4d7b149c094505622bf5d6cbc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58277efdbfe7c94ad785e7d31bb5ba7313d04bf930896d6ded66fd44dd6239b5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-11T08:57:44Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0311 08:57:44.699595 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0311 08:57:44.699747 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0311 08:57:44.700340 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2673359297/tls.crt::/tmp/serving-cert-2673359297/tls.key\\\\\\\"\\\\nI0311 08:57:44.923694 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0311 08:57:44.927612 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0311 08:57:44.927650 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0311 08:57:44.927690 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0311 08:57:44.927701 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0311 08:57:44.934515 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0311 08:57:44.934548 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0311 08:57:44.934556 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:57:44.934567 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:57:44.934574 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0311 08:57:44.934577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0311 08:57:44.934581 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0311 08:57:44.934584 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0311 08:57:44.935676 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:57:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bd25fbbac425c9ba1169b1106b9ac77a80739a003bd795033d691ee273e0d3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e27a2ab44a237582284cb5f55d2651f7b5d39c199fdb62a4a65be9921e86945c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e27a2ab44a237582284cb5f55d2651f7b5d
39c199fdb62a4a65be9921e86945c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:56:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:56:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:56:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:57Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:57 crc kubenswrapper[4840]: I0311 08:58:57.902637 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a26aaac35b1d936b4ca0b8a96fccd108317a1456b2bac1c21eb28017ac6fd32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:57Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:57 crc kubenswrapper[4840]: I0311 08:58:57.912581 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jlzht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97907402-fb5a-4fb4-80ac-5b600527c547\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2004ebf76b86d594e220fd1f85f945d24094ff26400ba17e286e95b2e3a8d7a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjngl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jlzht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:57Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:57 crc kubenswrapper[4840]: I0311 08:58:57.928513 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xn47g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6b1fe1a-6473-41f8-a45f-aaaa148c1412\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fd2c3055a64a29b
e3fef394c48782227519d0d445ca2dc38a6f9093f7698b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a889eccbb6a0cbc75c8ed8ddeffb1713f30c667182a4ba135863c46bd6b81a10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a889eccbb6a0cbc75c8ed8ddeffb1713f30c667182a4ba135863c46bd6b81a10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c2ce0ba741f2aad38db697e3bf19c9e912296fd737657feace60a93382e2f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c2ce0ba741f2aad38db697e3bf19c9e912296fd737657feace60a93382e2f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98d87bbbdfbd9708e04ae69bfe137afd132af84db3b4328625bca81dfc385d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b
64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98d87bbbdfbd9708e04ae69bfe137afd132af84db3b4328625bca81dfc385d92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b25179d0a842def4bdb185e250d96cabd12909f5bc849f186d714721ba20a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b25179d0a842def4bdb185e250d96cabd12909f5bc849f186d714721ba20a11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://156a271cc5e6bd9b7ea68a0a1486ba9460f2d31812bb42a1c06a9e79439e1c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://156a271cc5e6bd9b7ea68a0a1486ba9460f2d31812bb42a1c06a9e79439e1c16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62a49366d891284607145968679e0c211b99a48d955857bdf0577792e0fc1f18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"read
y\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62a49366d891284607145968679e0c211b99a48d955857bdf0577792e0fc1f18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xn47g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:57Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:57 crc kubenswrapper[4840]: I0311 08:58:57.940795 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vcb9n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c1678fd-7741-474b-9c8e-3008d3570921\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://affb76bb8b08157d415301fc93f48fe8c675137a1f6087bcade89dc117748638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-46dj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vcb9n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:57Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:57 crc kubenswrapper[4840]: I0311 08:58:57.950359 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-brtht" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://153456b9c54a907a1dada25ba9e84b3fe9f9b44a6e6b8e96d512944fb56884a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8c8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0602373fe91
019f9b20e701e042782a4eb5878ae2df86375738bc605412a803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8c8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-brtht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:57Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:57 crc kubenswrapper[4840]: I0311 08:58:57.970173 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"935336e2-294b-4982-83f9-718806d14e5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://047c371ede152b2bfc450a373d2c3668e92dfed75022ebf12644c116db589373\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://910e2e8b981e2ab6212cd615b1c5134fde5d4cf4f85220c2613e3e301f99293c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa38a6bbbd74dad77c64f6f59df35d12881619da7319501839cf0f1eb44c65cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a22746f21587fd8fd3d4a7350442c72a3131d5a9f5e661ff05d78377c5a00cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://430dc74cf3f3dcf9a87782869390e477899521c6ff0e704ba6272a017b90081d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaf42bd09d527f5ce4ce8b0619c9b61f56ad6a2a5edf92cac68897a2ada84b24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0413db34ab0410c5d6e2822520410d9db275b1c4bd9ba1f7343a0c31befc9b0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fa44cbb9a24ea530f3aca75e40aa270291e88352a947597d4997175ebd38f746\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-11T08:58:39Z\\\",\\\"message\\\":\\\"milyPolicy:*SingleStack,ClusterIPs:[10.217.5.253],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0311 08:58:38.705363 6867 ovnkube.go:137] 
failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:38Z is after 2025-08-24T17:21:41Z]\\\\nI0311 08:58:38.705374 6867 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:5c 10.217.0.92]} options:{GoMap:map[iface-id-ver:5fe485a1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0413db34ab0410c5d6e2822520410d9db275b1c4bd9ba1f7343a0c31befc9b0b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-11T08:58:57Z\\\",\\\"message\\\":\\\"r/api_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver/api\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.37\\\\\\\", Port:443, Template:(*services.Template)(nil)}, 
Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0311 08:58:57.060984 7161 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:56Z 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kub
e-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1338b0d6cdb9527554429be2e6fdec5c0b98075978344d168fd6e363eb12c879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63877eb44dae815dbd71053c89313ba836c2fdd90cc3d6d299526c027887e19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63877eb44dae815dbd71053c89313ba836c2fdd90cc3d6d299526c027887e19\\\
",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7c2zl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:57Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:57 crc kubenswrapper[4840]: I0311 08:58:57.982522 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9stqc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a97da03-4135-4850-9393-640dffab4289\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57ffc84765682190c64926aa41837eb62aed5f155016eda6cec84aca86e1ac31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7j7fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf5b701d5975fea1d305cf5ef501224f312a6
a3afd03977a691910a5acaf9ce7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7j7fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9stqc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:57Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:57 crc kubenswrapper[4840]: I0311 08:58:57.998155 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gjgkz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e87442b-4d54-472c-bad6-e2086c95df50\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4h65t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4h65t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gjgkz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:57Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:58 crc 
kubenswrapper[4840]: I0311 08:58:58.060271 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gjgkz" Mar 11 08:58:58 crc kubenswrapper[4840]: E0311 08:58:58.060496 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gjgkz" podUID="6e87442b-4d54-472c-bad6-e2086c95df50" Mar 11 08:58:58 crc kubenswrapper[4840]: I0311 08:58:58.773580 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7c2zl_935336e2-294b-4982-83f9-718806d14e5c/ovnkube-controller/2.log" Mar 11 08:58:58 crc kubenswrapper[4840]: I0311 08:58:58.779214 4840 scope.go:117] "RemoveContainer" containerID="0413db34ab0410c5d6e2822520410d9db275b1c4bd9ba1f7343a0c31befc9b0b" Mar 11 08:58:58 crc kubenswrapper[4840]: E0311 08:58:58.780226 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-7c2zl_openshift-ovn-kubernetes(935336e2-294b-4982-83f9-718806d14e5c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" podUID="935336e2-294b-4982-83f9-718806d14e5c" Mar 11 08:58:58 crc kubenswrapper[4840]: I0311 08:58:58.795871 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c3b2bde-8421-4e22-85ab-8b651c65bc9e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ed311d75feec58f86d1d9f435c6115a463b4e7cd3003b6dff8447360271b6a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33384f01914fa6428a8c359b3de0d20963b933f5c6d47519f059e48f85c9f4c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca2279f323c0a6baf645b62c496c38d3d2ad4efc5033a0819ed1d58f4d862e10\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe2e8e966f7348e19e40d9ac00cabea3da1a67e4d7b149c094505622bf5d6cbc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58277efdbfe7c94ad785e7d31bb5ba7313d04bf930896d6ded66fd44dd6239b5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-11T08:57:44Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0311 08:57:44.699595 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0311 08:57:44.699747 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0311 08:57:44.700340 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2673359297/tls.crt::/tmp/serving-cert-2673359297/tls.key\\\\\\\"\\\\nI0311 08:57:44.923694 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0311 08:57:44.927612 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0311 08:57:44.927650 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0311 08:57:44.927690 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0311 08:57:44.927701 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0311 08:57:44.934515 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0311 08:57:44.934548 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0311 08:57:44.934556 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:57:44.934567 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:57:44.934574 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0311 08:57:44.934577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0311 08:57:44.934581 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0311 08:57:44.934584 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0311 08:57:44.935676 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:57:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bd25fbbac425c9ba1169b1106b9ac77a80739a003bd795033d691ee273e0d3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e27a2ab44a237582284cb5f55d2651f7b5d39c199fdb62a4a65be9921e86945c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e27a2ab44a237582284cb5f55d2651f7b5d
39c199fdb62a4a65be9921e86945c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:56:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:56:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:56:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:58Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:58 crc kubenswrapper[4840]: I0311 08:58:58.812712 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a26aaac35b1d936b4ca0b8a96fccd108317a1456b2bac1c21eb28017ac6fd32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:58Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:58 crc kubenswrapper[4840]: I0311 08:58:58.829251 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jlzht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97907402-fb5a-4fb4-80ac-5b600527c547\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2004ebf76b86d594e220fd1f85f945d24094ff26400ba17e286e95b2e3a8d7a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjngl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jlzht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:58Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:58 crc kubenswrapper[4840]: I0311 08:58:58.845313 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xn47g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6b1fe1a-6473-41f8-a45f-aaaa148c1412\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fd2c3055a64a29b
e3fef394c48782227519d0d445ca2dc38a6f9093f7698b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a889eccbb6a0cbc75c8ed8ddeffb1713f30c667182a4ba135863c46bd6b81a10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a889eccbb6a0cbc75c8ed8ddeffb1713f30c667182a4ba135863c46bd6b81a10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c2ce0ba741f2aad38db697e3bf19c9e912296fd737657feace60a93382e2f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c2ce0ba741f2aad38db697e3bf19c9e912296fd737657feace60a93382e2f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98d87bbbdfbd9708e04ae69bfe137afd132af84db3b4328625bca81dfc385d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b
64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98d87bbbdfbd9708e04ae69bfe137afd132af84db3b4328625bca81dfc385d92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b25179d0a842def4bdb185e250d96cabd12909f5bc849f186d714721ba20a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b25179d0a842def4bdb185e250d96cabd12909f5bc849f186d714721ba20a11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://156a271cc5e6bd9b7ea68a0a1486ba9460f2d31812bb42a1c06a9e79439e1c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://156a271cc5e6bd9b7ea68a0a1486ba9460f2d31812bb42a1c06a9e79439e1c16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62a49366d891284607145968679e0c211b99a48d955857bdf0577792e0fc1f18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"read
y\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62a49366d891284607145968679e0c211b99a48d955857bdf0577792e0fc1f18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xn47g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:58Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:58 crc kubenswrapper[4840]: I0311 08:58:58.858904 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vcb9n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c1678fd-7741-474b-9c8e-3008d3570921\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://affb76bb8b08157d415301fc93f48fe8c675137a1f6087bcade89dc117748638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-46dj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vcb9n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:58Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:58 crc kubenswrapper[4840]: I0311 08:58:58.876954 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-brtht" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://153456b9c54a907a1dada25ba9e84b3fe9f9b44a6e6b8e96d512944fb56884a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8c8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0602373fe91
019f9b20e701e042782a4eb5878ae2df86375738bc605412a803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8c8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-brtht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:58Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:58 crc kubenswrapper[4840]: I0311 08:58:58.892138 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9stqc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a97da03-4135-4850-9393-640dffab4289\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57ffc84765682190c64926aa41837eb62aed5f155016eda6cec84aca86e1ac31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7j7fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf5b701d5975fea1d305cf5ef501224f312a6
a3afd03977a691910a5acaf9ce7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7j7fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9stqc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:58Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:58 crc kubenswrapper[4840]: I0311 08:58:58.902597 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gjgkz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e87442b-4d54-472c-bad6-e2086c95df50\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4h65t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4h65t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gjgkz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:58Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:58 crc 
kubenswrapper[4840]: I0311 08:58:58.927639 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"935336e2-294b-4982-83f9-718806d14e5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://047c371ede152b2bfc450a373d2c3668e92dfed75022ebf12644c116db589373\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://910e2e8b981e2ab6212cd615b1c5134fde5d4cf4f85220c2613e3e301f99293c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa38a6bbbd74dad77c64f6f59df35d12881619da7319501839cf0f1eb44c65cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a22746f21587fd8fd3d4a7350442c72a3131d5a9f5e661ff05d78377c5a00cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://430dc74cf3f3dcf9a87782869390e477899521c6ff0e704ba6272a017b90081d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaf42bd09d527f5ce4ce8b0619c9b61f56ad6a2a5edf92cac68897a2ada84b24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0413db34ab0410c5d6e2822520410d9db275b1c4bd9ba1f7343a0c31befc9b0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0413db34ab0410c5d6e2822520410d9db275b1c4bd9ba1f7343a0c31befc9b0b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-11T08:58:57Z\\\",\\\"message\\\":\\\"r/api_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver/api\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, 
AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.37\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0311 08:58:57.060984 7161 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:56Z \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-7c2zl_openshift-ovn-kubernetes(935336e2-294b-4982-83f9-718806d14e5c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1338b0d6cdb9527554429be2e6fdec5c0b98075978344d168fd6e363eb12c879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63877eb44dae815dbd71053c89313ba836c2fdd90cc3d6d299526c027887e19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63877eb44dae815db
d71053c89313ba836c2fdd90cc3d6d299526c027887e19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7c2zl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:58Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:58 crc kubenswrapper[4840]: I0311 08:58:58.946113 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:58Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:58 crc kubenswrapper[4840]: I0311 08:58:58.960322 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4tjtn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c3b7839-5a3d-42f4-a871-1baa77d4f6a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3703388f56cc4d0f73dcdc31a5fae2dd6cc275b3a86ae8ca3733190a08bac57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5b8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4tjtn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:58Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:58 crc kubenswrapper[4840]: I0311 08:58:58.979707 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:58Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:58 crc kubenswrapper[4840]: I0311 08:58:58.993308 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb68ea79d13c044390345ef25093ba46a60cb10b989c5acd69c63cafc1c4631f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-11T08:58:58Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:59 crc kubenswrapper[4840]: I0311 08:58:59.010094 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:59Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:59 crc kubenswrapper[4840]: I0311 08:58:59.030135 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a735ab91afbbc50a948e293cb4907a10212b9a77c9e7506e21d75a4de4c74c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a498e6ab267b758bd2ba69dcbd7e6af3089b6c38bbf50dea80f1304c2190f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:59Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:59 crc kubenswrapper[4840]: I0311 08:58:59.048398 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccc0ba5b-4f84-48fc-a334-73db829e45a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68fa952dc486db4733cdc74294744871c4e8e27f3a797c3bb641c93bc1ba7549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://191f6779c704bc04c6db18ab7604aa56472fa73c65a49ab54f8213a17dfc89dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b8b376bd2621e314b7488c7f19023eb2927deea932f4764289fa82d1309f6d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://656c404aff0b9e454e59bceb73d79f5d045ac4c05de3dee6f0f38019bf78fac4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://656c404aff0b9e454e59bceb73d79f5d045ac4c05de3dee6f0f38019bf78fac4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:56:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:56:43Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:56:42Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:59Z is after 2025-08-24T17:21:41Z" Mar 11 08:58:59 crc kubenswrapper[4840]: I0311 08:58:59.059680 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:58:59 crc kubenswrapper[4840]: I0311 08:58:59.059728 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:58:59 crc kubenswrapper[4840]: I0311 08:58:59.059919 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 08:58:59 crc kubenswrapper[4840]: E0311 08:58:59.059864 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 11 08:58:59 crc kubenswrapper[4840]: E0311 08:58:59.060063 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 11 08:58:59 crc kubenswrapper[4840]: E0311 08:58:59.060341 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 11 08:59:00 crc kubenswrapper[4840]: I0311 08:59:00.059871 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gjgkz" Mar 11 08:59:00 crc kubenswrapper[4840]: E0311 08:59:00.060631 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gjgkz" podUID="6e87442b-4d54-472c-bad6-e2086c95df50" Mar 11 08:59:00 crc kubenswrapper[4840]: I0311 08:59:00.388312 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:59:00 crc kubenswrapper[4840]: I0311 08:59:00.388385 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:59:00 crc kubenswrapper[4840]: I0311 08:59:00.388405 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:59:00 crc kubenswrapper[4840]: I0311 08:59:00.388432 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:59:00 crc kubenswrapper[4840]: I0311 08:59:00.388448 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:59:00Z","lastTransitionTime":"2026-03-11T08:59:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:59:00 crc kubenswrapper[4840]: E0311 08:59:00.409364 4840 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:59:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:59:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:59:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:59:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:59:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:59:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:59:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:59:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b40dc5ac-6e20-4fe3-8d4f-1dab2691799c\\\",\\\"systemUUID\\\":\\\"e5bb6cc6-19d8-441f-bba6-b926930273a7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:59:00Z is after 2025-08-24T17:21:41Z" Mar 11 08:59:00 crc kubenswrapper[4840]: I0311 08:59:00.414300 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:59:00 crc kubenswrapper[4840]: I0311 08:59:00.414354 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:59:00 crc kubenswrapper[4840]: I0311 08:59:00.414369 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:59:00 crc kubenswrapper[4840]: I0311 08:59:00.414388 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:59:00 crc kubenswrapper[4840]: I0311 08:59:00.414401 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:59:00Z","lastTransitionTime":"2026-03-11T08:59:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:59:00 crc kubenswrapper[4840]: E0311 08:59:00.430726 4840 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:59:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:59:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:59:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:59:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:59:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:59:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:59:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:59:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b40dc5ac-6e20-4fe3-8d4f-1dab2691799c\\\",\\\"systemUUID\\\":\\\"e5bb6cc6-19d8-441f-bba6-b926930273a7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:59:00Z is after 2025-08-24T17:21:41Z" Mar 11 08:59:00 crc kubenswrapper[4840]: I0311 08:59:00.435399 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:59:00 crc kubenswrapper[4840]: I0311 08:59:00.435515 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:59:00 crc kubenswrapper[4840]: I0311 08:59:00.435542 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:59:00 crc kubenswrapper[4840]: I0311 08:59:00.435574 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:59:00 crc kubenswrapper[4840]: I0311 08:59:00.435599 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:59:00Z","lastTransitionTime":"2026-03-11T08:59:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:59:00 crc kubenswrapper[4840]: E0311 08:59:00.455924 4840 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:59:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:59:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:59:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:59:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:59:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:59:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:59:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:59:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b40dc5ac-6e20-4fe3-8d4f-1dab2691799c\\\",\\\"systemUUID\\\":\\\"e5bb6cc6-19d8-441f-bba6-b926930273a7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:59:00Z is after 2025-08-24T17:21:41Z" Mar 11 08:59:00 crc kubenswrapper[4840]: I0311 08:59:00.461234 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:59:00 crc kubenswrapper[4840]: I0311 08:59:00.461299 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:59:00 crc kubenswrapper[4840]: I0311 08:59:00.461323 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:59:00 crc kubenswrapper[4840]: I0311 08:59:00.461350 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:59:00 crc kubenswrapper[4840]: I0311 08:59:00.461370 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:59:00Z","lastTransitionTime":"2026-03-11T08:59:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:59:00 crc kubenswrapper[4840]: E0311 08:59:00.480604 4840 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:59:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:59:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:59:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:59:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:59:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:59:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:59:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:59:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b40dc5ac-6e20-4fe3-8d4f-1dab2691799c\\\",\\\"systemUUID\\\":\\\"e5bb6cc6-19d8-441f-bba6-b926930273a7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:59:00Z is after 2025-08-24T17:21:41Z" Mar 11 08:59:00 crc kubenswrapper[4840]: I0311 08:59:00.487076 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:59:00 crc kubenswrapper[4840]: I0311 08:59:00.487149 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:59:00 crc kubenswrapper[4840]: I0311 08:59:00.487167 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:59:00 crc kubenswrapper[4840]: I0311 08:59:00.487193 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:59:00 crc kubenswrapper[4840]: I0311 08:59:00.487211 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:59:00Z","lastTransitionTime":"2026-03-11T08:59:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:59:00 crc kubenswrapper[4840]: E0311 08:59:00.501312 4840 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:59:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:59:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:59:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:59:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:59:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:59:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:59:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:59:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b40dc5ac-6e20-4fe3-8d4f-1dab2691799c\\\",\\\"systemUUID\\\":\\\"e5bb6cc6-19d8-441f-bba6-b926930273a7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:59:00Z is after 2025-08-24T17:21:41Z" Mar 11 08:59:00 crc kubenswrapper[4840]: E0311 08:59:00.501961 4840 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 11 08:59:01 crc kubenswrapper[4840]: I0311 08:59:01.059809 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 08:59:01 crc kubenswrapper[4840]: I0311 08:59:01.059861 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:59:01 crc kubenswrapper[4840]: I0311 08:59:01.059912 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:59:01 crc kubenswrapper[4840]: E0311 08:59:01.060012 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 11 08:59:01 crc kubenswrapper[4840]: E0311 08:59:01.060387 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 11 08:59:01 crc kubenswrapper[4840]: E0311 08:59:01.060419 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 11 08:59:02 crc kubenswrapper[4840]: I0311 08:59:02.059597 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gjgkz" Mar 11 08:59:02 crc kubenswrapper[4840]: E0311 08:59:02.059882 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gjgkz" podUID="6e87442b-4d54-472c-bad6-e2086c95df50" Mar 11 08:59:02 crc kubenswrapper[4840]: I0311 08:59:02.077663 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vcb9n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c1678fd-7741-474b-9c8e-3008d3570921\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://affb76bb8b08157d415301fc93f48fe8c675137a1f6087bcade89dc117748638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"
os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-46dj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vcb9n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-11T08:59:02Z is after 2025-08-24T17:21:41Z" Mar 11 08:59:02 crc kubenswrapper[4840]: I0311 08:59:02.091041 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-brtht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://153456b9c54a907a1dada25ba9e84b3fe9f9b44a6e6b8e96d512944fb56884a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8c8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0602373fe91019f9b20e701e042782a4eb5878ae2df86375738bc605412a803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8c8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-brtht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:59:02Z is after 2025-08-24T17:21:41Z" Mar 11 08:59:02 crc kubenswrapper[4840]: I0311 08:59:02.106611 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c3b2bde-8421-4e22-85ab-8b651c65bc9e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ed311d75feec58f86d1d9f435c6115a463b4e7cd3003b6dff8447360271b6a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33384f01914fa6428a8c359b3de0d20963b933f5c6d47519f059e48f85c9f4c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca2279f323c0a6baf645b62c496c38d3d2ad4efc5033a0819ed1d58f4d862e10\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe2e8e966f7348e19e40d9ac00cabea3da1a67e4d7b149c094505622bf5d6cbc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58277efdbfe7c94ad785e7d31bb5ba7313d04bf930896d6ded66fd44dd6239b5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-11T08:57:44Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0311 08:57:44.699595 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0311 08:57:44.699747 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0311 08:57:44.700340 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2673359297/tls.crt::/tmp/serving-cert-2673359297/tls.key\\\\\\\"\\\\nI0311 08:57:44.923694 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0311 08:57:44.927612 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0311 08:57:44.927650 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0311 08:57:44.927690 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0311 08:57:44.927701 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0311 08:57:44.934515 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0311 08:57:44.934548 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0311 08:57:44.934556 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:57:44.934567 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:57:44.934574 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0311 08:57:44.934577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0311 08:57:44.934581 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0311 08:57:44.934584 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0311 08:57:44.935676 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:57:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bd25fbbac425c9ba1169b1106b9ac77a80739a003bd795033d691ee273e0d3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e27a2ab44a237582284cb5f55d2651f7b5d39c199fdb62a4a65be9921e86945c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e27a2ab44a237582284cb5f55d2651f7b5d
39c199fdb62a4a65be9921e86945c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:56:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:56:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:56:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:59:02Z is after 2025-08-24T17:21:41Z" Mar 11 08:59:02 crc kubenswrapper[4840]: I0311 08:59:02.123146 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a26aaac35b1d936b4ca0b8a96fccd108317a1456b2bac1c21eb28017ac6fd32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-11T08:59:02Z is after 2025-08-24T17:21:41Z" Mar 11 08:59:02 crc kubenswrapper[4840]: I0311 08:59:02.135116 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jlzht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97907402-fb5a-4fb4-80ac-5b600527c547\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2004ebf76b86d594e220fd1f85f945d24094ff26400ba17e286e95b2e3a8d7a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjngl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jlzht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:59:02Z is after 2025-08-24T17:21:41Z" Mar 11 08:59:02 crc kubenswrapper[4840]: E0311 08:59:02.142005 4840 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Mar 11 08:59:02 crc kubenswrapper[4840]: I0311 08:59:02.152373 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xn47g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6b1fe1a-6473-41f8-a45f-aaaa148c1412\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fd2c3055a64a29be3fef394c48782227519d0d445ca2dc38a6f9093f7698b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostI
P\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a889eccbb6a0cbc75c8ed8ddeffb1713f30c667182a4ba135863c46bd6b81a10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a889eccbb6a0cbc75c8ed8ddeffb1713f30c667182a4ba135863c46bd6b81a10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c2ce0ba741f2aad38db697e3bf19c9e912296fd737657feace60a93382e2f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c2ce0ba
741f2aad38db697e3bf19c9e912296fd737657feace60a93382e2f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98d87bbbdfbd9708e04ae69bfe137afd132af84db3b4328625bca81dfc385d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98d87bbbdfbd9708e04ae69bfe137afd132af84db3b4328625bca81dfc385d92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mount
Path\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b25179d0a842def4bdb185e250d96cabd12909f5bc849f186d714721ba20a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b25179d0a842def4bdb185e250d96cabd12909f5bc849f186d714721ba20a11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://156a271cc5e6bd9b7ea68a0a1486ba9460f2d31812bb42a1c06a9e79439e1c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\
":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://156a271cc5e6bd9b7ea68a0a1486ba9460f2d31812bb42a1c06a9e79439e1c16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62a49366d891284607145968679e0c211b99a48d955857bdf0577792e0fc1f18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62a49366d891284607145968679e0c211b99a48d955857bdf0577792e0fc1f18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\"
,\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xn47g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:59:02Z is after 2025-08-24T17:21:41Z" Mar 11 08:59:02 crc kubenswrapper[4840]: I0311 08:59:02.175021 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"935336e2-294b-4982-83f9-718806d14e5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://047c371ede152b2bfc450a373d2c3668e92dfed75022ebf12644c116db589373\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://910e2e8b981e2ab6212cd615b1c5134fde5d4cf4f85220c2613e3e301f99293c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa38a6bbbd74dad77c64f6f59df35d12881619da7319501839cf0f1eb44c65cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a22746f21587fd8fd3d4a7350442c72a3131d5a9f5e661ff05d78377c5a00cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://430dc74cf3f3dcf9a87782869390e477899521c6ff0e704ba6272a017b90081d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaf42bd09d527f5ce4ce8b0619c9b61f56ad6a2a5edf92cac68897a2ada84b24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0413db34ab0410c5d6e2822520410d9db275b1c4bd9ba1f7343a0c31befc9b0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0413db34ab0410c5d6e2822520410d9db275b1c4bd9ba1f7343a0c31befc9b0b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-11T08:58:57Z\\\",\\\"message\\\":\\\"r/api_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver/api\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, 
AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.37\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0311 08:58:57.060984 7161 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:56Z \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-7c2zl_openshift-ovn-kubernetes(935336e2-294b-4982-83f9-718806d14e5c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1338b0d6cdb9527554429be2e6fdec5c0b98075978344d168fd6e363eb12c879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63877eb44dae815dbd71053c89313ba836c2fdd90cc3d6d299526c027887e19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63877eb44dae815db
d71053c89313ba836c2fdd90cc3d6d299526c027887e19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7c2zl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:59:02Z is after 2025-08-24T17:21:41Z" Mar 11 08:59:02 crc kubenswrapper[4840]: I0311 08:59:02.187937 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9stqc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a97da03-4135-4850-9393-640dffab4289\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57ffc84765682190c64926aa41837eb62aed5f155016eda6cec84aca86e1ac31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7j7fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf5b701d5975fea1d305cf5ef501224f312a6
a3afd03977a691910a5acaf9ce7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7j7fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9stqc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:59:02Z is after 2025-08-24T17:21:41Z" Mar 11 08:59:02 crc kubenswrapper[4840]: I0311 08:59:02.202413 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gjgkz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e87442b-4d54-472c-bad6-e2086c95df50\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4h65t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4h65t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gjgkz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:59:02Z is after 2025-08-24T17:21:41Z" Mar 11 08:59:02 crc 
kubenswrapper[4840]: I0311 08:59:02.217279 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:59:02Z is after 2025-08-24T17:21:41Z" Mar 11 08:59:02 crc kubenswrapper[4840]: I0311 08:59:02.231460 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:59:02Z is after 2025-08-24T17:21:41Z" Mar 11 08:59:02 crc kubenswrapper[4840]: I0311 08:59:02.241021 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4tjtn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c3b7839-5a3d-42f4-a871-1baa77d4f6a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3703388f56cc4d0f73dcdc31a5fae2dd6cc275b3a86ae8ca3733190a08bac57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5b8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4tjtn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:59:02Z is after 2025-08-24T17:21:41Z" Mar 11 08:59:02 crc kubenswrapper[4840]: I0311 08:59:02.254443 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb68ea79d13c044390345ef25093ba46a60cb10b989c5acd69c63cafc1c4631f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T0
8:58:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:59:02Z is after 2025-08-24T17:21:41Z" Mar 11 08:59:02 crc kubenswrapper[4840]: I0311 08:59:02.268964 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccc0ba5b-4f84-48fc-a334-73db829e45a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68fa952dc486db4733cdc74294744871c4e8e27f3a797c3bb641c93bc1ba7549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://191f6779c704bc04c6db18ab7604aa56472fa73c65a49ab54f8213a17dfc89dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b8b376bd2621e314b7488c7f19023eb2927deea932f4764289fa82d1309f6d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://656c404aff0b9e454e59bceb73d79f5d045ac4c05de3dee6f0f38019bf78fac4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://656c404aff0b9e454e59bceb73d79f5d045ac4c05de3dee6f0f38019bf78fac4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:56:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:56:43Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:56:42Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:59:02Z is after 2025-08-24T17:21:41Z" Mar 11 08:59:02 crc kubenswrapper[4840]: I0311 08:59:02.282856 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:59:02Z is after 2025-08-24T17:21:41Z" Mar 11 08:59:02 crc kubenswrapper[4840]: I0311 08:59:02.295843 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a735ab91afbbc50a948e293cb4907a10212b9a77c9e7506e21d75a4de4c74c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a498e6ab267b758bd2ba69dcbd7e6af3089b6c38bbf50dea80f1304c2190f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:59:02Z is after 2025-08-24T17:21:41Z" Mar 11 08:59:03 crc kubenswrapper[4840]: I0311 08:59:03.060285 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:59:03 crc kubenswrapper[4840]: I0311 08:59:03.060356 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:59:03 crc kubenswrapper[4840]: I0311 08:59:03.060321 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 08:59:03 crc kubenswrapper[4840]: E0311 08:59:03.060581 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 11 08:59:03 crc kubenswrapper[4840]: E0311 08:59:03.060716 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 11 08:59:03 crc kubenswrapper[4840]: E0311 08:59:03.060802 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 11 08:59:04 crc kubenswrapper[4840]: I0311 08:59:04.059355 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-gjgkz" Mar 11 08:59:04 crc kubenswrapper[4840]: E0311 08:59:04.059553 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gjgkz" podUID="6e87442b-4d54-472c-bad6-e2086c95df50" Mar 11 08:59:05 crc kubenswrapper[4840]: I0311 08:59:05.060123 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:59:05 crc kubenswrapper[4840]: I0311 08:59:05.060174 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 08:59:05 crc kubenswrapper[4840]: I0311 08:59:05.060334 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:59:05 crc kubenswrapper[4840]: E0311 08:59:05.060627 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 11 08:59:05 crc kubenswrapper[4840]: E0311 08:59:05.060799 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 11 08:59:05 crc kubenswrapper[4840]: E0311 08:59:05.060949 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 11 08:59:06 crc kubenswrapper[4840]: I0311 08:59:06.060203 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gjgkz" Mar 11 08:59:06 crc kubenswrapper[4840]: E0311 08:59:06.060396 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gjgkz" podUID="6e87442b-4d54-472c-bad6-e2086c95df50" Mar 11 08:59:07 crc kubenswrapper[4840]: I0311 08:59:07.059866 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 08:59:07 crc kubenswrapper[4840]: I0311 08:59:07.059966 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:59:07 crc kubenswrapper[4840]: E0311 08:59:07.060013 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 11 08:59:07 crc kubenswrapper[4840]: E0311 08:59:07.060104 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 11 08:59:07 crc kubenswrapper[4840]: I0311 08:59:07.059966 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:59:07 crc kubenswrapper[4840]: E0311 08:59:07.060173 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 11 08:59:07 crc kubenswrapper[4840]: E0311 08:59:07.143692 4840 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 11 08:59:08 crc kubenswrapper[4840]: I0311 08:59:08.060088 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gjgkz" Mar 11 08:59:08 crc kubenswrapper[4840]: E0311 08:59:08.060346 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gjgkz" podUID="6e87442b-4d54-472c-bad6-e2086c95df50" Mar 11 08:59:09 crc kubenswrapper[4840]: I0311 08:59:09.059359 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 08:59:09 crc kubenswrapper[4840]: E0311 08:59:09.059660 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 11 08:59:09 crc kubenswrapper[4840]: I0311 08:59:09.059780 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:59:09 crc kubenswrapper[4840]: I0311 08:59:09.059766 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:59:09 crc kubenswrapper[4840]: E0311 08:59:09.059971 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 11 08:59:09 crc kubenswrapper[4840]: E0311 08:59:09.060536 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 11 08:59:09 crc kubenswrapper[4840]: I0311 08:59:09.076234 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Mar 11 08:59:10 crc kubenswrapper[4840]: I0311 08:59:10.060217 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gjgkz" Mar 11 08:59:10 crc kubenswrapper[4840]: E0311 08:59:10.060421 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gjgkz" podUID="6e87442b-4d54-472c-bad6-e2086c95df50" Mar 11 08:59:10 crc kubenswrapper[4840]: I0311 08:59:10.061188 4840 scope.go:117] "RemoveContainer" containerID="0413db34ab0410c5d6e2822520410d9db275b1c4bd9ba1f7343a0c31befc9b0b" Mar 11 08:59:10 crc kubenswrapper[4840]: E0311 08:59:10.061359 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-7c2zl_openshift-ovn-kubernetes(935336e2-294b-4982-83f9-718806d14e5c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" podUID="935336e2-294b-4982-83f9-718806d14e5c" Mar 11 08:59:10 crc kubenswrapper[4840]: I0311 08:59:10.821218 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:59:10 crc kubenswrapper[4840]: I0311 08:59:10.821306 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:59:10 crc kubenswrapper[4840]: I0311 08:59:10.821323 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:59:10 crc kubenswrapper[4840]: I0311 08:59:10.821350 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:59:10 crc kubenswrapper[4840]: I0311 08:59:10.821366 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:59:10Z","lastTransitionTime":"2026-03-11T08:59:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:59:10 crc kubenswrapper[4840]: E0311 08:59:10.842584 4840 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:59:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:59:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:59:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:59:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:59:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:59:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:59:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:59:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b40dc5ac-6e20-4fe3-8d4f-1dab2691799c\\\",\\\"systemUUID\\\":\\\"e5bb6cc6-19d8-441f-bba6-b926930273a7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:59:10Z is after 2025-08-24T17:21:41Z" Mar 11 08:59:10 crc kubenswrapper[4840]: I0311 08:59:10.847075 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:59:10 crc kubenswrapper[4840]: I0311 08:59:10.847118 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:59:10 crc kubenswrapper[4840]: I0311 08:59:10.847130 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:59:10 crc kubenswrapper[4840]: I0311 08:59:10.847147 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:59:10 crc kubenswrapper[4840]: I0311 08:59:10.847160 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:59:10Z","lastTransitionTime":"2026-03-11T08:59:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:59:10 crc kubenswrapper[4840]: E0311 08:59:10.862391 4840 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:59:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:59:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:59:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:59:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:59:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:59:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:59:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:59:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b40dc5ac-6e20-4fe3-8d4f-1dab2691799c\\\",\\\"systemUUID\\\":\\\"e5bb6cc6-19d8-441f-bba6-b926930273a7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:59:10Z is after 2025-08-24T17:21:41Z" Mar 11 08:59:10 crc kubenswrapper[4840]: I0311 08:59:10.867519 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:59:10 crc kubenswrapper[4840]: I0311 08:59:10.867619 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:59:10 crc kubenswrapper[4840]: I0311 08:59:10.867631 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:59:10 crc kubenswrapper[4840]: I0311 08:59:10.867646 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:59:10 crc kubenswrapper[4840]: I0311 08:59:10.867657 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:59:10Z","lastTransitionTime":"2026-03-11T08:59:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:59:10 crc kubenswrapper[4840]: E0311 08:59:10.887720 4840 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:59:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:59:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:59:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:59:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:59:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:59:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:59:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:59:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b40dc5ac-6e20-4fe3-8d4f-1dab2691799c\\\",\\\"systemUUID\\\":\\\"e5bb6cc6-19d8-441f-bba6-b926930273a7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:59:10Z is after 2025-08-24T17:21:41Z" Mar 11 08:59:10 crc kubenswrapper[4840]: I0311 08:59:10.892380 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:59:10 crc kubenswrapper[4840]: I0311 08:59:10.892429 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:59:10 crc kubenswrapper[4840]: I0311 08:59:10.892440 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:59:10 crc kubenswrapper[4840]: I0311 08:59:10.892458 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:59:10 crc kubenswrapper[4840]: I0311 08:59:10.892485 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:59:10Z","lastTransitionTime":"2026-03-11T08:59:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:59:10 crc kubenswrapper[4840]: E0311 08:59:10.911694 4840 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:59:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:59:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:59:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:59:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:59:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:59:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:59:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:59:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b40dc5ac-6e20-4fe3-8d4f-1dab2691799c\\\",\\\"systemUUID\\\":\\\"e5bb6cc6-19d8-441f-bba6-b926930273a7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:59:10Z is after 2025-08-24T17:21:41Z" Mar 11 08:59:10 crc kubenswrapper[4840]: I0311 08:59:10.918645 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:59:10 crc kubenswrapper[4840]: I0311 08:59:10.918727 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:59:10 crc kubenswrapper[4840]: I0311 08:59:10.918749 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:59:10 crc kubenswrapper[4840]: I0311 08:59:10.918777 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:59:10 crc kubenswrapper[4840]: I0311 08:59:10.918798 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:59:10Z","lastTransitionTime":"2026-03-11T08:59:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 11 08:59:10 crc kubenswrapper[4840]: E0311 08:59:10.936489 4840 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:59:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:59:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:59:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:59:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:59:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:59:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T08:59:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-11T08:59:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"b40dc5ac-6e20-4fe3-8d4f-1dab2691799c\\\",\\\"systemUUID\\\":\\\"e5bb6cc6-19d8-441f-bba6-b926930273a7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:59:10Z is after 2025-08-24T17:21:41Z" Mar 11 08:59:10 crc kubenswrapper[4840]: E0311 08:59:10.936671 4840 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 11 08:59:11 crc kubenswrapper[4840]: I0311 08:59:11.059933 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 08:59:11 crc kubenswrapper[4840]: E0311 08:59:11.060084 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 11 08:59:11 crc kubenswrapper[4840]: I0311 08:59:11.060099 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:59:11 crc kubenswrapper[4840]: E0311 08:59:11.060212 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 11 08:59:11 crc kubenswrapper[4840]: I0311 08:59:11.060290 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:59:11 crc kubenswrapper[4840]: E0311 08:59:11.060536 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 11 08:59:11 crc kubenswrapper[4840]: I0311 08:59:11.073345 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Mar 11 08:59:12 crc kubenswrapper[4840]: I0311 08:59:12.060055 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gjgkz" Mar 11 08:59:12 crc kubenswrapper[4840]: E0311 08:59:12.060320 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gjgkz" podUID="6e87442b-4d54-472c-bad6-e2086c95df50" Mar 11 08:59:12 crc kubenswrapper[4840]: I0311 08:59:12.078484 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb68ea79d13c044390345ef25093ba46a60cb10b989c5acd69c63cafc1c4631f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:59:12Z is after 2025-08-24T17:21:41Z" Mar 11 08:59:12 crc kubenswrapper[4840]: I0311 08:59:12.096008 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49dae5ae-5197-4a82-b8d8-b81f34646421\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:57:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:57:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://958f66fb4312fb869f6679ae90fca57b2ffb5b5f1946ddc4e1d4345b2a291016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9be0868d10e5abebdb9b4fba34e5c202c8cd14fbc1
20551405b9709a816dc4c4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-11T08:57:44Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0311 08:57:14.519755 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0311 08:57:14.520776 1 observer_polling.go:159] Starting file observer\\\\nI0311 08:57:14.521843 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0311 08:57:14.522678 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0311 08:57:38.865344 1 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials\\\\nI0311 08:57:44.035576 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0311 08:57:44.035815 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:57:14Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:57:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8267a35ad5296c0a02a207b535f1fe5f48c725dfb4ab7254fc2833e84148eff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7deb8c4d6c80705675e84dfb26c8041214e85b2c425ce6d2f6883fb2ab191c2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91a1312d85b3716ac13f37e631c5c53b4d5541614972597efb39c406e32f738a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:56:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:59:12Z is after 2025-08-24T17:21:41Z" Mar 11 08:59:12 crc kubenswrapper[4840]: I0311 08:59:12.117767 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:59:12Z is after 2025-08-24T17:21:41Z" Mar 11 08:59:12 crc kubenswrapper[4840]: I0311 08:59:12.134533 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a735ab91afbbc50a948e293cb4907a10212b9a77c9e7506e21d75a4de4c74c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a498e6ab267b758bd2ba69dcbd7e6af3089b6c38bbf50dea80f1304c2190f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:59:12Z is after 2025-08-24T17:21:41Z" Mar 11 08:59:12 crc kubenswrapper[4840]: E0311 08:59:12.144564 4840 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Mar 11 08:59:12 crc kubenswrapper[4840]: I0311 08:59:12.156567 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccc0ba5b-4f84-48fc-a334-73db829e45a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68fa952dc486db4733cdc74294744871c4e8e27f3a797c3bb641c93bc1ba7549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://191f6779c704bc04c6db18ab7604aa56472fa73c65a49ab54f8213a17dfc89dc\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b8b376bd2621e314b7488c7f19023eb2927deea932f4764289fa82d1309f6d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://656c404aff0b9e454e59bceb73d79f5d045ac4c05de3dee6f0f38019bf78fac4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720
243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://656c404aff0b9e454e59bceb73d79f5d045ac4c05de3dee6f0f38019bf78fac4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:56:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:56:43Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:56:42Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:59:12Z is after 2025-08-24T17:21:41Z" Mar 11 08:59:12 crc kubenswrapper[4840]: I0311 08:59:12.180992 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c3b2bde-8421-4e22-85ab-8b651c65bc9e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ed311d75feec58f86d1d9f435c6115a463b4e7cd3003b6dff8447360271b6a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33384f01914fa6428a8c359b3de0d20963b933f5c6d47519f059e48f85c9f4c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca2279f323c0a6baf645b62c496c38d3d2ad4efc5033a0819ed1d58f4d862e10\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe2e8e966f7348e19e40d9ac00cabea3da1a67e4d7b149c094505622bf5d6cbc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58277efdbfe7c94ad785e7d31bb5ba7313d04bf930896d6ded66fd44dd6239b5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-11T08:57:44Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0311 08:57:44.699595 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0311 08:57:44.699747 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0311 08:57:44.700340 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2673359297/tls.crt::/tmp/serving-cert-2673359297/tls.key\\\\\\\"\\\\nI0311 08:57:44.923694 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0311 08:57:44.927612 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0311 08:57:44.927650 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0311 08:57:44.927690 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0311 08:57:44.927701 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0311 08:57:44.934515 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0311 08:57:44.934548 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0311 08:57:44.934556 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:57:44.934567 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:57:44.934574 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0311 08:57:44.934577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0311 08:57:44.934581 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0311 08:57:44.934584 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0311 08:57:44.935676 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:57:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bd25fbbac425c9ba1169b1106b9ac77a80739a003bd795033d691ee273e0d3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e27a2ab44a237582284cb5f55d2651f7b5d39c199fdb62a4a65be9921e86945c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e27a2ab44a237582284cb5f55d2651f7b5d
39c199fdb62a4a65be9921e86945c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:56:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:56:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:56:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:59:12Z is after 2025-08-24T17:21:41Z" Mar 11 08:59:12 crc kubenswrapper[4840]: I0311 08:59:12.201482 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a26aaac35b1d936b4ca0b8a96fccd108317a1456b2bac1c21eb28017ac6fd32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-11T08:59:12Z is after 2025-08-24T17:21:41Z" Mar 11 08:59:12 crc kubenswrapper[4840]: I0311 08:59:12.219575 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jlzht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97907402-fb5a-4fb4-80ac-5b600527c547\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2004ebf76b86d594e220fd1f85f945d24094ff26400ba17e286e95b2e3a8d7a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjngl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jlzht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:59:12Z is after 2025-08-24T17:21:41Z" Mar 11 08:59:12 crc kubenswrapper[4840]: I0311 08:59:12.240729 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xn47g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6b1fe1a-6473-41f8-a45f-aaaa148c1412\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fd2c3055a64a29b
e3fef394c48782227519d0d445ca2dc38a6f9093f7698b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a889eccbb6a0cbc75c8ed8ddeffb1713f30c667182a4ba135863c46bd6b81a10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a889eccbb6a0cbc75c8ed8ddeffb1713f30c667182a4ba135863c46bd6b81a10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c2ce0ba741f2aad38db697e3bf19c9e912296fd737657feace60a93382e2f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c2ce0ba741f2aad38db697e3bf19c9e912296fd737657feace60a93382e2f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98d87bbbdfbd9708e04ae69bfe137afd132af84db3b4328625bca81dfc385d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b
64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98d87bbbdfbd9708e04ae69bfe137afd132af84db3b4328625bca81dfc385d92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b25179d0a842def4bdb185e250d96cabd12909f5bc849f186d714721ba20a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b25179d0a842def4bdb185e250d96cabd12909f5bc849f186d714721ba20a11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://156a271cc5e6bd9b7ea68a0a1486ba9460f2d31812bb42a1c06a9e79439e1c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://156a271cc5e6bd9b7ea68a0a1486ba9460f2d31812bb42a1c06a9e79439e1c16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62a49366d891284607145968679e0c211b99a48d955857bdf0577792e0fc1f18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"read
y\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62a49366d891284607145968679e0c211b99a48d955857bdf0577792e0fc1f18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xn47g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:59:12Z is after 2025-08-24T17:21:41Z" Mar 11 08:59:12 crc kubenswrapper[4840]: I0311 08:59:12.257536 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vcb9n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c1678fd-7741-474b-9c8e-3008d3570921\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://affb76bb8b08157d415301fc93f48fe8c675137a1f6087bcade89dc117748638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-46dj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vcb9n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:59:12Z is after 2025-08-24T17:21:41Z" Mar 11 08:59:12 crc kubenswrapper[4840]: I0311 08:59:12.257828 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/6e87442b-4d54-472c-bad6-e2086c95df50-metrics-certs\") pod \"network-metrics-daemon-gjgkz\" (UID: \"6e87442b-4d54-472c-bad6-e2086c95df50\") " pod="openshift-multus/network-metrics-daemon-gjgkz" Mar 11 08:59:12 crc kubenswrapper[4840]: E0311 08:59:12.258072 4840 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 11 08:59:12 crc kubenswrapper[4840]: E0311 08:59:12.258162 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6e87442b-4d54-472c-bad6-e2086c95df50-metrics-certs podName:6e87442b-4d54-472c-bad6-e2086c95df50 nodeName:}" failed. No retries permitted until 2026-03-11 08:59:44.258137668 +0000 UTC m=+182.923807493 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6e87442b-4d54-472c-bad6-e2086c95df50-metrics-certs") pod "network-metrics-daemon-gjgkz" (UID: "6e87442b-4d54-472c-bad6-e2086c95df50") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 11 08:59:12 crc kubenswrapper[4840]: I0311 08:59:12.272260 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-brtht" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://153456b9c54a907a1dada25ba9e84b3fe9f9b44a6e6b8e96d512944fb56884a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8c8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0602373fe91019f9b20e701e042782a4eb5878a
e2df86375738bc605412a803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8c8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-brtht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:59:12Z is after 2025-08-24T17:21:41Z" Mar 11 08:59:12 crc kubenswrapper[4840]: I0311 08:59:12.285202 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7534e3a0-4e29-4bac-ad21-3907f9bfab68\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1d24a6fa03febf90b39f2984c5d8adc70cf5e0b019cba4c3d99830dc9e2cbdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5815f294ed0f63415a41c689e1aaea2d9dd1f0ee4999752e4e85065c08cb1d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5815f294ed0f63415a41c689e1aaea2d9dd1f0ee4999752e4e85065c08cb1d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:56:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:56:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:56:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:59:12Z is after 2025-08-24T17:21:41Z" Mar 11 08:59:12 crc kubenswrapper[4840]: I0311 08:59:12.297364 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9stqc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a97da03-4135-4850-9393-640dffab4289\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57ffc84765682190c64926aa41837eb62aed5f155016eda6cec84aca86e1ac31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7j7fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf5b701d5975fea1d305cf5ef501224f312a6
a3afd03977a691910a5acaf9ce7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7j7fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9stqc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:59:12Z is after 2025-08-24T17:21:41Z" Mar 11 08:59:12 crc kubenswrapper[4840]: I0311 08:59:12.310123 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gjgkz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e87442b-4d54-472c-bad6-e2086c95df50\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4h65t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4h65t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gjgkz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:59:12Z is after 2025-08-24T17:21:41Z" Mar 11 08:59:12 crc 
kubenswrapper[4840]: I0311 08:59:12.341688 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"935336e2-294b-4982-83f9-718806d14e5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://047c371ede152b2bfc450a373d2c3668e92dfed75022ebf12644c116db589373\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://910e2e8b981e2ab6212cd615b1c5134fde5d4cf4f85220c2613e3e301f99293c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa38a6bbbd74dad77c64f6f59df35d12881619da7319501839cf0f1eb44c65cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a22746f21587fd8fd3d4a7350442c72a3131d5a9f5e661ff05d78377c5a00cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://430dc74cf3f3dcf9a87782869390e477899521c6ff0e704ba6272a017b90081d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaf42bd09d527f5ce4ce8b0619c9b61f56ad6a2a5edf92cac68897a2ada84b24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0413db34ab0410c5d6e2822520410d9db275b1c4bd9ba1f7343a0c31befc9b0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0413db34ab0410c5d6e2822520410d9db275b1c4bd9ba1f7343a0c31befc9b0b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-11T08:58:57Z\\\",\\\"message\\\":\\\"r/api_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver/api\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, 
AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.37\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0311 08:58:57.060984 7161 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:56Z \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-7c2zl_openshift-ovn-kubernetes(935336e2-294b-4982-83f9-718806d14e5c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1338b0d6cdb9527554429be2e6fdec5c0b98075978344d168fd6e363eb12c879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63877eb44dae815dbd71053c89313ba836c2fdd90cc3d6d299526c027887e19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63877eb44dae815db
d71053c89313ba836c2fdd90cc3d6d299526c027887e19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7c2zl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:59:12Z is after 2025-08-24T17:21:41Z" Mar 11 08:59:12 crc kubenswrapper[4840]: I0311 08:59:12.358528 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:59:12Z is after 2025-08-24T17:21:41Z" Mar 11 08:59:12 crc kubenswrapper[4840]: I0311 08:59:12.373798 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4tjtn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c3b7839-5a3d-42f4-a871-1baa77d4f6a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3703388f56cc4d0f73dcdc31a5fae2dd6cc275b3a86ae8ca3733190a08bac57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5b8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4tjtn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:59:12Z is after 2025-08-24T17:21:41Z" Mar 11 08:59:12 crc kubenswrapper[4840]: I0311 08:59:12.391919 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:59:12Z is after 2025-08-24T17:21:41Z" Mar 11 08:59:12 crc kubenswrapper[4840]: I0311 08:59:12.966991 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 11 08:59:12 crc kubenswrapper[4840]: I0311 08:59:12.967223 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:59:12 crc kubenswrapper[4840]: E0311 08:59:12.967372 4840 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-11 09:00:16.967319534 +0000 UTC m=+215.632989359 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:59:12 crc kubenswrapper[4840]: E0311 08:59:12.967404 4840 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 11 08:59:12 crc kubenswrapper[4840]: E0311 08:59:12.967548 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-11 09:00:16.96753218 +0000 UTC m=+215.633202265 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 11 08:59:12 crc kubenswrapper[4840]: I0311 08:59:12.967594 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:59:12 crc kubenswrapper[4840]: E0311 08:59:12.967845 4840 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 11 08:59:12 crc kubenswrapper[4840]: E0311 08:59:12.967912 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-11 09:00:16.96789355 +0000 UTC m=+215.633563585 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 11 08:59:13 crc kubenswrapper[4840]: I0311 08:59:13.059441 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:59:13 crc kubenswrapper[4840]: I0311 08:59:13.059547 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 08:59:13 crc kubenswrapper[4840]: E0311 08:59:13.059631 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 11 08:59:13 crc kubenswrapper[4840]: I0311 08:59:13.059559 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:59:13 crc kubenswrapper[4840]: E0311 08:59:13.059777 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 11 08:59:13 crc kubenswrapper[4840]: E0311 08:59:13.059901 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 11 08:59:13 crc kubenswrapper[4840]: I0311 08:59:13.068275 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:59:13 crc kubenswrapper[4840]: I0311 08:59:13.068336 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 08:59:13 crc kubenswrapper[4840]: E0311 08:59:13.068549 4840 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 11 08:59:13 crc kubenswrapper[4840]: E0311 08:59:13.068592 4840 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 11 08:59:13 crc kubenswrapper[4840]: E0311 08:59:13.068611 4840 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 11 08:59:13 crc kubenswrapper[4840]: E0311 08:59:13.068549 4840 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 11 08:59:13 crc kubenswrapper[4840]: E0311 08:59:13.068690 4840 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 11 08:59:13 crc kubenswrapper[4840]: E0311 08:59:13.068701 4840 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 11 08:59:13 crc kubenswrapper[4840]: E0311 08:59:13.068674 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-11 09:00:17.06865363 +0000 UTC m=+215.734323445 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 11 08:59:13 crc kubenswrapper[4840]: E0311 08:59:13.068752 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-11 09:00:17.068732222 +0000 UTC m=+215.734402037 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 11 08:59:14 crc kubenswrapper[4840]: I0311 08:59:14.059693 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gjgkz" Mar 11 08:59:14 crc kubenswrapper[4840]: E0311 08:59:14.059910 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gjgkz" podUID="6e87442b-4d54-472c-bad6-e2086c95df50" Mar 11 08:59:14 crc kubenswrapper[4840]: I0311 08:59:14.857768 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-vcb9n_0c1678fd-7741-474b-9c8e-3008d3570921/kube-multus/0.log" Mar 11 08:59:14 crc kubenswrapper[4840]: I0311 08:59:14.857841 4840 generic.go:334] "Generic (PLEG): container finished" podID="0c1678fd-7741-474b-9c8e-3008d3570921" containerID="affb76bb8b08157d415301fc93f48fe8c675137a1f6087bcade89dc117748638" exitCode=1 Mar 11 08:59:14 crc kubenswrapper[4840]: I0311 08:59:14.857888 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-vcb9n" event={"ID":"0c1678fd-7741-474b-9c8e-3008d3570921","Type":"ContainerDied","Data":"affb76bb8b08157d415301fc93f48fe8c675137a1f6087bcade89dc117748638"} Mar 11 08:59:14 crc kubenswrapper[4840]: I0311 08:59:14.858675 4840 scope.go:117] "RemoveContainer" 
containerID="affb76bb8b08157d415301fc93f48fe8c675137a1f6087bcade89dc117748638" Mar 11 08:59:14 crc kubenswrapper[4840]: I0311 08:59:14.871288 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jlzht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97907402-fb5a-4fb4-80ac-5b600527c547\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2004ebf76b86d594e220fd1f85f945d24094ff26400ba17e286e95b2e3a8d7a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"nam
e\\\":\\\"kube-api-access-vjngl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jlzht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:59:14Z is after 2025-08-24T17:21:41Z" Mar 11 08:59:14 crc kubenswrapper[4840]: I0311 08:59:14.895700 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xn47g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6b1fe1a-6473-41f8-a45f-aaaa148c1412\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fd2c3055a64a29be3fef394c48782227519d0d445ca2dc38a6f9
093f7698b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a889eccbb6a0cbc75c8ed8ddeffb1713f30c667182a4ba135863c46bd6b81a10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a889eccbb6a0cbc75c8ed8ddeffb1713f30c667182a4ba135863c46bd6b81a10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/r
un/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c2ce0ba741f2aad38db697e3bf19c9e912296fd737657feace60a93382e2f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c2ce0ba741f2aad38db697e3bf19c9e912296fd737657feace60a93382e2f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98d87bbbdfbd9708e04ae69bfe137afd132af84db3b4328625bca81dfc385d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98d87bbbdfbd9708e04ae69bfe137afd132af84db3b4328625bca81dfc385d92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b25179d0a842def4bdb185e250d96cabd12909f5bc849f186d714721ba20a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b25179d0a842def4bdb185e250d96cabd12909f5bc849f186d714721ba20a11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly
\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://156a271cc5e6bd9b7ea68a0a1486ba9460f2d31812bb42a1c06a9e79439e1c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://156a271cc5e6bd9b7ea68a0a1486ba9460f2d31812bb42a1c06a9e79439e1c16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62a49366d891284607145968679e0c211b99a48d955857bdf0577792e0fc1f18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\
"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62a49366d891284607145968679e0c211b99a48d955857bdf0577792e0fc1f18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xn47g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:59:14Z is after 2025-08-24T17:21:41Z" Mar 11 08:59:14 crc kubenswrapper[4840]: I0311 08:59:14.916595 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vcb9n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c1678fd-7741-474b-9c8e-3008d3570921\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:59:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:59:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://affb76bb8b08157d415301fc93f48fe8c675137a1f6087bcade89dc117748638\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://affb76bb8b08157d415301fc93f48fe8c675137a1f6087bcade89dc117748638\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-11T08:59:14Z\\\",\\\"message\\\":\\\"2026-03-11T08:58:28+00:00 [cnibincopy] Successfully copied files in 
/usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_3702d317-44f5-47fd-87f2-58e3c78c07d0\\\\n2026-03-11T08:58:28+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_3702d317-44f5-47fd-87f2-58e3c78c07d0 to /host/opt/cni/bin/\\\\n2026-03-11T08:58:29Z [verbose] multus-daemon started\\\\n2026-03-11T08:58:29Z [verbose] Readiness Indicator file check\\\\n2026-03-11T08:59:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/c
ni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-46dj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vcb9n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:59:14Z is after 2025-08-24T17:21:41Z" Mar 11 08:59:14 crc kubenswrapper[4840]: I0311 08:59:14.931084 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-brtht" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://153456b9c54a907a1dada25ba9e84b3fe9f9b44a6e6b8e96d512944fb56884a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8c8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0602373fe91019f9b20e701e042782a4eb5878a
e2df86375738bc605412a803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8c8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-brtht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:59:14Z is after 2025-08-24T17:21:41Z" Mar 11 08:59:14 crc kubenswrapper[4840]: I0311 08:59:14.945967 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7534e3a0-4e29-4bac-ad21-3907f9bfab68\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1d24a6fa03febf90b39f2984c5d8adc70cf5e0b019cba4c3d99830dc9e2cbdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5815f294ed0f63415a41c689e1aaea2d9dd1f0ee4999752e4e85065c08cb1d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5815f294ed0f63415a41c689e1aaea2d9dd1f0ee4999752e4e85065c08cb1d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:56:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:56:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:56:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:59:14Z is after 2025-08-24T17:21:41Z" Mar 11 08:59:14 crc kubenswrapper[4840]: I0311 08:59:14.966686 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c3b2bde-8421-4e22-85ab-8b651c65bc9e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ed311d75feec58f86d1d9f435c6115a463b4e7cd3003b6dff8447360271b6a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33384f01914fa6428a8c359b3de0d20963b933f5c6d47519f059e48f85c9f4c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca2279f323c0a6baf645b62c496c38d3d2ad4efc5033a0819ed1d58f4d862e10\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe2e8e966f7348e19e40d9ac00cabea3da1a67e4d7b149c094505622bf5d6cbc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58277efdbfe7c94ad785e7d31bb5ba7313d04bf930896d6ded66fd44dd6239b5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-11T08:57:44Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0311 08:57:44.699595 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0311 08:57:44.699747 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0311 08:57:44.700340 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2673359297/tls.crt::/tmp/serving-cert-2673359297/tls.key\\\\\\\"\\\\nI0311 08:57:44.923694 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0311 08:57:44.927612 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0311 08:57:44.927650 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0311 08:57:44.927690 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0311 08:57:44.927701 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0311 08:57:44.934515 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0311 08:57:44.934548 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0311 08:57:44.934556 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:57:44.934567 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:57:44.934574 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0311 08:57:44.934577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0311 08:57:44.934581 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0311 08:57:44.934584 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0311 08:57:44.935676 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:57:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bd25fbbac425c9ba1169b1106b9ac77a80739a003bd795033d691ee273e0d3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e27a2ab44a237582284cb5f55d2651f7b5d39c199fdb62a4a65be9921e86945c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e27a2ab44a237582284cb5f55d2651f7b5d
39c199fdb62a4a65be9921e86945c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:56:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:56:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:56:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:59:14Z is after 2025-08-24T17:21:41Z" Mar 11 08:59:14 crc kubenswrapper[4840]: I0311 08:59:14.987183 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a26aaac35b1d936b4ca0b8a96fccd108317a1456b2bac1c21eb28017ac6fd32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-11T08:59:14Z is after 2025-08-24T17:21:41Z" Mar 11 08:59:15 crc kubenswrapper[4840]: I0311 08:59:15.009955 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"935336e2-294b-4982-83f9-718806d14e5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://047c371ede152b2bfc450a373d2c3668e92dfed75022ebf12644c116db589373\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://910e2e8b981e2ab6212cd615b1c5134fde5d4cf4f85220c2613e3e301f99293c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa38a6bbbd74dad77c64f6f59df35d12881619da7319501839cf0f1eb44c65cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a22746f21587fd8fd3d4a7350442c72a3131d5a9f5e661ff05d78377c5a00cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://430dc74cf3f3dcf9a87782869390e477899521c6ff0e704ba6272a017b90081d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaf42bd09d527f5ce4ce8b0619c9b61f56ad6a2a5edf92cac68897a2ada84b24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0413db34ab0410c5d6e2822520410d9db275b1c4bd9ba1f7343a0c31befc9b0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0413db34ab0410c5d6e2822520410d9db275b1c4bd9ba1f7343a0c31befc9b0b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-11T08:58:57Z\\\",\\\"message\\\":\\\"r/api_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver/api\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, 
AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.37\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0311 08:58:57.060984 7161 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:56Z \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-7c2zl_openshift-ovn-kubernetes(935336e2-294b-4982-83f9-718806d14e5c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1338b0d6cdb9527554429be2e6fdec5c0b98075978344d168fd6e363eb12c879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63877eb44dae815dbd71053c89313ba836c2fdd90cc3d6d299526c027887e19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63877eb44dae815db
d71053c89313ba836c2fdd90cc3d6d299526c027887e19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7c2zl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:59:15Z is after 2025-08-24T17:21:41Z" Mar 11 08:59:15 crc kubenswrapper[4840]: I0311 08:59:15.027136 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9stqc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a97da03-4135-4850-9393-640dffab4289\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57ffc84765682190c64926aa41837eb62aed5f155016eda6cec84aca86e1ac31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7j7fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf5b701d5975fea1d305cf5ef501224f312a6
a3afd03977a691910a5acaf9ce7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7j7fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9stqc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:59:15Z is after 2025-08-24T17:21:41Z" Mar 11 08:59:15 crc kubenswrapper[4840]: I0311 08:59:15.037978 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gjgkz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e87442b-4d54-472c-bad6-e2086c95df50\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4h65t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4h65t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gjgkz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:59:15Z is after 2025-08-24T17:21:41Z" Mar 11 08:59:15 crc 
kubenswrapper[4840]: I0311 08:59:15.050918 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:59:15Z is after 2025-08-24T17:21:41Z" Mar 11 08:59:15 crc kubenswrapper[4840]: I0311 08:59:15.059130 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 08:59:15 crc kubenswrapper[4840]: I0311 08:59:15.059182 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:59:15 crc kubenswrapper[4840]: E0311 08:59:15.059280 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 11 08:59:15 crc kubenswrapper[4840]: I0311 08:59:15.059151 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:59:15 crc kubenswrapper[4840]: E0311 08:59:15.059440 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 11 08:59:15 crc kubenswrapper[4840]: E0311 08:59:15.059524 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 11 08:59:15 crc kubenswrapper[4840]: I0311 08:59:15.063930 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"message\\\":\\\"containers 
with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:59:15Z is after 2025-08-24T17:21:41Z" Mar 11 08:59:15 crc kubenswrapper[4840]: I0311 08:59:15.077040 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4tjtn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c3b7839-5a3d-42f4-a871-1baa77d4f6a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3703388f56cc4d0f73dcdc31a5fae2dd6cc275b3a86ae8ca3733190a08bac57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5b8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4tjtn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:59:15Z is after 2025-08-24T17:21:41Z" Mar 11 08:59:15 crc kubenswrapper[4840]: I0311 08:59:15.091489 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49dae5ae-5197-4a82-b8d8-b81f34646421\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:57:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:57:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://958f66fb4312fb869f6679ae90fca57b2ffb5b5f1946ddc4e1d4345b2a291016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9be0868d10e5abebdb9b4fba34e5c202c8cd14fbc120551405b9709a816dc4c4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-11T08:57:44Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0311 08:57:14.519755 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0311 08:57:14.520776 1 observer_polling.go:159] Starting file observer\\\\nI0311 08:57:14.521843 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0311 08:57:14.522678 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0311 08:57:38.865344 1 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials\\\\nI0311 08:57:44.035576 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0311 08:57:44.035815 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:57:14Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:57:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8267a35ad5296c0a02a207b535f1fe5f48c725dfb4ab7254fc2833e84148eff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7deb8c4d6c80705675e84dfb26c8041214e85b2c425ce6d2f6883fb2ab191c2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91a1312d85b3716ac13f37e631c5c53b4d5541614972597efb39c406e32f738a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:56:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:59:15Z is after 2025-08-24T17:21:41Z" Mar 11 08:59:15 crc kubenswrapper[4840]: I0311 08:59:15.106125 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb68ea79d13c044390345ef25093ba46a60cb10b989c5acd69c63cafc1c4631f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-11T08:59:15Z is after 2025-08-24T17:21:41Z" Mar 11 08:59:15 crc kubenswrapper[4840]: I0311 08:59:15.122762 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccc0ba5b-4f84-48fc-a334-73db829e45a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68fa952dc486db4733cdc74294744871c4e8e27f3a797c3bb641c93bc1ba7549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"c
ontainerID\\\":\\\"cri-o://191f6779c704bc04c6db18ab7604aa56472fa73c65a49ab54f8213a17dfc89dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b8b376bd2621e314b7488c7f19023eb2927deea932f4764289fa82d1309f6d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://656c404aff0b9e454e59bceb73d79f5d045ac4c05de3dee6f0f38019bf78fac4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"i
mageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://656c404aff0b9e454e59bceb73d79f5d045ac4c05de3dee6f0f38019bf78fac4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:56:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:56:43Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:56:42Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:59:15Z is after 2025-08-24T17:21:41Z" Mar 11 08:59:15 crc kubenswrapper[4840]: I0311 08:59:15.140980 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:59:15Z is after 2025-08-24T17:21:41Z" Mar 11 08:59:15 crc kubenswrapper[4840]: I0311 08:59:15.154729 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a735ab91afbbc50a948e293cb4907a10212b9a77c9e7506e21d75a4de4c74c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a498e6ab267b758bd2ba69dcbd7e6af3089b6c38bbf50dea80f1304c2190f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:59:15Z is after 2025-08-24T17:21:41Z" Mar 11 08:59:15 crc kubenswrapper[4840]: I0311 08:59:15.862665 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-vcb9n_0c1678fd-7741-474b-9c8e-3008d3570921/kube-multus/0.log" Mar 11 08:59:15 crc kubenswrapper[4840]: I0311 08:59:15.862717 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-vcb9n" event={"ID":"0c1678fd-7741-474b-9c8e-3008d3570921","Type":"ContainerStarted","Data":"ae57d680327e6f0eb22304dbeba1a9a8e001326b065bfd0ae0266dcb5e561d87"} Mar 11 08:59:15 crc kubenswrapper[4840]: I0311 08:59:15.875200 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:13Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fb68ea79d13c044390345ef25093ba46a60cb10b989c5acd69c63cafc1c4631f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-11T08:59:15Z is after 2025-08-24T17:21:41Z" Mar 11 08:59:15 crc kubenswrapper[4840]: I0311 08:59:15.886417 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49dae5ae-5197-4a82-b8d8-b81f34646421\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:57:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:57:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://958f66fb4312fb869f6679ae90fca57b2ffb5b5f1946ddc4e1d4345b2a291016\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9be0868d10e5abebdb9b4fba34e5c202c8cd14fbc120551405b9709a816dc4c4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-11T08:57:44Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ 
'[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0311 08:57:14.519755 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0311 08:57:14.520776 1 observer_polling.go:159] Starting file observer\\\\nI0311 08:57:14.521843 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0311 08:57:14.522678 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0311 08:57:38.865344 1 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials\\\\nI0311 08:57:44.035576 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0311 08:57:44.035815 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:57:14Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:57:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8267a35ad5296c0a02a207b535f1fe5f48c725dfb4ab7254fc2833e84148eff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7deb8c4d6c80705675e84dfb26c8041214e85b2c425ce6d2f6883fb2ab191c2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91a1312d85b3716ac13f37e631c5c53b4d5541614972597efb39c406e32f738a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:56:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:59:15Z is after 2025-08-24T17:21:41Z" Mar 11 08:59:15 crc kubenswrapper[4840]: I0311 08:59:15.900174 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:59:15Z is after 2025-08-24T17:21:41Z" Mar 11 08:59:15 crc kubenswrapper[4840]: I0311 08:59:15.914489 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a735ab91afbbc50a948e293cb4907a10212b9a77c9e7506e21d75a4de4c74c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8a498e6ab267b758bd2ba69dcbd7e6af3089b6c38bbf50dea80f1304c2190f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:59:15Z is after 2025-08-24T17:21:41Z" Mar 11 08:59:15 crc kubenswrapper[4840]: I0311 08:59:15.927046 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ccc0ba5b-4f84-48fc-a334-73db829e45a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68fa952dc486db4733cdc74294744871c4e8e27f3a797c3bb641c93bc1ba7549\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://191f6779c704bc04c6db18ab7604aa56472fa73c65a49ab54f8213a17dfc89dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b8b376bd2621e314b7488c7f19023eb2927deea932f4764289fa82d1309f6d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://656c404aff0b9e454e59bceb73d79f5d045ac4c05de3dee6f0f38019bf78fac4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://656c404aff0b9e454e59bceb73d79f5d045ac4c05de3dee6f0f38019bf78fac4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:56:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:56:43Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:56:42Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:59:15Z is after 2025-08-24T17:21:41Z" Mar 11 08:59:15 crc kubenswrapper[4840]: I0311 08:59:15.944797 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3c3b2bde-8421-4e22-85ab-8b651c65bc9e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ed311d75feec58f86d1d9f435c6115a463b4e7cd3003b6dff8447360271b6a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33384f01914fa6428a8c359b3de0d20963b933f5c6d47519f059e48f85c9f4c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca2279f323c0a6baf645b62c496c38d3d2ad4efc5033a0819ed1d58f4d862e10\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe2e8e966f7348e19e40d9ac00cabea3da1a67e4d7b149c094505622bf5d6cbc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58277efdbfe7c94ad785e7d31bb5ba7313d04bf930896d6ded66fd44dd6239b5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-11T08:57:44Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0311 08:57:44.699595 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0311 08:57:44.699747 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0311 08:57:44.700340 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2673359297/tls.crt::/tmp/serving-cert-2673359297/tls.key\\\\\\\"\\\\nI0311 08:57:44.923694 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0311 08:57:44.927612 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0311 08:57:44.927650 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0311 08:57:44.927690 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0311 08:57:44.927701 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0311 08:57:44.934515 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0311 08:57:44.934548 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0311 08:57:44.934556 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:57:44.934567 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0311 08:57:44.934574 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0311 08:57:44.934577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0311 08:57:44.934581 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0311 08:57:44.934584 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0311 08:57:44.935676 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:57:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bd25fbbac425c9ba1169b1106b9ac77a80739a003bd795033d691ee273e0d3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e27a2ab44a237582284cb5f55d2651f7b5d39c199fdb62a4a65be9921e86945c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e27a2ab44a237582284cb5f55d2651f7b5d
39c199fdb62a4a65be9921e86945c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:56:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:56:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:56:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:59:15Z is after 2025-08-24T17:21:41Z" Mar 11 08:59:15 crc kubenswrapper[4840]: I0311 08:59:15.958439 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a26aaac35b1d936b4ca0b8a96fccd108317a1456b2bac1c21eb28017ac6fd32c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-11T08:59:15Z is after 2025-08-24T17:21:41Z" Mar 11 08:59:15 crc kubenswrapper[4840]: I0311 08:59:15.968726 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jlzht" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"97907402-fb5a-4fb4-80ac-5b600527c547\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2004ebf76b86d594e220fd1f85f945d24094ff26400ba17e286e95b2e3a8d7a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vjngl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jlzht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:59:15Z is after 2025-08-24T17:21:41Z" Mar 11 08:59:15 crc kubenswrapper[4840]: I0311 08:59:15.990689 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xn47g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6b1fe1a-6473-41f8-a45f-aaaa148c1412\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572fd2c3055a64a29b
e3fef394c48782227519d0d445ca2dc38a6f9093f7698b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a889eccbb6a0cbc75c8ed8ddeffb1713f30c667182a4ba135863c46bd6b81a10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a889eccbb6a0cbc75c8ed8ddeffb1713f30c667182a4ba135863c46bd6b81a10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13c2ce0ba741f2aad38db697e3bf19c9e912296fd737657feace60a93382e2f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13c2ce0ba741f2aad38db697e3bf19c9e912296fd737657feace60a93382e2f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://98d87bbbdfbd9708e04ae69bfe137afd132af84db3b4328625bca81dfc385d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b
64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98d87bbbdfbd9708e04ae69bfe137afd132af84db3b4328625bca81dfc385d92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b25179d0a842def4bdb185e250d96cabd12909f5bc849f186d714721ba20a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b25179d0a842def4bdb185e250d96cabd12909f5bc849f186d714721ba20a11\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://156a271cc5e6bd9b7ea68a0a1486ba9460f2d31812bb42a1c06a9e79439e1c16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://156a271cc5e6bd9b7ea68a0a1486ba9460f2d31812bb42a1c06a9e79439e1c16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62a49366d891284607145968679e0c211b99a48d955857bdf0577792e0fc1f18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"read
y\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62a49366d891284607145968679e0c211b99a48d955857bdf0577792e0fc1f18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cfcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xn47g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:59:15Z is after 2025-08-24T17:21:41Z" Mar 11 08:59:16 crc kubenswrapper[4840]: I0311 08:59:16.010113 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vcb9n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c1678fd-7741-474b-9c8e-3008d3570921\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:59:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae57d680327e6f0eb22304dbeba1a9a8e001326b065bfd0ae0266dcb5e561d87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://affb76bb8b08157d415301fc93f48fe8c675137a1f6087bcade89dc117748638\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-11T08:59:14Z\\\",\\\"message\\\":\\\"2026-03-11T08:58:28+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_3702d317-44f5-47fd-87f2-58e3c78c07d0\\\\n2026-03-11T08:58:28+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_3702d317-44f5-47fd-87f2-58e3c78c07d0 to /host/opt/cni/bin/\\\\n2026-03-11T08:58:29Z [verbose] multus-daemon started\\\\n2026-03-11T08:58:29Z [verbose] 
Readiness Indicator file check\\\\n2026-03-11T08:59:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:59:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-46dj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vcb9n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:59:16Z is after 2025-08-24T17:21:41Z" Mar 11 08:59:16 crc kubenswrapper[4840]: I0311 08:59:16.025103 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-brtht" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://153456b9c54a907a1dada25ba9e84b3fe9f9b44a6e6b8e96d512944fb56884a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8c8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0602373fe91019f9b20e701e042782a4eb5878a
e2df86375738bc605412a803\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8c8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-brtht\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:59:16Z is after 2025-08-24T17:21:41Z" Mar 11 08:59:16 crc kubenswrapper[4840]: I0311 08:59:16.036549 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7534e3a0-4e29-4bac-ad21-3907f9bfab68\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:56:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1d24a6fa03febf90b39f2984c5d8adc70cf5e0b019cba4c3d99830dc9e2cbdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:56:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f5815f294ed0f63415a41c689e1aaea2d9dd1f0ee4999752e4e85065c08cb1d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5815f294ed0f63415a41c689e1aaea2d9dd1f0ee4999752e4e85065c08cb1d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:56:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:56:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:56:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:59:16Z is after 2025-08-24T17:21:41Z" Mar 11 08:59:16 crc kubenswrapper[4840]: I0311 08:59:16.047706 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9stqc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a97da03-4135-4850-9393-640dffab4289\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57ffc84765682190c64926aa41837eb62aed5f155016eda6cec84aca86e1ac31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7j7fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf5b701d5975fea1d305cf5ef501224f312a6
a3afd03977a691910a5acaf9ce7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7j7fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-9stqc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:59:16Z is after 2025-08-24T17:21:41Z" Mar 11 08:59:16 crc kubenswrapper[4840]: I0311 08:59:16.059405 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gjgkz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e87442b-4d54-472c-bad6-e2086c95df50\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4h65t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4h65t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:40Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gjgkz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:59:16Z is after 2025-08-24T17:21:41Z" Mar 11 08:59:16 crc 
kubenswrapper[4840]: I0311 08:59:16.059572 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gjgkz" Mar 11 08:59:16 crc kubenswrapper[4840]: E0311 08:59:16.059764 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gjgkz" podUID="6e87442b-4d54-472c-bad6-e2086c95df50" Mar 11 08:59:16 crc kubenswrapper[4840]: I0311 08:59:16.081029 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"935336e2-294b-4982-83f9-718806d14e5c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://047c371ede152b2bfc450a373d2c3668e92dfed75022ebf12644c116db589373\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://910e2e8b981e2ab6212cd615b1c5134fde5d4cf4f85220c2613e3e301f99293c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa38a6bbbd74dad77c64f6f59df35d12881619da7319501839cf0f1eb44c65cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a22746f21587fd8fd3d4a7350442c72a3131d5a9f5e661ff05d78377c5a00cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://430dc74cf3f3dcf9a87782869390e477899521c6ff0e704ba6272a017b90081d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaf42bd09d527f5ce4ce8b0619c9b61f56ad6a2a5edf92cac68897a2ada84b24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0413db34ab0410c5d6e2822520410d9db275b1c4bd9ba1f7343a0c31befc9b0b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0413db34ab0410c5d6e2822520410d9db275b1c4bd9ba1f7343a0c31befc9b0b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-11T08:58:57Z\\\",\\\"message\\\":\\\"r/api_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver/api\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, 
AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.37\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0311 08:58:57.060984 7161 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:58:56Z \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-7c2zl_openshift-ovn-kubernetes(935336e2-294b-4982-83f9-718806d14e5c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1338b0d6cdb9527554429be2e6fdec5c0b98075978344d168fd6e363eb12c879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c63877eb44dae815dbd71053c89313ba836c2fdd90cc3d6d299526c027887e19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c63877eb44dae815db
d71053c89313ba836c2fdd90cc3d6d299526c027887e19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-11T08:58:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-11T08:58:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrc2z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7c2zl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:59:16Z is after 2025-08-24T17:21:41Z" Mar 11 08:59:16 crc kubenswrapper[4840]: I0311 08:59:16.093533 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:59:16Z is after 2025-08-24T17:21:41Z" Mar 11 08:59:16 crc kubenswrapper[4840]: I0311 08:59:16.102734 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4tjtn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c3b7839-5a3d-42f4-a871-1baa77d4f6a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3703388f56cc4d0f73dcdc31a5fae2dd6cc275b3a86ae8ca3733190a08bac57a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-11T08:58:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q5b8p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-11T08:58:33Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4tjtn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:59:16Z is after 2025-08-24T17:21:41Z" Mar 11 08:59:16 crc kubenswrapper[4840]: I0311 08:59:16.112764 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-11T08:58:09Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-11T08:59:16Z is after 2025-08-24T17:21:41Z" Mar 11 08:59:17 crc kubenswrapper[4840]: I0311 08:59:17.059924 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:59:17 crc kubenswrapper[4840]: I0311 08:59:17.060139 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 08:59:17 crc kubenswrapper[4840]: E0311 08:59:17.060579 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 11 08:59:17 crc kubenswrapper[4840]: I0311 08:59:17.060730 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:59:17 crc kubenswrapper[4840]: E0311 08:59:17.060968 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 11 08:59:17 crc kubenswrapper[4840]: E0311 08:59:17.061142 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 11 08:59:17 crc kubenswrapper[4840]: E0311 08:59:17.146745 4840 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 11 08:59:18 crc kubenswrapper[4840]: I0311 08:59:18.060196 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gjgkz" Mar 11 08:59:18 crc kubenswrapper[4840]: E0311 08:59:18.060442 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gjgkz" podUID="6e87442b-4d54-472c-bad6-e2086c95df50" Mar 11 08:59:19 crc kubenswrapper[4840]: I0311 08:59:19.060276 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:59:19 crc kubenswrapper[4840]: I0311 08:59:19.060399 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:59:19 crc kubenswrapper[4840]: I0311 08:59:19.061194 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 08:59:19 crc kubenswrapper[4840]: E0311 08:59:19.061421 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 11 08:59:19 crc kubenswrapper[4840]: E0311 08:59:19.061614 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 11 08:59:19 crc kubenswrapper[4840]: E0311 08:59:19.061836 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 11 08:59:20 crc kubenswrapper[4840]: I0311 08:59:20.060301 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gjgkz" Mar 11 08:59:20 crc kubenswrapper[4840]: E0311 08:59:20.060619 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gjgkz" podUID="6e87442b-4d54-472c-bad6-e2086c95df50" Mar 11 08:59:21 crc kubenswrapper[4840]: I0311 08:59:21.059973 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:59:21 crc kubenswrapper[4840]: I0311 08:59:21.060710 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:59:21 crc kubenswrapper[4840]: I0311 08:59:21.061700 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 08:59:21 crc kubenswrapper[4840]: E0311 08:59:21.060920 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 11 08:59:21 crc kubenswrapper[4840]: E0311 08:59:21.061930 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 11 08:59:21 crc kubenswrapper[4840]: E0311 08:59:21.062130 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 11 08:59:21 crc kubenswrapper[4840]: I0311 08:59:21.324384 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 11 08:59:21 crc kubenswrapper[4840]: I0311 08:59:21.324451 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 11 08:59:21 crc kubenswrapper[4840]: I0311 08:59:21.324486 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 11 08:59:21 crc kubenswrapper[4840]: I0311 08:59:21.324510 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 11 08:59:21 crc kubenswrapper[4840]: I0311 08:59:21.324527 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-11T08:59:21Z","lastTransitionTime":"2026-03-11T08:59:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 11 08:59:21 crc kubenswrapper[4840]: I0311 08:59:21.396863 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-vddsr"] Mar 11 08:59:21 crc kubenswrapper[4840]: I0311 08:59:21.397497 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vddsr" Mar 11 08:59:21 crc kubenswrapper[4840]: I0311 08:59:21.401038 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Mar 11 08:59:21 crc kubenswrapper[4840]: I0311 08:59:21.401718 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Mar 11 08:59:21 crc kubenswrapper[4840]: I0311 08:59:21.402006 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Mar 11 08:59:21 crc kubenswrapper[4840]: I0311 08:59:21.402230 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Mar 11 08:59:21 crc kubenswrapper[4840]: I0311 08:59:21.470205 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-4tjtn" podStartSLOduration=91.470173254 podStartE2EDuration="1m31.470173254s" podCreationTimestamp="2026-03-11 08:57:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:59:21.469894216 +0000 UTC m=+160.135564071" watchObservedRunningTime="2026-03-11 08:59:21.470173254 +0000 UTC m=+160.135843079" Mar 11 08:59:21 crc kubenswrapper[4840]: I0311 08:59:21.478668 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b5f220b-09ef-44d2-ad3e-8e62f656c5c4-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-vddsr\" (UID: \"0b5f220b-09ef-44d2-ad3e-8e62f656c5c4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vddsr" Mar 11 08:59:21 crc kubenswrapper[4840]: I0311 08:59:21.478743 4840 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/0b5f220b-09ef-44d2-ad3e-8e62f656c5c4-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-vddsr\" (UID: \"0b5f220b-09ef-44d2-ad3e-8e62f656c5c4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vddsr" Mar 11 08:59:21 crc kubenswrapper[4840]: I0311 08:59:21.478793 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b5f220b-09ef-44d2-ad3e-8e62f656c5c4-service-ca\") pod \"cluster-version-operator-5c965bbfc6-vddsr\" (UID: \"0b5f220b-09ef-44d2-ad3e-8e62f656c5c4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vddsr" Mar 11 08:59:21 crc kubenswrapper[4840]: I0311 08:59:21.478930 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/0b5f220b-09ef-44d2-ad3e-8e62f656c5c4-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-vddsr\" (UID: \"0b5f220b-09ef-44d2-ad3e-8e62f656c5c4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vddsr" Mar 11 08:59:21 crc kubenswrapper[4840]: I0311 08:59:21.478973 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b5f220b-09ef-44d2-ad3e-8e62f656c5c4-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-vddsr\" (UID: \"0b5f220b-09ef-44d2-ad3e-8e62f656c5c4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vddsr" Mar 11 08:59:21 crc kubenswrapper[4840]: I0311 08:59:21.485884 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=10.485858053 podStartE2EDuration="10.485858053s" 
podCreationTimestamp="2026-03-11 08:59:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:59:21.485697228 +0000 UTC m=+160.151367063" watchObservedRunningTime="2026-03-11 08:59:21.485858053 +0000 UTC m=+160.151527878" Mar 11 08:59:21 crc kubenswrapper[4840]: I0311 08:59:21.529046 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=30.529015916 podStartE2EDuration="30.529015916s" podCreationTimestamp="2026-03-11 08:58:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:59:21.528895942 +0000 UTC m=+160.194565797" watchObservedRunningTime="2026-03-11 08:59:21.529015916 +0000 UTC m=+160.194685771" Mar 11 08:59:21 crc kubenswrapper[4840]: I0311 08:59:21.580291 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b5f220b-09ef-44d2-ad3e-8e62f656c5c4-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-vddsr\" (UID: \"0b5f220b-09ef-44d2-ad3e-8e62f656c5c4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vddsr" Mar 11 08:59:21 crc kubenswrapper[4840]: I0311 08:59:21.580374 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/0b5f220b-09ef-44d2-ad3e-8e62f656c5c4-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-vddsr\" (UID: \"0b5f220b-09ef-44d2-ad3e-8e62f656c5c4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vddsr" Mar 11 08:59:21 crc kubenswrapper[4840]: I0311 08:59:21.580404 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/0b5f220b-09ef-44d2-ad3e-8e62f656c5c4-service-ca\") pod \"cluster-version-operator-5c965bbfc6-vddsr\" (UID: \"0b5f220b-09ef-44d2-ad3e-8e62f656c5c4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vddsr" Mar 11 08:59:21 crc kubenswrapper[4840]: I0311 08:59:21.580551 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/0b5f220b-09ef-44d2-ad3e-8e62f656c5c4-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-vddsr\" (UID: \"0b5f220b-09ef-44d2-ad3e-8e62f656c5c4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vddsr" Mar 11 08:59:21 crc kubenswrapper[4840]: I0311 08:59:21.580550 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/0b5f220b-09ef-44d2-ad3e-8e62f656c5c4-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-vddsr\" (UID: \"0b5f220b-09ef-44d2-ad3e-8e62f656c5c4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vddsr" Mar 11 08:59:21 crc kubenswrapper[4840]: I0311 08:59:21.580582 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b5f220b-09ef-44d2-ad3e-8e62f656c5c4-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-vddsr\" (UID: \"0b5f220b-09ef-44d2-ad3e-8e62f656c5c4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vddsr" Mar 11 08:59:21 crc kubenswrapper[4840]: I0311 08:59:21.580780 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/0b5f220b-09ef-44d2-ad3e-8e62f656c5c4-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-vddsr\" (UID: \"0b5f220b-09ef-44d2-ad3e-8e62f656c5c4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vddsr" Mar 11 08:59:21 crc 
kubenswrapper[4840]: I0311 08:59:21.581937 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b5f220b-09ef-44d2-ad3e-8e62f656c5c4-service-ca\") pod \"cluster-version-operator-5c965bbfc6-vddsr\" (UID: \"0b5f220b-09ef-44d2-ad3e-8e62f656c5c4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vddsr" Mar 11 08:59:21 crc kubenswrapper[4840]: I0311 08:59:21.589952 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b5f220b-09ef-44d2-ad3e-8e62f656c5c4-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-vddsr\" (UID: \"0b5f220b-09ef-44d2-ad3e-8e62f656c5c4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vddsr" Mar 11 08:59:21 crc kubenswrapper[4840]: I0311 08:59:21.590990 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-vcb9n" podStartSLOduration=91.590960626 podStartE2EDuration="1m31.590960626s" podCreationTimestamp="2026-03-11 08:57:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:59:21.590704469 +0000 UTC m=+160.256374334" watchObservedRunningTime="2026-03-11 08:59:21.590960626 +0000 UTC m=+160.256630451" Mar 11 08:59:21 crc kubenswrapper[4840]: I0311 08:59:21.603665 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b5f220b-09ef-44d2-ad3e-8e62f656c5c4-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-vddsr\" (UID: \"0b5f220b-09ef-44d2-ad3e-8e62f656c5c4\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vddsr" Mar 11 08:59:21 crc kubenswrapper[4840]: I0311 08:59:21.613805 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-machine-config-operator/machine-config-daemon-brtht" podStartSLOduration=91.613752547 podStartE2EDuration="1m31.613752547s" podCreationTimestamp="2026-03-11 08:57:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:59:21.612842381 +0000 UTC m=+160.278512236" watchObservedRunningTime="2026-03-11 08:59:21.613752547 +0000 UTC m=+160.279422372" Mar 11 08:59:21 crc kubenswrapper[4840]: I0311 08:59:21.629628 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=12.62959504 podStartE2EDuration="12.62959504s" podCreationTimestamp="2026-03-11 08:59:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:59:21.629296182 +0000 UTC m=+160.294966007" watchObservedRunningTime="2026-03-11 08:59:21.62959504 +0000 UTC m=+160.295264885" Mar 11 08:59:21 crc kubenswrapper[4840]: I0311 08:59:21.653339 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=63.653304388 podStartE2EDuration="1m3.653304388s" podCreationTimestamp="2026-03-11 08:58:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:59:21.652251368 +0000 UTC m=+160.317921193" watchObservedRunningTime="2026-03-11 08:59:21.653304388 +0000 UTC m=+160.318974253" Mar 11 08:59:21 crc kubenswrapper[4840]: I0311 08:59:21.687606 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-jlzht" podStartSLOduration=91.687581677 podStartE2EDuration="1m31.687581677s" podCreationTimestamp="2026-03-11 08:57:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:59:21.687566777 +0000 UTC m=+160.353236632" watchObservedRunningTime="2026-03-11 08:59:21.687581677 +0000 UTC m=+160.353251502" Mar 11 08:59:21 crc kubenswrapper[4840]: I0311 08:59:21.726018 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vddsr" Mar 11 08:59:21 crc kubenswrapper[4840]: I0311 08:59:21.738612 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-xn47g" podStartSLOduration=91.738579545 podStartE2EDuration="1m31.738579545s" podCreationTimestamp="2026-03-11 08:57:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:59:21.738495382 +0000 UTC m=+160.404165207" watchObservedRunningTime="2026-03-11 08:59:21.738579545 +0000 UTC m=+160.404249380" Mar 11 08:59:21 crc kubenswrapper[4840]: W0311 08:59:21.748732 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b5f220b_09ef_44d2_ad3e_8e62f656c5c4.slice/crio-ec46c43f1e86346c2252a5495fdd4c4ca3dc855b06b101b834972a662c304ec3 WatchSource:0}: Error finding container ec46c43f1e86346c2252a5495fdd4c4ca3dc855b06b101b834972a662c304ec3: Status 404 returned error can't find the container with id ec46c43f1e86346c2252a5495fdd4c4ca3dc855b06b101b834972a662c304ec3 Mar 11 08:59:21 crc kubenswrapper[4840]: I0311 08:59:21.806569 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-9stqc" podStartSLOduration=91.806538006 podStartE2EDuration="1m31.806538006s" podCreationTimestamp="2026-03-11 08:57:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-03-11 08:59:21.805566448 +0000 UTC m=+160.471236263" watchObservedRunningTime="2026-03-11 08:59:21.806538006 +0000 UTC m=+160.472207821" Mar 11 08:59:21 crc kubenswrapper[4840]: I0311 08:59:21.887310 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vddsr" event={"ID":"0b5f220b-09ef-44d2-ad3e-8e62f656c5c4","Type":"ContainerStarted","Data":"9b91a343970fcd7f8be9b4387a044a4ddcc4a20dae640a5159ca13de08e3929d"} Mar 11 08:59:21 crc kubenswrapper[4840]: I0311 08:59:21.887371 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vddsr" event={"ID":"0b5f220b-09ef-44d2-ad3e-8e62f656c5c4","Type":"ContainerStarted","Data":"ec46c43f1e86346c2252a5495fdd4c4ca3dc855b06b101b834972a662c304ec3"} Mar 11 08:59:22 crc kubenswrapper[4840]: I0311 08:59:22.059555 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gjgkz" Mar 11 08:59:22 crc kubenswrapper[4840]: E0311 08:59:22.060297 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gjgkz" podUID="6e87442b-4d54-472c-bad6-e2086c95df50" Mar 11 08:59:22 crc kubenswrapper[4840]: I0311 08:59:22.113139 4840 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Mar 11 08:59:22 crc kubenswrapper[4840]: I0311 08:59:22.122759 4840 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Mar 11 08:59:22 crc kubenswrapper[4840]: E0311 08:59:22.147706 4840 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 11 08:59:23 crc kubenswrapper[4840]: I0311 08:59:23.060023 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 08:59:23 crc kubenswrapper[4840]: I0311 08:59:23.060101 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:59:23 crc kubenswrapper[4840]: E0311 08:59:23.060836 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 11 08:59:23 crc kubenswrapper[4840]: I0311 08:59:23.060141 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:59:23 crc kubenswrapper[4840]: E0311 08:59:23.061040 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 11 08:59:23 crc kubenswrapper[4840]: E0311 08:59:23.061153 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 11 08:59:24 crc kubenswrapper[4840]: I0311 08:59:24.060009 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gjgkz" Mar 11 08:59:24 crc kubenswrapper[4840]: E0311 08:59:24.061187 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gjgkz" podUID="6e87442b-4d54-472c-bad6-e2086c95df50" Mar 11 08:59:24 crc kubenswrapper[4840]: I0311 08:59:24.062053 4840 scope.go:117] "RemoveContainer" containerID="0413db34ab0410c5d6e2822520410d9db275b1c4bd9ba1f7343a0c31befc9b0b" Mar 11 08:59:24 crc kubenswrapper[4840]: I0311 08:59:24.900930 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7c2zl_935336e2-294b-4982-83f9-718806d14e5c/ovnkube-controller/2.log" Mar 11 08:59:24 crc kubenswrapper[4840]: I0311 08:59:24.903942 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" event={"ID":"935336e2-294b-4982-83f9-718806d14e5c","Type":"ContainerStarted","Data":"7060c9a7b65d0c2aaa8611500978abfa8668a64c500761121997e0d627a1c924"} Mar 11 08:59:24 crc kubenswrapper[4840]: I0311 08:59:24.904530 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" Mar 11 08:59:24 crc kubenswrapper[4840]: I0311 08:59:24.930838 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" podStartSLOduration=94.930817867 podStartE2EDuration="1m34.930817867s" podCreationTimestamp="2026-03-11 08:57:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:59:24.930351233 +0000 UTC m=+163.596021048" watchObservedRunningTime="2026-03-11 08:59:24.930817867 +0000 UTC m=+163.596487692" Mar 11 08:59:24 crc kubenswrapper[4840]: I0311 08:59:24.931343 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vddsr" podStartSLOduration=94.931333922 podStartE2EDuration="1m34.931333922s" podCreationTimestamp="2026-03-11 08:57:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:59:21.901906001 +0000 UTC m=+160.567575856" watchObservedRunningTime="2026-03-11 08:59:24.931333922 +0000 UTC m=+163.597003747" Mar 11 08:59:25 crc kubenswrapper[4840]: I0311 08:59:25.059790 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:59:25 crc kubenswrapper[4840]: I0311 08:59:25.059872 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:59:25 crc kubenswrapper[4840]: I0311 08:59:25.059823 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 08:59:25 crc kubenswrapper[4840]: E0311 08:59:25.060029 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 11 08:59:25 crc kubenswrapper[4840]: E0311 08:59:25.060706 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 11 08:59:25 crc kubenswrapper[4840]: E0311 08:59:25.060803 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 11 08:59:25 crc kubenswrapper[4840]: I0311 08:59:25.155976 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-gjgkz"] Mar 11 08:59:25 crc kubenswrapper[4840]: I0311 08:59:25.156120 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gjgkz" Mar 11 08:59:25 crc kubenswrapper[4840]: E0311 08:59:25.156212 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gjgkz" podUID="6e87442b-4d54-472c-bad6-e2086c95df50" Mar 11 08:59:27 crc kubenswrapper[4840]: I0311 08:59:27.060014 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 08:59:27 crc kubenswrapper[4840]: I0311 08:59:27.060115 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:59:27 crc kubenswrapper[4840]: I0311 08:59:27.060155 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-gjgkz" Mar 11 08:59:27 crc kubenswrapper[4840]: I0311 08:59:27.060272 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:59:27 crc kubenswrapper[4840]: E0311 08:59:27.060651 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 11 08:59:27 crc kubenswrapper[4840]: E0311 08:59:27.060725 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gjgkz" podUID="6e87442b-4d54-472c-bad6-e2086c95df50" Mar 11 08:59:27 crc kubenswrapper[4840]: E0311 08:59:27.060765 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 11 08:59:27 crc kubenswrapper[4840]: E0311 08:59:27.060790 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 11 08:59:27 crc kubenswrapper[4840]: E0311 08:59:27.148486 4840 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 11 08:59:29 crc kubenswrapper[4840]: I0311 08:59:29.060062 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gjgkz" Mar 11 08:59:29 crc kubenswrapper[4840]: E0311 08:59:29.060251 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gjgkz" podUID="6e87442b-4d54-472c-bad6-e2086c95df50" Mar 11 08:59:29 crc kubenswrapper[4840]: I0311 08:59:29.060322 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:59:29 crc kubenswrapper[4840]: I0311 08:59:29.060346 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:59:29 crc kubenswrapper[4840]: E0311 08:59:29.060500 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 11 08:59:29 crc kubenswrapper[4840]: E0311 08:59:29.060654 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 11 08:59:29 crc kubenswrapper[4840]: I0311 08:59:29.060720 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 08:59:29 crc kubenswrapper[4840]: E0311 08:59:29.060797 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 11 08:59:30 crc kubenswrapper[4840]: I0311 08:59:30.085946 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Mar 11 08:59:31 crc kubenswrapper[4840]: I0311 08:59:31.059812 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:59:31 crc kubenswrapper[4840]: E0311 08:59:31.060313 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 11 08:59:31 crc kubenswrapper[4840]: I0311 08:59:31.059848 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:59:31 crc kubenswrapper[4840]: I0311 08:59:31.059832 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 08:59:31 crc kubenswrapper[4840]: E0311 08:59:31.060377 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 11 08:59:31 crc kubenswrapper[4840]: I0311 08:59:31.059843 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gjgkz" Mar 11 08:59:31 crc kubenswrapper[4840]: E0311 08:59:31.060436 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gjgkz" podUID="6e87442b-4d54-472c-bad6-e2086c95df50" Mar 11 08:59:31 crc kubenswrapper[4840]: E0311 08:59:31.060490 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 11 08:59:32 crc kubenswrapper[4840]: I0311 08:59:32.094779 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=2.094747543 podStartE2EDuration="2.094747543s" podCreationTimestamp="2026-03-11 08:59:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:59:32.094085954 +0000 UTC m=+170.759755789" watchObservedRunningTime="2026-03-11 08:59:32.094747543 +0000 UTC m=+170.760417398" Mar 11 08:59:33 crc kubenswrapper[4840]: I0311 08:59:33.059618 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 08:59:33 crc kubenswrapper[4840]: I0311 08:59:33.059695 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gjgkz" Mar 11 08:59:33 crc kubenswrapper[4840]: I0311 08:59:33.059777 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 08:59:33 crc kubenswrapper[4840]: I0311 08:59:33.059670 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 08:59:33 crc kubenswrapper[4840]: I0311 08:59:33.062393 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Mar 11 08:59:33 crc kubenswrapper[4840]: I0311 08:59:33.062908 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Mar 11 08:59:33 crc kubenswrapper[4840]: I0311 08:59:33.063125 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Mar 11 08:59:33 crc kubenswrapper[4840]: I0311 08:59:33.063167 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Mar 11 08:59:33 crc kubenswrapper[4840]: I0311 08:59:33.063247 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Mar 11 08:59:33 crc kubenswrapper[4840]: I0311 08:59:33.065067 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.522694 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.581760 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-j4zs8"] Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.582724 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-6mpbx"] Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.583245 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-6mpbx" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.583816 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-j4zs8" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.584908 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-hgxgb"] Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.585712 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-hgxgb" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.588548 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-bhjvl"] Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.589553 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-bhjvl" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.590112 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nwt6s"] Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.591069 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nwt6s" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.592313 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-lzd44"] Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.592940 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lzd44" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.594684 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-d5xpl"] Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.595395 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-d5xpl" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.601307 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.601737 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.601946 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.602192 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.602442 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.602454 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.603170 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.603386 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.603640 4840 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.605282 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.606593 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.606647 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.609867 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.610156 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.610330 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.610925 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.611786 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.612687 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.612944 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.612685 4840 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.612741 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.613252 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.613390 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.629667 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-mwnmw"] Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.631888 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.651706 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.651836 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.652125 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.652238 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.652262 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 
08:59:41.652338 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.652430 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.652737 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.653078 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mwnmw"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.653488 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.653538 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-nh5h5"]
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.653969 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.654218 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.654379 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-nh5h5"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.654503 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-dn5hs"]
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.654577 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.654634 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.654879 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.655125 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-dn5hs"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.655314 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e7e0d2ae-153e-43e7-b23d-e60aaeabb85c-etcd-client\") pod \"apiserver-76f77b778f-j4zs8\" (UID: \"e7e0d2ae-153e-43e7-b23d-e60aaeabb85c\") " pod="openshift-apiserver/apiserver-76f77b778f-j4zs8"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.655346 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/db966398-cd84-4fb6-bedf-f1f13c670ce8-serving-cert\") pod \"controller-manager-879f6c89f-6mpbx\" (UID: \"db966398-cd84-4fb6-bedf-f1f13c670ce8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6mpbx"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.655370 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e7e0d2ae-153e-43e7-b23d-e60aaeabb85c-trusted-ca-bundle\") pod \"apiserver-76f77b778f-j4zs8\" (UID: \"e7e0d2ae-153e-43e7-b23d-e60aaeabb85c\") " pod="openshift-apiserver/apiserver-76f77b778f-j4zs8"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.655386 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2878f5c-d2e1-4561-acae-3ea4ed26a5c0-serving-cert\") pod \"apiserver-7bbb656c7d-d5xpl\" (UID: \"f2878f5c-d2e1-4561-acae-3ea4ed26a5c0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-d5xpl"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.655401 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1993577b-4a86-4663-97e5-902753a07816-service-ca-bundle\") pod \"authentication-operator-69f744f599-bhjvl\" (UID: \"1993577b-4a86-4663-97e5-902753a07816\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bhjvl"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.655418 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5475l\" (UniqueName: \"kubernetes.io/projected/1993577b-4a86-4663-97e5-902753a07816-kube-api-access-5475l\") pod \"authentication-operator-69f744f599-bhjvl\" (UID: \"1993577b-4a86-4663-97e5-902753a07816\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bhjvl"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.655436 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5m6m\" (UniqueName: \"kubernetes.io/projected/f2878f5c-d2e1-4561-acae-3ea4ed26a5c0-kube-api-access-s5m6m\") pod \"apiserver-7bbb656c7d-d5xpl\" (UID: \"f2878f5c-d2e1-4561-acae-3ea4ed26a5c0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-d5xpl"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.655486 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e0d2ae-153e-43e7-b23d-e60aaeabb85c-serving-cert\") pod \"apiserver-76f77b778f-j4zs8\" (UID: \"e7e0d2ae-153e-43e7-b23d-e60aaeabb85c\") " pod="openshift-apiserver/apiserver-76f77b778f-j4zs8"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.655510 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b145694-7add-48aa-9321-56703922b613-config\") pod \"route-controller-manager-6576b87f9c-lzd44\" (UID: \"7b145694-7add-48aa-9321-56703922b613\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lzd44"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.655528 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b94670ce-123d-4562-b9ae-7a7fe898bff7-images\") pod \"machine-api-operator-5694c8668f-hgxgb\" (UID: \"b94670ce-123d-4562-b9ae-7a7fe898bff7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hgxgb"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.655541 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f2878f5c-d2e1-4561-acae-3ea4ed26a5c0-audit-policies\") pod \"apiserver-7bbb656c7d-d5xpl\" (UID: \"f2878f5c-d2e1-4561-acae-3ea4ed26a5c0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-d5xpl"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.655552 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.655556 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmr8k\" (UniqueName: \"kubernetes.io/projected/a5e0e773-1dac-4bc9-ac4c-5d6db746fee9-kube-api-access-cmr8k\") pod \"openshift-apiserver-operator-796bbdcf4f-nwt6s\" (UID: \"a5e0e773-1dac-4bc9-ac4c-5d6db746fee9\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nwt6s"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.655571 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/e7e0d2ae-153e-43e7-b23d-e60aaeabb85c-image-import-ca\") pod \"apiserver-76f77b778f-j4zs8\" (UID: \"e7e0d2ae-153e-43e7-b23d-e60aaeabb85c\") " pod="openshift-apiserver/apiserver-76f77b778f-j4zs8"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.655595 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9cjl\" (UniqueName: \"kubernetes.io/projected/db966398-cd84-4fb6-bedf-f1f13c670ce8-kube-api-access-k9cjl\") pod \"controller-manager-879f6c89f-6mpbx\" (UID: \"db966398-cd84-4fb6-bedf-f1f13c670ce8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6mpbx"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.655612 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5e0e773-1dac-4bc9-ac4c-5d6db746fee9-config\") pod \"openshift-apiserver-operator-796bbdcf4f-nwt6s\" (UID: \"a5e0e773-1dac-4bc9-ac4c-5d6db746fee9\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nwt6s"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.655635 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f2878f5c-d2e1-4561-acae-3ea4ed26a5c0-audit-dir\") pod \"apiserver-7bbb656c7d-d5xpl\" (UID: \"f2878f5c-d2e1-4561-acae-3ea4ed26a5c0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-d5xpl"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.655650 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/db966398-cd84-4fb6-bedf-f1f13c670ce8-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-6mpbx\" (UID: \"db966398-cd84-4fb6-bedf-f1f13c670ce8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6mpbx"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.655669 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7b145694-7add-48aa-9321-56703922b613-client-ca\") pod \"route-controller-manager-6576b87f9c-lzd44\" (UID: \"7b145694-7add-48aa-9321-56703922b613\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lzd44"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.655688 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvklz\" (UniqueName: \"kubernetes.io/projected/7b145694-7add-48aa-9321-56703922b613-kube-api-access-gvklz\") pod \"route-controller-manager-6576b87f9c-lzd44\" (UID: \"7b145694-7add-48aa-9321-56703922b613\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lzd44"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.655713 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.655709 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f2878f5c-d2e1-4561-acae-3ea4ed26a5c0-encryption-config\") pod \"apiserver-7bbb656c7d-d5xpl\" (UID: \"f2878f5c-d2e1-4561-acae-3ea4ed26a5c0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-d5xpl"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.655794 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1993577b-4a86-4663-97e5-902753a07816-config\") pod \"authentication-operator-69f744f599-bhjvl\" (UID: \"1993577b-4a86-4663-97e5-902753a07816\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bhjvl"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.655805 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.655813 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e7e0d2ae-153e-43e7-b23d-e60aaeabb85c-node-pullsecrets\") pod \"apiserver-76f77b778f-j4zs8\" (UID: \"e7e0d2ae-153e-43e7-b23d-e60aaeabb85c\") " pod="openshift-apiserver/apiserver-76f77b778f-j4zs8"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.655830 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/e7e0d2ae-153e-43e7-b23d-e60aaeabb85c-audit\") pod \"apiserver-76f77b778f-j4zs8\" (UID: \"e7e0d2ae-153e-43e7-b23d-e60aaeabb85c\") " pod="openshift-apiserver/apiserver-76f77b778f-j4zs8"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.655870 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1993577b-4a86-4663-97e5-902753a07816-serving-cert\") pod \"authentication-operator-69f744f599-bhjvl\" (UID: \"1993577b-4a86-4663-97e5-902753a07816\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bhjvl"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.655926 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7b145694-7add-48aa-9321-56703922b613-serving-cert\") pod \"route-controller-manager-6576b87f9c-lzd44\" (UID: \"7b145694-7add-48aa-9321-56703922b613\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lzd44"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.655949 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzfks\" (UniqueName: \"kubernetes.io/projected/b94670ce-123d-4562-b9ae-7a7fe898bff7-kube-api-access-bzfks\") pod \"machine-api-operator-5694c8668f-hgxgb\" (UID: \"b94670ce-123d-4562-b9ae-7a7fe898bff7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hgxgb"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.655995 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e7e0d2ae-153e-43e7-b23d-e60aaeabb85c-encryption-config\") pod \"apiserver-76f77b778f-j4zs8\" (UID: \"e7e0d2ae-153e-43e7-b23d-e60aaeabb85c\") " pod="openshift-apiserver/apiserver-76f77b778f-j4zs8"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.656012 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e7e0d2ae-153e-43e7-b23d-e60aaeabb85c-audit-dir\") pod \"apiserver-76f77b778f-j4zs8\" (UID: \"e7e0d2ae-153e-43e7-b23d-e60aaeabb85c\") " pod="openshift-apiserver/apiserver-76f77b778f-j4zs8"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.656028 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/b94670ce-123d-4562-b9ae-7a7fe898bff7-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-hgxgb\" (UID: \"b94670ce-123d-4562-b9ae-7a7fe898bff7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hgxgb"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.656046 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f2878f5c-d2e1-4561-acae-3ea4ed26a5c0-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-d5xpl\" (UID: \"f2878f5c-d2e1-4561-acae-3ea4ed26a5c0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-d5xpl"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.656062 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1993577b-4a86-4663-97e5-902753a07816-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-bhjvl\" (UID: \"1993577b-4a86-4663-97e5-902753a07816\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bhjvl"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.656081 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e0d2ae-153e-43e7-b23d-e60aaeabb85c-config\") pod \"apiserver-76f77b778f-j4zs8\" (UID: \"e7e0d2ae-153e-43e7-b23d-e60aaeabb85c\") " pod="openshift-apiserver/apiserver-76f77b778f-j4zs8"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.656101 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e7e0d2ae-153e-43e7-b23d-e60aaeabb85c-etcd-serving-ca\") pod \"apiserver-76f77b778f-j4zs8\" (UID: \"e7e0d2ae-153e-43e7-b23d-e60aaeabb85c\") " pod="openshift-apiserver/apiserver-76f77b778f-j4zs8"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.656123 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hrnh\" (UniqueName: \"kubernetes.io/projected/e7e0d2ae-153e-43e7-b23d-e60aaeabb85c-kube-api-access-4hrnh\") pod \"apiserver-76f77b778f-j4zs8\" (UID: \"e7e0d2ae-153e-43e7-b23d-e60aaeabb85c\") " pod="openshift-apiserver/apiserver-76f77b778f-j4zs8"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.656138 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f2878f5c-d2e1-4561-acae-3ea4ed26a5c0-etcd-client\") pod \"apiserver-7bbb656c7d-d5xpl\" (UID: \"f2878f5c-d2e1-4561-acae-3ea4ed26a5c0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-d5xpl"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.656154 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f2878f5c-d2e1-4561-acae-3ea4ed26a5c0-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-d5xpl\" (UID: \"f2878f5c-d2e1-4561-acae-3ea4ed26a5c0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-d5xpl"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.656172 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/db966398-cd84-4fb6-bedf-f1f13c670ce8-client-ca\") pod \"controller-manager-879f6c89f-6mpbx\" (UID: \"db966398-cd84-4fb6-bedf-f1f13c670ce8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6mpbx"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.656204 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db966398-cd84-4fb6-bedf-f1f13c670ce8-config\") pod \"controller-manager-879f6c89f-6mpbx\" (UID: \"db966398-cd84-4fb6-bedf-f1f13c670ce8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6mpbx"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.656220 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a5e0e773-1dac-4bc9-ac4c-5d6db746fee9-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-nwt6s\" (UID: \"a5e0e773-1dac-4bc9-ac4c-5d6db746fee9\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nwt6s"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.656234 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b94670ce-123d-4562-b9ae-7a7fe898bff7-config\") pod \"machine-api-operator-5694c8668f-hgxgb\" (UID: \"b94670ce-123d-4562-b9ae-7a7fe898bff7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hgxgb"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.656370 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.656533 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.656682 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.656735 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.656741 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-2p584"]
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.656797 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.656888 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.656905 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.657134 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-2p584"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.657966 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-td29c"]
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.658311 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-td29c"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.659107 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-9k5xp"]
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.659907 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-9k5xp"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.660357 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.661155 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-2gglt"]
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.661693 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-2gglt"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.667914 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.671258 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-zm7f9"]
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.671551 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.672634 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.672996 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.673198 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-zm7f9"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.673252 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.673499 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.673673 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.674362 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.674386 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.674498 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.674529 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.674574 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.674596 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.674378 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.674655 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.674721 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.674752 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.675850 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.676325 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.677276 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.677518 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.677680 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.677861 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ktl5r"]
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.678039 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.678347 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.678451 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ktl5r"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.678517 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.678658 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.678753 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.678869 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.678902 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.678788 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.679141 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.680131 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.680655 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.680655 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.680709 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.680734 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.680882 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.680772 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.681495 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.681610 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.681906 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.683728 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-72h56"]
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.684332 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-72h56"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.684727 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-xkq7s"]
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.688902 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.708771 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.708997 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nwt6s"]
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.709222 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-xkq7s"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.718682 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.725183 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-j4zs8"]
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.725426 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.725543 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-hgxgb"]
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.726342 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.726670 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.726871 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.727024 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.727175 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-k4c7d"]
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.728155 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-k4c7d"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.732068 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-bhjvl"]
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.732145 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-pdrj8"]
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.732237 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.732874 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.732933 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-wwr6r"]
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.733724 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.733773 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.734848 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5wlch"]
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.735224 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-zqtkm"]
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.735727 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rjm9q"]
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.735732 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.736261 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rjm9q"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.736608 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-98fj7"]
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.736741 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-pdrj8"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.736759 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5wlch"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.736796 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zqtkm"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.736800 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-wwr6r"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.737843 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.740670 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.740793 4840 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-98fj7" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.740066 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2kbzn"] Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.744333 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-d6cv4"] Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.744815 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-nz4rs"] Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.745829 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-8ql29"] Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.746053 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2kbzn" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.746576 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-d6cv4" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.747427 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-nz4rs" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.747670 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-79btk"] Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.747910 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.748405 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-79btk" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.748485 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-8ql29" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.750087 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pcgsw"] Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.751111 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pvx88"] Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.751294 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pcgsw" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.752183 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-xt7ft"] Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.752286 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pvx88" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.753044 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xt7ft" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.755793 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-mcz2j"] Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.756574 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-drmmh"] Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.757026 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-8pjx9"] Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.757707 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-drmmh" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.757849 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b94670ce-123d-4562-b9ae-7a7fe898bff7-images\") pod \"machine-api-operator-5694c8668f-hgxgb\" (UID: \"b94670ce-123d-4562-b9ae-7a7fe898bff7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hgxgb" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.757874 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f2878f5c-d2e1-4561-acae-3ea4ed26a5c0-audit-policies\") pod \"apiserver-7bbb656c7d-d5xpl\" (UID: \"f2878f5c-d2e1-4561-acae-3ea4ed26a5c0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-d5xpl" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.757901 4840 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2b03405c-0fe1-4c7f-ad2a-dcd0db280109-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-td29c\" (UID: \"2b03405c-0fe1-4c7f-ad2a-dcd0db280109\") " pod="openshift-authentication/oauth-openshift-558db77b4-td29c" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.757922 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ftrs\" (UniqueName: \"kubernetes.io/projected/78462b75-44da-4862-88c5-5cf892a91058-kube-api-access-8ftrs\") pod \"downloads-7954f5f757-9k5xp\" (UID: \"78462b75-44da-4862-88c5-5cf892a91058\") " pod="openshift-console/downloads-7954f5f757-9k5xp" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.757945 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/84bf2582-c99c-4ad3-b53c-37f357ba2cc5-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-72h56\" (UID: \"84bf2582-c99c-4ad3-b53c-37f357ba2cc5\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-72h56" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.757967 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/6138a548-439a-4af8-ad4d-a6ea89f686b7-etcd-ca\") pod \"etcd-operator-b45778765-zm7f9\" (UID: \"6138a548-439a-4af8-ad4d-a6ea89f686b7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zm7f9" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.757982 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b1d0c791-1d0c-4e11-91ce-bb352ce3fce1-serving-cert\") pod 
\"openshift-config-operator-7777fb866f-nh5h5\" (UID: \"b1d0c791-1d0c-4e11-91ce-bb352ce3fce1\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-nh5h5" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.757998 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/642bcc9a-edf9-4845-b6ad-0936618ec9b0-trusted-ca\") pod \"console-operator-58897d9998-2p584\" (UID: \"642bcc9a-edf9-4845-b6ad-0936618ec9b0\") " pod="openshift-console-operator/console-operator-58897d9998-2p584" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.758013 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-mcz2j" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.758017 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5dc5ef77-d18a-4474-a523-473f27166095-console-serving-cert\") pod \"console-f9d7485db-xkq7s\" (UID: \"5dc5ef77-d18a-4474-a523-473f27166095\") " pod="openshift-console/console-f9d7485db-xkq7s" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.758036 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmr8k\" (UniqueName: \"kubernetes.io/projected/a5e0e773-1dac-4bc9-ac4c-5d6db746fee9-kube-api-access-cmr8k\") pod \"openshift-apiserver-operator-796bbdcf4f-nwt6s\" (UID: \"a5e0e773-1dac-4bc9-ac4c-5d6db746fee9\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nwt6s" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.758055 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/e7e0d2ae-153e-43e7-b23d-e60aaeabb85c-image-import-ca\") pod \"apiserver-76f77b778f-j4zs8\" (UID: 
\"e7e0d2ae-153e-43e7-b23d-e60aaeabb85c\") " pod="openshift-apiserver/apiserver-76f77b778f-j4zs8" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.758081 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9cjl\" (UniqueName: \"kubernetes.io/projected/db966398-cd84-4fb6-bedf-f1f13c670ce8-kube-api-access-k9cjl\") pod \"controller-manager-879f6c89f-6mpbx\" (UID: \"db966398-cd84-4fb6-bedf-f1f13c670ce8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6mpbx" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.758107 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcgh8\" (UniqueName: \"kubernetes.io/projected/01a98c93-30e9-4161-ae55-553a6107a67f-kube-api-access-zcgh8\") pod \"openshift-controller-manager-operator-756b6f6bc6-ktl5r\" (UID: \"01a98c93-30e9-4161-ae55-553a6107a67f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ktl5r" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.758125 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8kxm\" (UniqueName: \"kubernetes.io/projected/2b03405c-0fe1-4c7f-ad2a-dcd0db280109-kube-api-access-b8kxm\") pod \"oauth-openshift-558db77b4-td29c\" (UID: \"2b03405c-0fe1-4c7f-ad2a-dcd0db280109\") " pod="openshift-authentication/oauth-openshift-558db77b4-td29c" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.758143 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5e0e773-1dac-4bc9-ac4c-5d6db746fee9-config\") pod \"openshift-apiserver-operator-796bbdcf4f-nwt6s\" (UID: \"a5e0e773-1dac-4bc9-ac4c-5d6db746fee9\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nwt6s" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.758159 4840 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/642bcc9a-edf9-4845-b6ad-0936618ec9b0-serving-cert\") pod \"console-operator-58897d9998-2p584\" (UID: \"642bcc9a-edf9-4845-b6ad-0936618ec9b0\") " pod="openshift-console-operator/console-operator-58897d9998-2p584" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.758177 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f2878f5c-d2e1-4561-acae-3ea4ed26a5c0-audit-dir\") pod \"apiserver-7bbb656c7d-d5xpl\" (UID: \"f2878f5c-d2e1-4561-acae-3ea4ed26a5c0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-d5xpl" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.758192 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/db966398-cd84-4fb6-bedf-f1f13c670ce8-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-6mpbx\" (UID: \"db966398-cd84-4fb6-bedf-f1f13c670ce8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6mpbx" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.758208 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6138a548-439a-4af8-ad4d-a6ea89f686b7-etcd-client\") pod \"etcd-operator-b45778765-zm7f9\" (UID: \"6138a548-439a-4af8-ad4d-a6ea89f686b7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zm7f9" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.758229 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7b145694-7add-48aa-9321-56703922b613-client-ca\") pod \"route-controller-manager-6576b87f9c-lzd44\" (UID: \"7b145694-7add-48aa-9321-56703922b613\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lzd44" Mar 
11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.758249 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvklz\" (UniqueName: \"kubernetes.io/projected/7b145694-7add-48aa-9321-56703922b613-kube-api-access-gvklz\") pod \"route-controller-manager-6576b87f9c-lzd44\" (UID: \"7b145694-7add-48aa-9321-56703922b613\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lzd44" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.758269 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2b03405c-0fe1-4c7f-ad2a-dcd0db280109-audit-dir\") pod \"oauth-openshift-558db77b4-td29c\" (UID: \"2b03405c-0fe1-4c7f-ad2a-dcd0db280109\") " pod="openshift-authentication/oauth-openshift-558db77b4-td29c" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.758286 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2b03405c-0fe1-4c7f-ad2a-dcd0db280109-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-td29c\" (UID: \"2b03405c-0fe1-4c7f-ad2a-dcd0db280109\") " pod="openshift-authentication/oauth-openshift-558db77b4-td29c" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.758307 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcbfq\" (UniqueName: \"kubernetes.io/projected/5dc5ef77-d18a-4474-a523-473f27166095-kube-api-access-zcbfq\") pod \"console-f9d7485db-xkq7s\" (UID: \"5dc5ef77-d18a-4474-a523-473f27166095\") " pod="openshift-console/console-f9d7485db-xkq7s" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.758324 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bqmv\" (UniqueName: 
\"kubernetes.io/projected/6138a548-439a-4af8-ad4d-a6ea89f686b7-kube-api-access-9bqmv\") pod \"etcd-operator-b45778765-zm7f9\" (UID: \"6138a548-439a-4af8-ad4d-a6ea89f686b7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zm7f9" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.758343 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5dc5ef77-d18a-4474-a523-473f27166095-service-ca\") pod \"console-f9d7485db-xkq7s\" (UID: \"5dc5ef77-d18a-4474-a523-473f27166095\") " pod="openshift-console/console-f9d7485db-xkq7s" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.758358 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f2878f5c-d2e1-4561-acae-3ea4ed26a5c0-encryption-config\") pod \"apiserver-7bbb656c7d-d5xpl\" (UID: \"f2878f5c-d2e1-4561-acae-3ea4ed26a5c0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-d5xpl" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.758375 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1993577b-4a86-4663-97e5-902753a07816-config\") pod \"authentication-operator-69f744f599-bhjvl\" (UID: \"1993577b-4a86-4663-97e5-902753a07816\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bhjvl" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.758396 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e7e0d2ae-153e-43e7-b23d-e60aaeabb85c-node-pullsecrets\") pod \"apiserver-76f77b778f-j4zs8\" (UID: \"e7e0d2ae-153e-43e7-b23d-e60aaeabb85c\") " pod="openshift-apiserver/apiserver-76f77b778f-j4zs8" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.758413 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"audit\" (UniqueName: \"kubernetes.io/configmap/e7e0d2ae-153e-43e7-b23d-e60aaeabb85c-audit\") pod \"apiserver-76f77b778f-j4zs8\" (UID: \"e7e0d2ae-153e-43e7-b23d-e60aaeabb85c\") " pod="openshift-apiserver/apiserver-76f77b778f-j4zs8" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.758428 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1993577b-4a86-4663-97e5-902753a07816-serving-cert\") pod \"authentication-operator-69f744f599-bhjvl\" (UID: \"1993577b-4a86-4663-97e5-902753a07816\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bhjvl" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.758446 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/b1d0c791-1d0c-4e11-91ce-bb352ce3fce1-available-featuregates\") pod \"openshift-config-operator-7777fb866f-nh5h5\" (UID: \"b1d0c791-1d0c-4e11-91ce-bb352ce3fce1\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-nh5h5" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.758461 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7b145694-7add-48aa-9321-56703922b613-serving-cert\") pod \"route-controller-manager-6576b87f9c-lzd44\" (UID: \"7b145694-7add-48aa-9321-56703922b613\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lzd44" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.758526 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpdhh\" (UniqueName: \"kubernetes.io/projected/bdc11055-ee85-488f-9812-536b5cd31e50-kube-api-access-wpdhh\") pod \"machine-approver-56656f9798-mwnmw\" (UID: \"bdc11055-ee85-488f-9812-536b5cd31e50\") " 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mwnmw" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.758544 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b97mp\" (UniqueName: \"kubernetes.io/projected/ed60e671-1c71-42fc-828d-51a4e85e3153-kube-api-access-b97mp\") pod \"cluster-samples-operator-665b6dd947-dn5hs\" (UID: \"ed60e671-1c71-42fc-828d-51a4e85e3153\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-dn5hs" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.758561 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2b03405c-0fe1-4c7f-ad2a-dcd0db280109-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-td29c\" (UID: \"2b03405c-0fe1-4c7f-ad2a-dcd0db280109\") " pod="openshift-authentication/oauth-openshift-558db77b4-td29c" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.758579 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bzfks\" (UniqueName: \"kubernetes.io/projected/b94670ce-123d-4562-b9ae-7a7fe898bff7-kube-api-access-bzfks\") pod \"machine-api-operator-5694c8668f-hgxgb\" (UID: \"b94670ce-123d-4562-b9ae-7a7fe898bff7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hgxgb" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.758612 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2b03405c-0fe1-4c7f-ad2a-dcd0db280109-audit-policies\") pod \"oauth-openshift-558db77b4-td29c\" (UID: \"2b03405c-0fe1-4c7f-ad2a-dcd0db280109\") " pod="openshift-authentication/oauth-openshift-558db77b4-td29c" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.758629 4840 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5dc5ef77-d18a-4474-a523-473f27166095-oauth-serving-cert\") pod \"console-f9d7485db-xkq7s\" (UID: \"5dc5ef77-d18a-4474-a523-473f27166095\") " pod="openshift-console/console-f9d7485db-xkq7s" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.758663 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e7e0d2ae-153e-43e7-b23d-e60aaeabb85c-encryption-config\") pod \"apiserver-76f77b778f-j4zs8\" (UID: \"e7e0d2ae-153e-43e7-b23d-e60aaeabb85c\") " pod="openshift-apiserver/apiserver-76f77b778f-j4zs8" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.758679 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bdc11055-ee85-488f-9812-536b5cd31e50-auth-proxy-config\") pod \"machine-approver-56656f9798-mwnmw\" (UID: \"bdc11055-ee85-488f-9812-536b5cd31e50\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mwnmw" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.758696 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/b94670ce-123d-4562-b9ae-7a7fe898bff7-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-hgxgb\" (UID: \"b94670ce-123d-4562-b9ae-7a7fe898bff7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hgxgb" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.758713 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f2878f5c-d2e1-4561-acae-3ea4ed26a5c0-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-d5xpl\" (UID: \"f2878f5c-d2e1-4561-acae-3ea4ed26a5c0\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-d5xpl" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.758733 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1993577b-4a86-4663-97e5-902753a07816-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-bhjvl\" (UID: \"1993577b-4a86-4663-97e5-902753a07816\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bhjvl" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.758754 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5dc5ef77-d18a-4474-a523-473f27166095-console-config\") pod \"console-f9d7485db-xkq7s\" (UID: \"5dc5ef77-d18a-4474-a523-473f27166095\") " pod="openshift-console/console-f9d7485db-xkq7s" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.758772 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e7e0d2ae-153e-43e7-b23d-e60aaeabb85c-audit-dir\") pod \"apiserver-76f77b778f-j4zs8\" (UID: \"e7e0d2ae-153e-43e7-b23d-e60aaeabb85c\") " pod="openshift-apiserver/apiserver-76f77b778f-j4zs8" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.758794 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e0d2ae-153e-43e7-b23d-e60aaeabb85c-config\") pod \"apiserver-76f77b778f-j4zs8\" (UID: \"e7e0d2ae-153e-43e7-b23d-e60aaeabb85c\") " pod="openshift-apiserver/apiserver-76f77b778f-j4zs8" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.758810 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e7e0d2ae-153e-43e7-b23d-e60aaeabb85c-etcd-serving-ca\") pod \"apiserver-76f77b778f-j4zs8\" (UID: 
\"e7e0d2ae-153e-43e7-b23d-e60aaeabb85c\") " pod="openshift-apiserver/apiserver-76f77b778f-j4zs8" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.758826 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6138a548-439a-4af8-ad4d-a6ea89f686b7-serving-cert\") pod \"etcd-operator-b45778765-zm7f9\" (UID: \"6138a548-439a-4af8-ad4d-a6ea89f686b7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zm7f9" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.758841 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gvmg\" (UniqueName: \"kubernetes.io/projected/642bcc9a-edf9-4845-b6ad-0936618ec9b0-kube-api-access-9gvmg\") pod \"console-operator-58897d9998-2p584\" (UID: \"642bcc9a-edf9-4845-b6ad-0936618ec9b0\") " pod="openshift-console-operator/console-operator-58897d9998-2p584" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.758864 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2b03405c-0fe1-4c7f-ad2a-dcd0db280109-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-td29c\" (UID: \"2b03405c-0fe1-4c7f-ad2a-dcd0db280109\") " pod="openshift-authentication/oauth-openshift-558db77b4-td29c" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.758881 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2b03405c-0fe1-4c7f-ad2a-dcd0db280109-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-td29c\" (UID: \"2b03405c-0fe1-4c7f-ad2a-dcd0db280109\") " pod="openshift-authentication/oauth-openshift-558db77b4-td29c" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.758920 4840 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5dc5ef77-d18a-4474-a523-473f27166095-trusted-ca-bundle\") pod \"console-f9d7485db-xkq7s\" (UID: \"5dc5ef77-d18a-4474-a523-473f27166095\") " pod="openshift-console/console-f9d7485db-xkq7s" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.758983 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-8pjx9" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.759497 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-k9hqk"] Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.760131 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6b2bw"] Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.760788 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29553645-gw77z"] Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.761152 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29553645-gw77z" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.761155 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-k9hqk" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.761434 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6b2bw" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.761582 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-lzd44"] Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.761994 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2b03405c-0fe1-4c7f-ad2a-dcd0db280109-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-td29c\" (UID: \"2b03405c-0fe1-4c7f-ad2a-dcd0db280109\") " pod="openshift-authentication/oauth-openshift-558db77b4-td29c" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.762025 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2b03405c-0fe1-4c7f-ad2a-dcd0db280109-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-td29c\" (UID: \"2b03405c-0fe1-4c7f-ad2a-dcd0db280109\") " pod="openshift-authentication/oauth-openshift-558db77b4-td29c" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.762055 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f2878f5c-d2e1-4561-acae-3ea4ed26a5c0-etcd-client\") pod \"apiserver-7bbb656c7d-d5xpl\" (UID: \"f2878f5c-d2e1-4561-acae-3ea4ed26a5c0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-d5xpl" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.762077 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f2878f5c-d2e1-4561-acae-3ea4ed26a5c0-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-d5xpl\" (UID: \"f2878f5c-d2e1-4561-acae-3ea4ed26a5c0\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-d5xpl" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.762099 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hrnh\" (UniqueName: \"kubernetes.io/projected/e7e0d2ae-153e-43e7-b23d-e60aaeabb85c-kube-api-access-4hrnh\") pod \"apiserver-76f77b778f-j4zs8\" (UID: \"e7e0d2ae-153e-43e7-b23d-e60aaeabb85c\") " pod="openshift-apiserver/apiserver-76f77b778f-j4zs8" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.762122 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bdc11055-ee85-488f-9812-536b5cd31e50-config\") pod \"machine-approver-56656f9798-mwnmw\" (UID: \"bdc11055-ee85-488f-9812-536b5cd31e50\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mwnmw" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.762145 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/db966398-cd84-4fb6-bedf-f1f13c670ce8-client-ca\") pod \"controller-manager-879f6c89f-6mpbx\" (UID: \"db966398-cd84-4fb6-bedf-f1f13c670ce8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6mpbx" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.762201 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db966398-cd84-4fb6-bedf-f1f13c670ce8-config\") pod \"controller-manager-879f6c89f-6mpbx\" (UID: \"db966398-cd84-4fb6-bedf-f1f13c670ce8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6mpbx" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.762220 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2745m\" (UniqueName: \"kubernetes.io/projected/b1d0c791-1d0c-4e11-91ce-bb352ce3fce1-kube-api-access-2745m\") 
pod \"openshift-config-operator-7777fb866f-nh5h5\" (UID: \"b1d0c791-1d0c-4e11-91ce-bb352ce3fce1\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-nh5h5" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.762243 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpnnj\" (UniqueName: \"kubernetes.io/projected/d62191a1-c0bf-4b73-8a5b-9a084143772c-kube-api-access-qpnnj\") pod \"dns-operator-744455d44c-2gglt\" (UID: \"d62191a1-c0bf-4b73-8a5b-9a084143772c\") " pod="openshift-dns-operator/dns-operator-744455d44c-2gglt" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.762267 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a5e0e773-1dac-4bc9-ac4c-5d6db746fee9-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-nwt6s\" (UID: \"a5e0e773-1dac-4bc9-ac4c-5d6db746fee9\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nwt6s" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.762284 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b94670ce-123d-4562-b9ae-7a7fe898bff7-config\") pod \"machine-api-operator-5694c8668f-hgxgb\" (UID: \"b94670ce-123d-4562-b9ae-7a7fe898bff7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hgxgb" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.762300 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01a98c93-30e9-4161-ae55-553a6107a67f-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-ktl5r\" (UID: \"01a98c93-30e9-4161-ae55-553a6107a67f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ktl5r" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.762317 4840 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/6138a548-439a-4af8-ad4d-a6ea89f686b7-etcd-service-ca\") pod \"etcd-operator-b45778765-zm7f9\" (UID: \"6138a548-439a-4af8-ad4d-a6ea89f686b7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zm7f9" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.762340 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e7e0d2ae-153e-43e7-b23d-e60aaeabb85c-etcd-client\") pod \"apiserver-76f77b778f-j4zs8\" (UID: \"e7e0d2ae-153e-43e7-b23d-e60aaeabb85c\") " pod="openshift-apiserver/apiserver-76f77b778f-j4zs8" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.762358 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/2b03405c-0fe1-4c7f-ad2a-dcd0db280109-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-td29c\" (UID: \"2b03405c-0fe1-4c7f-ad2a-dcd0db280109\") " pod="openshift-authentication/oauth-openshift-558db77b4-td29c" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.762376 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e7e0d2ae-153e-43e7-b23d-e60aaeabb85c-trusted-ca-bundle\") pod \"apiserver-76f77b778f-j4zs8\" (UID: \"e7e0d2ae-153e-43e7-b23d-e60aaeabb85c\") " pod="openshift-apiserver/apiserver-76f77b778f-j4zs8" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.762387 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-xkq7s"] Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.762392 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/f2878f5c-d2e1-4561-acae-3ea4ed26a5c0-serving-cert\") pod \"apiserver-7bbb656c7d-d5xpl\" (UID: \"f2878f5c-d2e1-4561-acae-3ea4ed26a5c0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-d5xpl" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.762435 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1993577b-4a86-4663-97e5-902753a07816-service-ca-bundle\") pod \"authentication-operator-69f744f599-bhjvl\" (UID: \"1993577b-4a86-4663-97e5-902753a07816\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bhjvl" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.762455 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/db966398-cd84-4fb6-bedf-f1f13c670ce8-serving-cert\") pod \"controller-manager-879f6c89f-6mpbx\" (UID: \"db966398-cd84-4fb6-bedf-f1f13c670ce8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6mpbx" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.762510 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/84bf2582-c99c-4ad3-b53c-37f357ba2cc5-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-72h56\" (UID: \"84bf2582-c99c-4ad3-b53c-37f357ba2cc5\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-72h56" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.762678 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e7e0d2ae-153e-43e7-b23d-e60aaeabb85c-etcd-serving-ca\") pod \"apiserver-76f77b778f-j4zs8\" (UID: \"e7e0d2ae-153e-43e7-b23d-e60aaeabb85c\") " pod="openshift-apiserver/apiserver-76f77b778f-j4zs8" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 
08:59:41.762776 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e0d2ae-153e-43e7-b23d-e60aaeabb85c-config\") pod \"apiserver-76f77b778f-j4zs8\" (UID: \"e7e0d2ae-153e-43e7-b23d-e60aaeabb85c\") " pod="openshift-apiserver/apiserver-76f77b778f-j4zs8" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.763223 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b94670ce-123d-4562-b9ae-7a7fe898bff7-images\") pod \"machine-api-operator-5694c8668f-hgxgb\" (UID: \"b94670ce-123d-4562-b9ae-7a7fe898bff7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hgxgb" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.763740 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-td29c"] Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.763776 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2b03405c-0fe1-4c7f-ad2a-dcd0db280109-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-td29c\" (UID: \"2b03405c-0fe1-4c7f-ad2a-dcd0db280109\") " pod="openshift-authentication/oauth-openshift-558db77b4-td29c" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.763837 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5475l\" (UniqueName: \"kubernetes.io/projected/1993577b-4a86-4663-97e5-902753a07816-kube-api-access-5475l\") pod \"authentication-operator-69f744f599-bhjvl\" (UID: \"1993577b-4a86-4663-97e5-902753a07816\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bhjvl" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.763889 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" 
(UniqueName: \"kubernetes.io/secret/bdc11055-ee85-488f-9812-536b5cd31e50-machine-approver-tls\") pod \"machine-approver-56656f9798-mwnmw\" (UID: \"bdc11055-ee85-488f-9812-536b5cd31e50\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mwnmw" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.763923 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/84bf2582-c99c-4ad3-b53c-37f357ba2cc5-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-72h56\" (UID: \"84bf2582-c99c-4ad3-b53c-37f357ba2cc5\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-72h56" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.763948 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5m6m\" (UniqueName: \"kubernetes.io/projected/f2878f5c-d2e1-4561-acae-3ea4ed26a5c0-kube-api-access-s5m6m\") pod \"apiserver-7bbb656c7d-d5xpl\" (UID: \"f2878f5c-d2e1-4561-acae-3ea4ed26a5c0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-d5xpl" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.763995 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01a98c93-30e9-4161-ae55-553a6107a67f-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-ktl5r\" (UID: \"01a98c93-30e9-4161-ae55-553a6107a67f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ktl5r" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.764024 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5dc5ef77-d18a-4474-a523-473f27166095-console-oauth-config\") pod \"console-f9d7485db-xkq7s\" (UID: \"5dc5ef77-d18a-4474-a523-473f27166095\") " 
pod="openshift-console/console-f9d7485db-xkq7s" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.764069 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d62191a1-c0bf-4b73-8a5b-9a084143772c-metrics-tls\") pod \"dns-operator-744455d44c-2gglt\" (UID: \"d62191a1-c0bf-4b73-8a5b-9a084143772c\") " pod="openshift-dns-operator/dns-operator-744455d44c-2gglt" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.771665 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e7e0d2ae-153e-43e7-b23d-e60aaeabb85c-node-pullsecrets\") pod \"apiserver-76f77b778f-j4zs8\" (UID: \"e7e0d2ae-153e-43e7-b23d-e60aaeabb85c\") " pod="openshift-apiserver/apiserver-76f77b778f-j4zs8" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.771762 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/e7e0d2ae-153e-43e7-b23d-e60aaeabb85c-image-import-ca\") pod \"apiserver-76f77b778f-j4zs8\" (UID: \"e7e0d2ae-153e-43e7-b23d-e60aaeabb85c\") " pod="openshift-apiserver/apiserver-76f77b778f-j4zs8" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.764089 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98l8z\" (UniqueName: \"kubernetes.io/projected/84bf2582-c99c-4ad3-b53c-37f357ba2cc5-kube-api-access-98l8z\") pod \"cluster-image-registry-operator-dc59b4c8b-72h56\" (UID: \"84bf2582-c99c-4ad3-b53c-37f357ba2cc5\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-72h56" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.771839 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/2b03405c-0fe1-4c7f-ad2a-dcd0db280109-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-td29c\" (UID: \"2b03405c-0fe1-4c7f-ad2a-dcd0db280109\") " pod="openshift-authentication/oauth-openshift-558db77b4-td29c" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.771874 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e0d2ae-153e-43e7-b23d-e60aaeabb85c-serving-cert\") pod \"apiserver-76f77b778f-j4zs8\" (UID: \"e7e0d2ae-153e-43e7-b23d-e60aaeabb85c\") " pod="openshift-apiserver/apiserver-76f77b778f-j4zs8" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.771898 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2b03405c-0fe1-4c7f-ad2a-dcd0db280109-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-td29c\" (UID: \"2b03405c-0fe1-4c7f-ad2a-dcd0db280109\") " pod="openshift-authentication/oauth-openshift-558db77b4-td29c" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.771970 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/ed60e671-1c71-42fc-828d-51a4e85e3153-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-dn5hs\" (UID: \"ed60e671-1c71-42fc-828d-51a4e85e3153\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-dn5hs" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.772005 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b145694-7add-48aa-9321-56703922b613-config\") pod \"route-controller-manager-6576b87f9c-lzd44\" (UID: \"7b145694-7add-48aa-9321-56703922b613\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lzd44" Mar 
11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.772027 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6138a548-439a-4af8-ad4d-a6ea89f686b7-config\") pod \"etcd-operator-b45778765-zm7f9\" (UID: \"6138a548-439a-4af8-ad4d-a6ea89f686b7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zm7f9" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.772058 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/642bcc9a-edf9-4845-b6ad-0936618ec9b0-config\") pod \"console-operator-58897d9998-2p584\" (UID: \"642bcc9a-edf9-4845-b6ad-0936618ec9b0\") " pod="openshift-console-operator/console-operator-58897d9998-2p584" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.773083 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/e7e0d2ae-153e-43e7-b23d-e60aaeabb85c-audit\") pod \"apiserver-76f77b778f-j4zs8\" (UID: \"e7e0d2ae-153e-43e7-b23d-e60aaeabb85c\") " pod="openshift-apiserver/apiserver-76f77b778f-j4zs8" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.773402 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f2878f5c-d2e1-4561-acae-3ea4ed26a5c0-audit-policies\") pod \"apiserver-7bbb656c7d-d5xpl\" (UID: \"f2878f5c-d2e1-4561-acae-3ea4ed26a5c0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-d5xpl" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.774237 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1993577b-4a86-4663-97e5-902753a07816-config\") pod \"authentication-operator-69f744f599-bhjvl\" (UID: \"1993577b-4a86-4663-97e5-902753a07816\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bhjvl" 
Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.775355 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/db966398-cd84-4fb6-bedf-f1f13c670ce8-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-6mpbx\" (UID: \"db966398-cd84-4fb6-bedf-f1f13c670ce8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6mpbx" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.777208 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b145694-7add-48aa-9321-56703922b613-config\") pod \"route-controller-manager-6576b87f9c-lzd44\" (UID: \"7b145694-7add-48aa-9321-56703922b613\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lzd44" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.779431 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/db966398-cd84-4fb6-bedf-f1f13c670ce8-client-ca\") pod \"controller-manager-879f6c89f-6mpbx\" (UID: \"db966398-cd84-4fb6-bedf-f1f13c670ce8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6mpbx" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.781904 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db966398-cd84-4fb6-bedf-f1f13c670ce8-config\") pod \"controller-manager-879f6c89f-6mpbx\" (UID: \"db966398-cd84-4fb6-bedf-f1f13c670ce8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6mpbx" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.783461 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7b145694-7add-48aa-9321-56703922b613-client-ca\") pod \"route-controller-manager-6576b87f9c-lzd44\" (UID: \"7b145694-7add-48aa-9321-56703922b613\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lzd44" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.785340 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1993577b-4a86-4663-97e5-902753a07816-service-ca-bundle\") pod \"authentication-operator-69f744f599-bhjvl\" (UID: \"1993577b-4a86-4663-97e5-902753a07816\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bhjvl" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.786675 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/b94670ce-123d-4562-b9ae-7a7fe898bff7-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-hgxgb\" (UID: \"b94670ce-123d-4562-b9ae-7a7fe898bff7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hgxgb" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.786972 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f2878f5c-d2e1-4561-acae-3ea4ed26a5c0-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-d5xpl\" (UID: \"f2878f5c-d2e1-4561-acae-3ea4ed26a5c0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-d5xpl" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.789010 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e7e0d2ae-153e-43e7-b23d-e60aaeabb85c-etcd-client\") pod \"apiserver-76f77b778f-j4zs8\" (UID: \"e7e0d2ae-153e-43e7-b23d-e60aaeabb85c\") " pod="openshift-apiserver/apiserver-76f77b778f-j4zs8" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.793968 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e7e0d2ae-153e-43e7-b23d-e60aaeabb85c-audit-dir\") pod \"apiserver-76f77b778f-j4zs8\" 
(UID: \"e7e0d2ae-153e-43e7-b23d-e60aaeabb85c\") " pod="openshift-apiserver/apiserver-76f77b778f-j4zs8" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.796514 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2878f5c-d2e1-4561-acae-3ea4ed26a5c0-serving-cert\") pod \"apiserver-7bbb656c7d-d5xpl\" (UID: \"f2878f5c-d2e1-4561-acae-3ea4ed26a5c0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-d5xpl" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.797762 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f2878f5c-d2e1-4561-acae-3ea4ed26a5c0-audit-dir\") pod \"apiserver-7bbb656c7d-d5xpl\" (UID: \"f2878f5c-d2e1-4561-acae-3ea4ed26a5c0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-d5xpl" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.797914 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e0d2ae-153e-43e7-b23d-e60aaeabb85c-serving-cert\") pod \"apiserver-76f77b778f-j4zs8\" (UID: \"e7e0d2ae-153e-43e7-b23d-e60aaeabb85c\") " pod="openshift-apiserver/apiserver-76f77b778f-j4zs8" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.799616 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b94670ce-123d-4562-b9ae-7a7fe898bff7-config\") pod \"machine-api-operator-5694c8668f-hgxgb\" (UID: \"b94670ce-123d-4562-b9ae-7a7fe898bff7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hgxgb" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.800332 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e7e0d2ae-153e-43e7-b23d-e60aaeabb85c-trusted-ca-bundle\") pod \"apiserver-76f77b778f-j4zs8\" (UID: \"e7e0d2ae-153e-43e7-b23d-e60aaeabb85c\") " 
pod="openshift-apiserver/apiserver-76f77b778f-j4zs8" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.800554 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f2878f5c-d2e1-4561-acae-3ea4ed26a5c0-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-d5xpl\" (UID: \"f2878f5c-d2e1-4561-acae-3ea4ed26a5c0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-d5xpl" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.800942 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e7e0d2ae-153e-43e7-b23d-e60aaeabb85c-encryption-config\") pod \"apiserver-76f77b778f-j4zs8\" (UID: \"e7e0d2ae-153e-43e7-b23d-e60aaeabb85c\") " pod="openshift-apiserver/apiserver-76f77b778f-j4zs8" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.804649 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7b145694-7add-48aa-9321-56703922b613-serving-cert\") pod \"route-controller-manager-6576b87f9c-lzd44\" (UID: \"7b145694-7add-48aa-9321-56703922b613\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lzd44" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.807724 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1993577b-4a86-4663-97e5-902753a07816-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-bhjvl\" (UID: \"1993577b-4a86-4663-97e5-902753a07816\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bhjvl" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.809724 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f2878f5c-d2e1-4561-acae-3ea4ed26a5c0-encryption-config\") pod \"apiserver-7bbb656c7d-d5xpl\" (UID: 
\"f2878f5c-d2e1-4561-acae-3ea4ed26a5c0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-d5xpl" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.818536 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1993577b-4a86-4663-97e5-902753a07816-serving-cert\") pod \"authentication-operator-69f744f599-bhjvl\" (UID: \"1993577b-4a86-4663-97e5-902753a07816\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bhjvl" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.814758 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5e0e773-1dac-4bc9-ac4c-5d6db746fee9-config\") pod \"openshift-apiserver-operator-796bbdcf4f-nwt6s\" (UID: \"a5e0e773-1dac-4bc9-ac4c-5d6db746fee9\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nwt6s" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.818871 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/db966398-cd84-4fb6-bedf-f1f13c670ce8-serving-cert\") pod \"controller-manager-879f6c89f-6mpbx\" (UID: \"db966398-cd84-4fb6-bedf-f1f13c670ce8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6mpbx" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.815652 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.814385 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-2gglt"] Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.817235 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a5e0e773-1dac-4bc9-ac4c-5d6db746fee9-serving-cert\") pod 
\"openshift-apiserver-operator-796bbdcf4f-nwt6s\" (UID: \"a5e0e773-1dac-4bc9-ac4c-5d6db746fee9\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nwt6s" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.816874 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.819046 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-nh5h5"] Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.815423 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f2878f5c-d2e1-4561-acae-3ea4ed26a5c0-etcd-client\") pod \"apiserver-7bbb656c7d-d5xpl\" (UID: \"f2878f5c-d2e1-4561-acae-3ea4ed26a5c0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-d5xpl" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.819647 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.823071 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-k4c7d"] Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.823504 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-72h56"] Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.824584 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5wlch"] Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.825638 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-d5xpl"] Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.826697 4840 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.827123 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-79btk"] Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.828679 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-d6cv4"] Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.830359 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-6mpbx"] Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.831945 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ktl5r"] Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.832983 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-98fj7"] Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.834268 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-zm7f9"] Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.835435 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-nz4rs"] Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.836774 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-8rsqx"] Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.842129 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rjm9q"] Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.842163 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-drmmh"] Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.842175 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6b2bw"] Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.842317 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-8rsqx" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.842671 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-zqtkm"] Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.843553 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-pdrj8"] Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.846561 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.846799 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-8ql29"] Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.848110 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-xt7ft"] Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.850691 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-dn5hs"] Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.852696 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-9k5xp"] Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.854287 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pcgsw"] Mar 11 08:59:41 crc 
kubenswrapper[4840]: I0311 08:59:41.855984 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-2p584"] Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.856687 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-mcz2j"] Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.859653 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pvx88"] Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.860734 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2kbzn"] Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.862208 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-k9hqk"] Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.863295 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29553645-gw77z"] Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.864505 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-8rsqx"] Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.865655 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-2zpp2"] Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.867356 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-cxgxb"] Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.867549 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2zpp2" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.869687 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-cxgxb"] Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.869803 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-cxgxb" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.869835 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-2zpp2"] Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.872212 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.876235 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6138a548-439a-4af8-ad4d-a6ea89f686b7-serving-cert\") pod \"etcd-operator-b45778765-zm7f9\" (UID: \"6138a548-439a-4af8-ad4d-a6ea89f686b7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zm7f9" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.876272 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9gvmg\" (UniqueName: \"kubernetes.io/projected/642bcc9a-edf9-4845-b6ad-0936618ec9b0-kube-api-access-9gvmg\") pod \"console-operator-58897d9998-2p584\" (UID: \"642bcc9a-edf9-4845-b6ad-0936618ec9b0\") " pod="openshift-console-operator/console-operator-58897d9998-2p584" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.876296 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2b03405c-0fe1-4c7f-ad2a-dcd0db280109-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-td29c\" (UID: \"2b03405c-0fe1-4c7f-ad2a-dcd0db280109\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-td29c" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.876320 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2b03405c-0fe1-4c7f-ad2a-dcd0db280109-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-td29c\" (UID: \"2b03405c-0fe1-4c7f-ad2a-dcd0db280109\") " pod="openshift-authentication/oauth-openshift-558db77b4-td29c" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.876341 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2b03405c-0fe1-4c7f-ad2a-dcd0db280109-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-td29c\" (UID: \"2b03405c-0fe1-4c7f-ad2a-dcd0db280109\") " pod="openshift-authentication/oauth-openshift-558db77b4-td29c" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.876360 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5dc5ef77-d18a-4474-a523-473f27166095-trusted-ca-bundle\") pod \"console-f9d7485db-xkq7s\" (UID: \"5dc5ef77-d18a-4474-a523-473f27166095\") " pod="openshift-console/console-f9d7485db-xkq7s" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.876382 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2b03405c-0fe1-4c7f-ad2a-dcd0db280109-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-td29c\" (UID: \"2b03405c-0fe1-4c7f-ad2a-dcd0db280109\") " pod="openshift-authentication/oauth-openshift-558db77b4-td29c" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.876414 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/bdc11055-ee85-488f-9812-536b5cd31e50-config\") pod \"machine-approver-56656f9798-mwnmw\" (UID: \"bdc11055-ee85-488f-9812-536b5cd31e50\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mwnmw" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.876455 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2745m\" (UniqueName: \"kubernetes.io/projected/b1d0c791-1d0c-4e11-91ce-bb352ce3fce1-kube-api-access-2745m\") pod \"openshift-config-operator-7777fb866f-nh5h5\" (UID: \"b1d0c791-1d0c-4e11-91ce-bb352ce3fce1\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-nh5h5" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.876496 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpnnj\" (UniqueName: \"kubernetes.io/projected/d62191a1-c0bf-4b73-8a5b-9a084143772c-kube-api-access-qpnnj\") pod \"dns-operator-744455d44c-2gglt\" (UID: \"d62191a1-c0bf-4b73-8a5b-9a084143772c\") " pod="openshift-dns-operator/dns-operator-744455d44c-2gglt" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.876518 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01a98c93-30e9-4161-ae55-553a6107a67f-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-ktl5r\" (UID: \"01a98c93-30e9-4161-ae55-553a6107a67f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ktl5r" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.876536 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/6138a548-439a-4af8-ad4d-a6ea89f686b7-etcd-service-ca\") pod \"etcd-operator-b45778765-zm7f9\" (UID: \"6138a548-439a-4af8-ad4d-a6ea89f686b7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zm7f9" Mar 11 08:59:41 crc 
kubenswrapper[4840]: I0311 08:59:41.876559 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/2b03405c-0fe1-4c7f-ad2a-dcd0db280109-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-td29c\" (UID: \"2b03405c-0fe1-4c7f-ad2a-dcd0db280109\") " pod="openshift-authentication/oauth-openshift-558db77b4-td29c" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.876581 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/84bf2582-c99c-4ad3-b53c-37f357ba2cc5-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-72h56\" (UID: \"84bf2582-c99c-4ad3-b53c-37f357ba2cc5\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-72h56" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.876603 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2b03405c-0fe1-4c7f-ad2a-dcd0db280109-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-td29c\" (UID: \"2b03405c-0fe1-4c7f-ad2a-dcd0db280109\") " pod="openshift-authentication/oauth-openshift-558db77b4-td29c" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.876625 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/bdc11055-ee85-488f-9812-536b5cd31e50-machine-approver-tls\") pod \"machine-approver-56656f9798-mwnmw\" (UID: \"bdc11055-ee85-488f-9812-536b5cd31e50\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mwnmw" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.876650 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: 
\"kubernetes.io/secret/9f2c017f-9b00-48e8-b2bf-28eec249be0a-certs\") pod \"machine-config-server-8pjx9\" (UID: \"9f2c017f-9b00-48e8-b2bf-28eec249be0a\") " pod="openshift-machine-config-operator/machine-config-server-8pjx9" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.876679 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/84bf2582-c99c-4ad3-b53c-37f357ba2cc5-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-72h56\" (UID: \"84bf2582-c99c-4ad3-b53c-37f357ba2cc5\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-72h56" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.876698 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01a98c93-30e9-4161-ae55-553a6107a67f-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-ktl5r\" (UID: \"01a98c93-30e9-4161-ae55-553a6107a67f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ktl5r" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.876718 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/9f2c017f-9b00-48e8-b2bf-28eec249be0a-node-bootstrap-token\") pod \"machine-config-server-8pjx9\" (UID: \"9f2c017f-9b00-48e8-b2bf-28eec249be0a\") " pod="openshift-machine-config-operator/machine-config-server-8pjx9" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.876747 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2b03405c-0fe1-4c7f-ad2a-dcd0db280109-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-td29c\" (UID: \"2b03405c-0fe1-4c7f-ad2a-dcd0db280109\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-td29c" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.876766 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5dc5ef77-d18a-4474-a523-473f27166095-console-oauth-config\") pod \"console-f9d7485db-xkq7s\" (UID: \"5dc5ef77-d18a-4474-a523-473f27166095\") " pod="openshift-console/console-f9d7485db-xkq7s" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.876787 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d62191a1-c0bf-4b73-8a5b-9a084143772c-metrics-tls\") pod \"dns-operator-744455d44c-2gglt\" (UID: \"d62191a1-c0bf-4b73-8a5b-9a084143772c\") " pod="openshift-dns-operator/dns-operator-744455d44c-2gglt" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.876807 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98l8z\" (UniqueName: \"kubernetes.io/projected/84bf2582-c99c-4ad3-b53c-37f357ba2cc5-kube-api-access-98l8z\") pod \"cluster-image-registry-operator-dc59b4c8b-72h56\" (UID: \"84bf2582-c99c-4ad3-b53c-37f357ba2cc5\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-72h56" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.876834 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2b03405c-0fe1-4c7f-ad2a-dcd0db280109-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-td29c\" (UID: \"2b03405c-0fe1-4c7f-ad2a-dcd0db280109\") " pod="openshift-authentication/oauth-openshift-558db77b4-td29c" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.876855 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/ed60e671-1c71-42fc-828d-51a4e85e3153-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-dn5hs\" (UID: \"ed60e671-1c71-42fc-828d-51a4e85e3153\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-dn5hs" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.876880 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6138a548-439a-4af8-ad4d-a6ea89f686b7-config\") pod \"etcd-operator-b45778765-zm7f9\" (UID: \"6138a548-439a-4af8-ad4d-a6ea89f686b7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zm7f9" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.876905 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/642bcc9a-edf9-4845-b6ad-0936618ec9b0-config\") pod \"console-operator-58897d9998-2p584\" (UID: \"642bcc9a-edf9-4845-b6ad-0936618ec9b0\") " pod="openshift-console-operator/console-operator-58897d9998-2p584" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.876924 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2b03405c-0fe1-4c7f-ad2a-dcd0db280109-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-td29c\" (UID: \"2b03405c-0fe1-4c7f-ad2a-dcd0db280109\") " pod="openshift-authentication/oauth-openshift-558db77b4-td29c" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.876944 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ftrs\" (UniqueName: \"kubernetes.io/projected/78462b75-44da-4862-88c5-5cf892a91058-kube-api-access-8ftrs\") pod \"downloads-7954f5f757-9k5xp\" (UID: \"78462b75-44da-4862-88c5-5cf892a91058\") " pod="openshift-console/downloads-7954f5f757-9k5xp" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.876971 4840 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/84bf2582-c99c-4ad3-b53c-37f357ba2cc5-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-72h56\" (UID: \"84bf2582-c99c-4ad3-b53c-37f357ba2cc5\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-72h56" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.876994 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/6138a548-439a-4af8-ad4d-a6ea89f686b7-etcd-ca\") pod \"etcd-operator-b45778765-zm7f9\" (UID: \"6138a548-439a-4af8-ad4d-a6ea89f686b7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zm7f9" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.877022 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b1d0c791-1d0c-4e11-91ce-bb352ce3fce1-serving-cert\") pod \"openshift-config-operator-7777fb866f-nh5h5\" (UID: \"b1d0c791-1d0c-4e11-91ce-bb352ce3fce1\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-nh5h5" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.877045 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/642bcc9a-edf9-4845-b6ad-0936618ec9b0-trusted-ca\") pod \"console-operator-58897d9998-2p584\" (UID: \"642bcc9a-edf9-4845-b6ad-0936618ec9b0\") " pod="openshift-console-operator/console-operator-58897d9998-2p584" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.877071 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5dc5ef77-d18a-4474-a523-473f27166095-console-serving-cert\") pod \"console-f9d7485db-xkq7s\" (UID: \"5dc5ef77-d18a-4474-a523-473f27166095\") " pod="openshift-console/console-f9d7485db-xkq7s" Mar 11 08:59:41 crc 
kubenswrapper[4840]: I0311 08:59:41.877102 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zcgh8\" (UniqueName: \"kubernetes.io/projected/01a98c93-30e9-4161-ae55-553a6107a67f-kube-api-access-zcgh8\") pod \"openshift-controller-manager-operator-756b6f6bc6-ktl5r\" (UID: \"01a98c93-30e9-4161-ae55-553a6107a67f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ktl5r" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.877121 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8kxm\" (UniqueName: \"kubernetes.io/projected/2b03405c-0fe1-4c7f-ad2a-dcd0db280109-kube-api-access-b8kxm\") pod \"oauth-openshift-558db77b4-td29c\" (UID: \"2b03405c-0fe1-4c7f-ad2a-dcd0db280109\") " pod="openshift-authentication/oauth-openshift-558db77b4-td29c" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.877141 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/642bcc9a-edf9-4845-b6ad-0936618ec9b0-serving-cert\") pod \"console-operator-58897d9998-2p584\" (UID: \"642bcc9a-edf9-4845-b6ad-0936618ec9b0\") " pod="openshift-console-operator/console-operator-58897d9998-2p584" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.877163 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6138a548-439a-4af8-ad4d-a6ea89f686b7-etcd-client\") pod \"etcd-operator-b45778765-zm7f9\" (UID: \"6138a548-439a-4af8-ad4d-a6ea89f686b7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zm7f9" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.877187 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2b03405c-0fe1-4c7f-ad2a-dcd0db280109-audit-dir\") pod \"oauth-openshift-558db77b4-td29c\" (UID: 
\"2b03405c-0fe1-4c7f-ad2a-dcd0db280109\") " pod="openshift-authentication/oauth-openshift-558db77b4-td29c" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.877205 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2b03405c-0fe1-4c7f-ad2a-dcd0db280109-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-td29c\" (UID: \"2b03405c-0fe1-4c7f-ad2a-dcd0db280109\") " pod="openshift-authentication/oauth-openshift-558db77b4-td29c" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.877227 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zcbfq\" (UniqueName: \"kubernetes.io/projected/5dc5ef77-d18a-4474-a523-473f27166095-kube-api-access-zcbfq\") pod \"console-f9d7485db-xkq7s\" (UID: \"5dc5ef77-d18a-4474-a523-473f27166095\") " pod="openshift-console/console-f9d7485db-xkq7s" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.877244 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bqmv\" (UniqueName: \"kubernetes.io/projected/6138a548-439a-4af8-ad4d-a6ea89f686b7-kube-api-access-9bqmv\") pod \"etcd-operator-b45778765-zm7f9\" (UID: \"6138a548-439a-4af8-ad4d-a6ea89f686b7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zm7f9" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.877236 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bdc11055-ee85-488f-9812-536b5cd31e50-config\") pod \"machine-approver-56656f9798-mwnmw\" (UID: \"bdc11055-ee85-488f-9812-536b5cd31e50\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mwnmw" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.877262 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/5dc5ef77-d18a-4474-a523-473f27166095-service-ca\") pod \"console-f9d7485db-xkq7s\" (UID: \"5dc5ef77-d18a-4474-a523-473f27166095\") " pod="openshift-console/console-f9d7485db-xkq7s" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.877290 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhn2b\" (UniqueName: \"kubernetes.io/projected/9f2c017f-9b00-48e8-b2bf-28eec249be0a-kube-api-access-hhn2b\") pod \"machine-config-server-8pjx9\" (UID: \"9f2c017f-9b00-48e8-b2bf-28eec249be0a\") " pod="openshift-machine-config-operator/machine-config-server-8pjx9" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.877312 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/b1d0c791-1d0c-4e11-91ce-bb352ce3fce1-available-featuregates\") pod \"openshift-config-operator-7777fb866f-nh5h5\" (UID: \"b1d0c791-1d0c-4e11-91ce-bb352ce3fce1\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-nh5h5" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.877317 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2b03405c-0fe1-4c7f-ad2a-dcd0db280109-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-td29c\" (UID: \"2b03405c-0fe1-4c7f-ad2a-dcd0db280109\") " pod="openshift-authentication/oauth-openshift-558db77b4-td29c" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.877336 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wpdhh\" (UniqueName: \"kubernetes.io/projected/bdc11055-ee85-488f-9812-536b5cd31e50-kube-api-access-wpdhh\") pod \"machine-approver-56656f9798-mwnmw\" (UID: \"bdc11055-ee85-488f-9812-536b5cd31e50\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mwnmw" Mar 11 08:59:41 crc 
kubenswrapper[4840]: I0311 08:59:41.877360 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b97mp\" (UniqueName: \"kubernetes.io/projected/ed60e671-1c71-42fc-828d-51a4e85e3153-kube-api-access-b97mp\") pod \"cluster-samples-operator-665b6dd947-dn5hs\" (UID: \"ed60e671-1c71-42fc-828d-51a4e85e3153\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-dn5hs" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.877380 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2b03405c-0fe1-4c7f-ad2a-dcd0db280109-audit-policies\") pod \"oauth-openshift-558db77b4-td29c\" (UID: \"2b03405c-0fe1-4c7f-ad2a-dcd0db280109\") " pod="openshift-authentication/oauth-openshift-558db77b4-td29c" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.877399 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2b03405c-0fe1-4c7f-ad2a-dcd0db280109-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-td29c\" (UID: \"2b03405c-0fe1-4c7f-ad2a-dcd0db280109\") " pod="openshift-authentication/oauth-openshift-558db77b4-td29c" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.877436 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5dc5ef77-d18a-4474-a523-473f27166095-oauth-serving-cert\") pod \"console-f9d7485db-xkq7s\" (UID: \"5dc5ef77-d18a-4474-a523-473f27166095\") " pod="openshift-console/console-f9d7485db-xkq7s" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.877501 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bdc11055-ee85-488f-9812-536b5cd31e50-auth-proxy-config\") pod \"machine-approver-56656f9798-mwnmw\" (UID: 
\"bdc11055-ee85-488f-9812-536b5cd31e50\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mwnmw" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.877522 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5dc5ef77-d18a-4474-a523-473f27166095-console-config\") pod \"console-f9d7485db-xkq7s\" (UID: \"5dc5ef77-d18a-4474-a523-473f27166095\") " pod="openshift-console/console-f9d7485db-xkq7s" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.878490 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/642bcc9a-edf9-4845-b6ad-0936618ec9b0-trusted-ca\") pod \"console-operator-58897d9998-2p584\" (UID: \"642bcc9a-edf9-4845-b6ad-0936618ec9b0\") " pod="openshift-console-operator/console-operator-58897d9998-2p584" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.880278 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5dc5ef77-d18a-4474-a523-473f27166095-console-config\") pod \"console-f9d7485db-xkq7s\" (UID: \"5dc5ef77-d18a-4474-a523-473f27166095\") " pod="openshift-console/console-f9d7485db-xkq7s" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.880329 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2b03405c-0fe1-4c7f-ad2a-dcd0db280109-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-td29c\" (UID: \"2b03405c-0fe1-4c7f-ad2a-dcd0db280109\") " pod="openshift-authentication/oauth-openshift-558db77b4-td29c" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.880375 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2b03405c-0fe1-4c7f-ad2a-dcd0db280109-v4-0-config-user-template-login\") pod 
\"oauth-openshift-558db77b4-td29c\" (UID: \"2b03405c-0fe1-4c7f-ad2a-dcd0db280109\") " pod="openshift-authentication/oauth-openshift-558db77b4-td29c" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.880392 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2b03405c-0fe1-4c7f-ad2a-dcd0db280109-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-td29c\" (UID: \"2b03405c-0fe1-4c7f-ad2a-dcd0db280109\") " pod="openshift-authentication/oauth-openshift-558db77b4-td29c" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.880823 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/2b03405c-0fe1-4c7f-ad2a-dcd0db280109-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-td29c\" (UID: \"2b03405c-0fe1-4c7f-ad2a-dcd0db280109\") " pod="openshift-authentication/oauth-openshift-558db77b4-td29c" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.882082 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/b1d0c791-1d0c-4e11-91ce-bb352ce3fce1-available-featuregates\") pod \"openshift-config-operator-7777fb866f-nh5h5\" (UID: \"b1d0c791-1d0c-4e11-91ce-bb352ce3fce1\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-nh5h5" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.882418 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2b03405c-0fe1-4c7f-ad2a-dcd0db280109-audit-dir\") pod \"oauth-openshift-558db77b4-td29c\" (UID: \"2b03405c-0fe1-4c7f-ad2a-dcd0db280109\") " pod="openshift-authentication/oauth-openshift-558db77b4-td29c" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.882520 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2b03405c-0fe1-4c7f-ad2a-dcd0db280109-audit-policies\") pod \"oauth-openshift-558db77b4-td29c\" (UID: \"2b03405c-0fe1-4c7f-ad2a-dcd0db280109\") " pod="openshift-authentication/oauth-openshift-558db77b4-td29c" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.884749 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2b03405c-0fe1-4c7f-ad2a-dcd0db280109-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-td29c\" (UID: \"2b03405c-0fe1-4c7f-ad2a-dcd0db280109\") " pod="openshift-authentication/oauth-openshift-558db77b4-td29c" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.885100 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2b03405c-0fe1-4c7f-ad2a-dcd0db280109-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-td29c\" (UID: \"2b03405c-0fe1-4c7f-ad2a-dcd0db280109\") " pod="openshift-authentication/oauth-openshift-558db77b4-td29c" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.885754 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/6138a548-439a-4af8-ad4d-a6ea89f686b7-etcd-ca\") pod \"etcd-operator-b45778765-zm7f9\" (UID: \"6138a548-439a-4af8-ad4d-a6ea89f686b7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zm7f9" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.886930 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2b03405c-0fe1-4c7f-ad2a-dcd0db280109-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-td29c\" (UID: \"2b03405c-0fe1-4c7f-ad2a-dcd0db280109\") " pod="openshift-authentication/oauth-openshift-558db77b4-td29c" Mar 11 
08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.886943 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6138a548-439a-4af8-ad4d-a6ea89f686b7-serving-cert\") pod \"etcd-operator-b45778765-zm7f9\" (UID: \"6138a548-439a-4af8-ad4d-a6ea89f686b7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zm7f9" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.887161 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5dc5ef77-d18a-4474-a523-473f27166095-trusted-ca-bundle\") pod \"console-f9d7485db-xkq7s\" (UID: \"5dc5ef77-d18a-4474-a523-473f27166095\") " pod="openshift-console/console-f9d7485db-xkq7s" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.887709 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2b03405c-0fe1-4c7f-ad2a-dcd0db280109-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-td29c\" (UID: \"2b03405c-0fe1-4c7f-ad2a-dcd0db280109\") " pod="openshift-authentication/oauth-openshift-558db77b4-td29c" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.888418 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6138a548-439a-4af8-ad4d-a6ea89f686b7-config\") pod \"etcd-operator-b45778765-zm7f9\" (UID: \"6138a548-439a-4af8-ad4d-a6ea89f686b7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zm7f9" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.889071 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/642bcc9a-edf9-4845-b6ad-0936618ec9b0-config\") pod \"console-operator-58897d9998-2p584\" (UID: \"642bcc9a-edf9-4845-b6ad-0936618ec9b0\") " pod="openshift-console-operator/console-operator-58897d9998-2p584" Mar 11 08:59:41 crc 
kubenswrapper[4840]: I0311 08:59:41.889142 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/642bcc9a-edf9-4845-b6ad-0936618ec9b0-serving-cert\") pod \"console-operator-58897d9998-2p584\" (UID: \"642bcc9a-edf9-4845-b6ad-0936618ec9b0\") " pod="openshift-console-operator/console-operator-58897d9998-2p584" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.889184 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01a98c93-30e9-4161-ae55-553a6107a67f-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-ktl5r\" (UID: \"01a98c93-30e9-4161-ae55-553a6107a67f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ktl5r" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.889979 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2b03405c-0fe1-4c7f-ad2a-dcd0db280109-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-td29c\" (UID: \"2b03405c-0fe1-4c7f-ad2a-dcd0db280109\") " pod="openshift-authentication/oauth-openshift-558db77b4-td29c" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.890357 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.890692 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/6138a548-439a-4af8-ad4d-a6ea89f686b7-etcd-service-ca\") pod \"etcd-operator-b45778765-zm7f9\" (UID: \"6138a548-439a-4af8-ad4d-a6ea89f686b7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zm7f9" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.891110 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" 
(UniqueName: \"kubernetes.io/configmap/bdc11055-ee85-488f-9812-536b5cd31e50-auth-proxy-config\") pod \"machine-approver-56656f9798-mwnmw\" (UID: \"bdc11055-ee85-488f-9812-536b5cd31e50\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mwnmw" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.892429 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/84bf2582-c99c-4ad3-b53c-37f357ba2cc5-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-72h56\" (UID: \"84bf2582-c99c-4ad3-b53c-37f357ba2cc5\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-72h56" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.893220 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/bdc11055-ee85-488f-9812-536b5cd31e50-machine-approver-tls\") pod \"machine-approver-56656f9798-mwnmw\" (UID: \"bdc11055-ee85-488f-9812-536b5cd31e50\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mwnmw" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.894494 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5dc5ef77-d18a-4474-a523-473f27166095-oauth-serving-cert\") pod \"console-f9d7485db-xkq7s\" (UID: \"5dc5ef77-d18a-4474-a523-473f27166095\") " pod="openshift-console/console-f9d7485db-xkq7s" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.895298 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/84bf2582-c99c-4ad3-b53c-37f357ba2cc5-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-72h56\" (UID: \"84bf2582-c99c-4ad3-b53c-37f357ba2cc5\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-72h56" Mar 11 08:59:41 crc kubenswrapper[4840]: 
I0311 08:59:41.899936 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d62191a1-c0bf-4b73-8a5b-9a084143772c-metrics-tls\") pod \"dns-operator-744455d44c-2gglt\" (UID: \"d62191a1-c0bf-4b73-8a5b-9a084143772c\") " pod="openshift-dns-operator/dns-operator-744455d44c-2gglt" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.900567 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b1d0c791-1d0c-4e11-91ce-bb352ce3fce1-serving-cert\") pod \"openshift-config-operator-7777fb866f-nh5h5\" (UID: \"b1d0c791-1d0c-4e11-91ce-bb352ce3fce1\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-nh5h5" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.900691 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01a98c93-30e9-4161-ae55-553a6107a67f-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-ktl5r\" (UID: \"01a98c93-30e9-4161-ae55-553a6107a67f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ktl5r" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.900713 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/ed60e671-1c71-42fc-828d-51a4e85e3153-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-dn5hs\" (UID: \"ed60e671-1c71-42fc-828d-51a4e85e3153\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-dn5hs" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.900919 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2b03405c-0fe1-4c7f-ad2a-dcd0db280109-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-td29c\" 
(UID: \"2b03405c-0fe1-4c7f-ad2a-dcd0db280109\") " pod="openshift-authentication/oauth-openshift-558db77b4-td29c" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.902282 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6138a548-439a-4af8-ad4d-a6ea89f686b7-etcd-client\") pod \"etcd-operator-b45778765-zm7f9\" (UID: \"6138a548-439a-4af8-ad4d-a6ea89f686b7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zm7f9" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.906524 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.911524 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5dc5ef77-d18a-4474-a523-473f27166095-console-oauth-config\") pod \"console-f9d7485db-xkq7s\" (UID: \"5dc5ef77-d18a-4474-a523-473f27166095\") " pod="openshift-console/console-f9d7485db-xkq7s" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.925852 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.934532 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5dc5ef77-d18a-4474-a523-473f27166095-console-serving-cert\") pod \"console-f9d7485db-xkq7s\" (UID: \"5dc5ef77-d18a-4474-a523-473f27166095\") " pod="openshift-console/console-f9d7485db-xkq7s" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.945737 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.953729 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/5dc5ef77-d18a-4474-a523-473f27166095-service-ca\") pod \"console-f9d7485db-xkq7s\" (UID: \"5dc5ef77-d18a-4474-a523-473f27166095\") " pod="openshift-console/console-f9d7485db-xkq7s" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.978376 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/9f2c017f-9b00-48e8-b2bf-28eec249be0a-node-bootstrap-token\") pod \"machine-config-server-8pjx9\" (UID: \"9f2c017f-9b00-48e8-b2bf-28eec249be0a\") " pod="openshift-machine-config-operator/machine-config-server-8pjx9" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.978534 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhn2b\" (UniqueName: \"kubernetes.io/projected/9f2c017f-9b00-48e8-b2bf-28eec249be0a-kube-api-access-hhn2b\") pod \"machine-config-server-8pjx9\" (UID: \"9f2c017f-9b00-48e8-b2bf-28eec249be0a\") " pod="openshift-machine-config-operator/machine-config-server-8pjx9" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.978661 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/9f2c017f-9b00-48e8-b2bf-28eec249be0a-certs\") pod \"machine-config-server-8pjx9\" (UID: \"9f2c017f-9b00-48e8-b2bf-28eec249be0a\") " pod="openshift-machine-config-operator/machine-config-server-8pjx9" Mar 11 08:59:41 crc kubenswrapper[4840]: I0311 08:59:41.986164 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Mar 11 08:59:42 crc kubenswrapper[4840]: I0311 08:59:42.006591 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Mar 11 08:59:42 crc kubenswrapper[4840]: I0311 08:59:42.026760 4840 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Mar 11 08:59:42 crc kubenswrapper[4840]: I0311 08:59:42.046152 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Mar 11 08:59:42 crc kubenswrapper[4840]: I0311 08:59:42.067123 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Mar 11 08:59:42 crc kubenswrapper[4840]: I0311 08:59:42.086535 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Mar 11 08:59:42 crc kubenswrapper[4840]: I0311 08:59:42.106006 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Mar 11 08:59:42 crc kubenswrapper[4840]: I0311 08:59:42.126603 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Mar 11 08:59:42 crc kubenswrapper[4840]: I0311 08:59:42.155650 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Mar 11 08:59:42 crc kubenswrapper[4840]: I0311 08:59:42.168156 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Mar 11 08:59:42 crc kubenswrapper[4840]: I0311 08:59:42.187921 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Mar 11 08:59:42 crc kubenswrapper[4840]: I0311 08:59:42.206836 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Mar 11 08:59:42 crc kubenswrapper[4840]: I0311 08:59:42.226018 4840 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Mar 11 08:59:42 crc kubenswrapper[4840]: I0311 08:59:42.246843 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Mar 11 08:59:42 crc kubenswrapper[4840]: I0311 08:59:42.266116 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Mar 11 08:59:42 crc kubenswrapper[4840]: I0311 08:59:42.287315 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Mar 11 08:59:42 crc kubenswrapper[4840]: I0311 08:59:42.306918 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Mar 11 08:59:42 crc kubenswrapper[4840]: I0311 08:59:42.327818 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Mar 11 08:59:42 crc kubenswrapper[4840]: I0311 08:59:42.347003 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Mar 11 08:59:42 crc kubenswrapper[4840]: I0311 08:59:42.367805 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Mar 11 08:59:42 crc kubenswrapper[4840]: I0311 08:59:42.386596 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Mar 11 08:59:42 crc kubenswrapper[4840]: I0311 08:59:42.406923 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Mar 11 08:59:42 crc kubenswrapper[4840]: I0311 08:59:42.427267 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Mar 11 08:59:42 crc kubenswrapper[4840]: I0311 08:59:42.446798 4840 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Mar 11 08:59:42 crc kubenswrapper[4840]: I0311 08:59:42.466760 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Mar 11 08:59:42 crc kubenswrapper[4840]: I0311 08:59:42.486769 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Mar 11 08:59:42 crc kubenswrapper[4840]: I0311 08:59:42.506811 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Mar 11 08:59:42 crc kubenswrapper[4840]: I0311 08:59:42.526808 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Mar 11 08:59:42 crc kubenswrapper[4840]: I0311 08:59:42.547219 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Mar 11 08:59:42 crc kubenswrapper[4840]: I0311 08:59:42.566045 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Mar 11 08:59:42 crc kubenswrapper[4840]: I0311 08:59:42.586520 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Mar 11 08:59:42 crc kubenswrapper[4840]: I0311 08:59:42.607059 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Mar 11 08:59:42 crc kubenswrapper[4840]: I0311 08:59:42.626876 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Mar 11 08:59:42 crc kubenswrapper[4840]: I0311 08:59:42.648086 4840 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Mar 11 08:59:42 crc kubenswrapper[4840]: I0311 08:59:42.666536 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Mar 11 08:59:42 crc kubenswrapper[4840]: I0311 08:59:42.686543 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Mar 11 08:59:42 crc kubenswrapper[4840]: I0311 08:59:42.707031 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Mar 11 08:59:42 crc kubenswrapper[4840]: I0311 08:59:42.727419 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Mar 11 08:59:42 crc kubenswrapper[4840]: I0311 08:59:42.747568 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Mar 11 08:59:42 crc kubenswrapper[4840]: I0311 08:59:42.764802 4840 request.go:700] Waited for 1.017557698s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/configmaps?fieldSelector=metadata.name%3Dmarketplace-trusted-ca&limit=500&resourceVersion=0 Mar 11 08:59:42 crc kubenswrapper[4840]: I0311 08:59:42.776659 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Mar 11 08:59:42 crc kubenswrapper[4840]: I0311 08:59:42.786186 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Mar 11 08:59:42 crc kubenswrapper[4840]: I0311 08:59:42.806738 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Mar 11 
08:59:42 crc kubenswrapper[4840]: I0311 08:59:42.827354 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Mar 11 08:59:42 crc kubenswrapper[4840]: I0311 08:59:42.847703 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Mar 11 08:59:42 crc kubenswrapper[4840]: I0311 08:59:42.866456 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Mar 11 08:59:42 crc kubenswrapper[4840]: I0311 08:59:42.888584 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Mar 11 08:59:42 crc kubenswrapper[4840]: I0311 08:59:42.907576 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Mar 11 08:59:42 crc kubenswrapper[4840]: I0311 08:59:42.928127 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Mar 11 08:59:42 crc kubenswrapper[4840]: I0311 08:59:42.947017 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Mar 11 08:59:42 crc kubenswrapper[4840]: I0311 08:59:42.967578 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Mar 11 08:59:42 crc kubenswrapper[4840]: E0311 08:59:42.978821 4840 secret.go:188] Couldn't get secret openshift-machine-config-operator/node-bootstrapper-token: failed to sync secret cache: timed out waiting for the condition Mar 11 08:59:42 crc kubenswrapper[4840]: E0311 08:59:42.978916 4840 secret.go:188] Couldn't get secret openshift-machine-config-operator/machine-config-server-tls: failed to sync secret cache: 
timed out waiting for the condition Mar 11 08:59:42 crc kubenswrapper[4840]: E0311 08:59:42.979009 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9f2c017f-9b00-48e8-b2bf-28eec249be0a-node-bootstrap-token podName:9f2c017f-9b00-48e8-b2bf-28eec249be0a nodeName:}" failed. No retries permitted until 2026-03-11 08:59:43.478965567 +0000 UTC m=+182.144635392 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-bootstrap-token" (UniqueName: "kubernetes.io/secret/9f2c017f-9b00-48e8-b2bf-28eec249be0a-node-bootstrap-token") pod "machine-config-server-8pjx9" (UID: "9f2c017f-9b00-48e8-b2bf-28eec249be0a") : failed to sync secret cache: timed out waiting for the condition Mar 11 08:59:42 crc kubenswrapper[4840]: E0311 08:59:42.979051 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9f2c017f-9b00-48e8-b2bf-28eec249be0a-certs podName:9f2c017f-9b00-48e8-b2bf-28eec249be0a nodeName:}" failed. No retries permitted until 2026-03-11 08:59:43.479033249 +0000 UTC m=+182.144703324 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "certs" (UniqueName: "kubernetes.io/secret/9f2c017f-9b00-48e8-b2bf-28eec249be0a-certs") pod "machine-config-server-8pjx9" (UID: "9f2c017f-9b00-48e8-b2bf-28eec249be0a") : failed to sync secret cache: timed out waiting for the condition Mar 11 08:59:42 crc kubenswrapper[4840]: I0311 08:59:42.987648 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Mar 11 08:59:43 crc kubenswrapper[4840]: I0311 08:59:43.007420 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Mar 11 08:59:43 crc kubenswrapper[4840]: I0311 08:59:43.027676 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Mar 11 08:59:43 crc kubenswrapper[4840]: I0311 08:59:43.047223 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Mar 11 08:59:43 crc kubenswrapper[4840]: I0311 08:59:43.067293 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Mar 11 08:59:43 crc kubenswrapper[4840]: I0311 08:59:43.087714 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Mar 11 08:59:43 crc kubenswrapper[4840]: I0311 08:59:43.107499 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Mar 11 08:59:43 crc kubenswrapper[4840]: I0311 08:59:43.127267 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Mar 11 08:59:43 crc kubenswrapper[4840]: I0311 08:59:43.148588 4840 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Mar 11 08:59:43 crc kubenswrapper[4840]: I0311 08:59:43.167360 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Mar 11 08:59:43 crc kubenswrapper[4840]: I0311 08:59:43.186766 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Mar 11 08:59:43 crc kubenswrapper[4840]: I0311 08:59:43.207727 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Mar 11 08:59:43 crc kubenswrapper[4840]: I0311 08:59:43.227163 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 11 08:59:43 crc kubenswrapper[4840]: I0311 08:59:43.246742 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 11 08:59:43 crc kubenswrapper[4840]: I0311 08:59:43.267287 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Mar 11 08:59:43 crc kubenswrapper[4840]: I0311 08:59:43.287868 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Mar 11 08:59:43 crc kubenswrapper[4840]: I0311 08:59:43.326734 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Mar 11 08:59:43 crc kubenswrapper[4840]: I0311 08:59:43.366261 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Mar 11 08:59:43 crc kubenswrapper[4840]: I0311 08:59:43.466966 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Mar 11 08:59:43 crc 
kubenswrapper[4840]: I0311 08:59:43.486965 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Mar 11 08:59:43 crc kubenswrapper[4840]: I0311 08:59:43.502595 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/9f2c017f-9b00-48e8-b2bf-28eec249be0a-certs\") pod \"machine-config-server-8pjx9\" (UID: \"9f2c017f-9b00-48e8-b2bf-28eec249be0a\") " pod="openshift-machine-config-operator/machine-config-server-8pjx9" Mar 11 08:59:43 crc kubenswrapper[4840]: I0311 08:59:43.502677 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/9f2c017f-9b00-48e8-b2bf-28eec249be0a-node-bootstrap-token\") pod \"machine-config-server-8pjx9\" (UID: \"9f2c017f-9b00-48e8-b2bf-28eec249be0a\") " pod="openshift-machine-config-operator/machine-config-server-8pjx9" Mar 11 08:59:43 crc kubenswrapper[4840]: I0311 08:59:43.506440 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Mar 11 08:59:43 crc kubenswrapper[4840]: I0311 08:59:43.507709 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/9f2c017f-9b00-48e8-b2bf-28eec249be0a-node-bootstrap-token\") pod \"machine-config-server-8pjx9\" (UID: \"9f2c017f-9b00-48e8-b2bf-28eec249be0a\") " pod="openshift-machine-config-operator/machine-config-server-8pjx9" Mar 11 08:59:43 crc kubenswrapper[4840]: I0311 08:59:43.512395 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/9f2c017f-9b00-48e8-b2bf-28eec249be0a-certs\") pod \"machine-config-server-8pjx9\" (UID: \"9f2c017f-9b00-48e8-b2bf-28eec249be0a\") " pod="openshift-machine-config-operator/machine-config-server-8pjx9" Mar 11 08:59:43 crc kubenswrapper[4840]: I0311 
08:59:43.546290 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Mar 11 08:59:43 crc kubenswrapper[4840]: I0311 08:59:43.586455 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Mar 11 08:59:43 crc kubenswrapper[4840]: I0311 08:59:43.606063 4840 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Mar 11 08:59:43 crc kubenswrapper[4840]: I0311 08:59:43.626338 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Mar 11 08:59:43 crc kubenswrapper[4840]: I0311 08:59:43.646743 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Mar 11 08:59:43 crc kubenswrapper[4840]: I0311 08:59:43.666761 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Mar 11 08:59:43 crc kubenswrapper[4840]: I0311 08:59:43.687937 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Mar 11 08:59:43 crc kubenswrapper[4840]: I0311 08:59:43.707009 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Mar 11 08:59:43 crc kubenswrapper[4840]: I0311 08:59:43.726255 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Mar 11 08:59:43 crc kubenswrapper[4840]: I0311 08:59:43.746120 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Mar 11 08:59:43 crc kubenswrapper[4840]: I0311 08:59:43.767074 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Mar 11 08:59:43 crc kubenswrapper[4840]: I0311 
08:59:43.785091 4840 request.go:700] Waited for 1.908491357s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/serviceaccounts/console-operator/token Mar 11 08:59:43 crc kubenswrapper[4840]: I0311 08:59:43.865697 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/84bf2582-c99c-4ad3-b53c-37f357ba2cc5-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-72h56\" (UID: \"84bf2582-c99c-4ad3-b53c-37f357ba2cc5\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-72h56" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.005895 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98l8z\" (UniqueName: \"kubernetes.io/projected/84bf2582-c99c-4ad3-b53c-37f357ba2cc5-kube-api-access-98l8z\") pod \"cluster-image-registry-operator-dc59b4c8b-72h56\" (UID: \"84bf2582-c99c-4ad3-b53c-37f357ba2cc5\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-72h56" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.067751 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.087161 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.101412 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhn2b\" (UniqueName: \"kubernetes.io/projected/9f2c017f-9b00-48e8-b2bf-28eec249be0a-kube-api-access-hhn2b\") pod \"machine-config-server-8pjx9\" (UID: \"9f2c017f-9b00-48e8-b2bf-28eec249be0a\") " pod="openshift-machine-config-operator/machine-config-server-8pjx9" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.107160 4840 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.125944 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.146687 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.165396 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.185546 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.187614 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-8pjx9" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.215529 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpkzb\" (UniqueName: \"kubernetes.io/projected/6ca4d667-0d35-40e6-b681-ec69524cfc2f-kube-api-access-bpkzb\") pod \"ingress-operator-5b745b69d9-zqtkm\" (UID: \"6ca4d667-0d35-40e6-b681-ec69524cfc2f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zqtkm" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.216307 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/444a5f5c-fbf0-4c73-8623-081182f78861-config\") pod \"kube-controller-manager-operator-78b949d7b-k4c7d\" (UID: \"444a5f5c-fbf0-4c73-8623-081182f78861\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-k4c7d" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 
08:59:44.217048 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6ca4d667-0d35-40e6-b681-ec69524cfc2f-metrics-tls\") pod \"ingress-operator-5b745b69d9-zqtkm\" (UID: \"6ca4d667-0d35-40e6-b681-ec69524cfc2f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zqtkm" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.217188 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2hvx\" (UniqueName: \"kubernetes.io/projected/a821fe36-3bdd-4b59-9dd4-004985404023-kube-api-access-f2hvx\") pod \"collect-profiles-29553645-gw77z\" (UID: \"a821fe36-3bdd-4b59-9dd4-004985404023\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553645-gw77z" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.217311 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/40a6df27-50b3-452a-940a-aab6b087cdb2-installation-pull-secrets\") pod \"image-registry-697d97f7c8-pdrj8\" (UID: \"40a6df27-50b3-452a-940a-aab6b087cdb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-pdrj8" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.217603 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/4c602adf-1ed4-4779-a4f5-5ff24d9ee648-stats-auth\") pod \"router-default-5444994796-wwr6r\" (UID: \"4c602adf-1ed4-4779-a4f5-5ff24d9ee648\") " pod="openshift-ingress/router-default-5444994796-wwr6r" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.218411 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/e3d07b65-1690-44b1-a232-5a9d4187e89d-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-2kbzn\" (UID: \"e3d07b65-1690-44b1-a232-5a9d4187e89d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2kbzn" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.220010 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/444a5f5c-fbf0-4c73-8623-081182f78861-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-k4c7d\" (UID: \"444a5f5c-fbf0-4c73-8623-081182f78861\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-k4c7d" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.220083 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/444a5f5c-fbf0-4c73-8623-081182f78861-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-k4c7d\" (UID: \"444a5f5c-fbf0-4c73-8623-081182f78861\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-k4c7d" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.220385 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcae32d8-1c29-45c8-aadc-332b3ad1dd5c-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rjm9q\" (UID: \"dcae32d8-1c29-45c8-aadc-332b3ad1dd5c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rjm9q" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.220976 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7pwp\" (UniqueName: \"kubernetes.io/projected/e3d07b65-1690-44b1-a232-5a9d4187e89d-kube-api-access-g7pwp\") pod 
\"package-server-manager-789f6589d5-2kbzn\" (UID: \"e3d07b65-1690-44b1-a232-5a9d4187e89d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2kbzn" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.221353 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lrwl\" (UniqueName: \"kubernetes.io/projected/40a6df27-50b3-452a-940a-aab6b087cdb2-kube-api-access-9lrwl\") pod \"image-registry-697d97f7c8-pdrj8\" (UID: \"40a6df27-50b3-452a-940a-aab6b087cdb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-pdrj8" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.221763 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/33326f34-f442-42be-9bd2-39cf5627b953-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-d6cv4\" (UID: \"33326f34-f442-42be-9bd2-39cf5627b953\") " pod="openshift-marketplace/marketplace-operator-79b997595-d6cv4" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.221931 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4c602adf-1ed4-4779-a4f5-5ff24d9ee648-service-ca-bundle\") pod \"router-default-5444994796-wwr6r\" (UID: \"4c602adf-1ed4-4779-a4f5-5ff24d9ee648\") " pod="openshift-ingress/router-default-5444994796-wwr6r" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.221997 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4b3c8f62-4252-41b4-8e96-8788caae161b-images\") pod \"machine-config-operator-74547568cd-drmmh\" (UID: \"4b3c8f62-4252-41b4-8e96-8788caae161b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-drmmh" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 
08:59:44.222056 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/4c602adf-1ed4-4779-a4f5-5ff24d9ee648-default-certificate\") pod \"router-default-5444994796-wwr6r\" (UID: \"4c602adf-1ed4-4779-a4f5-5ff24d9ee648\") " pod="openshift-ingress/router-default-5444994796-wwr6r" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.222159 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6ca4d667-0d35-40e6-b681-ec69524cfc2f-trusted-ca\") pod \"ingress-operator-5b745b69d9-zqtkm\" (UID: \"6ca4d667-0d35-40e6-b681-ec69524cfc2f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zqtkm" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.222418 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a821fe36-3bdd-4b59-9dd4-004985404023-config-volume\") pod \"collect-profiles-29553645-gw77z\" (UID: \"a821fe36-3bdd-4b59-9dd4-004985404023\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553645-gw77z" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.222617 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/3be5fb83-d2b0-4173-8977-6681ddeab581-srv-cert\") pod \"olm-operator-6b444d44fb-pcgsw\" (UID: \"3be5fb83-d2b0-4173-8977-6681ddeab581\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pcgsw" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.222731 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6ca4d667-0d35-40e6-b681-ec69524cfc2f-bound-sa-token\") pod \"ingress-operator-5b745b69d9-zqtkm\" (UID: 
\"6ca4d667-0d35-40e6-b681-ec69524cfc2f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zqtkm" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.222853 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b9c9ed82-0f76-48b1-8cb7-af788bf40902-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-xt7ft\" (UID: \"b9c9ed82-0f76-48b1-8cb7-af788bf40902\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xt7ft" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.222968 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5b9q\" (UniqueName: \"kubernetes.io/projected/511c45a2-2ea8-4e84-9f1d-2901e39e7e36-kube-api-access-l5b9q\") pod \"multus-admission-controller-857f4d67dd-mcz2j\" (UID: \"511c45a2-2ea8-4e84-9f1d-2901e39e7e36\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-mcz2j" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.223098 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r444p\" (UniqueName: \"kubernetes.io/projected/3be5fb83-d2b0-4173-8977-6681ddeab581-kube-api-access-r444p\") pod \"olm-operator-6b444d44fb-pcgsw\" (UID: \"3be5fb83-d2b0-4173-8977-6681ddeab581\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pcgsw" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.223154 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dcae32d8-1c29-45c8-aadc-332b3ad1dd5c-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rjm9q\" (UID: \"dcae32d8-1c29-45c8-aadc-332b3ad1dd5c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rjm9q" Mar 11 08:59:44 crc 
kubenswrapper[4840]: I0311 08:59:44.223192 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/40a6df27-50b3-452a-940a-aab6b087cdb2-registry-certificates\") pod \"image-registry-697d97f7c8-pdrj8\" (UID: \"40a6df27-50b3-452a-940a-aab6b087cdb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-pdrj8" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.223226 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vws9f\" (UniqueName: \"kubernetes.io/projected/b9c9ed82-0f76-48b1-8cb7-af788bf40902-kube-api-access-vws9f\") pod \"machine-config-controller-84d6567774-xt7ft\" (UID: \"b9c9ed82-0f76-48b1-8cb7-af788bf40902\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xt7ft" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.223284 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/40a6df27-50b3-452a-940a-aab6b087cdb2-ca-trust-extracted\") pod \"image-registry-697d97f7c8-pdrj8\" (UID: \"40a6df27-50b3-452a-940a-aab6b087cdb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-pdrj8" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.223336 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkh5m\" (UniqueName: \"kubernetes.io/projected/4b3c8f62-4252-41b4-8e96-8788caae161b-kube-api-access-zkh5m\") pod \"machine-config-operator-74547568cd-drmmh\" (UID: \"4b3c8f62-4252-41b4-8e96-8788caae161b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-drmmh" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.223431 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/40a6df27-50b3-452a-940a-aab6b087cdb2-trusted-ca\") pod \"image-registry-697d97f7c8-pdrj8\" (UID: \"40a6df27-50b3-452a-940a-aab6b087cdb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-pdrj8" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.223696 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4b3c8f62-4252-41b4-8e96-8788caae161b-auth-proxy-config\") pod \"machine-config-operator-74547568cd-drmmh\" (UID: \"4b3c8f62-4252-41b4-8e96-8788caae161b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-drmmh" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.223742 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcae32d8-1c29-45c8-aadc-332b3ad1dd5c-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rjm9q\" (UID: \"dcae32d8-1c29-45c8-aadc-332b3ad1dd5c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rjm9q" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.223836 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4c602adf-1ed4-4779-a4f5-5ff24d9ee648-metrics-certs\") pod \"router-default-5444994796-wwr6r\" (UID: \"4c602adf-1ed4-4779-a4f5-5ff24d9ee648\") " pod="openshift-ingress/router-default-5444994796-wwr6r" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.224341 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lf8sf\" (UniqueName: \"kubernetes.io/projected/289c41a4-7b72-4bf9-9fc3-31f532ec6bf6-kube-api-access-lf8sf\") pod \"migrator-59844c95c7-nz4rs\" (UID: \"289c41a4-7b72-4bf9-9fc3-31f532ec6bf6\") " 
pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-nz4rs" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.224555 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/40a6df27-50b3-452a-940a-aab6b087cdb2-registry-tls\") pod \"image-registry-697d97f7c8-pdrj8\" (UID: \"40a6df27-50b3-452a-940a-aab6b087cdb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-pdrj8" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.224869 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pdrj8\" (UID: \"40a6df27-50b3-452a-940a-aab6b087cdb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-pdrj8" Mar 11 08:59:44 crc kubenswrapper[4840]: E0311 08:59:44.225410 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-11 08:59:44.725386154 +0000 UTC m=+183.391056189 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pdrj8" (UID: "40a6df27-50b3-452a-940a-aab6b087cdb2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.225731 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b9c9ed82-0f76-48b1-8cb7-af788bf40902-proxy-tls\") pod \"machine-config-controller-84d6567774-xt7ft\" (UID: \"b9c9ed82-0f76-48b1-8cb7-af788bf40902\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xt7ft" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.226075 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/40a6df27-50b3-452a-940a-aab6b087cdb2-bound-sa-token\") pod \"image-registry-697d97f7c8-pdrj8\" (UID: \"40a6df27-50b3-452a-940a-aab6b087cdb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-pdrj8" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.226141 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/3be5fb83-d2b0-4173-8977-6681ddeab581-profile-collector-cert\") pod \"olm-operator-6b444d44fb-pcgsw\" (UID: \"3be5fb83-d2b0-4173-8977-6681ddeab581\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pcgsw" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.226178 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/33326f34-f442-42be-9bd2-39cf5627b953-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-d6cv4\" (UID: \"33326f34-f442-42be-9bd2-39cf5627b953\") " pod="openshift-marketplace/marketplace-operator-79b997595-d6cv4" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.226224 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.226323 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/511c45a2-2ea8-4e84-9f1d-2901e39e7e36-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-mcz2j\" (UID: \"511c45a2-2ea8-4e84-9f1d-2901e39e7e36\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-mcz2j" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.226436 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qq6d2\" (UniqueName: \"kubernetes.io/projected/4c602adf-1ed4-4779-a4f5-5ff24d9ee648-kube-api-access-qq6d2\") pod \"router-default-5444994796-wwr6r\" (UID: \"4c602adf-1ed4-4779-a4f5-5ff24d9ee648\") " pod="openshift-ingress/router-default-5444994796-wwr6r" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.226511 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qggmx\" (UniqueName: \"kubernetes.io/projected/33326f34-f442-42be-9bd2-39cf5627b953-kube-api-access-qggmx\") pod \"marketplace-operator-79b997595-d6cv4\" (UID: \"33326f34-f442-42be-9bd2-39cf5627b953\") " pod="openshift-marketplace/marketplace-operator-79b997595-d6cv4" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.226568 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/a821fe36-3bdd-4b59-9dd4-004985404023-secret-volume\") pod \"collect-profiles-29553645-gw77z\" (UID: \"a821fe36-3bdd-4b59-9dd4-004985404023\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553645-gw77z" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.226634 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4b3c8f62-4252-41b4-8e96-8788caae161b-proxy-tls\") pod \"machine-config-operator-74547568cd-drmmh\" (UID: \"4b3c8f62-4252-41b4-8e96-8788caae161b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-drmmh" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.246282 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.266740 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.287350 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.307076 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.328063 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.329101 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 11 
08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.329299 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9b7b\" (UniqueName: \"kubernetes.io/projected/c9dcb2e3-326d-461e-9b69-4afb097f812d-kube-api-access-r9b7b\") pod \"packageserver-d55dfcdfc-pvx88\" (UID: \"c9dcb2e3-326d-461e-9b69-4afb097f812d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pvx88" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.329351 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b9c9ed82-0f76-48b1-8cb7-af788bf40902-proxy-tls\") pod \"machine-config-controller-84d6567774-xt7ft\" (UID: \"b9c9ed82-0f76-48b1-8cb7-af788bf40902\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xt7ft" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.329396 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/08979e0d-51f2-42ed-acf4-afc014830489-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-5wlch\" (UID: \"08979e0d-51f2-42ed-acf4-afc014830489\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5wlch" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.329538 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c9dcb2e3-326d-461e-9b69-4afb097f812d-webhook-cert\") pod \"packageserver-d55dfcdfc-pvx88\" (UID: \"c9dcb2e3-326d-461e-9b69-4afb097f812d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pvx88" Mar 11 08:59:44 crc kubenswrapper[4840]: E0311 08:59:44.329671 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-11 08:59:44.829545951 +0000 UTC m=+183.495215936 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.329778 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/eccdf140-9410-4209-be5a-eb865704291a-cert\") pod \"ingress-canary-2zpp2\" (UID: \"eccdf140-9410-4209-be5a-eb865704291a\") " pod="openshift-ingress-canary/ingress-canary-2zpp2" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.329892 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/40a6df27-50b3-452a-940a-aab6b087cdb2-bound-sa-token\") pod \"image-registry-697d97f7c8-pdrj8\" (UID: \"40a6df27-50b3-452a-940a-aab6b087cdb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-pdrj8" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.330038 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/3be5fb83-d2b0-4173-8977-6681ddeab581-profile-collector-cert\") pod \"olm-operator-6b444d44fb-pcgsw\" (UID: \"3be5fb83-d2b0-4173-8977-6681ddeab581\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pcgsw" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.330094 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/33326f34-f442-42be-9bd2-39cf5627b953-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-d6cv4\" (UID: \"33326f34-f442-42be-9bd2-39cf5627b953\") " pod="openshift-marketplace/marketplace-operator-79b997595-d6cv4" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.330154 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/c0687081-fb8b-4b87-945d-b107b2f86966-plugins-dir\") pod \"csi-hostpathplugin-8rsqx\" (UID: \"c0687081-fb8b-4b87-945d-b107b2f86966\") " pod="hostpath-provisioner/csi-hostpathplugin-8rsqx" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.330195 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/511c45a2-2ea8-4e84-9f1d-2901e39e7e36-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-mcz2j\" (UID: \"511c45a2-2ea8-4e84-9f1d-2901e39e7e36\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-mcz2j" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.330226 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6e87442b-4d54-472c-bad6-e2086c95df50-metrics-certs\") pod \"network-metrics-daemon-gjgkz\" (UID: \"6e87442b-4d54-472c-bad6-e2086c95df50\") " pod="openshift-multus/network-metrics-daemon-gjgkz" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.330273 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c9dcb2e3-326d-461e-9b69-4afb097f812d-apiservice-cert\") pod \"packageserver-d55dfcdfc-pvx88\" (UID: \"c9dcb2e3-326d-461e-9b69-4afb097f812d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pvx88" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 
08:59:44.330306 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qq6d2\" (UniqueName: \"kubernetes.io/projected/4c602adf-1ed4-4779-a4f5-5ff24d9ee648-kube-api-access-qq6d2\") pod \"router-default-5444994796-wwr6r\" (UID: \"4c602adf-1ed4-4779-a4f5-5ff24d9ee648\") " pod="openshift-ingress/router-default-5444994796-wwr6r" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.330332 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a821fe36-3bdd-4b59-9dd4-004985404023-secret-volume\") pod \"collect-profiles-29553645-gw77z\" (UID: \"a821fe36-3bdd-4b59-9dd4-004985404023\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553645-gw77z" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.330359 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qggmx\" (UniqueName: \"kubernetes.io/projected/33326f34-f442-42be-9bd2-39cf5627b953-kube-api-access-qggmx\") pod \"marketplace-operator-79b997595-d6cv4\" (UID: \"33326f34-f442-42be-9bd2-39cf5627b953\") " pod="openshift-marketplace/marketplace-operator-79b997595-d6cv4" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.330402 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4b3c8f62-4252-41b4-8e96-8788caae161b-proxy-tls\") pod \"machine-config-operator-74547568cd-drmmh\" (UID: \"4b3c8f62-4252-41b4-8e96-8788caae161b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-drmmh" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.330451 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nf6jb\" (UniqueName: \"kubernetes.io/projected/d33549b3-2af3-444d-9988-1fce8d590d8a-kube-api-access-nf6jb\") pod \"service-ca-operator-777779d784-8ql29\" (UID: 
\"d33549b3-2af3-444d-9988-1fce8d590d8a\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-8ql29" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.330521 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpkzb\" (UniqueName: \"kubernetes.io/projected/6ca4d667-0d35-40e6-b681-ec69524cfc2f-kube-api-access-bpkzb\") pod \"ingress-operator-5b745b69d9-zqtkm\" (UID: \"6ca4d667-0d35-40e6-b681-ec69524cfc2f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zqtkm" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.330552 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pzgc\" (UniqueName: \"kubernetes.io/projected/a16bb49a-f4f4-43df-9775-8cea5ed8e4e5-kube-api-access-4pzgc\") pod \"catalog-operator-68c6474976-6b2bw\" (UID: \"a16bb49a-f4f4-43df-9775-8cea5ed8e4e5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6b2bw" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.330622 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/444a5f5c-fbf0-4c73-8623-081182f78861-config\") pod \"kube-controller-manager-operator-78b949d7b-k4c7d\" (UID: \"444a5f5c-fbf0-4c73-8623-081182f78861\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-k4c7d" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.330672 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6ca4d667-0d35-40e6-b681-ec69524cfc2f-metrics-tls\") pod \"ingress-operator-5b745b69d9-zqtkm\" (UID: \"6ca4d667-0d35-40e6-b681-ec69524cfc2f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zqtkm" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.330711 4840 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psmfr\" (UniqueName: \"kubernetes.io/projected/2f093229-f88a-4571-a575-8efa44a1b8dd-kube-api-access-psmfr\") pod \"service-ca-9c57cc56f-k9hqk\" (UID: \"2f093229-f88a-4571-a575-8efa44a1b8dd\") " pod="openshift-service-ca/service-ca-9c57cc56f-k9hqk" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.330743 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/40a6df27-50b3-452a-940a-aab6b087cdb2-installation-pull-secrets\") pod \"image-registry-697d97f7c8-pdrj8\" (UID: \"40a6df27-50b3-452a-940a-aab6b087cdb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-pdrj8" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.330769 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2hvx\" (UniqueName: \"kubernetes.io/projected/a821fe36-3bdd-4b59-9dd4-004985404023-kube-api-access-f2hvx\") pod \"collect-profiles-29553645-gw77z\" (UID: \"a821fe36-3bdd-4b59-9dd4-004985404023\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553645-gw77z" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.330798 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a16bb49a-f4f4-43df-9775-8cea5ed8e4e5-profile-collector-cert\") pod \"catalog-operator-68c6474976-6b2bw\" (UID: \"a16bb49a-f4f4-43df-9775-8cea5ed8e4e5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6b2bw" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.330845 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5e7e387f-48ae-4f10-8fb5-d163b751b3fa-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-98fj7\" (UID: 
\"5e7e387f-48ae-4f10-8fb5-d163b751b3fa\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-98fj7" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.330878 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/4c602adf-1ed4-4779-a4f5-5ff24d9ee648-stats-auth\") pod \"router-default-5444994796-wwr6r\" (UID: \"4c602adf-1ed4-4779-a4f5-5ff24d9ee648\") " pod="openshift-ingress/router-default-5444994796-wwr6r" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.330934 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/c0687081-fb8b-4b87-945d-b107b2f86966-csi-data-dir\") pod \"csi-hostpathplugin-8rsqx\" (UID: \"c0687081-fb8b-4b87-945d-b107b2f86966\") " pod="hostpath-provisioner/csi-hostpathplugin-8rsqx" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.330961 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/e3d07b65-1690-44b1-a232-5a9d4187e89d-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-2kbzn\" (UID: \"e3d07b65-1690-44b1-a232-5a9d4187e89d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2kbzn" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.330988 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/2f093229-f88a-4571-a575-8efa44a1b8dd-signing-cabundle\") pod \"service-ca-9c57cc56f-k9hqk\" (UID: \"2f093229-f88a-4571-a575-8efa44a1b8dd\") " pod="openshift-service-ca/service-ca-9c57cc56f-k9hqk" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.331035 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/444a5f5c-fbf0-4c73-8623-081182f78861-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-k4c7d\" (UID: \"444a5f5c-fbf0-4c73-8623-081182f78861\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-k4c7d" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.331061 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/c9dcb2e3-326d-461e-9b69-4afb097f812d-tmpfs\") pod \"packageserver-d55dfcdfc-pvx88\" (UID: \"c9dcb2e3-326d-461e-9b69-4afb097f812d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pvx88" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.331106 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/444a5f5c-fbf0-4c73-8623-081182f78861-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-k4c7d\" (UID: \"444a5f5c-fbf0-4c73-8623-081182f78861\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-k4c7d" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.331146 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcae32d8-1c29-45c8-aadc-332b3ad1dd5c-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rjm9q\" (UID: \"dcae32d8-1c29-45c8-aadc-332b3ad1dd5c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rjm9q" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.331170 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7pwp\" (UniqueName: \"kubernetes.io/projected/e3d07b65-1690-44b1-a232-5a9d4187e89d-kube-api-access-g7pwp\") pod \"package-server-manager-789f6589d5-2kbzn\" (UID: 
\"e3d07b65-1690-44b1-a232-5a9d4187e89d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2kbzn" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.331203 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9lrwl\" (UniqueName: \"kubernetes.io/projected/40a6df27-50b3-452a-940a-aab6b087cdb2-kube-api-access-9lrwl\") pod \"image-registry-697d97f7c8-pdrj8\" (UID: \"40a6df27-50b3-452a-940a-aab6b087cdb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-pdrj8" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.331231 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dl84\" (UniqueName: \"kubernetes.io/projected/cdd63dfa-6787-4eca-937b-758358444ffc-kube-api-access-7dl84\") pod \"dns-default-cxgxb\" (UID: \"cdd63dfa-6787-4eca-937b-758358444ffc\") " pod="openshift-dns/dns-default-cxgxb" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.331270 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/33326f34-f442-42be-9bd2-39cf5627b953-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-d6cv4\" (UID: \"33326f34-f442-42be-9bd2-39cf5627b953\") " pod="openshift-marketplace/marketplace-operator-79b997595-d6cv4" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.331294 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/2f093229-f88a-4571-a575-8efa44a1b8dd-signing-key\") pod \"service-ca-9c57cc56f-k9hqk\" (UID: \"2f093229-f88a-4571-a575-8efa44a1b8dd\") " pod="openshift-service-ca/service-ca-9c57cc56f-k9hqk" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.331318 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/4b3c8f62-4252-41b4-8e96-8788caae161b-images\") pod \"machine-config-operator-74547568cd-drmmh\" (UID: \"4b3c8f62-4252-41b4-8e96-8788caae161b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-drmmh" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.331341 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4c602adf-1ed4-4779-a4f5-5ff24d9ee648-service-ca-bundle\") pod \"router-default-5444994796-wwr6r\" (UID: \"4c602adf-1ed4-4779-a4f5-5ff24d9ee648\") " pod="openshift-ingress/router-default-5444994796-wwr6r" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.331365 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/c0687081-fb8b-4b87-945d-b107b2f86966-mountpoint-dir\") pod \"csi-hostpathplugin-8rsqx\" (UID: \"c0687081-fb8b-4b87-945d-b107b2f86966\") " pod="hostpath-provisioner/csi-hostpathplugin-8rsqx" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.331391 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/4c602adf-1ed4-4779-a4f5-5ff24d9ee648-default-certificate\") pod \"router-default-5444994796-wwr6r\" (UID: \"4c602adf-1ed4-4779-a4f5-5ff24d9ee648\") " pod="openshift-ingress/router-default-5444994796-wwr6r" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.331417 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a16bb49a-f4f4-43df-9775-8cea5ed8e4e5-srv-cert\") pod \"catalog-operator-68c6474976-6b2bw\" (UID: \"a16bb49a-f4f4-43df-9775-8cea5ed8e4e5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6b2bw" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.331440 4840 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c0687081-fb8b-4b87-945d-b107b2f86966-socket-dir\") pod \"csi-hostpathplugin-8rsqx\" (UID: \"c0687081-fb8b-4b87-945d-b107b2f86966\") " pod="hostpath-provisioner/csi-hostpathplugin-8rsqx" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.331496 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6ca4d667-0d35-40e6-b681-ec69524cfc2f-trusted-ca\") pod \"ingress-operator-5b745b69d9-zqtkm\" (UID: \"6ca4d667-0d35-40e6-b681-ec69524cfc2f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zqtkm" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.331539 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c0687081-fb8b-4b87-945d-b107b2f86966-registration-dir\") pod \"csi-hostpathplugin-8rsqx\" (UID: \"c0687081-fb8b-4b87-945d-b107b2f86966\") " pod="hostpath-provisioner/csi-hostpathplugin-8rsqx" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.331577 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a821fe36-3bdd-4b59-9dd4-004985404023-config-volume\") pod \"collect-profiles-29553645-gw77z\" (UID: \"a821fe36-3bdd-4b59-9dd4-004985404023\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553645-gw77z" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.331603 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/3be5fb83-d2b0-4173-8977-6681ddeab581-srv-cert\") pod \"olm-operator-6b444d44fb-pcgsw\" (UID: \"3be5fb83-d2b0-4173-8977-6681ddeab581\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pcgsw" Mar 11 08:59:44 
crc kubenswrapper[4840]: I0311 08:59:44.331641 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5b9q\" (UniqueName: \"kubernetes.io/projected/511c45a2-2ea8-4e84-9f1d-2901e39e7e36-kube-api-access-l5b9q\") pod \"multus-admission-controller-857f4d67dd-mcz2j\" (UID: \"511c45a2-2ea8-4e84-9f1d-2901e39e7e36\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-mcz2j" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.331666 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6ca4d667-0d35-40e6-b681-ec69524cfc2f-bound-sa-token\") pod \"ingress-operator-5b745b69d9-zqtkm\" (UID: \"6ca4d667-0d35-40e6-b681-ec69524cfc2f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zqtkm" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.331688 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b9c9ed82-0f76-48b1-8cb7-af788bf40902-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-xt7ft\" (UID: \"b9c9ed82-0f76-48b1-8cb7-af788bf40902\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xt7ft" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.331725 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dcae32d8-1c29-45c8-aadc-332b3ad1dd5c-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rjm9q\" (UID: \"dcae32d8-1c29-45c8-aadc-332b3ad1dd5c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rjm9q" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.331757 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/08979e0d-51f2-42ed-acf4-afc014830489-config\") pod \"kube-apiserver-operator-766d6c64bb-5wlch\" (UID: \"08979e0d-51f2-42ed-acf4-afc014830489\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5wlch" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.331789 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r444p\" (UniqueName: \"kubernetes.io/projected/3be5fb83-d2b0-4173-8977-6681ddeab581-kube-api-access-r444p\") pod \"olm-operator-6b444d44fb-pcgsw\" (UID: \"3be5fb83-d2b0-4173-8977-6681ddeab581\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pcgsw" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.331819 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/40a6df27-50b3-452a-940a-aab6b087cdb2-ca-trust-extracted\") pod \"image-registry-697d97f7c8-pdrj8\" (UID: \"40a6df27-50b3-452a-940a-aab6b087cdb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-pdrj8" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.331843 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/40a6df27-50b3-452a-940a-aab6b087cdb2-registry-certificates\") pod \"image-registry-697d97f7c8-pdrj8\" (UID: \"40a6df27-50b3-452a-940a-aab6b087cdb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-pdrj8" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.331868 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vws9f\" (UniqueName: \"kubernetes.io/projected/b9c9ed82-0f76-48b1-8cb7-af788bf40902-kube-api-access-vws9f\") pod \"machine-config-controller-84d6567774-xt7ft\" (UID: \"b9c9ed82-0f76-48b1-8cb7-af788bf40902\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xt7ft" Mar 11 08:59:44 
crc kubenswrapper[4840]: I0311 08:59:44.331891 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kv89\" (UniqueName: \"kubernetes.io/projected/eccdf140-9410-4209-be5a-eb865704291a-kube-api-access-5kv89\") pod \"ingress-canary-2zpp2\" (UID: \"eccdf140-9410-4209-be5a-eb865704291a\") " pod="openshift-ingress-canary/ingress-canary-2zpp2" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.331917 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zkh5m\" (UniqueName: \"kubernetes.io/projected/4b3c8f62-4252-41b4-8e96-8788caae161b-kube-api-access-zkh5m\") pod \"machine-config-operator-74547568cd-drmmh\" (UID: \"4b3c8f62-4252-41b4-8e96-8788caae161b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-drmmh" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.331959 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e7e387f-48ae-4f10-8fb5-d163b751b3fa-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-98fj7\" (UID: \"5e7e387f-48ae-4f10-8fb5-d163b751b3fa\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-98fj7" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.331997 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cdd63dfa-6787-4eca-937b-758358444ffc-metrics-tls\") pod \"dns-default-cxgxb\" (UID: \"cdd63dfa-6787-4eca-937b-758358444ffc\") " pod="openshift-dns/dns-default-cxgxb" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.332034 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d33549b3-2af3-444d-9988-1fce8d590d8a-serving-cert\") pod 
\"service-ca-operator-777779d784-8ql29\" (UID: \"d33549b3-2af3-444d-9988-1fce8d590d8a\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-8ql29" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.332070 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/40a6df27-50b3-452a-940a-aab6b087cdb2-trusted-ca\") pod \"image-registry-697d97f7c8-pdrj8\" (UID: \"40a6df27-50b3-452a-940a-aab6b087cdb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-pdrj8" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.332107 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9kft\" (UniqueName: \"kubernetes.io/projected/26477d16-ea37-4c74-b5d0-b537147ab754-kube-api-access-q9kft\") pod \"control-plane-machine-set-operator-78cbb6b69f-79btk\" (UID: \"26477d16-ea37-4c74-b5d0-b537147ab754\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-79btk" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.332143 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4b3c8f62-4252-41b4-8e96-8788caae161b-auth-proxy-config\") pod \"machine-config-operator-74547568cd-drmmh\" (UID: \"4b3c8f62-4252-41b4-8e96-8788caae161b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-drmmh" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.332171 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcae32d8-1c29-45c8-aadc-332b3ad1dd5c-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rjm9q\" (UID: \"dcae32d8-1c29-45c8-aadc-332b3ad1dd5c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rjm9q" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 
08:59:44.332205 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4c602adf-1ed4-4779-a4f5-5ff24d9ee648-metrics-certs\") pod \"router-default-5444994796-wwr6r\" (UID: \"4c602adf-1ed4-4779-a4f5-5ff24d9ee648\") " pod="openshift-ingress/router-default-5444994796-wwr6r" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.332236 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cdd63dfa-6787-4eca-937b-758358444ffc-config-volume\") pod \"dns-default-cxgxb\" (UID: \"cdd63dfa-6787-4eca-937b-758358444ffc\") " pod="openshift-dns/dns-default-cxgxb" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.332264 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/26477d16-ea37-4c74-b5d0-b537147ab754-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-79btk\" (UID: \"26477d16-ea37-4c74-b5d0-b537147ab754\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-79btk" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.332291 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lf8sf\" (UniqueName: \"kubernetes.io/projected/289c41a4-7b72-4bf9-9fc3-31f532ec6bf6-kube-api-access-lf8sf\") pod \"migrator-59844c95c7-nz4rs\" (UID: \"289c41a4-7b72-4bf9-9fc3-31f532ec6bf6\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-nz4rs" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.332315 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlpbk\" (UniqueName: \"kubernetes.io/projected/5e7e387f-48ae-4f10-8fb5-d163b751b3fa-kube-api-access-qlpbk\") pod 
\"kube-storage-version-migrator-operator-b67b599dd-98fj7\" (UID: \"5e7e387f-48ae-4f10-8fb5-d163b751b3fa\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-98fj7" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.332350 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pdrj8\" (UID: \"40a6df27-50b3-452a-940a-aab6b087cdb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-pdrj8" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.332381 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/40a6df27-50b3-452a-940a-aab6b087cdb2-registry-tls\") pod \"image-registry-697d97f7c8-pdrj8\" (UID: \"40a6df27-50b3-452a-940a-aab6b087cdb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-pdrj8" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.332416 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d33549b3-2af3-444d-9988-1fce8d590d8a-config\") pod \"service-ca-operator-777779d784-8ql29\" (UID: \"d33549b3-2af3-444d-9988-1fce8d590d8a\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-8ql29" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.332462 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9w8q5\" (UniqueName: \"kubernetes.io/projected/c0687081-fb8b-4b87-945d-b107b2f86966-kube-api-access-9w8q5\") pod \"csi-hostpathplugin-8rsqx\" (UID: \"c0687081-fb8b-4b87-945d-b107b2f86966\") " pod="hostpath-provisioner/csi-hostpathplugin-8rsqx" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.332527 4840 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/08979e0d-51f2-42ed-acf4-afc014830489-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-5wlch\" (UID: \"08979e0d-51f2-42ed-acf4-afc014830489\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5wlch" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.333491 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcae32d8-1c29-45c8-aadc-332b3ad1dd5c-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rjm9q\" (UID: \"dcae32d8-1c29-45c8-aadc-332b3ad1dd5c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rjm9q" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.334323 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/3be5fb83-d2b0-4173-8977-6681ddeab581-profile-collector-cert\") pod \"olm-operator-6b444d44fb-pcgsw\" (UID: \"3be5fb83-d2b0-4173-8977-6681ddeab581\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pcgsw" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.334649 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b9c9ed82-0f76-48b1-8cb7-af788bf40902-proxy-tls\") pod \"machine-config-controller-84d6567774-xt7ft\" (UID: \"b9c9ed82-0f76-48b1-8cb7-af788bf40902\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xt7ft" Mar 11 08:59:44 crc kubenswrapper[4840]: E0311 08:59:44.335342 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-03-11 08:59:44.835328156 +0000 UTC m=+183.500997971 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pdrj8" (UID: "40a6df27-50b3-452a-940a-aab6b087cdb2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.335917 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/444a5f5c-fbf0-4c73-8623-081182f78861-config\") pod \"kube-controller-manager-operator-78b949d7b-k4c7d\" (UID: \"444a5f5c-fbf0-4c73-8623-081182f78861\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-k4c7d" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.336306 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/33326f34-f442-42be-9bd2-39cf5627b953-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-d6cv4\" (UID: \"33326f34-f442-42be-9bd2-39cf5627b953\") " pod="openshift-marketplace/marketplace-operator-79b997595-d6cv4" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.337192 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/40a6df27-50b3-452a-940a-aab6b087cdb2-registry-certificates\") pod \"image-registry-697d97f7c8-pdrj8\" (UID: \"40a6df27-50b3-452a-940a-aab6b087cdb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-pdrj8" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.338036 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" 
(UniqueName: \"kubernetes.io/secret/4b3c8f62-4252-41b4-8e96-8788caae161b-proxy-tls\") pod \"machine-config-operator-74547568cd-drmmh\" (UID: \"4b3c8f62-4252-41b4-8e96-8788caae161b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-drmmh" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.338301 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/40a6df27-50b3-452a-940a-aab6b087cdb2-registry-tls\") pod \"image-registry-697d97f7c8-pdrj8\" (UID: \"40a6df27-50b3-452a-940a-aab6b087cdb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-pdrj8" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.338771 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/40a6df27-50b3-452a-940a-aab6b087cdb2-installation-pull-secrets\") pod \"image-registry-697d97f7c8-pdrj8\" (UID: \"40a6df27-50b3-452a-940a-aab6b087cdb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-pdrj8" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.339100 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a821fe36-3bdd-4b59-9dd4-004985404023-config-volume\") pod \"collect-profiles-29553645-gw77z\" (UID: \"a821fe36-3bdd-4b59-9dd4-004985404023\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553645-gw77z" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.339144 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b9c9ed82-0f76-48b1-8cb7-af788bf40902-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-xt7ft\" (UID: \"b9c9ed82-0f76-48b1-8cb7-af788bf40902\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xt7ft" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 
08:59:44.340292 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4b3c8f62-4252-41b4-8e96-8788caae161b-auth-proxy-config\") pod \"machine-config-operator-74547568cd-drmmh\" (UID: \"4b3c8f62-4252-41b4-8e96-8788caae161b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-drmmh" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.341128 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/4c602adf-1ed4-4779-a4f5-5ff24d9ee648-stats-auth\") pod \"router-default-5444994796-wwr6r\" (UID: \"4c602adf-1ed4-4779-a4f5-5ff24d9ee648\") " pod="openshift-ingress/router-default-5444994796-wwr6r" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.341333 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/511c45a2-2ea8-4e84-9f1d-2901e39e7e36-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-mcz2j\" (UID: \"511c45a2-2ea8-4e84-9f1d-2901e39e7e36\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-mcz2j" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.341427 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4c602adf-1ed4-4779-a4f5-5ff24d9ee648-service-ca-bundle\") pod \"router-default-5444994796-wwr6r\" (UID: \"4c602adf-1ed4-4779-a4f5-5ff24d9ee648\") " pod="openshift-ingress/router-default-5444994796-wwr6r" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.341506 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/40a6df27-50b3-452a-940a-aab6b087cdb2-ca-trust-extracted\") pod \"image-registry-697d97f7c8-pdrj8\" (UID: \"40a6df27-50b3-452a-940a-aab6b087cdb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-pdrj8" 
Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.342078 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/3be5fb83-d2b0-4173-8977-6681ddeab581-srv-cert\") pod \"olm-operator-6b444d44fb-pcgsw\" (UID: \"3be5fb83-d2b0-4173-8977-6681ddeab581\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pcgsw" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.342237 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcae32d8-1c29-45c8-aadc-332b3ad1dd5c-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rjm9q\" (UID: \"dcae32d8-1c29-45c8-aadc-332b3ad1dd5c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rjm9q" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.342350 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/444a5f5c-fbf0-4c73-8623-081182f78861-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-k4c7d\" (UID: \"444a5f5c-fbf0-4c73-8623-081182f78861\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-k4c7d" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.342569 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a821fe36-3bdd-4b59-9dd4-004985404023-secret-volume\") pod \"collect-profiles-29553645-gw77z\" (UID: \"a821fe36-3bdd-4b59-9dd4-004985404023\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553645-gw77z" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.344166 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4b3c8f62-4252-41b4-8e96-8788caae161b-images\") pod \"machine-config-operator-74547568cd-drmmh\" (UID: 
\"4b3c8f62-4252-41b4-8e96-8788caae161b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-drmmh" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.345108 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/40a6df27-50b3-452a-940a-aab6b087cdb2-trusted-ca\") pod \"image-registry-697d97f7c8-pdrj8\" (UID: \"40a6df27-50b3-452a-940a-aab6b087cdb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-pdrj8" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.346188 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/e3d07b65-1690-44b1-a232-5a9d4187e89d-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-2kbzn\" (UID: \"e3d07b65-1690-44b1-a232-5a9d4187e89d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2kbzn" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.346591 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6ca4d667-0d35-40e6-b681-ec69524cfc2f-trusted-ca\") pod \"ingress-operator-5b745b69d9-zqtkm\" (UID: \"6ca4d667-0d35-40e6-b681-ec69524cfc2f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zqtkm" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.347132 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/33326f34-f442-42be-9bd2-39cf5627b953-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-d6cv4\" (UID: \"33326f34-f442-42be-9bd2-39cf5627b953\") " pod="openshift-marketplace/marketplace-operator-79b997595-d6cv4" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.347922 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/4c602adf-1ed4-4779-a4f5-5ff24d9ee648-metrics-certs\") pod \"router-default-5444994796-wwr6r\" (UID: \"4c602adf-1ed4-4779-a4f5-5ff24d9ee648\") " pod="openshift-ingress/router-default-5444994796-wwr6r" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.349339 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.356077 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6ca4d667-0d35-40e6-b681-ec69524cfc2f-metrics-tls\") pod \"ingress-operator-5b745b69d9-zqtkm\" (UID: \"6ca4d667-0d35-40e6-b681-ec69524cfc2f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zqtkm" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.358037 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/4c602adf-1ed4-4779-a4f5-5ff24d9ee648-default-certificate\") pod \"router-default-5444994796-wwr6r\" (UID: \"4c602adf-1ed4-4779-a4f5-5ff24d9ee648\") " pod="openshift-ingress/router-default-5444994796-wwr6r" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.366335 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.404379 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.405918 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.409802 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-72h56" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.433765 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 11 08:59:44 crc kubenswrapper[4840]: E0311 08:59:44.433925 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-11 08:59:44.933905683 +0000 UTC m=+183.599575498 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.434300 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/26477d16-ea37-4c74-b5d0-b537147ab754-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-79btk\" (UID: \"26477d16-ea37-4c74-b5d0-b537147ab754\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-79btk" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.434401 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-qlpbk\" (UniqueName: \"kubernetes.io/projected/5e7e387f-48ae-4f10-8fb5-d163b751b3fa-kube-api-access-qlpbk\") pod \"kube-storage-version-migrator-operator-b67b599dd-98fj7\" (UID: \"5e7e387f-48ae-4f10-8fb5-d163b751b3fa\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-98fj7" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.434513 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pdrj8\" (UID: \"40a6df27-50b3-452a-940a-aab6b087cdb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-pdrj8" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.434599 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d33549b3-2af3-444d-9988-1fce8d590d8a-config\") pod \"service-ca-operator-777779d784-8ql29\" (UID: \"d33549b3-2af3-444d-9988-1fce8d590d8a\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-8ql29" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.434691 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9w8q5\" (UniqueName: \"kubernetes.io/projected/c0687081-fb8b-4b87-945d-b107b2f86966-kube-api-access-9w8q5\") pod \"csi-hostpathplugin-8rsqx\" (UID: \"c0687081-fb8b-4b87-945d-b107b2f86966\") " pod="hostpath-provisioner/csi-hostpathplugin-8rsqx" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.434781 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/08979e0d-51f2-42ed-acf4-afc014830489-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-5wlch\" (UID: \"08979e0d-51f2-42ed-acf4-afc014830489\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5wlch" Mar 11 08:59:44 crc kubenswrapper[4840]: E0311 08:59:44.434785 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-11 08:59:44.934777518 +0000 UTC m=+183.600447333 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pdrj8" (UID: "40a6df27-50b3-452a-940a-aab6b087cdb2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.434924 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9b7b\" (UniqueName: \"kubernetes.io/projected/c9dcb2e3-326d-461e-9b69-4afb097f812d-kube-api-access-r9b7b\") pod \"packageserver-d55dfcdfc-pvx88\" (UID: \"c9dcb2e3-326d-461e-9b69-4afb097f812d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pvx88" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.435039 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/08979e0d-51f2-42ed-acf4-afc014830489-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-5wlch\" (UID: \"08979e0d-51f2-42ed-acf4-afc014830489\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5wlch" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.435132 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/c9dcb2e3-326d-461e-9b69-4afb097f812d-webhook-cert\") pod \"packageserver-d55dfcdfc-pvx88\" (UID: \"c9dcb2e3-326d-461e-9b69-4afb097f812d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pvx88" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.435235 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/eccdf140-9410-4209-be5a-eb865704291a-cert\") pod \"ingress-canary-2zpp2\" (UID: \"eccdf140-9410-4209-be5a-eb865704291a\") " pod="openshift-ingress-canary/ingress-canary-2zpp2" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.435289 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d33549b3-2af3-444d-9988-1fce8d590d8a-config\") pod \"service-ca-operator-777779d784-8ql29\" (UID: \"d33549b3-2af3-444d-9988-1fce8d590d8a\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-8ql29" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.435310 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/c0687081-fb8b-4b87-945d-b107b2f86966-plugins-dir\") pod \"csi-hostpathplugin-8rsqx\" (UID: \"c0687081-fb8b-4b87-945d-b107b2f86966\") " pod="hostpath-provisioner/csi-hostpathplugin-8rsqx" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.435444 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c9dcb2e3-326d-461e-9b69-4afb097f812d-apiservice-cert\") pod \"packageserver-d55dfcdfc-pvx88\" (UID: \"c9dcb2e3-326d-461e-9b69-4afb097f812d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pvx88" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.435584 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nf6jb\" (UniqueName: 
\"kubernetes.io/projected/d33549b3-2af3-444d-9988-1fce8d590d8a-kube-api-access-nf6jb\") pod \"service-ca-operator-777779d784-8ql29\" (UID: \"d33549b3-2af3-444d-9988-1fce8d590d8a\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-8ql29" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.435667 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pzgc\" (UniqueName: \"kubernetes.io/projected/a16bb49a-f4f4-43df-9775-8cea5ed8e4e5-kube-api-access-4pzgc\") pod \"catalog-operator-68c6474976-6b2bw\" (UID: \"a16bb49a-f4f4-43df-9775-8cea5ed8e4e5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6b2bw" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.435751 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-psmfr\" (UniqueName: \"kubernetes.io/projected/2f093229-f88a-4571-a575-8efa44a1b8dd-kube-api-access-psmfr\") pod \"service-ca-9c57cc56f-k9hqk\" (UID: \"2f093229-f88a-4571-a575-8efa44a1b8dd\") " pod="openshift-service-ca/service-ca-9c57cc56f-k9hqk" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.435793 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a16bb49a-f4f4-43df-9775-8cea5ed8e4e5-profile-collector-cert\") pod \"catalog-operator-68c6474976-6b2bw\" (UID: \"a16bb49a-f4f4-43df-9775-8cea5ed8e4e5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6b2bw" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.435836 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5e7e387f-48ae-4f10-8fb5-d163b751b3fa-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-98fj7\" (UID: \"5e7e387f-48ae-4f10-8fb5-d163b751b3fa\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-98fj7" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.435865 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/c0687081-fb8b-4b87-945d-b107b2f86966-csi-data-dir\") pod \"csi-hostpathplugin-8rsqx\" (UID: \"c0687081-fb8b-4b87-945d-b107b2f86966\") " pod="hostpath-provisioner/csi-hostpathplugin-8rsqx" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.435922 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/2f093229-f88a-4571-a575-8efa44a1b8dd-signing-cabundle\") pod \"service-ca-9c57cc56f-k9hqk\" (UID: \"2f093229-f88a-4571-a575-8efa44a1b8dd\") " pod="openshift-service-ca/service-ca-9c57cc56f-k9hqk" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.435968 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/c9dcb2e3-326d-461e-9b69-4afb097f812d-tmpfs\") pod \"packageserver-d55dfcdfc-pvx88\" (UID: \"c9dcb2e3-326d-461e-9b69-4afb097f812d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pvx88" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.436078 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dl84\" (UniqueName: \"kubernetes.io/projected/cdd63dfa-6787-4eca-937b-758358444ffc-kube-api-access-7dl84\") pod \"dns-default-cxgxb\" (UID: \"cdd63dfa-6787-4eca-937b-758358444ffc\") " pod="openshift-dns/dns-default-cxgxb" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.436100 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/c0687081-fb8b-4b87-945d-b107b2f86966-csi-data-dir\") pod \"csi-hostpathplugin-8rsqx\" (UID: 
\"c0687081-fb8b-4b87-945d-b107b2f86966\") " pod="hostpath-provisioner/csi-hostpathplugin-8rsqx" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.436131 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/2f093229-f88a-4571-a575-8efa44a1b8dd-signing-key\") pod \"service-ca-9c57cc56f-k9hqk\" (UID: \"2f093229-f88a-4571-a575-8efa44a1b8dd\") " pod="openshift-service-ca/service-ca-9c57cc56f-k9hqk" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.436184 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/c0687081-fb8b-4b87-945d-b107b2f86966-mountpoint-dir\") pod \"csi-hostpathplugin-8rsqx\" (UID: \"c0687081-fb8b-4b87-945d-b107b2f86966\") " pod="hostpath-provisioner/csi-hostpathplugin-8rsqx" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.436212 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a16bb49a-f4f4-43df-9775-8cea5ed8e4e5-srv-cert\") pod \"catalog-operator-68c6474976-6b2bw\" (UID: \"a16bb49a-f4f4-43df-9775-8cea5ed8e4e5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6b2bw" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.436240 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c0687081-fb8b-4b87-945d-b107b2f86966-socket-dir\") pod \"csi-hostpathplugin-8rsqx\" (UID: \"c0687081-fb8b-4b87-945d-b107b2f86966\") " pod="hostpath-provisioner/csi-hostpathplugin-8rsqx" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.436283 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c0687081-fb8b-4b87-945d-b107b2f86966-registration-dir\") pod \"csi-hostpathplugin-8rsqx\" (UID: 
\"c0687081-fb8b-4b87-945d-b107b2f86966\") " pod="hostpath-provisioner/csi-hostpathplugin-8rsqx" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.436364 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08979e0d-51f2-42ed-acf4-afc014830489-config\") pod \"kube-apiserver-operator-766d6c64bb-5wlch\" (UID: \"08979e0d-51f2-42ed-acf4-afc014830489\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5wlch" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.436407 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5kv89\" (UniqueName: \"kubernetes.io/projected/eccdf140-9410-4209-be5a-eb865704291a-kube-api-access-5kv89\") pod \"ingress-canary-2zpp2\" (UID: \"eccdf140-9410-4209-be5a-eb865704291a\") " pod="openshift-ingress-canary/ingress-canary-2zpp2" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.436442 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e7e387f-48ae-4f10-8fb5-d163b751b3fa-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-98fj7\" (UID: \"5e7e387f-48ae-4f10-8fb5-d163b751b3fa\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-98fj7" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.436486 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cdd63dfa-6787-4eca-937b-758358444ffc-metrics-tls\") pod \"dns-default-cxgxb\" (UID: \"cdd63dfa-6787-4eca-937b-758358444ffc\") " pod="openshift-dns/dns-default-cxgxb" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.436517 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d33549b3-2af3-444d-9988-1fce8d590d8a-serving-cert\") pod 
\"service-ca-operator-777779d784-8ql29\" (UID: \"d33549b3-2af3-444d-9988-1fce8d590d8a\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-8ql29" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.436555 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9kft\" (UniqueName: \"kubernetes.io/projected/26477d16-ea37-4c74-b5d0-b537147ab754-kube-api-access-q9kft\") pod \"control-plane-machine-set-operator-78cbb6b69f-79btk\" (UID: \"26477d16-ea37-4c74-b5d0-b537147ab754\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-79btk" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.436600 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/c9dcb2e3-326d-461e-9b69-4afb097f812d-tmpfs\") pod \"packageserver-d55dfcdfc-pvx88\" (UID: \"c9dcb2e3-326d-461e-9b69-4afb097f812d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pvx88" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.437437 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/2f093229-f88a-4571-a575-8efa44a1b8dd-signing-cabundle\") pod \"service-ca-9c57cc56f-k9hqk\" (UID: \"2f093229-f88a-4571-a575-8efa44a1b8dd\") " pod="openshift-service-ca/service-ca-9c57cc56f-k9hqk" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.436606 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cdd63dfa-6787-4eca-937b-758358444ffc-config-volume\") pod \"dns-default-cxgxb\" (UID: \"cdd63dfa-6787-4eca-937b-758358444ffc\") " pod="openshift-dns/dns-default-cxgxb" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.437847 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: 
\"kubernetes.io/host-path/c0687081-fb8b-4b87-945d-b107b2f86966-plugins-dir\") pod \"csi-hostpathplugin-8rsqx\" (UID: \"c0687081-fb8b-4b87-945d-b107b2f86966\") " pod="hostpath-provisioner/csi-hostpathplugin-8rsqx" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.438165 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cdd63dfa-6787-4eca-937b-758358444ffc-config-volume\") pod \"dns-default-cxgxb\" (UID: \"cdd63dfa-6787-4eca-937b-758358444ffc\") " pod="openshift-dns/dns-default-cxgxb" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.438643 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08979e0d-51f2-42ed-acf4-afc014830489-config\") pod \"kube-apiserver-operator-766d6c64bb-5wlch\" (UID: \"08979e0d-51f2-42ed-acf4-afc014830489\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5wlch" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.438772 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c0687081-fb8b-4b87-945d-b107b2f86966-socket-dir\") pod \"csi-hostpathplugin-8rsqx\" (UID: \"c0687081-fb8b-4b87-945d-b107b2f86966\") " pod="hostpath-provisioner/csi-hostpathplugin-8rsqx" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.438887 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c0687081-fb8b-4b87-945d-b107b2f86966-registration-dir\") pod \"csi-hostpathplugin-8rsqx\" (UID: \"c0687081-fb8b-4b87-945d-b107b2f86966\") " pod="hostpath-provisioner/csi-hostpathplugin-8rsqx" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.439152 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/c0687081-fb8b-4b87-945d-b107b2f86966-mountpoint-dir\") 
pod \"csi-hostpathplugin-8rsqx\" (UID: \"c0687081-fb8b-4b87-945d-b107b2f86966\") " pod="hostpath-provisioner/csi-hostpathplugin-8rsqx" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.440664 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/eccdf140-9410-4209-be5a-eb865704291a-cert\") pod \"ingress-canary-2zpp2\" (UID: \"eccdf140-9410-4209-be5a-eb865704291a\") " pod="openshift-ingress-canary/ingress-canary-2zpp2" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.440708 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c9dcb2e3-326d-461e-9b69-4afb097f812d-webhook-cert\") pod \"packageserver-d55dfcdfc-pvx88\" (UID: \"c9dcb2e3-326d-461e-9b69-4afb097f812d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pvx88" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.440715 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/26477d16-ea37-4c74-b5d0-b537147ab754-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-79btk\" (UID: \"26477d16-ea37-4c74-b5d0-b537147ab754\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-79btk" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.440944 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/08979e0d-51f2-42ed-acf4-afc014830489-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-5wlch\" (UID: \"08979e0d-51f2-42ed-acf4-afc014830489\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5wlch" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.441236 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/5e7e387f-48ae-4f10-8fb5-d163b751b3fa-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-98fj7\" (UID: \"5e7e387f-48ae-4f10-8fb5-d163b751b3fa\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-98fj7" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.441241 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a16bb49a-f4f4-43df-9775-8cea5ed8e4e5-srv-cert\") pod \"catalog-operator-68c6474976-6b2bw\" (UID: \"a16bb49a-f4f4-43df-9775-8cea5ed8e4e5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6b2bw" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.441333 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5e7e387f-48ae-4f10-8fb5-d163b751b3fa-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-98fj7\" (UID: \"5e7e387f-48ae-4f10-8fb5-d163b751b3fa\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-98fj7" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.442255 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/2f093229-f88a-4571-a575-8efa44a1b8dd-signing-key\") pod \"service-ca-9c57cc56f-k9hqk\" (UID: \"2f093229-f88a-4571-a575-8efa44a1b8dd\") " pod="openshift-service-ca/service-ca-9c57cc56f-k9hqk" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.442319 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c9dcb2e3-326d-461e-9b69-4afb097f812d-apiservice-cert\") pod \"packageserver-d55dfcdfc-pvx88\" (UID: \"c9dcb2e3-326d-461e-9b69-4afb097f812d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pvx88" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 
08:59:44.442705 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d33549b3-2af3-444d-9988-1fce8d590d8a-serving-cert\") pod \"service-ca-operator-777779d784-8ql29\" (UID: \"d33549b3-2af3-444d-9988-1fce8d590d8a\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-8ql29" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.443069 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a16bb49a-f4f4-43df-9775-8cea5ed8e4e5-profile-collector-cert\") pod \"catalog-operator-68c6474976-6b2bw\" (UID: \"a16bb49a-f4f4-43df-9775-8cea5ed8e4e5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6b2bw" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.443408 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cdd63dfa-6787-4eca-937b-758358444ffc-metrics-tls\") pod \"dns-default-cxgxb\" (UID: \"cdd63dfa-6787-4eca-937b-758358444ffc\") " pod="openshift-dns/dns-default-cxgxb" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.446036 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.457866 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bzfks\" (UniqueName: \"kubernetes.io/projected/b94670ce-123d-4562-b9ae-7a7fe898bff7-kube-api-access-bzfks\") pod \"machine-api-operator-5694c8668f-hgxgb\" (UID: \"b94670ce-123d-4562-b9ae-7a7fe898bff7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hgxgb" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.468960 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 
08:59:44.479269 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9cjl\" (UniqueName: \"kubernetes.io/projected/db966398-cd84-4fb6-bedf-f1f13c670ce8-kube-api-access-k9cjl\") pod \"controller-manager-879f6c89f-6mpbx\" (UID: \"db966398-cd84-4fb6-bedf-f1f13c670ce8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6mpbx" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.486293 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.505908 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5475l\" (UniqueName: \"kubernetes.io/projected/1993577b-4a86-4663-97e5-902753a07816-kube-api-access-5475l\") pod \"authentication-operator-69f744f599-bhjvl\" (UID: \"1993577b-4a86-4663-97e5-902753a07816\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bhjvl" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.507194 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.520273 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmr8k\" (UniqueName: \"kubernetes.io/projected/a5e0e773-1dac-4bc9-ac4c-5d6db746fee9-kube-api-access-cmr8k\") pod \"openshift-apiserver-operator-796bbdcf4f-nwt6s\" (UID: \"a5e0e773-1dac-4bc9-ac4c-5d6db746fee9\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nwt6s" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.527045 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.537526 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-4hrnh\" (UniqueName: \"kubernetes.io/projected/e7e0d2ae-153e-43e7-b23d-e60aaeabb85c-kube-api-access-4hrnh\") pod \"apiserver-76f77b778f-j4zs8\" (UID: \"e7e0d2ae-153e-43e7-b23d-e60aaeabb85c\") " pod="openshift-apiserver/apiserver-76f77b778f-j4zs8" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.538165 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 11 08:59:44 crc kubenswrapper[4840]: E0311 08:59:44.538656 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-11 08:59:45.038635446 +0000 UTC m=+183.704305271 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.547006 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.552372 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5m6m\" (UniqueName: \"kubernetes.io/projected/f2878f5c-d2e1-4561-acae-3ea4ed26a5c0-kube-api-access-s5m6m\") pod \"apiserver-7bbb656c7d-d5xpl\" (UID: \"f2878f5c-d2e1-4561-acae-3ea4ed26a5c0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-d5xpl" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.566683 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.578435 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvklz\" (UniqueName: \"kubernetes.io/projected/7b145694-7add-48aa-9321-56703922b613-kube-api-access-gvklz\") pod \"route-controller-manager-6576b87f9c-lzd44\" (UID: \"7b145694-7add-48aa-9321-56703922b613\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lzd44" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.586585 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.593257 4840 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-9gvmg\" (UniqueName: \"kubernetes.io/projected/642bcc9a-edf9-4845-b6ad-0936618ec9b0-kube-api-access-9gvmg\") pod \"console-operator-58897d9998-2p584\" (UID: \"642bcc9a-edf9-4845-b6ad-0936618ec9b0\") " pod="openshift-console-operator/console-operator-58897d9998-2p584" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.606657 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-72h56"] Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.606852 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.616774 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpnnj\" (UniqueName: \"kubernetes.io/projected/d62191a1-c0bf-4b73-8a5b-9a084143772c-kube-api-access-qpnnj\") pod \"dns-operator-744455d44c-2gglt\" (UID: \"d62191a1-c0bf-4b73-8a5b-9a084143772c\") " pod="openshift-dns-operator/dns-operator-744455d44c-2gglt" Mar 11 08:59:44 crc kubenswrapper[4840]: W0311 08:59:44.619071 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod84bf2582_c99c_4ad3_b53c_37f357ba2cc5.slice/crio-ad6af6121ca497616995b8303e5c98371e6c68284be8703307b9407ab5045636 WatchSource:0}: Error finding container ad6af6121ca497616995b8303e5c98371e6c68284be8703307b9407ab5045636: Status 404 returned error can't find the container with id ad6af6121ca497616995b8303e5c98371e6c68284be8703307b9407ab5045636 Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.626029 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.633861 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2745m\" 
(UniqueName: \"kubernetes.io/projected/b1d0c791-1d0c-4e11-91ce-bb352ce3fce1-kube-api-access-2745m\") pod \"openshift-config-operator-7777fb866f-nh5h5\" (UID: \"b1d0c791-1d0c-4e11-91ce-bb352ce3fce1\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-nh5h5" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.639993 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pdrj8\" (UID: \"40a6df27-50b3-452a-940a-aab6b087cdb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-pdrj8" Mar 11 08:59:44 crc kubenswrapper[4840]: E0311 08:59:44.640412 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-11 08:59:45.140353662 +0000 UTC m=+183.806023477 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pdrj8" (UID: "40a6df27-50b3-452a-940a-aab6b087cdb2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.645670 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.658251 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zcgh8\" (UniqueName: \"kubernetes.io/projected/01a98c93-30e9-4161-ae55-553a6107a67f-kube-api-access-zcgh8\") pod \"openshift-controller-manager-operator-756b6f6bc6-ktl5r\" (UID: \"01a98c93-30e9-4161-ae55-553a6107a67f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ktl5r" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.666693 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.673790 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8kxm\" (UniqueName: \"kubernetes.io/projected/2b03405c-0fe1-4c7f-ad2a-dcd0db280109-kube-api-access-b8kxm\") pod \"oauth-openshift-558db77b4-td29c\" (UID: \"2b03405c-0fe1-4c7f-ad2a-dcd0db280109\") " pod="openshift-authentication/oauth-openshift-558db77b4-td29c" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.700304 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/40a6df27-50b3-452a-940a-aab6b087cdb2-bound-sa-token\") pod 
\"image-registry-697d97f7c8-pdrj8\" (UID: \"40a6df27-50b3-452a-940a-aab6b087cdb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-pdrj8" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.721300 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/444a5f5c-fbf0-4c73-8623-081182f78861-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-k4c7d\" (UID: \"444a5f5c-fbf0-4c73-8623-081182f78861\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-k4c7d" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.740430 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7pwp\" (UniqueName: \"kubernetes.io/projected/e3d07b65-1690-44b1-a232-5a9d4187e89d-kube-api-access-g7pwp\") pod \"package-server-manager-789f6589d5-2kbzn\" (UID: \"e3d07b65-1690-44b1-a232-5a9d4187e89d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2kbzn" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.740739 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 11 08:59:44 crc kubenswrapper[4840]: E0311 08:59:44.740886 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-11 08:59:45.240850834 +0000 UTC m=+183.906520649 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.741081 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pdrj8\" (UID: \"40a6df27-50b3-452a-940a-aab6b087cdb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-pdrj8" Mar 11 08:59:44 crc kubenswrapper[4840]: E0311 08:59:44.741366 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-11 08:59:45.241359149 +0000 UTC m=+183.907028964 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pdrj8" (UID: "40a6df27-50b3-452a-940a-aab6b087cdb2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.759960 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lrwl\" (UniqueName: \"kubernetes.io/projected/40a6df27-50b3-452a-940a-aab6b087cdb2-kube-api-access-9lrwl\") pod \"image-registry-697d97f7c8-pdrj8\" (UID: \"40a6df27-50b3-452a-940a-aab6b087cdb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-pdrj8" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.781000 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2hvx\" (UniqueName: \"kubernetes.io/projected/a821fe36-3bdd-4b59-9dd4-004985404023-kube-api-access-f2hvx\") pod \"collect-profiles-29553645-gw77z\" (UID: \"a821fe36-3bdd-4b59-9dd4-004985404023\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553645-gw77z" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.801093 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qggmx\" (UniqueName: \"kubernetes.io/projected/33326f34-f442-42be-9bd2-39cf5627b953-kube-api-access-qggmx\") pod \"marketplace-operator-79b997595-d6cv4\" (UID: \"33326f34-f442-42be-9bd2-39cf5627b953\") " pod="openshift-marketplace/marketplace-operator-79b997595-d6cv4" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.821095 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qq6d2\" (UniqueName: \"kubernetes.io/projected/4c602adf-1ed4-4779-a4f5-5ff24d9ee648-kube-api-access-qq6d2\") pod 
\"router-default-5444994796-wwr6r\" (UID: \"4c602adf-1ed4-4779-a4f5-5ff24d9ee648\") " pod="openshift-ingress/router-default-5444994796-wwr6r" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.841942 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bpkzb\" (UniqueName: \"kubernetes.io/projected/6ca4d667-0d35-40e6-b681-ec69524cfc2f-kube-api-access-bpkzb\") pod \"ingress-operator-5b745b69d9-zqtkm\" (UID: \"6ca4d667-0d35-40e6-b681-ec69524cfc2f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zqtkm" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.842324 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 11 08:59:44 crc kubenswrapper[4840]: E0311 08:59:44.842732 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-11 08:59:45.342625583 +0000 UTC m=+184.008295398 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.843159 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pdrj8\" (UID: \"40a6df27-50b3-452a-940a-aab6b087cdb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-pdrj8" Mar 11 08:59:44 crc kubenswrapper[4840]: E0311 08:59:44.843661 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-11 08:59:45.343642182 +0000 UTC m=+184.009311987 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pdrj8" (UID: "40a6df27-50b3-452a-940a-aab6b087cdb2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.862429 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dcae32d8-1c29-45c8-aadc-332b3ad1dd5c-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rjm9q\" (UID: \"dcae32d8-1c29-45c8-aadc-332b3ad1dd5c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rjm9q" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.866921 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.880206 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6e87442b-4d54-472c-bad6-e2086c95df50-metrics-certs\") pod \"network-metrics-daemon-gjgkz\" (UID: \"6e87442b-4d54-472c-bad6-e2086c95df50\") " pod="openshift-multus/network-metrics-daemon-gjgkz" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.905817 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkh5m\" (UniqueName: \"kubernetes.io/projected/4b3c8f62-4252-41b4-8e96-8788caae161b-kube-api-access-zkh5m\") pod \"machine-config-operator-74547568cd-drmmh\" (UID: \"4b3c8f62-4252-41b4-8e96-8788caae161b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-drmmh" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.922191 4840 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r444p\" (UniqueName: \"kubernetes.io/projected/3be5fb83-d2b0-4173-8977-6681ddeab581-kube-api-access-r444p\") pod \"olm-operator-6b444d44fb-pcgsw\" (UID: \"3be5fb83-d2b0-4173-8977-6681ddeab581\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pcgsw" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.924502 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-k4c7d" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.941068 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5b9q\" (UniqueName: \"kubernetes.io/projected/511c45a2-2ea8-4e84-9f1d-2901e39e7e36-kube-api-access-l5b9q\") pod \"multus-admission-controller-857f4d67dd-mcz2j\" (UID: \"511c45a2-2ea8-4e84-9f1d-2901e39e7e36\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-mcz2j" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.944356 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 11 08:59:44 crc kubenswrapper[4840]: E0311 08:59:44.945650 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-11 08:59:45.444669519 +0000 UTC m=+184.110339404 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.945994 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pdrj8\" (UID: \"40a6df27-50b3-452a-940a-aab6b087cdb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-pdrj8" Mar 11 08:59:44 crc kubenswrapper[4840]: E0311 08:59:44.946448 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-11 08:59:45.446412929 +0000 UTC m=+184.112082744 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pdrj8" (UID: "40a6df27-50b3-452a-940a-aab6b087cdb2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.966281 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6ca4d667-0d35-40e6-b681-ec69524cfc2f-bound-sa-token\") pod \"ingress-operator-5b745b69d9-zqtkm\" (UID: \"6ca4d667-0d35-40e6-b681-ec69524cfc2f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zqtkm" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.976207 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-wwr6r" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.981959 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-8pjx9" event={"ID":"9f2c017f-9b00-48e8-b2bf-28eec249be0a","Type":"ContainerStarted","Data":"7786a4f04ca4d444dec1de27cc66cdedef884f25765f096adc5f55902ac28abc"} Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.982048 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-8pjx9" event={"ID":"9f2c017f-9b00-48e8-b2bf-28eec249be0a","Type":"ContainerStarted","Data":"af680b61b716837a50ef4e55a0a21c341334b8bedac8cb8276cfbc67e4f6a9b6"} Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.983860 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-72h56" 
event={"ID":"84bf2582-c99c-4ad3-b53c-37f357ba2cc5","Type":"ContainerStarted","Data":"e0b1bc3cccdc75250e7bc594a88774dba4e547edcccf9aa3c19c7d6656f3299b"} Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.983919 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-72h56" event={"ID":"84bf2582-c99c-4ad3-b53c-37f357ba2cc5","Type":"ContainerStarted","Data":"ad6af6121ca497616995b8303e5c98371e6c68284be8703307b9407ab5045636"} Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.984357 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lf8sf\" (UniqueName: \"kubernetes.io/projected/289c41a4-7b72-4bf9-9fc3-31f532ec6bf6-kube-api-access-lf8sf\") pod \"migrator-59844c95c7-nz4rs\" (UID: \"289c41a4-7b72-4bf9-9fc3-31f532ec6bf6\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-nz4rs" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.986365 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rjm9q" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.986497 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Mar 11 08:59:44 crc kubenswrapper[4840]: I0311 08:59:44.998671 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2kbzn" Mar 11 08:59:45 crc kubenswrapper[4840]: I0311 08:59:45.001344 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bqmv\" (UniqueName: \"kubernetes.io/projected/6138a548-439a-4af8-ad4d-a6ea89f686b7-kube-api-access-9bqmv\") pod \"etcd-operator-b45778765-zm7f9\" (UID: \"6138a548-439a-4af8-ad4d-a6ea89f686b7\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zm7f9" Mar 11 08:59:45 crc kubenswrapper[4840]: I0311 08:59:45.003638 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-d6cv4" Mar 11 08:59:45 crc kubenswrapper[4840]: I0311 08:59:45.012443 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-nz4rs" Mar 11 08:59:45 crc kubenswrapper[4840]: I0311 08:59:45.026849 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vws9f\" (UniqueName: \"kubernetes.io/projected/b9c9ed82-0f76-48b1-8cb7-af788bf40902-kube-api-access-vws9f\") pod \"machine-config-controller-84d6567774-xt7ft\" (UID: \"b9c9ed82-0f76-48b1-8cb7-af788bf40902\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xt7ft" Mar 11 08:59:45 crc kubenswrapper[4840]: I0311 08:59:45.028091 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Mar 11 08:59:45 crc kubenswrapper[4840]: I0311 08:59:45.038070 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wpdhh\" (UniqueName: \"kubernetes.io/projected/bdc11055-ee85-488f-9812-536b5cd31e50-kube-api-access-wpdhh\") pod \"machine-approver-56656f9798-mwnmw\" (UID: \"bdc11055-ee85-488f-9812-536b5cd31e50\") " 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mwnmw" Mar 11 08:59:45 crc kubenswrapper[4840]: I0311 08:59:45.040411 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pcgsw" Mar 11 08:59:45 crc kubenswrapper[4840]: I0311 08:59:45.047446 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 11 08:59:45 crc kubenswrapper[4840]: I0311 08:59:45.047544 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Mar 11 08:59:45 crc kubenswrapper[4840]: E0311 08:59:45.047948 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-11 08:59:45.547920439 +0000 UTC m=+184.213590434 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:59:45 crc kubenswrapper[4840]: I0311 08:59:45.048690 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pdrj8\" (UID: \"40a6df27-50b3-452a-940a-aab6b087cdb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-pdrj8" Mar 11 08:59:45 crc kubenswrapper[4840]: E0311 08:59:45.049149 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-11 08:59:45.549134994 +0000 UTC m=+184.214804809 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pdrj8" (UID: "40a6df27-50b3-452a-940a-aab6b087cdb2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:59:45 crc kubenswrapper[4840]: I0311 08:59:45.056535 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xt7ft" Mar 11 08:59:45 crc kubenswrapper[4840]: I0311 08:59:45.062873 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-drmmh" Mar 11 08:59:45 crc kubenswrapper[4840]: I0311 08:59:45.063858 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b97mp\" (UniqueName: \"kubernetes.io/projected/ed60e671-1c71-42fc-828d-51a4e85e3153-kube-api-access-b97mp\") pod \"cluster-samples-operator-665b6dd947-dn5hs\" (UID: \"ed60e671-1c71-42fc-828d-51a4e85e3153\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-dn5hs" Mar 11 08:59:45 crc kubenswrapper[4840]: I0311 08:59:45.066163 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Mar 11 08:59:45 crc kubenswrapper[4840]: I0311 08:59:45.072920 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-mcz2j" Mar 11 08:59:45 crc kubenswrapper[4840]: I0311 08:59:45.077658 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zcbfq\" (UniqueName: \"kubernetes.io/projected/5dc5ef77-d18a-4474-a523-473f27166095-kube-api-access-zcbfq\") pod \"console-f9d7485db-xkq7s\" (UID: \"5dc5ef77-d18a-4474-a523-473f27166095\") " pod="openshift-console/console-f9d7485db-xkq7s" Mar 11 08:59:45 crc kubenswrapper[4840]: I0311 08:59:45.078759 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29553645-gw77z" Mar 11 08:59:45 crc kubenswrapper[4840]: I0311 08:59:45.080851 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ftrs\" (UniqueName: \"kubernetes.io/projected/78462b75-44da-4862-88c5-5cf892a91058-kube-api-access-8ftrs\") pod \"downloads-7954f5f757-9k5xp\" (UID: \"78462b75-44da-4862-88c5-5cf892a91058\") " pod="openshift-console/downloads-7954f5f757-9k5xp" Mar 11 08:59:45 crc kubenswrapper[4840]: I0311 08:59:45.123539 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qlpbk\" (UniqueName: \"kubernetes.io/projected/5e7e387f-48ae-4f10-8fb5-d163b751b3fa-kube-api-access-qlpbk\") pod \"kube-storage-version-migrator-operator-b67b599dd-98fj7\" (UID: \"5e7e387f-48ae-4f10-8fb5-d163b751b3fa\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-98fj7" Mar 11 08:59:45 crc kubenswrapper[4840]: I0311 08:59:45.131763 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-k4c7d"] Mar 11 08:59:45 crc kubenswrapper[4840]: I0311 08:59:45.142797 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9w8q5\" (UniqueName: \"kubernetes.io/projected/c0687081-fb8b-4b87-945d-b107b2f86966-kube-api-access-9w8q5\") pod \"csi-hostpathplugin-8rsqx\" (UID: \"c0687081-fb8b-4b87-945d-b107b2f86966\") " pod="hostpath-provisioner/csi-hostpathplugin-8rsqx" Mar 11 08:59:45 crc kubenswrapper[4840]: I0311 08:59:45.153250 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 
11 08:59:45 crc kubenswrapper[4840]: E0311 08:59:45.153432 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-11 08:59:45.653402884 +0000 UTC m=+184.319072709 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:59:45 crc kubenswrapper[4840]: I0311 08:59:45.153573 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pdrj8\" (UID: \"40a6df27-50b3-452a-940a-aab6b087cdb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-pdrj8" Mar 11 08:59:45 crc kubenswrapper[4840]: E0311 08:59:45.153927 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-11 08:59:45.653912738 +0000 UTC m=+184.319582553 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pdrj8" (UID: "40a6df27-50b3-452a-940a-aab6b087cdb2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:59:45 crc kubenswrapper[4840]: I0311 08:59:45.172928 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/08979e0d-51f2-42ed-acf4-afc014830489-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-5wlch\" (UID: \"08979e0d-51f2-42ed-acf4-afc014830489\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5wlch" Mar 11 08:59:45 crc kubenswrapper[4840]: I0311 08:59:45.183162 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9b7b\" (UniqueName: \"kubernetes.io/projected/c9dcb2e3-326d-461e-9b69-4afb097f812d-kube-api-access-r9b7b\") pod \"packageserver-d55dfcdfc-pvx88\" (UID: \"c9dcb2e3-326d-461e-9b69-4afb097f812d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pvx88" Mar 11 08:59:45 crc kubenswrapper[4840]: I0311 08:59:45.209078 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nf6jb\" (UniqueName: \"kubernetes.io/projected/d33549b3-2af3-444d-9988-1fce8d590d8a-kube-api-access-nf6jb\") pod \"service-ca-operator-777779d784-8ql29\" (UID: \"d33549b3-2af3-444d-9988-1fce8d590d8a\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-8ql29" Mar 11 08:59:45 crc kubenswrapper[4840]: I0311 08:59:45.243040 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zqtkm" Mar 11 08:59:45 crc kubenswrapper[4840]: I0311 08:59:45.246288 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pzgc\" (UniqueName: \"kubernetes.io/projected/a16bb49a-f4f4-43df-9775-8cea5ed8e4e5-kube-api-access-4pzgc\") pod \"catalog-operator-68c6474976-6b2bw\" (UID: \"a16bb49a-f4f4-43df-9775-8cea5ed8e4e5\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6b2bw" Mar 11 08:59:45 crc kubenswrapper[4840]: I0311 08:59:45.246889 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-psmfr\" (UniqueName: \"kubernetes.io/projected/2f093229-f88a-4571-a575-8efa44a1b8dd-kube-api-access-psmfr\") pod \"service-ca-9c57cc56f-k9hqk\" (UID: \"2f093229-f88a-4571-a575-8efa44a1b8dd\") " pod="openshift-service-ca/service-ca-9c57cc56f-k9hqk" Mar 11 08:59:45 crc kubenswrapper[4840]: I0311 08:59:45.255119 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 11 08:59:45 crc kubenswrapper[4840]: I0311 08:59:45.256002 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5wlch" Mar 11 08:59:45 crc kubenswrapper[4840]: E0311 08:59:45.256181 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-11 08:59:45.75615744 +0000 UTC m=+184.421827255 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:59:45 crc kubenswrapper[4840]: I0311 08:59:45.264888 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dl84\" (UniqueName: \"kubernetes.io/projected/cdd63dfa-6787-4eca-937b-758358444ffc-kube-api-access-7dl84\") pod \"dns-default-cxgxb\" (UID: \"cdd63dfa-6787-4eca-937b-758358444ffc\") " pod="openshift-dns/dns-default-cxgxb" Mar 11 08:59:45 crc kubenswrapper[4840]: I0311 08:59:45.276298 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-98fj7" Mar 11 08:59:45 crc kubenswrapper[4840]: I0311 08:59:45.298692 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9kft\" (UniqueName: \"kubernetes.io/projected/26477d16-ea37-4c74-b5d0-b537147ab754-kube-api-access-q9kft\") pod \"control-plane-machine-set-operator-78cbb6b69f-79btk\" (UID: \"26477d16-ea37-4c74-b5d0-b537147ab754\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-79btk" Mar 11 08:59:45 crc kubenswrapper[4840]: I0311 08:59:45.308354 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Mar 11 08:59:45 crc kubenswrapper[4840]: I0311 08:59:45.310228 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-6mpbx" Mar 11 08:59:45 crc kubenswrapper[4840]: I0311 08:59:45.310350 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kv89\" (UniqueName: \"kubernetes.io/projected/eccdf140-9410-4209-be5a-eb865704291a-kube-api-access-5kv89\") pod \"ingress-canary-2zpp2\" (UID: \"eccdf140-9410-4209-be5a-eb865704291a\") " pod="openshift-ingress-canary/ingress-canary-2zpp2" Mar 11 08:59:45 crc kubenswrapper[4840]: I0311 08:59:45.320640 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-79btk" Mar 11 08:59:45 crc kubenswrapper[4840]: I0311 08:59:45.328399 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Mar 11 08:59:45 crc kubenswrapper[4840]: I0311 08:59:45.330061 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-8ql29" Mar 11 08:59:45 crc kubenswrapper[4840]: I0311 08:59:45.331723 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-j4zs8" Mar 11 08:59:45 crc kubenswrapper[4840]: I0311 08:59:45.347122 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Mar 11 08:59:45 crc kubenswrapper[4840]: I0311 08:59:45.349605 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pvx88" Mar 11 08:59:45 crc kubenswrapper[4840]: I0311 08:59:45.354203 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-hgxgb" Mar 11 08:59:45 crc kubenswrapper[4840]: I0311 08:59:45.356884 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pdrj8\" (UID: \"40a6df27-50b3-452a-940a-aab6b087cdb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-pdrj8" Mar 11 08:59:45 crc kubenswrapper[4840]: E0311 08:59:45.357807 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-11 08:59:45.857208578 +0000 UTC m=+184.522878383 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pdrj8" (UID: "40a6df27-50b3-452a-940a-aab6b087cdb2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:59:45 crc kubenswrapper[4840]: I0311 08:59:45.372824 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Mar 11 08:59:45 crc kubenswrapper[4840]: I0311 08:59:45.387591 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-bhjvl" Mar 11 08:59:45 crc kubenswrapper[4840]: I0311 08:59:45.391523 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 11 08:59:45 crc kubenswrapper[4840]: I0311 08:59:45.393629 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lzd44" Mar 11 08:59:45 crc kubenswrapper[4840]: I0311 08:59:45.395359 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-k9hqk" Mar 11 08:59:45 crc kubenswrapper[4840]: I0311 08:59:45.402500 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6b2bw" Mar 11 08:59:45 crc kubenswrapper[4840]: I0311 08:59:45.408641 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Mar 11 08:59:45 crc kubenswrapper[4840]: I0311 08:59:45.408922 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-d5xpl" Mar 11 08:59:45 crc kubenswrapper[4840]: I0311 08:59:45.421295 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-8rsqx" Mar 11 08:59:45 crc kubenswrapper[4840]: I0311 08:59:45.424958 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2zpp2" Mar 11 08:59:45 crc kubenswrapper[4840]: I0311 08:59:45.428838 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Mar 11 08:59:45 crc kubenswrapper[4840]: I0311 08:59:45.430754 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nwt6s" Mar 11 08:59:45 crc kubenswrapper[4840]: I0311 08:59:45.431109 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-cxgxb" Mar 11 08:59:45 crc kubenswrapper[4840]: I0311 08:59:45.450856 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Mar 11 08:59:45 crc kubenswrapper[4840]: I0311 08:59:45.456229 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-nh5h5" Mar 11 08:59:45 crc kubenswrapper[4840]: I0311 08:59:45.462520 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 11 08:59:45 crc kubenswrapper[4840]: E0311 08:59:45.462660 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-11 08:59:45.96262701 +0000 UTC m=+184.628296825 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:59:45 crc kubenswrapper[4840]: I0311 08:59:45.462933 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pdrj8\" (UID: \"40a6df27-50b3-452a-940a-aab6b087cdb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-pdrj8" Mar 11 08:59:45 crc kubenswrapper[4840]: E0311 08:59:45.463735 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-11 08:59:45.963714241 +0000 UTC m=+184.629384066 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pdrj8" (UID: "40a6df27-50b3-452a-940a-aab6b087cdb2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:59:45 crc kubenswrapper[4840]: I0311 08:59:45.473248 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Mar 11 08:59:45 crc kubenswrapper[4840]: I0311 08:59:45.476687 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-2p584" Mar 11 08:59:45 crc kubenswrapper[4840]: I0311 08:59:45.487956 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Mar 11 08:59:45 crc kubenswrapper[4840]: I0311 08:59:45.500839 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-td29c" Mar 11 08:59:45 crc kubenswrapper[4840]: I0311 08:59:45.507927 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Mar 11 08:59:45 crc kubenswrapper[4840]: I0311 08:59:45.510707 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-2gglt" Mar 11 08:59:45 crc kubenswrapper[4840]: I0311 08:59:45.516256 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29553645-gw77z"] Mar 11 08:59:45 crc kubenswrapper[4840]: I0311 08:59:45.516310 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rjm9q"] Mar 11 08:59:45 crc kubenswrapper[4840]: I0311 08:59:45.525266 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-d6cv4"] Mar 11 08:59:45 crc kubenswrapper[4840]: I0311 08:59:45.530740 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Mar 11 08:59:45 crc kubenswrapper[4840]: I0311 08:59:45.530966 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ktl5r" Mar 11 08:59:45 crc kubenswrapper[4840]: I0311 08:59:45.540593 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2kbzn"] Mar 11 08:59:45 crc kubenswrapper[4840]: I0311 08:59:45.553912 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Mar 11 08:59:45 crc kubenswrapper[4840]: I0311 08:59:45.565113 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mwnmw" Mar 11 08:59:45 crc kubenswrapper[4840]: I0311 08:59:45.568453 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 11 08:59:45 crc kubenswrapper[4840]: E0311 08:59:45.569264 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-11 08:59:46.069240687 +0000 UTC m=+184.734910502 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:59:45 crc kubenswrapper[4840]: I0311 08:59:45.587016 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Mar 11 08:59:45 crc kubenswrapper[4840]: I0311 08:59:45.594669 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-gjgkz" Mar 11 08:59:45 crc kubenswrapper[4840]: I0311 08:59:45.606273 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Mar 11 08:59:45 crc kubenswrapper[4840]: I0311 08:59:45.606993 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-dn5hs" Mar 11 08:59:45 crc kubenswrapper[4840]: I0311 08:59:45.626377 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Mar 11 08:59:45 crc kubenswrapper[4840]: I0311 08:59:45.628199 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pcgsw"] Mar 11 08:59:45 crc kubenswrapper[4840]: I0311 08:59:45.633364 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-9k5xp" Mar 11 08:59:45 crc kubenswrapper[4840]: I0311 08:59:45.633591 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-nz4rs"] Mar 11 08:59:45 crc kubenswrapper[4840]: I0311 08:59:45.647539 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Mar 11 08:59:45 crc kubenswrapper[4840]: I0311 08:59:45.653102 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-zm7f9" Mar 11 08:59:45 crc kubenswrapper[4840]: I0311 08:59:45.670353 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pdrj8\" (UID: \"40a6df27-50b3-452a-940a-aab6b087cdb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-pdrj8" Mar 11 08:59:45 crc kubenswrapper[4840]: I0311 08:59:45.670792 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Mar 11 08:59:45 crc kubenswrapper[4840]: E0311 08:59:45.671208 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-11 08:59:46.17119103 +0000 UTC m=+184.836860845 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pdrj8" (UID: "40a6df27-50b3-452a-940a-aab6b087cdb2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:59:45 crc kubenswrapper[4840]: I0311 08:59:45.678668 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-xkq7s" Mar 11 08:59:45 crc kubenswrapper[4840]: I0311 08:59:45.734083 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-drmmh"] Mar 11 08:59:45 crc kubenswrapper[4840]: W0311 08:59:45.755545 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod289c41a4_7b72_4bf9_9fc3_31f532ec6bf6.slice/crio-dac466502cc473662809040e703a4d5cafb0ea246317be942b3873ceec9851a6 WatchSource:0}: Error finding container dac466502cc473662809040e703a4d5cafb0ea246317be942b3873ceec9851a6: Status 404 returned error can't find the container with id dac466502cc473662809040e703a4d5cafb0ea246317be942b3873ceec9851a6 Mar 11 08:59:45 crc kubenswrapper[4840]: I0311 08:59:45.773152 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 11 08:59:45 crc kubenswrapper[4840]: E0311 08:59:45.773836 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-11 08:59:46.273811773 +0000 UTC m=+184.939481588 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:59:45 crc kubenswrapper[4840]: I0311 08:59:45.792338 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-mcz2j"] Mar 11 08:59:45 crc kubenswrapper[4840]: I0311 08:59:45.801104 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-xt7ft"] Mar 11 08:59:45 crc kubenswrapper[4840]: I0311 08:59:45.832378 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5wlch"] Mar 11 08:59:45 crc kubenswrapper[4840]: I0311 08:59:45.874675 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pdrj8\" (UID: \"40a6df27-50b3-452a-940a-aab6b087cdb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-pdrj8" Mar 11 08:59:45 crc kubenswrapper[4840]: E0311 08:59:45.875052 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-11 08:59:46.375040616 +0000 UTC m=+185.040710421 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pdrj8" (UID: "40a6df27-50b3-452a-940a-aab6b087cdb2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:59:45 crc kubenswrapper[4840]: W0311 08:59:45.894339 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4b3c8f62_4252_41b4_8e96_8788caae161b.slice/crio-8fa70ce7007468a940c6dfe6556f0722d477f1aee43731870756a5cbd07e6909 WatchSource:0}: Error finding container 8fa70ce7007468a940c6dfe6556f0722d477f1aee43731870756a5cbd07e6909: Status 404 returned error can't find the container with id 8fa70ce7007468a940c6dfe6556f0722d477f1aee43731870756a5cbd07e6909 Mar 11 08:59:45 crc kubenswrapper[4840]: I0311 08:59:45.975956 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 11 08:59:45 crc kubenswrapper[4840]: E0311 08:59:45.976613 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-11 08:59:46.476594828 +0000 UTC m=+185.142264643 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:59:46 crc kubenswrapper[4840]: I0311 08:59:46.052229 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-79btk"] Mar 11 08:59:46 crc kubenswrapper[4840]: I0311 08:59:46.057605 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-j4zs8"] Mar 11 08:59:46 crc kubenswrapper[4840]: I0311 08:59:46.078737 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pdrj8\" (UID: \"40a6df27-50b3-452a-940a-aab6b087cdb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-pdrj8" Mar 11 08:59:46 crc kubenswrapper[4840]: E0311 08:59:46.079281 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-11 08:59:46.579260091 +0000 UTC m=+185.244929906 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pdrj8" (UID: "40a6df27-50b3-452a-940a-aab6b087cdb2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:59:46 crc kubenswrapper[4840]: I0311 08:59:46.181190 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 11 08:59:46 crc kubenswrapper[4840]: E0311 08:59:46.183067 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-11 08:59:46.683042387 +0000 UTC m=+185.348712202 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:59:46 crc kubenswrapper[4840]: I0311 08:59:46.240362 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-zqtkm"] Mar 11 08:59:46 crc kubenswrapper[4840]: I0311 08:59:46.240413 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2kbzn" event={"ID":"e3d07b65-1690-44b1-a232-5a9d4187e89d","Type":"ContainerStarted","Data":"b6dca5efcd900b4a21ec4b5d2ac61fb9ce8aba123c32d5d891b20cedec94a8e3"} Mar 11 08:59:46 crc kubenswrapper[4840]: I0311 08:59:46.240440 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-6mpbx"] Mar 11 08:59:46 crc kubenswrapper[4840]: I0311 08:59:46.240454 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-98fj7"] Mar 11 08:59:46 crc kubenswrapper[4840]: I0311 08:59:46.240496 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-drmmh" event={"ID":"4b3c8f62-4252-41b4-8e96-8788caae161b","Type":"ContainerStarted","Data":"8fa70ce7007468a940c6dfe6556f0722d477f1aee43731870756a5cbd07e6909"} Mar 11 08:59:46 crc kubenswrapper[4840]: I0311 08:59:46.240510 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-k4c7d" 
event={"ID":"444a5f5c-fbf0-4c73-8623-081182f78861","Type":"ContainerStarted","Data":"e8379fc7d705b6ef571c1929d5384b4787a5976fa44ee9085a044dd740c1d8c2"} Mar 11 08:59:46 crc kubenswrapper[4840]: I0311 08:59:46.240521 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-k4c7d" event={"ID":"444a5f5c-fbf0-4c73-8623-081182f78861","Type":"ContainerStarted","Data":"04cb226034111361d39b2d7cf8f55e27de8526d860e6476e1adea5610df110f1"} Mar 11 08:59:46 crc kubenswrapper[4840]: I0311 08:59:46.240531 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mwnmw" event={"ID":"bdc11055-ee85-488f-9812-536b5cd31e50","Type":"ContainerStarted","Data":"53db93570e69116a24fef4ae10ee1d99a56035eadc2e09d5fe040c373401e34c"} Mar 11 08:59:46 crc kubenswrapper[4840]: I0311 08:59:46.240541 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-nz4rs" event={"ID":"289c41a4-7b72-4bf9-9fc3-31f532ec6bf6","Type":"ContainerStarted","Data":"dac466502cc473662809040e703a4d5cafb0ea246317be942b3873ceec9851a6"} Mar 11 08:59:46 crc kubenswrapper[4840]: I0311 08:59:46.240557 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-mcz2j" event={"ID":"511c45a2-2ea8-4e84-9f1d-2901e39e7e36","Type":"ContainerStarted","Data":"9f997deeb01fe49a75651818d0e58cf44050d7d2b098f2c0caf299bef34c9889"} Mar 11 08:59:46 crc kubenswrapper[4840]: I0311 08:59:46.240577 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-d6cv4" event={"ID":"33326f34-f442-42be-9bd2-39cf5627b953","Type":"ContainerStarted","Data":"e11a1cbeed6b55553941d85a844deb666a33ccf8b1b1770c2252af8ed4a25ae1"} Mar 11 08:59:46 crc kubenswrapper[4840]: I0311 08:59:46.240590 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ingress/router-default-5444994796-wwr6r" event={"ID":"4c602adf-1ed4-4779-a4f5-5ff24d9ee648","Type":"ContainerStarted","Data":"20b27a2c1becfb4361eb31a455e15514a8b308689d71a3730dc2e229c9e66997"} Mar 11 08:59:46 crc kubenswrapper[4840]: I0311 08:59:46.240604 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-wwr6r" event={"ID":"4c602adf-1ed4-4779-a4f5-5ff24d9ee648","Type":"ContainerStarted","Data":"50ab6dc593d61c0e3310749e1d40ce7cc9eccac5c7bf7867ce6f8a838b0dcdfe"} Mar 11 08:59:46 crc kubenswrapper[4840]: I0311 08:59:46.240615 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rjm9q" event={"ID":"dcae32d8-1c29-45c8-aadc-332b3ad1dd5c","Type":"ContainerStarted","Data":"e8f97b1c150b27908eb629a14d416240707a0887006a2210b7a963484f063242"} Mar 11 08:59:46 crc kubenswrapper[4840]: I0311 08:59:46.240627 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29553645-gw77z" event={"ID":"a821fe36-3bdd-4b59-9dd4-004985404023","Type":"ContainerStarted","Data":"f160e0f23d0641448e40fc76aba0686e03a04ac4c3c34959a82863737355a95b"} Mar 11 08:59:46 crc kubenswrapper[4840]: I0311 08:59:46.240638 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pcgsw" event={"ID":"3be5fb83-d2b0-4173-8977-6681ddeab581","Type":"ContainerStarted","Data":"ced206e042a9dc611c92c958e328686991bbf64d9e771250deeaf39f8d8659e4"} Mar 11 08:59:46 crc kubenswrapper[4840]: I0311 08:59:46.282796 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pdrj8\" (UID: \"40a6df27-50b3-452a-940a-aab6b087cdb2\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-pdrj8" Mar 11 08:59:46 crc kubenswrapper[4840]: E0311 08:59:46.284520 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-11 08:59:46.784503987 +0000 UTC m=+185.450173802 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pdrj8" (UID: "40a6df27-50b3-452a-940a-aab6b087cdb2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:59:46 crc kubenswrapper[4840]: W0311 08:59:46.331217 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod26477d16_ea37_4c74_b5d0_b537147ab754.slice/crio-35017a518e9776d93c401fc9f80d44bf8df51282d4931d3c17a95e28da7d8eb6 WatchSource:0}: Error finding container 35017a518e9776d93c401fc9f80d44bf8df51282d4931d3c17a95e28da7d8eb6: Status 404 returned error can't find the container with id 35017a518e9776d93c401fc9f80d44bf8df51282d4931d3c17a95e28da7d8eb6 Mar 11 08:59:46 crc kubenswrapper[4840]: W0311 08:59:46.337148 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6ca4d667_0d35_40e6_b681_ec69524cfc2f.slice/crio-f98fd36f4042cd28132fc28204a3494e1a78101851e2b549b1c762e129f07e64 WatchSource:0}: Error finding container f98fd36f4042cd28132fc28204a3494e1a78101851e2b549b1c762e129f07e64: Status 404 returned error can't find the container with id f98fd36f4042cd28132fc28204a3494e1a78101851e2b549b1c762e129f07e64 Mar 11 08:59:46 crc kubenswrapper[4840]: 
W0311 08:59:46.343386 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode7e0d2ae_153e_43e7_b23d_e60aaeabb85c.slice/crio-913e105dec4389388847c0b46a45e9aaa371312c93a6b5574d095fb4a8c625b2 WatchSource:0}: Error finding container 913e105dec4389388847c0b46a45e9aaa371312c93a6b5574d095fb4a8c625b2: Status 404 returned error can't find the container with id 913e105dec4389388847c0b46a45e9aaa371312c93a6b5574d095fb4a8c625b2 Mar 11 08:59:46 crc kubenswrapper[4840]: W0311 08:59:46.344252 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5e7e387f_48ae_4f10_8fb5_d163b751b3fa.slice/crio-96b4c52ab2ef885d234fbb4262beae1c313a82b39d5800df94a44ed96197031a WatchSource:0}: Error finding container 96b4c52ab2ef885d234fbb4262beae1c313a82b39d5800df94a44ed96197031a: Status 404 returned error can't find the container with id 96b4c52ab2ef885d234fbb4262beae1c313a82b39d5800df94a44ed96197031a Mar 11 08:59:46 crc kubenswrapper[4840]: I0311 08:59:46.385504 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 11 08:59:46 crc kubenswrapper[4840]: E0311 08:59:46.386196 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-11 08:59:46.886177612 +0000 UTC m=+185.551847427 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:59:46 crc kubenswrapper[4840]: I0311 08:59:46.485742 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pvx88"] Mar 11 08:59:46 crc kubenswrapper[4840]: I0311 08:59:46.487216 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pdrj8\" (UID: \"40a6df27-50b3-452a-940a-aab6b087cdb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-pdrj8" Mar 11 08:59:46 crc kubenswrapper[4840]: E0311 08:59:46.487655 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-11 08:59:46.987642302 +0000 UTC m=+185.653312117 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pdrj8" (UID: "40a6df27-50b3-452a-940a-aab6b087cdb2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:59:46 crc kubenswrapper[4840]: I0311 08:59:46.544605 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-bhjvl"] Mar 11 08:59:46 crc kubenswrapper[4840]: I0311 08:59:46.589766 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 11 08:59:46 crc kubenswrapper[4840]: E0311 08:59:46.590377 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-11 08:59:47.090358027 +0000 UTC m=+185.756027842 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:59:46 crc kubenswrapper[4840]: W0311 08:59:46.604201 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc9dcb2e3_326d_461e_9b69_4afb097f812d.slice/crio-eff4b08deb2c8dc31f9d0a989350f4d82eeb908e40e280cf50e4d36604bacd26 WatchSource:0}: Error finding container eff4b08deb2c8dc31f9d0a989350f4d82eeb908e40e280cf50e4d36604bacd26: Status 404 returned error can't find the container with id eff4b08deb2c8dc31f9d0a989350f4d82eeb908e40e280cf50e4d36604bacd26 Mar 11 08:59:46 crc kubenswrapper[4840]: W0311 08:59:46.609179 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1993577b_4a86_4663_97e5_902753a07816.slice/crio-3dd2b77afe2880c6ab0ab42a008aaa3a14f7387999b0000ba5c03b0178f300ea WatchSource:0}: Error finding container 3dd2b77afe2880c6ab0ab42a008aaa3a14f7387999b0000ba5c03b0178f300ea: Status 404 returned error can't find the container with id 3dd2b77afe2880c6ab0ab42a008aaa3a14f7387999b0000ba5c03b0178f300ea Mar 11 08:59:46 crc kubenswrapper[4840]: I0311 08:59:46.647140 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-8ql29"] Mar 11 08:59:46 crc kubenswrapper[4840]: I0311 08:59:46.656889 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-hgxgb"] Mar 11 08:59:46 crc kubenswrapper[4840]: I0311 08:59:46.691968 4840 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pdrj8\" (UID: \"40a6df27-50b3-452a-940a-aab6b087cdb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-pdrj8" Mar 11 08:59:46 crc kubenswrapper[4840]: E0311 08:59:46.693888 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-11 08:59:47.193871515 +0000 UTC m=+185.859541330 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pdrj8" (UID: "40a6df27-50b3-452a-940a-aab6b087cdb2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:59:46 crc kubenswrapper[4840]: W0311 08:59:46.746619 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb94670ce_123d_4562_b9ae_7a7fe898bff7.slice/crio-e9e57843b9ccce2e1e746d818ca2814079fbed9d6540341d347d91ce3869cff3 WatchSource:0}: Error finding container e9e57843b9ccce2e1e746d818ca2814079fbed9d6540341d347d91ce3869cff3: Status 404 returned error can't find the container with id e9e57843b9ccce2e1e746d818ca2814079fbed9d6540341d347d91ce3869cff3 Mar 11 08:59:46 crc kubenswrapper[4840]: W0311 08:59:46.791884 4840 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd33549b3_2af3_444d_9988_1fce8d590d8a.slice/crio-e0eaaa3fdfa3258d7bb1e8e91bc2ddc064dc40fa2fe83d6b0ad76951862e67c6 WatchSource:0}: Error finding container e0eaaa3fdfa3258d7bb1e8e91bc2ddc064dc40fa2fe83d6b0ad76951862e67c6: Status 404 returned error can't find the container with id e0eaaa3fdfa3258d7bb1e8e91bc2ddc064dc40fa2fe83d6b0ad76951862e67c6 Mar 11 08:59:46 crc kubenswrapper[4840]: I0311 08:59:46.792806 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 11 08:59:46 crc kubenswrapper[4840]: E0311 08:59:46.793132 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-11 08:59:47.293111721 +0000 UTC m=+185.958781536 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:59:46 crc kubenswrapper[4840]: I0311 08:59:46.895561 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pdrj8\" (UID: \"40a6df27-50b3-452a-940a-aab6b087cdb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-pdrj8" Mar 11 08:59:46 crc kubenswrapper[4840]: E0311 08:59:46.895915 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-11 08:59:47.395902197 +0000 UTC m=+186.061572012 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pdrj8" (UID: "40a6df27-50b3-452a-940a-aab6b087cdb2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:59:46 crc kubenswrapper[4840]: I0311 08:59:46.980124 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-wwr6r" Mar 11 08:59:46 crc kubenswrapper[4840]: I0311 08:59:46.987839 4840 patch_prober.go:28] interesting pod/router-default-5444994796-wwr6r container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 11 08:59:46 crc kubenswrapper[4840]: [-]has-synced failed: reason withheld Mar 11 08:59:46 crc kubenswrapper[4840]: [+]process-running ok Mar 11 08:59:46 crc kubenswrapper[4840]: healthz check failed Mar 11 08:59:46 crc kubenswrapper[4840]: I0311 08:59:46.987881 4840 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wwr6r" podUID="4c602adf-1ed4-4779-a4f5-5ff24d9ee648" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 11 08:59:46 crc kubenswrapper[4840]: I0311 08:59:46.992181 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-nh5h5"] Mar 11 08:59:46 crc kubenswrapper[4840]: I0311 08:59:46.997891 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 11 08:59:46 crc kubenswrapper[4840]: E0311 08:59:46.998074 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-11 08:59:47.498046066 +0000 UTC m=+186.163715881 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:59:46 crc kubenswrapper[4840]: I0311 08:59:46.998172 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pdrj8\" (UID: \"40a6df27-50b3-452a-940a-aab6b087cdb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-pdrj8" Mar 11 08:59:46 crc kubenswrapper[4840]: E0311 08:59:46.998542 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-11 08:59:47.49853105 +0000 UTC m=+186.164200855 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pdrj8" (UID: "40a6df27-50b3-452a-940a-aab6b087cdb2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:59:47 crc kubenswrapper[4840]: I0311 08:59:47.003996 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-cxgxb"] Mar 11 08:59:47 crc kubenswrapper[4840]: I0311 08:59:47.004456 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-d5xpl"] Mar 11 08:59:47 crc kubenswrapper[4840]: I0311 08:59:47.015565 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-8rsqx"] Mar 11 08:59:47 crc kubenswrapper[4840]: I0311 08:59:47.026726 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-k9hqk"] Mar 11 08:59:47 crc kubenswrapper[4840]: I0311 08:59:47.051429 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6b2bw"] Mar 11 08:59:47 crc kubenswrapper[4840]: I0311 08:59:47.060207 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-2zpp2"] Mar 11 08:59:47 crc kubenswrapper[4840]: W0311 08:59:47.090127 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda16bb49a_f4f4_43df_9775_8cea5ed8e4e5.slice/crio-a2d807a7e9240519bb752062250e3d95060f4a2514e54bfe63536cbfcbdb8e22 WatchSource:0}: Error finding container a2d807a7e9240519bb752062250e3d95060f4a2514e54bfe63536cbfcbdb8e22: Status 404 returned error can't find the container with id 
a2d807a7e9240519bb752062250e3d95060f4a2514e54bfe63536cbfcbdb8e22 Mar 11 08:59:47 crc kubenswrapper[4840]: I0311 08:59:47.100893 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 11 08:59:47 crc kubenswrapper[4840]: E0311 08:59:47.109128 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-11 08:59:47.60909435 +0000 UTC m=+186.274764205 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:59:47 crc kubenswrapper[4840]: I0311 08:59:47.144904 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-lzd44"] Mar 11 08:59:47 crc kubenswrapper[4840]: I0311 08:59:47.168397 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-2gglt"] Mar 11 08:59:47 crc kubenswrapper[4840]: I0311 08:59:47.174130 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-gjgkz"] Mar 11 08:59:47 crc kubenswrapper[4840]: I0311 08:59:47.195859 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nwt6s"] Mar 11 08:59:47 crc kubenswrapper[4840]: I0311 08:59:47.202032 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-td29c"] Mar 11 08:59:47 crc kubenswrapper[4840]: I0311 08:59:47.202549 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pdrj8\" (UID: \"40a6df27-50b3-452a-940a-aab6b087cdb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-pdrj8" Mar 11 08:59:47 crc kubenswrapper[4840]: E0311 08:59:47.203166 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-11 08:59:47.703154658 +0000 UTC m=+186.368824473 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pdrj8" (UID: "40a6df27-50b3-452a-940a-aab6b087cdb2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:59:47 crc kubenswrapper[4840]: I0311 08:59:47.204137 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-2p584"] Mar 11 08:59:47 crc kubenswrapper[4840]: I0311 08:59:47.223011 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-9k5xp"] Mar 11 08:59:47 crc kubenswrapper[4840]: I0311 08:59:47.231369 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-zm7f9"] Mar 11 08:59:47 crc kubenswrapper[4840]: I0311 08:59:47.234929 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pvx88" event={"ID":"c9dcb2e3-326d-461e-9b69-4afb097f812d","Type":"ContainerStarted","Data":"f9ee661f570da1e50ae73bab0cdeaee3ae9aef9b5c7bcf242fb5dfc54211506a"} Mar 11 08:59:47 crc kubenswrapper[4840]: I0311 08:59:47.234965 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pvx88" event={"ID":"c9dcb2e3-326d-461e-9b69-4afb097f812d","Type":"ContainerStarted","Data":"eff4b08deb2c8dc31f9d0a989350f4d82eeb908e40e280cf50e4d36604bacd26"} Mar 11 08:59:47 crc kubenswrapper[4840]: I0311 08:59:47.236641 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pvx88" Mar 11 08:59:47 crc kubenswrapper[4840]: I0311 08:59:47.237838 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-nh5h5" event={"ID":"b1d0c791-1d0c-4e11-91ce-bb352ce3fce1","Type":"ContainerStarted","Data":"e05a71939faa7f18b035a2b979ae52c562dff3710410b67930a1f7e027fc9780"} Mar 11 08:59:47 crc kubenswrapper[4840]: I0311 08:59:47.238479 4840 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-pvx88 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:5443/healthz\": dial tcp 10.217.0.37:5443: connect: connection refused" start-of-body= Mar 11 08:59:47 crc kubenswrapper[4840]: I0311 08:59:47.238519 4840 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pvx88" podUID="c9dcb2e3-326d-461e-9b69-4afb097f812d" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.37:5443/healthz\": dial tcp 10.217.0.37:5443: connect: connection refused" Mar 11 08:59:47 crc kubenswrapper[4840]: I0311 08:59:47.240241 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-xkq7s"] Mar 11 08:59:47 crc kubenswrapper[4840]: I0311 08:59:47.250164 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-98fj7" event={"ID":"5e7e387f-48ae-4f10-8fb5-d163b751b3fa","Type":"ContainerStarted","Data":"c2dd610da19293efb01a2648f123c3db9f34c73c16db1a658dfcc2dc4d20323b"} Mar 11 08:59:47 crc kubenswrapper[4840]: I0311 08:59:47.250216 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-98fj7" event={"ID":"5e7e387f-48ae-4f10-8fb5-d163b751b3fa","Type":"ContainerStarted","Data":"96b4c52ab2ef885d234fbb4262beae1c313a82b39d5800df94a44ed96197031a"} Mar 11 08:59:47 crc kubenswrapper[4840]: I0311 08:59:47.259142 4840 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-72h56" podStartSLOduration=117.259123437 podStartE2EDuration="1m57.259123437s" podCreationTimestamp="2026-03-11 08:57:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:59:47.254898466 +0000 UTC m=+185.920568271" watchObservedRunningTime="2026-03-11 08:59:47.259123437 +0000 UTC m=+185.924793252" Mar 11 08:59:47 crc kubenswrapper[4840]: I0311 08:59:47.259901 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29553645-gw77z" event={"ID":"a821fe36-3bdd-4b59-9dd4-004985404023","Type":"ContainerStarted","Data":"51416be935c7d35e2e945c8d54b076b719c0e25edf4fc27069b07ab19ca93b38"} Mar 11 08:59:47 crc kubenswrapper[4840]: I0311 08:59:47.263354 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pcgsw" event={"ID":"3be5fb83-d2b0-4173-8977-6681ddeab581","Type":"ContainerStarted","Data":"2ad554da10f1d3f56d609aa9989d1e203afb4e635a142564dd1c5ea839d33267"} Mar 11 08:59:47 crc kubenswrapper[4840]: I0311 08:59:47.263650 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pcgsw" Mar 11 08:59:47 crc kubenswrapper[4840]: I0311 08:59:47.266553 4840 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-pcgsw container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" start-of-body= Mar 11 08:59:47 crc kubenswrapper[4840]: I0311 08:59:47.266603 4840 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pcgsw" 
podUID="3be5fb83-d2b0-4173-8977-6681ddeab581" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" Mar 11 08:59:47 crc kubenswrapper[4840]: I0311 08:59:47.269031 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mwnmw" event={"ID":"bdc11055-ee85-488f-9812-536b5cd31e50","Type":"ContainerStarted","Data":"7da778268e7bf96de58e91c8ddc0d545a12557d090acd0f313499e9dd1bfdf7e"} Mar 11 08:59:47 crc kubenswrapper[4840]: I0311 08:59:47.270860 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xt7ft" event={"ID":"b9c9ed82-0f76-48b1-8cb7-af788bf40902","Type":"ContainerStarted","Data":"bd601cfa09d8b31dded8f7e70d3d9c9e650c11783fa62a92b1062cd1bd4774ea"} Mar 11 08:59:47 crc kubenswrapper[4840]: I0311 08:59:47.270886 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xt7ft" event={"ID":"b9c9ed82-0f76-48b1-8cb7-af788bf40902","Type":"ContainerStarted","Data":"b8af1ed877b53546f6c60c5dbea58bcf08fd7d765c61be8dddebf63f89c9fcca"} Mar 11 08:59:47 crc kubenswrapper[4840]: I0311 08:59:47.272961 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-d6cv4" event={"ID":"33326f34-f442-42be-9bd2-39cf5627b953","Type":"ContainerStarted","Data":"f5b71f97eb6b7af7a3e8c7066dd678dfc244df8ee1e9045fd01005014d052509"} Mar 11 08:59:47 crc kubenswrapper[4840]: I0311 08:59:47.273846 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-d6cv4" Mar 11 08:59:47 crc kubenswrapper[4840]: I0311 08:59:47.276611 4840 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-d6cv4 container/marketplace-operator namespace/openshift-marketplace: 
Readiness probe status=failure output="Get \"http://10.217.0.21:8080/healthz\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Mar 11 08:59:47 crc kubenswrapper[4840]: I0311 08:59:47.276660 4840 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-d6cv4" podUID="33326f34-f442-42be-9bd2-39cf5627b953" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.21:8080/healthz\": dial tcp 10.217.0.21:8080: connect: connection refused" Mar 11 08:59:47 crc kubenswrapper[4840]: I0311 08:59:47.279323 4840 generic.go:334] "Generic (PLEG): container finished" podID="e7e0d2ae-153e-43e7-b23d-e60aaeabb85c" containerID="679a21ef94588d2c5c9d1f595b060a273cca16865d07d5742295abdc137dac85" exitCode=0 Mar 11 08:59:47 crc kubenswrapper[4840]: I0311 08:59:47.279391 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-j4zs8" event={"ID":"e7e0d2ae-153e-43e7-b23d-e60aaeabb85c","Type":"ContainerDied","Data":"679a21ef94588d2c5c9d1f595b060a273cca16865d07d5742295abdc137dac85"} Mar 11 08:59:47 crc kubenswrapper[4840]: I0311 08:59:47.279423 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-j4zs8" event={"ID":"e7e0d2ae-153e-43e7-b23d-e60aaeabb85c","Type":"ContainerStarted","Data":"913e105dec4389388847c0b46a45e9aaa371312c93a6b5574d095fb4a8c625b2"} Mar 11 08:59:47 crc kubenswrapper[4840]: I0311 08:59:47.281680 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-6mpbx" event={"ID":"db966398-cd84-4fb6-bedf-f1f13c670ce8","Type":"ContainerStarted","Data":"8a7b64590ad588a1fd9b8fe50a9debb1e9ad38bfade3c36d0d9b8217fd2a169e"} Mar 11 08:59:47 crc kubenswrapper[4840]: I0311 08:59:47.281722 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-6mpbx" 
event={"ID":"db966398-cd84-4fb6-bedf-f1f13c670ce8","Type":"ContainerStarted","Data":"d16bca9f0091a0051d01207a0a26194086fa3bf31d89d82e0795e93ad7f8c571"} Mar 11 08:59:47 crc kubenswrapper[4840]: I0311 08:59:47.282765 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-6mpbx" Mar 11 08:59:47 crc kubenswrapper[4840]: I0311 08:59:47.285748 4840 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-6mpbx container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.18:8443/healthz\": dial tcp 10.217.0.18:8443: connect: connection refused" start-of-body= Mar 11 08:59:47 crc kubenswrapper[4840]: I0311 08:59:47.285815 4840 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-6mpbx" podUID="db966398-cd84-4fb6-bedf-f1f13c670ce8" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.18:8443/healthz\": dial tcp 10.217.0.18:8443: connect: connection refused" Mar 11 08:59:47 crc kubenswrapper[4840]: I0311 08:59:47.293621 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5wlch" event={"ID":"08979e0d-51f2-42ed-acf4-afc014830489","Type":"ContainerStarted","Data":"ed5938714835f1b2905d40513b23bbe8cb888af225ea5c5011294d045302e251"} Mar 11 08:59:47 crc kubenswrapper[4840]: I0311 08:59:47.293694 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5wlch" event={"ID":"08979e0d-51f2-42ed-acf4-afc014830489","Type":"ContainerStarted","Data":"65b3f2dd646efbbdf2dc256aa71e1d9f7662e86e6807730c2352b003bb0f8c39"} Mar 11 08:59:47 crc kubenswrapper[4840]: I0311 08:59:47.295770 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-d5xpl" 
event={"ID":"f2878f5c-d2e1-4561-acae-3ea4ed26a5c0","Type":"ContainerStarted","Data":"bc668896c37e116918d2b047124b2a8e1526a0f03631f36fc9dc0ec36d6c11ab"} Mar 11 08:59:47 crc kubenswrapper[4840]: I0311 08:59:47.298452 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-k9hqk" event={"ID":"2f093229-f88a-4571-a575-8efa44a1b8dd","Type":"ContainerStarted","Data":"8f7403fe6ef28dca4a646f14d74a8aadaf3bbd07fb6baab9205298832949c548"} Mar 11 08:59:47 crc kubenswrapper[4840]: I0311 08:59:47.303734 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 11 08:59:47 crc kubenswrapper[4840]: E0311 08:59:47.307570 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-11 08:59:47.803874516 +0000 UTC m=+186.469544331 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:59:47 crc kubenswrapper[4840]: I0311 08:59:47.308533 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-k4c7d" podStartSLOduration=117.308500918 podStartE2EDuration="1m57.308500918s" podCreationTimestamp="2026-03-11 08:57:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:59:47.300074417 +0000 UTC m=+185.965744242" watchObservedRunningTime="2026-03-11 08:59:47.308500918 +0000 UTC m=+185.974170733" Mar 11 08:59:47 crc kubenswrapper[4840]: I0311 08:59:47.323642 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-drmmh" event={"ID":"4b3c8f62-4252-41b4-8e96-8788caae161b","Type":"ContainerStarted","Data":"99a7a00e032c258f2bdada239b5e5692fc44cee14f60399c8cb78b3d0d48c4a8"} Mar 11 08:59:47 crc kubenswrapper[4840]: I0311 08:59:47.326347 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pdrj8\" (UID: \"40a6df27-50b3-452a-940a-aab6b087cdb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-pdrj8" Mar 11 08:59:47 crc kubenswrapper[4840]: E0311 08:59:47.329190 4840 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-11 08:59:47.829159348 +0000 UTC m=+186.494829343 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pdrj8" (UID: "40a6df27-50b3-452a-940a-aab6b087cdb2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:59:47 crc kubenswrapper[4840]: I0311 08:59:47.339949 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-bhjvl" event={"ID":"1993577b-4a86-4663-97e5-902753a07816","Type":"ContainerStarted","Data":"239f10503d180df55292b1125b22553e40ebf25dfedd1dcbd9bc508019c221fb"} Mar 11 08:59:47 crc kubenswrapper[4840]: I0311 08:59:47.340020 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-bhjvl" event={"ID":"1993577b-4a86-4663-97e5-902753a07816","Type":"ContainerStarted","Data":"3dd2b77afe2880c6ab0ab42a008aaa3a14f7387999b0000ba5c03b0178f300ea"} Mar 11 08:59:47 crc kubenswrapper[4840]: I0311 08:59:47.359064 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-hgxgb" event={"ID":"b94670ce-123d-4562-b9ae-7a7fe898bff7","Type":"ContainerStarted","Data":"e9e57843b9ccce2e1e746d818ca2814079fbed9d6540341d347d91ce3869cff3"} Mar 11 08:59:47 crc kubenswrapper[4840]: I0311 08:59:47.363517 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-8pjx9" podStartSLOduration=6.363490109 
podStartE2EDuration="6.363490109s" podCreationTimestamp="2026-03-11 08:59:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:59:47.344526467 +0000 UTC m=+186.010196282" watchObservedRunningTime="2026-03-11 08:59:47.363490109 +0000 UTC m=+186.029159924" Mar 11 08:59:47 crc kubenswrapper[4840]: I0311 08:59:47.376338 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ktl5r"] Mar 11 08:59:47 crc kubenswrapper[4840]: I0311 08:59:47.389346 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6b2bw" event={"ID":"a16bb49a-f4f4-43df-9775-8cea5ed8e4e5","Type":"ContainerStarted","Data":"a2d807a7e9240519bb752062250e3d95060f4a2514e54bfe63536cbfcbdb8e22"} Mar 11 08:59:47 crc kubenswrapper[4840]: I0311 08:59:47.393397 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-nz4rs" event={"ID":"289c41a4-7b72-4bf9-9fc3-31f532ec6bf6","Type":"ContainerStarted","Data":"2687303aa082422ff2afc9c2c20e16cc0a2b872e5deae567a2d36d5ca2189d10"} Mar 11 08:59:47 crc kubenswrapper[4840]: I0311 08:59:47.394704 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-dn5hs"] Mar 11 08:59:47 crc kubenswrapper[4840]: I0311 08:59:47.400668 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-wwr6r" podStartSLOduration=117.400654941 podStartE2EDuration="1m57.400654941s" podCreationTimestamp="2026-03-11 08:57:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:59:47.395836664 +0000 UTC m=+186.061506479" 
watchObservedRunningTime="2026-03-11 08:59:47.400654941 +0000 UTC m=+186.066324756" Mar 11 08:59:47 crc kubenswrapper[4840]: I0311 08:59:47.408966 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-2zpp2" event={"ID":"eccdf140-9410-4209-be5a-eb865704291a","Type":"ContainerStarted","Data":"51a4ec52674400be5124b070b4a7613ee953ec9c3a585901626f13c697f80484"} Mar 11 08:59:47 crc kubenswrapper[4840]: I0311 08:59:47.432707 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-79btk" event={"ID":"26477d16-ea37-4c74-b5d0-b537147ab754","Type":"ContainerStarted","Data":"13f3d4f2cd61050d9d770835f74d63394e79bfff4071a48bbb073fa0069321ea"} Mar 11 08:59:47 crc kubenswrapper[4840]: I0311 08:59:47.432765 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-79btk" event={"ID":"26477d16-ea37-4c74-b5d0-b537147ab754","Type":"ContainerStarted","Data":"35017a518e9776d93c401fc9f80d44bf8df51282d4931d3c17a95e28da7d8eb6"} Mar 11 08:59:47 crc kubenswrapper[4840]: I0311 08:59:47.433774 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 11 08:59:47 crc kubenswrapper[4840]: E0311 08:59:47.435004 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-11 08:59:47.934970622 +0000 UTC m=+186.600640437 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:59:47 crc kubenswrapper[4840]: I0311 08:59:47.441235 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-mcz2j" event={"ID":"511c45a2-2ea8-4e84-9f1d-2901e39e7e36","Type":"ContainerStarted","Data":"6124c1e33eb85b9b3ac385b5cd9f06b26302c357dbb853c2eab0c4ffd40c4279"} Mar 11 08:59:47 crc kubenswrapper[4840]: I0311 08:59:47.451919 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-8rsqx" event={"ID":"c0687081-fb8b-4b87-945d-b107b2f86966","Type":"ContainerStarted","Data":"e3bf716c9d96aa66a013dd0c700bed131751778cf05b7f3c3dd87ac8346b1870"} Mar 11 08:59:47 crc kubenswrapper[4840]: I0311 08:59:47.457491 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29553645-gw77z" podStartSLOduration=117.457452904 podStartE2EDuration="1m57.457452904s" podCreationTimestamp="2026-03-11 08:57:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:59:47.456635501 +0000 UTC m=+186.122305316" watchObservedRunningTime="2026-03-11 08:59:47.457452904 +0000 UTC m=+186.123122719" Mar 11 08:59:47 crc kubenswrapper[4840]: I0311 08:59:47.458113 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-6mpbx" podStartSLOduration=117.458107393 podStartE2EDuration="1m57.458107393s" 
podCreationTimestamp="2026-03-11 08:57:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:59:47.424019429 +0000 UTC m=+186.089689244" watchObservedRunningTime="2026-03-11 08:59:47.458107393 +0000 UTC m=+186.123777208" Mar 11 08:59:47 crc kubenswrapper[4840]: I0311 08:59:47.464656 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rjm9q" event={"ID":"dcae32d8-1c29-45c8-aadc-332b3ad1dd5c","Type":"ContainerStarted","Data":"50e858387bd0d5e069d273d8ca9cb62e0957959d27f857b5fd5ba44658d60e9b"} Mar 11 08:59:47 crc kubenswrapper[4840]: I0311 08:59:47.467310 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-cxgxb" event={"ID":"cdd63dfa-6787-4eca-937b-758358444ffc","Type":"ContainerStarted","Data":"36822268e5b0a85ba57ca7bd4806fbf20a73a9ad326a719fa69ed4f8586377d3"} Mar 11 08:59:47 crc kubenswrapper[4840]: I0311 08:59:47.500681 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2kbzn" event={"ID":"e3d07b65-1690-44b1-a232-5a9d4187e89d","Type":"ContainerStarted","Data":"c3a4a301c3ce47604a51ac5009f55452468b4fbfa2620d7ba92fd76b6e7264b5"} Mar 11 08:59:47 crc kubenswrapper[4840]: I0311 08:59:47.508248 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pvx88" podStartSLOduration=117.508230506 podStartE2EDuration="1m57.508230506s" podCreationTimestamp="2026-03-11 08:57:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:59:47.505645582 +0000 UTC m=+186.171315397" watchObservedRunningTime="2026-03-11 08:59:47.508230506 +0000 UTC m=+186.173900321" Mar 11 08:59:47 crc kubenswrapper[4840]: I0311 
08:59:47.522692 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zqtkm" event={"ID":"6ca4d667-0d35-40e6-b681-ec69524cfc2f","Type":"ContainerStarted","Data":"33bfeb7277c64fa24c66721f125155996a15f1a318ef8b3942259142fd4e8318"} Mar 11 08:59:47 crc kubenswrapper[4840]: I0311 08:59:47.522743 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zqtkm" event={"ID":"6ca4d667-0d35-40e6-b681-ec69524cfc2f","Type":"ContainerStarted","Data":"f98fd36f4042cd28132fc28204a3494e1a78101851e2b549b1c762e129f07e64"} Mar 11 08:59:47 crc kubenswrapper[4840]: I0311 08:59:47.537001 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pdrj8\" (UID: \"40a6df27-50b3-452a-940a-aab6b087cdb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-pdrj8" Mar 11 08:59:47 crc kubenswrapper[4840]: E0311 08:59:47.537292 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-11 08:59:48.037281396 +0000 UTC m=+186.702951211 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pdrj8" (UID: "40a6df27-50b3-452a-940a-aab6b087cdb2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:59:47 crc kubenswrapper[4840]: I0311 08:59:47.537271 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-8ql29" event={"ID":"d33549b3-2af3-444d-9988-1fce8d590d8a","Type":"ContainerStarted","Data":"e0eaaa3fdfa3258d7bb1e8e91bc2ddc064dc40fa2fe83d6b0ad76951862e67c6"} Mar 11 08:59:47 crc kubenswrapper[4840]: I0311 08:59:47.562118 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5wlch" podStartSLOduration=117.562099275 podStartE2EDuration="1m57.562099275s" podCreationTimestamp="2026-03-11 08:57:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:59:47.536286217 +0000 UTC m=+186.201956032" watchObservedRunningTime="2026-03-11 08:59:47.562099275 +0000 UTC m=+186.227769090" Mar 11 08:59:47 crc kubenswrapper[4840]: I0311 08:59:47.627593 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-d6cv4" podStartSLOduration=117.627577586 podStartE2EDuration="1m57.627577586s" podCreationTimestamp="2026-03-11 08:57:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:59:47.583994361 +0000 UTC m=+186.249664176" watchObservedRunningTime="2026-03-11 08:59:47.627577586 +0000 UTC 
m=+186.293247401" Mar 11 08:59:47 crc kubenswrapper[4840]: I0311 08:59:47.628521 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-bhjvl" podStartSLOduration=117.628513643 podStartE2EDuration="1m57.628513643s" podCreationTimestamp="2026-03-11 08:57:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:59:47.627192955 +0000 UTC m=+186.292862770" watchObservedRunningTime="2026-03-11 08:59:47.628513643 +0000 UTC m=+186.294183458" Mar 11 08:59:47 crc kubenswrapper[4840]: I0311 08:59:47.638183 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 11 08:59:47 crc kubenswrapper[4840]: E0311 08:59:47.638960 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-11 08:59:48.13892514 +0000 UTC m=+186.804594955 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:59:47 crc kubenswrapper[4840]: I0311 08:59:47.709595 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-98fj7" podStartSLOduration=117.709569729 podStartE2EDuration="1m57.709569729s" podCreationTimestamp="2026-03-11 08:57:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:59:47.656890694 +0000 UTC m=+186.322560509" watchObservedRunningTime="2026-03-11 08:59:47.709569729 +0000 UTC m=+186.375239554" Mar 11 08:59:47 crc kubenswrapper[4840]: I0311 08:59:47.740098 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pdrj8\" (UID: \"40a6df27-50b3-452a-940a-aab6b087cdb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-pdrj8" Mar 11 08:59:47 crc kubenswrapper[4840]: E0311 08:59:47.745769 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-11 08:59:48.245744843 +0000 UTC m=+186.911414658 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pdrj8" (UID: "40a6df27-50b3-452a-940a-aab6b087cdb2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:59:47 crc kubenswrapper[4840]: I0311 08:59:47.781341 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pcgsw" podStartSLOduration=117.78132326 podStartE2EDuration="1m57.78132326s" podCreationTimestamp="2026-03-11 08:57:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:59:47.744856847 +0000 UTC m=+186.410526662" watchObservedRunningTime="2026-03-11 08:59:47.78132326 +0000 UTC m=+186.446993075" Mar 11 08:59:47 crc kubenswrapper[4840]: I0311 08:59:47.841015 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 11 08:59:47 crc kubenswrapper[4840]: E0311 08:59:47.841106 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-11 08:59:48.341088367 +0000 UTC m=+187.006758182 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:59:47 crc kubenswrapper[4840]: I0311 08:59:47.841318 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pdrj8\" (UID: \"40a6df27-50b3-452a-940a-aab6b087cdb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-pdrj8" Mar 11 08:59:47 crc kubenswrapper[4840]: E0311 08:59:47.841649 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-11 08:59:48.341642993 +0000 UTC m=+187.007312808 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pdrj8" (UID: "40a6df27-50b3-452a-940a-aab6b087cdb2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:59:47 crc kubenswrapper[4840]: I0311 08:59:47.943029 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 11 08:59:47 crc kubenswrapper[4840]: E0311 08:59:47.943786 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-11 08:59:48.443768192 +0000 UTC m=+187.109438007 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:59:47 crc kubenswrapper[4840]: I0311 08:59:47.983014 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rjm9q" podStartSLOduration=117.982998663 podStartE2EDuration="1m57.982998663s" podCreationTimestamp="2026-03-11 08:57:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:59:47.96785695 +0000 UTC m=+186.633526765" watchObservedRunningTime="2026-03-11 08:59:47.982998663 +0000 UTC m=+186.648668478" Mar 11 08:59:47 crc kubenswrapper[4840]: I0311 08:59:47.996935 4840 patch_prober.go:28] interesting pod/router-default-5444994796-wwr6r container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 11 08:59:47 crc kubenswrapper[4840]: [-]has-synced failed: reason withheld Mar 11 08:59:47 crc kubenswrapper[4840]: [+]process-running ok Mar 11 08:59:47 crc kubenswrapper[4840]: healthz check failed Mar 11 08:59:47 crc kubenswrapper[4840]: I0311 08:59:47.996987 4840 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wwr6r" podUID="4c602adf-1ed4-4779-a4f5-5ff24d9ee648" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 11 08:59:48 crc kubenswrapper[4840]: I0311 08:59:48.045785 4840 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pdrj8\" (UID: \"40a6df27-50b3-452a-940a-aab6b087cdb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-pdrj8"
Mar 11 08:59:48 crc kubenswrapper[4840]: E0311 08:59:48.046182 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-11 08:59:48.546168028 +0000 UTC m=+187.211837843 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pdrj8" (UID: "40a6df27-50b3-452a-940a-aab6b087cdb2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 11 08:59:48 crc kubenswrapper[4840]: I0311 08:59:48.147162 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 11 08:59:48 crc kubenswrapper[4840]: E0311 08:59:48.147657 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-11 08:59:48.647639548 +0000 UTC m=+187.313309363 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 11 08:59:48 crc kubenswrapper[4840]: I0311 08:59:48.198967 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-8ql29" podStartSLOduration=117.198941394 podStartE2EDuration="1m57.198941394s" podCreationTimestamp="2026-03-11 08:57:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:59:48.14492686 +0000 UTC m=+186.810596665" watchObservedRunningTime="2026-03-11 08:59:48.198941394 +0000 UTC m=+186.864611209"
Mar 11 08:59:48 crc kubenswrapper[4840]: I0311 08:59:48.249552 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pdrj8\" (UID: \"40a6df27-50b3-452a-940a-aab6b087cdb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-pdrj8"
Mar 11 08:59:48 crc kubenswrapper[4840]: E0311 08:59:48.249840 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-11 08:59:48.749826108 +0000 UTC m=+187.415495923 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pdrj8" (UID: "40a6df27-50b3-452a-940a-aab6b087cdb2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 11 08:59:48 crc kubenswrapper[4840]: I0311 08:59:48.250770 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-79btk" podStartSLOduration=118.250760285 podStartE2EDuration="1m58.250760285s" podCreationTimestamp="2026-03-11 08:57:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:59:48.200679773 +0000 UTC m=+186.866349588" watchObservedRunningTime="2026-03-11 08:59:48.250760285 +0000 UTC m=+186.916430100"
Mar 11 08:59:48 crc kubenswrapper[4840]: I0311 08:59:48.352578 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 11 08:59:48 crc kubenswrapper[4840]: E0311 08:59:48.353129 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-11 08:59:48.853105999 +0000 UTC m=+187.518775814 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 11 08:59:48 crc kubenswrapper[4840]: I0311 08:59:48.456296 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pdrj8\" (UID: \"40a6df27-50b3-452a-940a-aab6b087cdb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-pdrj8"
Mar 11 08:59:48 crc kubenswrapper[4840]: E0311 08:59:48.457281 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-11 08:59:48.957267096 +0000 UTC m=+187.622936911 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pdrj8" (UID: "40a6df27-50b3-452a-940a-aab6b087cdb2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 11 08:59:48 crc kubenswrapper[4840]: I0311 08:59:48.557814 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 11 08:59:48 crc kubenswrapper[4840]: E0311 08:59:48.558253 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-11 08:59:49.058220191 +0000 UTC m=+187.723890006 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 11 08:59:48 crc kubenswrapper[4840]: I0311 08:59:48.618731 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-9k5xp" event={"ID":"78462b75-44da-4862-88c5-5cf892a91058","Type":"ContainerStarted","Data":"8a5cc321639bda1132ce54928b9a51784e7e98a3e7ebd45844c77c109ce16c71"}
Mar 11 08:59:48 crc kubenswrapper[4840]: I0311 08:59:48.630527 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mwnmw" event={"ID":"bdc11055-ee85-488f-9812-536b5cd31e50","Type":"ContainerStarted","Data":"342e07a0b06725156eea21bb1173c76b8c067f76b41a43c4ff96ab9eb9f5c644"}
Mar 11 08:59:48 crc kubenswrapper[4840]: I0311 08:59:48.642930 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-xkq7s" event={"ID":"5dc5ef77-d18a-4474-a523-473f27166095","Type":"ContainerStarted","Data":"f2ee17afeca58eab14b40b421909a77ddf225bae5a23626ae5fd3ead80633d3b"}
Mar 11 08:59:48 crc kubenswrapper[4840]: I0311 08:59:48.656058 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-2p584" event={"ID":"642bcc9a-edf9-4845-b6ad-0936618ec9b0","Type":"ContainerStarted","Data":"f2c79058239e53c9cd874e65a3abc1d0195aa3b253eed6609707b3c72bc25951"}
Mar 11 08:59:48 crc kubenswrapper[4840]: I0311 08:59:48.658339 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mwnmw" podStartSLOduration=118.658326541 podStartE2EDuration="1m58.658326541s" podCreationTimestamp="2026-03-11 08:57:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:59:48.656364825 +0000 UTC m=+187.322034630" watchObservedRunningTime="2026-03-11 08:59:48.658326541 +0000 UTC m=+187.323996356"
Mar 11 08:59:48 crc kubenswrapper[4840]: I0311 08:59:48.660306 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pdrj8\" (UID: \"40a6df27-50b3-452a-940a-aab6b087cdb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-pdrj8"
Mar 11 08:59:48 crc kubenswrapper[4840]: E0311 08:59:48.661093 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-11 08:59:49.16107959 +0000 UTC m=+187.826749405 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pdrj8" (UID: "40a6df27-50b3-452a-940a-aab6b087cdb2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 11 08:59:48 crc kubenswrapper[4840]: I0311 08:59:48.679994 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-nz4rs" event={"ID":"289c41a4-7b72-4bf9-9fc3-31f532ec6bf6","Type":"ContainerStarted","Data":"c0d556d69f5b00284617d7e5a4f95dc5829d3fb62320830bf94c7f86353e970a"}
Mar 11 08:59:48 crc kubenswrapper[4840]: I0311 08:59:48.683291 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nwt6s" event={"ID":"a5e0e773-1dac-4bc9-ac4c-5d6db746fee9","Type":"ContainerStarted","Data":"ae2dd4ebbebad70149e1289898df05a348c4eb29e561b7d682180e2c9822686a"}
Mar 11 08:59:48 crc kubenswrapper[4840]: I0311 08:59:48.690549 4840 generic.go:334] "Generic (PLEG): container finished" podID="b1d0c791-1d0c-4e11-91ce-bb352ce3fce1" containerID="18b85b6bb3912c2147bb89dbbe88c7209aff1ae5fef1a7cf16bf0d91d5aec310" exitCode=0
Mar 11 08:59:48 crc kubenswrapper[4840]: I0311 08:59:48.690595 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-nh5h5" event={"ID":"b1d0c791-1d0c-4e11-91ce-bb352ce3fce1","Type":"ContainerDied","Data":"18b85b6bb3912c2147bb89dbbe88c7209aff1ae5fef1a7cf16bf0d91d5aec310"}
Mar 11 08:59:48 crc kubenswrapper[4840]: I0311 08:59:48.692797 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-dn5hs" event={"ID":"ed60e671-1c71-42fc-828d-51a4e85e3153","Type":"ContainerStarted","Data":"3ba149ce8681a7329cdad73cc45a96e9e0418d0379d41463d4d35e3127555c9b"}
Mar 11 08:59:48 crc kubenswrapper[4840]: I0311 08:59:48.694505 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-8ql29" event={"ID":"d33549b3-2af3-444d-9988-1fce8d590d8a","Type":"ContainerStarted","Data":"e33fcbaf1d1f0856b5cca449b85692c2af7e4097030ed0d04549892cb418e5d5"}
Mar 11 08:59:48 crc kubenswrapper[4840]: I0311 08:59:48.709194 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-nz4rs" podStartSLOduration=118.709165944 podStartE2EDuration="1m58.709165944s" podCreationTimestamp="2026-03-11 08:57:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:59:48.705912111 +0000 UTC m=+187.371581926" watchObservedRunningTime="2026-03-11 08:59:48.709165944 +0000 UTC m=+187.374835759"
Mar 11 08:59:48 crc kubenswrapper[4840]: I0311 08:59:48.763801 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 11 08:59:48 crc kubenswrapper[4840]: E0311 08:59:48.766134 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-11 08:59:49.266107531 +0000 UTC m=+187.931777346 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 11 08:59:48 crc kubenswrapper[4840]: I0311 08:59:48.791588 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2kbzn" event={"ID":"e3d07b65-1690-44b1-a232-5a9d4187e89d","Type":"ContainerStarted","Data":"0dc1d17fda888159c26638880971bf1fde2c111f610c47eeb9854d8ae89a8f51"}
Mar 11 08:59:48 crc kubenswrapper[4840]: I0311 08:59:48.791797 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2kbzn"
Mar 11 08:59:48 crc kubenswrapper[4840]: I0311 08:59:48.867859 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pdrj8\" (UID: \"40a6df27-50b3-452a-940a-aab6b087cdb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-pdrj8"
Mar 11 08:59:48 crc kubenswrapper[4840]: E0311 08:59:48.868514 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-11 08:59:49.368499127 +0000 UTC m=+188.034168932 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pdrj8" (UID: "40a6df27-50b3-452a-940a-aab6b087cdb2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 11 08:59:48 crc kubenswrapper[4840]: I0311 08:59:48.900346 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-k9hqk" event={"ID":"2f093229-f88a-4571-a575-8efa44a1b8dd","Type":"ContainerStarted","Data":"9508f7654b37ff2bc4eac4a83f973aa13dc8f80c1edb343c3359e6a470176510"}
Mar 11 08:59:48 crc kubenswrapper[4840]: I0311 08:59:48.907293 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-td29c" event={"ID":"2b03405c-0fe1-4c7f-ad2a-dcd0db280109","Type":"ContainerStarted","Data":"30ce7f734186175713f8e79759d08450ecf8a6ef868c3d35255362a87b7816c9"}
Mar 11 08:59:48 crc kubenswrapper[4840]: I0311 08:59:48.973080 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 11 08:59:48 crc kubenswrapper[4840]: E0311 08:59:48.973785 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-11 08:59:49.473740925 +0000 UTC m=+188.139410730 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 11 08:59:48 crc kubenswrapper[4840]: I0311 08:59:48.974359 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pdrj8\" (UID: \"40a6df27-50b3-452a-940a-aab6b087cdb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-pdrj8"
Mar 11 08:59:48 crc kubenswrapper[4840]: E0311 08:59:48.974936 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-11 08:59:49.474916118 +0000 UTC m=+188.140585933 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pdrj8" (UID: "40a6df27-50b3-452a-940a-aab6b087cdb2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 11 08:59:48 crc kubenswrapper[4840]: I0311 08:59:48.990618 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-k9hqk" podStartSLOduration=117.990595627 podStartE2EDuration="1m57.990595627s" podCreationTimestamp="2026-03-11 08:57:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:59:48.973315353 +0000 UTC m=+187.638985168" watchObservedRunningTime="2026-03-11 08:59:48.990595627 +0000 UTC m=+187.656265442"
Mar 11 08:59:49 crc kubenswrapper[4840]: I0311 08:59:49.014095 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2kbzn" podStartSLOduration=119.014072447 podStartE2EDuration="1m59.014072447s" podCreationTimestamp="2026-03-11 08:57:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:59:48.85143062 +0000 UTC m=+187.517100435" watchObservedRunningTime="2026-03-11 08:59:49.014072447 +0000 UTC m=+187.679742262"
Mar 11 08:59:49 crc kubenswrapper[4840]: I0311 08:59:49.020517 4840 patch_prober.go:28] interesting pod/router-default-5444994796-wwr6r container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 11 08:59:49 crc kubenswrapper[4840]: [-]has-synced failed: reason withheld
Mar 11 08:59:49 crc kubenswrapper[4840]: [+]process-running ok
Mar 11 08:59:49 crc kubenswrapper[4840]: healthz check failed
Mar 11 08:59:49 crc kubenswrapper[4840]: I0311 08:59:49.020633 4840 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wwr6r" podUID="4c602adf-1ed4-4779-a4f5-5ff24d9ee648" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 11 08:59:49 crc kubenswrapper[4840]: I0311 08:59:49.020897 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-drmmh" event={"ID":"4b3c8f62-4252-41b4-8e96-8788caae161b","Type":"ContainerStarted","Data":"00c76e9bea8e19c3eec406e987244752b4d9615b3d989dec065f23858f984e78"}
Mar 11 08:59:49 crc kubenswrapper[4840]: I0311 08:59:49.068025 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-gjgkz" event={"ID":"6e87442b-4d54-472c-bad6-e2086c95df50","Type":"ContainerStarted","Data":"cfe24c4c202f0b31de536b0427d5964b5da2dec30e990eddc2636840fa6a1aa4"}
Mar 11 08:59:49 crc kubenswrapper[4840]: I0311 08:59:49.083159 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 11 08:59:49 crc kubenswrapper[4840]: E0311 08:59:49.084536 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-11 08:59:49.584518531 +0000 UTC m=+188.250188346 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 11 08:59:49 crc kubenswrapper[4840]: I0311 08:59:49.091886 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-2gglt" event={"ID":"d62191a1-c0bf-4b73-8a5b-9a084143772c","Type":"ContainerStarted","Data":"de15efae47f1cac7cc69f4586d0c2bbfd0e0920261c776823a4bc83eb42f060f"}
Mar 11 08:59:49 crc kubenswrapper[4840]: I0311 08:59:49.092868 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-drmmh" podStartSLOduration=119.092856619 podStartE2EDuration="1m59.092856619s" podCreationTimestamp="2026-03-11 08:57:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:59:49.092363115 +0000 UTC m=+187.758032950" watchObservedRunningTime="2026-03-11 08:59:49.092856619 +0000 UTC m=+187.758526454"
Mar 11 08:59:49 crc kubenswrapper[4840]: I0311 08:59:49.099593 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xt7ft" event={"ID":"b9c9ed82-0f76-48b1-8cb7-af788bf40902","Type":"ContainerStarted","Data":"9562dcb8183bcaaa130c2e4a13e6cdbc937b891c1d9ca76a521d111141caff78"}
Mar 11 08:59:49 crc kubenswrapper[4840]: I0311 08:59:49.158266 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xt7ft" podStartSLOduration=119.158251207 podStartE2EDuration="1m59.158251207s" podCreationTimestamp="2026-03-11 08:57:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:59:49.1562226 +0000 UTC m=+187.821892415" watchObservedRunningTime="2026-03-11 08:59:49.158251207 +0000 UTC m=+187.823921022"
Mar 11 08:59:49 crc kubenswrapper[4840]: I0311 08:59:49.162799 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6b2bw" event={"ID":"a16bb49a-f4f4-43df-9775-8cea5ed8e4e5","Type":"ContainerStarted","Data":"6fb363fd7516312748925fa024d8d7b0fef991df1da8949149448139080618b6"}
Mar 11 08:59:49 crc kubenswrapper[4840]: I0311 08:59:49.163947 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6b2bw"
Mar 11 08:59:49 crc kubenswrapper[4840]: I0311 08:59:49.177203 4840 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-6b2bw container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" start-of-body=
Mar 11 08:59:49 crc kubenswrapper[4840]: I0311 08:59:49.177257 4840 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6b2bw" podUID="a16bb49a-f4f4-43df-9775-8cea5ed8e4e5" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused"
Mar 11 08:59:49 crc kubenswrapper[4840]: I0311 08:59:49.185278 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pdrj8\" (UID: \"40a6df27-50b3-452a-940a-aab6b087cdb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-pdrj8"
Mar 11 08:59:49 crc kubenswrapper[4840]: E0311 08:59:49.185868 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-11 08:59:49.685848456 +0000 UTC m=+188.351518271 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pdrj8" (UID: "40a6df27-50b3-452a-940a-aab6b087cdb2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 11 08:59:49 crc kubenswrapper[4840]: I0311 08:59:49.221351 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zqtkm" event={"ID":"6ca4d667-0d35-40e6-b681-ec69524cfc2f","Type":"ContainerStarted","Data":"071db8e3aeeb2b04cfdc986327a41bb1ffacce2b5cba8f3679f2d481bbaf3e60"}
Mar 11 08:59:49 crc kubenswrapper[4840]: I0311 08:59:49.228991 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-zm7f9" event={"ID":"6138a548-439a-4af8-ad4d-a6ea89f686b7","Type":"ContainerStarted","Data":"473cd90c0919c09fe4d7e0c1bcfe76345959728244a15455804a6698bf5b3f4b"}
Mar 11 08:59:49 crc kubenswrapper[4840]: I0311 08:59:49.236390 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-cxgxb" event={"ID":"cdd63dfa-6787-4eca-937b-758358444ffc","Type":"ContainerStarted","Data":"53bbefefa8afba45067a1a8b0b06826a431f5b3ebbe0ea9108bee2a756433b74"}
Mar 11 08:59:49 crc kubenswrapper[4840]: I0311 08:59:49.246583 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6b2bw" podStartSLOduration=119.246561361 podStartE2EDuration="1m59.246561361s" podCreationTimestamp="2026-03-11 08:57:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:59:49.202022018 +0000 UTC m=+187.867691833" watchObservedRunningTime="2026-03-11 08:59:49.246561361 +0000 UTC m=+187.912231176"
Mar 11 08:59:49 crc kubenswrapper[4840]: I0311 08:59:49.248841 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-hgxgb" event={"ID":"b94670ce-123d-4562-b9ae-7a7fe898bff7","Type":"ContainerStarted","Data":"6d16f90e2914d15d7377338706342ca4c837f6626a4a84eb7d65b30b14e3629d"}
Mar 11 08:59:49 crc kubenswrapper[4840]: I0311 08:59:49.286546 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 11 08:59:49 crc kubenswrapper[4840]: E0311 08:59:49.288414 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-11 08:59:49.788388606 +0000 UTC m=+188.454058421 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 11 08:59:49 crc kubenswrapper[4840]: I0311 08:59:49.302309 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-hgxgb" podStartSLOduration=119.302290154 podStartE2EDuration="1m59.302290154s" podCreationTimestamp="2026-03-11 08:57:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:59:49.302130549 +0000 UTC m=+187.967800364" watchObservedRunningTime="2026-03-11 08:59:49.302290154 +0000 UTC m=+187.967959979"
Mar 11 08:59:49 crc kubenswrapper[4840]: I0311 08:59:49.303089 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zqtkm" podStartSLOduration=119.303085316 podStartE2EDuration="1m59.303085316s" podCreationTimestamp="2026-03-11 08:57:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:59:49.250189685 +0000 UTC m=+187.915859500" watchObservedRunningTime="2026-03-11 08:59:49.303085316 +0000 UTC m=+187.968755131"
Mar 11 08:59:49 crc kubenswrapper[4840]: I0311 08:59:49.319318 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lzd44" event={"ID":"7b145694-7add-48aa-9321-56703922b613","Type":"ContainerStarted","Data":"c334ec72f8fe37526a35446d89e4ea8dfbafde226f2ffda5cfc5f3410f4358fa"}
Mar 11 08:59:49 crc kubenswrapper[4840]: I0311 08:59:49.320398 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lzd44"
Mar 11 08:59:49 crc kubenswrapper[4840]: I0311 08:59:49.322987 4840 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-lzd44 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body=
Mar 11 08:59:49 crc kubenswrapper[4840]: I0311 08:59:49.323023 4840 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lzd44" podUID="7b145694-7add-48aa-9321-56703922b613" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused"
Mar 11 08:59:49 crc kubenswrapper[4840]: I0311 08:59:49.352418 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ktl5r" event={"ID":"01a98c93-30e9-4161-ae55-553a6107a67f","Type":"ContainerStarted","Data":"2cb4f6d307204c2c3694c6c6ac392c7766b447453ddb4e7a3a5b3e6f72d08c60"}
Mar 11 08:59:49 crc kubenswrapper[4840]: I0311 08:59:49.354614 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lzd44" podStartSLOduration=118.354594388 podStartE2EDuration="1m58.354594388s" podCreationTimestamp="2026-03-11 08:57:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:59:49.352997463 +0000 UTC m=+188.018667278" watchObservedRunningTime="2026-03-11 08:59:49.354594388 +0000 UTC m=+188.020264203"
Mar 11 08:59:49 crc kubenswrapper[4840]: I0311 08:59:49.371790 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-d6cv4"
Mar 11 08:59:49 crc kubenswrapper[4840]: I0311 08:59:49.378874 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-6mpbx"
Mar 11 08:59:49 crc kubenswrapper[4840]: I0311 08:59:49.388794 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pdrj8\" (UID: \"40a6df27-50b3-452a-940a-aab6b087cdb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-pdrj8"
Mar 11 08:59:49 crc kubenswrapper[4840]: E0311 08:59:49.389131 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-11 08:59:49.889119105 +0000 UTC m=+188.554788920 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pdrj8" (UID: "40a6df27-50b3-452a-940a-aab6b087cdb2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 11 08:59:49 crc kubenswrapper[4840]: I0311 08:59:49.412375 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pvx88"
Mar 11 08:59:49 crc kubenswrapper[4840]: I0311 08:59:49.419210 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-pcgsw"
Mar 11 08:59:49 crc kubenswrapper[4840]: I0311 08:59:49.447493 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ktl5r" podStartSLOduration=119.447459232 podStartE2EDuration="1m59.447459232s" podCreationTimestamp="2026-03-11 08:57:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:59:49.393405308 +0000 UTC m=+188.059075113" watchObservedRunningTime="2026-03-11 08:59:49.447459232 +0000 UTC m=+188.113129047"
Mar 11 08:59:49 crc kubenswrapper[4840]: I0311 08:59:49.489871 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 11 08:59:49 crc kubenswrapper[4840]: E0311 08:59:49.502260 4840 nestedpendingoperations.go:348]
Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-11 08:59:50.002226567 +0000 UTC m=+188.667896552 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:59:49 crc kubenswrapper[4840]: I0311 08:59:49.559986 4840 ???:1] "http: TLS handshake error from 192.168.126.11:47620: no serving certificate available for the kubelet" Mar 11 08:59:49 crc kubenswrapper[4840]: I0311 08:59:49.613011 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pdrj8\" (UID: \"40a6df27-50b3-452a-940a-aab6b087cdb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-pdrj8" Mar 11 08:59:49 crc kubenswrapper[4840]: E0311 08:59:49.613402 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-11 08:59:50.113389034 +0000 UTC m=+188.779058849 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pdrj8" (UID: "40a6df27-50b3-452a-940a-aab6b087cdb2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:59:49 crc kubenswrapper[4840]: I0311 08:59:49.714027 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 11 08:59:49 crc kubenswrapper[4840]: E0311 08:59:49.714307 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-11 08:59:50.214288807 +0000 UTC m=+188.879958622 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:59:49 crc kubenswrapper[4840]: I0311 08:59:49.797112 4840 ???:1] "http: TLS handshake error from 192.168.126.11:47624: no serving certificate available for the kubelet" Mar 11 08:59:49 crc kubenswrapper[4840]: I0311 08:59:49.815329 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pdrj8\" (UID: \"40a6df27-50b3-452a-940a-aab6b087cdb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-pdrj8" Mar 11 08:59:49 crc kubenswrapper[4840]: E0311 08:59:49.815674 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-11 08:59:50.315663334 +0000 UTC m=+188.981333149 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pdrj8" (UID: "40a6df27-50b3-452a-940a-aab6b087cdb2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:59:49 crc kubenswrapper[4840]: I0311 08:59:49.916117 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 11 08:59:49 crc kubenswrapper[4840]: E0311 08:59:49.916739 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-11 08:59:50.416716842 +0000 UTC m=+189.082386657 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:59:49 crc kubenswrapper[4840]: I0311 08:59:49.941845 4840 ???:1] "http: TLS handshake error from 192.168.126.11:47638: no serving certificate available for the kubelet" Mar 11 08:59:49 crc kubenswrapper[4840]: I0311 08:59:49.982827 4840 patch_prober.go:28] interesting pod/router-default-5444994796-wwr6r container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 11 08:59:49 crc kubenswrapper[4840]: [-]has-synced failed: reason withheld Mar 11 08:59:49 crc kubenswrapper[4840]: [+]process-running ok Mar 11 08:59:49 crc kubenswrapper[4840]: healthz check failed Mar 11 08:59:49 crc kubenswrapper[4840]: I0311 08:59:49.982894 4840 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wwr6r" podUID="4c602adf-1ed4-4779-a4f5-5ff24d9ee648" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 11 08:59:50 crc kubenswrapper[4840]: I0311 08:59:50.019645 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pdrj8\" (UID: \"40a6df27-50b3-452a-940a-aab6b087cdb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-pdrj8" Mar 11 08:59:50 crc kubenswrapper[4840]: E0311 08:59:50.019989 4840 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-11 08:59:50.519974373 +0000 UTC m=+189.185644188 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pdrj8" (UID: "40a6df27-50b3-452a-940a-aab6b087cdb2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:59:50 crc kubenswrapper[4840]: I0311 08:59:50.031124 4840 ???:1] "http: TLS handshake error from 192.168.126.11:47654: no serving certificate available for the kubelet" Mar 11 08:59:50 crc kubenswrapper[4840]: I0311 08:59:50.129380 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 11 08:59:50 crc kubenswrapper[4840]: E0311 08:59:50.129687 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-11 08:59:50.629668767 +0000 UTC m=+189.295338582 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:59:50 crc kubenswrapper[4840]: I0311 08:59:50.149971 4840 ???:1] "http: TLS handshake error from 192.168.126.11:47664: no serving certificate available for the kubelet" Mar 11 08:59:50 crc kubenswrapper[4840]: I0311 08:59:50.230956 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pdrj8\" (UID: \"40a6df27-50b3-452a-940a-aab6b087cdb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-pdrj8" Mar 11 08:59:50 crc kubenswrapper[4840]: E0311 08:59:50.231875 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-11 08:59:50.731858518 +0000 UTC m=+189.397528333 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pdrj8" (UID: "40a6df27-50b3-452a-940a-aab6b087cdb2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:59:50 crc kubenswrapper[4840]: I0311 08:59:50.266076 4840 ???:1] "http: TLS handshake error from 192.168.126.11:47678: no serving certificate available for the kubelet" Mar 11 08:59:50 crc kubenswrapper[4840]: I0311 08:59:50.332802 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 11 08:59:50 crc kubenswrapper[4840]: E0311 08:59:50.333009 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-11 08:59:50.832974117 +0000 UTC m=+189.498643932 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:59:50 crc kubenswrapper[4840]: I0311 08:59:50.333240 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pdrj8\" (UID: \"40a6df27-50b3-452a-940a-aab6b087cdb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-pdrj8" Mar 11 08:59:50 crc kubenswrapper[4840]: E0311 08:59:50.333681 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-11 08:59:50.833671897 +0000 UTC m=+189.499341712 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pdrj8" (UID: "40a6df27-50b3-452a-940a-aab6b087cdb2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:59:50 crc kubenswrapper[4840]: I0311 08:59:50.386426 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-2p584" event={"ID":"642bcc9a-edf9-4845-b6ad-0936618ec9b0","Type":"ContainerStarted","Data":"fbad94f27540d2e10f1540f23085310065de3922a7e72e55b19419d05f4054f8"} Mar 11 08:59:50 crc kubenswrapper[4840]: I0311 08:59:50.387090 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-2p584" Mar 11 08:59:50 crc kubenswrapper[4840]: I0311 08:59:50.388854 4840 patch_prober.go:28] interesting pod/console-operator-58897d9998-2p584 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.36:8443/readyz\": dial tcp 10.217.0.36:8443: connect: connection refused" start-of-body= Mar 11 08:59:50 crc kubenswrapper[4840]: I0311 08:59:50.388915 4840 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-2p584" podUID="642bcc9a-edf9-4845-b6ad-0936618ec9b0" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.36:8443/readyz\": dial tcp 10.217.0.36:8443: connect: connection refused" Mar 11 08:59:50 crc kubenswrapper[4840]: I0311 08:59:50.395691 4840 ???:1] "http: TLS handshake error from 192.168.126.11:47688: no serving certificate available for the kubelet" Mar 11 08:59:50 crc kubenswrapper[4840]: I0311 08:59:50.397942 4840 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-zm7f9" event={"ID":"6138a548-439a-4af8-ad4d-a6ea89f686b7","Type":"ContainerStarted","Data":"960089a786ad72c30284b48559ddbb941293ee988db8a763f9fce63444a5b4fe"} Mar 11 08:59:50 crc kubenswrapper[4840]: I0311 08:59:50.418754 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-2p584" podStartSLOduration=120.418733937 podStartE2EDuration="2m0.418733937s" podCreationTimestamp="2026-03-11 08:57:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:59:50.413493607 +0000 UTC m=+189.079163422" watchObservedRunningTime="2026-03-11 08:59:50.418733937 +0000 UTC m=+189.084403752" Mar 11 08:59:50 crc kubenswrapper[4840]: I0311 08:59:50.421038 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-gjgkz" event={"ID":"6e87442b-4d54-472c-bad6-e2086c95df50","Type":"ContainerStarted","Data":"795ff6b6eb8a29072859145db347de13d13f347efdc7781e912dfedc6bd70d54"} Mar 11 08:59:50 crc kubenswrapper[4840]: I0311 08:59:50.421085 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-gjgkz" event={"ID":"6e87442b-4d54-472c-bad6-e2086c95df50","Type":"ContainerStarted","Data":"17348271f95afa7d09071fd592fa03a362d125672c1c925916ecbbb5338ea1b4"} Mar 11 08:59:50 crc kubenswrapper[4840]: I0311 08:59:50.424146 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-xkq7s" event={"ID":"5dc5ef77-d18a-4474-a523-473f27166095","Type":"ContainerStarted","Data":"df68a8e392c8a43765a966fb9a267c99c8a89529df63a671b25f0f11998125d8"} Mar 11 08:59:50 crc kubenswrapper[4840]: I0311 08:59:50.435021 4840 generic.go:334] "Generic (PLEG): container finished" podID="f2878f5c-d2e1-4561-acae-3ea4ed26a5c0" 
containerID="119e08b018511bc1a3aa17e8fd5db9a5f42c2a6a55d2c0968fc1682b67df585c" exitCode=0 Mar 11 08:59:50 crc kubenswrapper[4840]: I0311 08:59:50.435124 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-d5xpl" event={"ID":"f2878f5c-d2e1-4561-acae-3ea4ed26a5c0","Type":"ContainerDied","Data":"119e08b018511bc1a3aa17e8fd5db9a5f42c2a6a55d2c0968fc1682b67df585c"} Mar 11 08:59:50 crc kubenswrapper[4840]: I0311 08:59:50.436116 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 11 08:59:50 crc kubenswrapper[4840]: E0311 08:59:50.437727 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-11 08:59:50.937696069 +0000 UTC m=+189.603365884 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:59:50 crc kubenswrapper[4840]: I0311 08:59:50.447230 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ktl5r" event={"ID":"01a98c93-30e9-4161-ae55-553a6107a67f","Type":"ContainerStarted","Data":"939edf7e5c34377cdb196b0714c6899b5de1d594052eb006e3d6d39fa01b654f"} Mar 11 08:59:50 crc kubenswrapper[4840]: I0311 08:59:50.451508 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-9k5xp" event={"ID":"78462b75-44da-4862-88c5-5cf892a91058","Type":"ContainerStarted","Data":"e72cb1e01dee98ebebd0f5469f0bad2d699b6fdcafe6ea7e917bb24949f09df4"} Mar 11 08:59:50 crc kubenswrapper[4840]: I0311 08:59:50.452735 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-9k5xp" Mar 11 08:59:50 crc kubenswrapper[4840]: I0311 08:59:50.456615 4840 patch_prober.go:28] interesting pod/downloads-7954f5f757-9k5xp container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body= Mar 11 08:59:50 crc kubenswrapper[4840]: I0311 08:59:50.456670 4840 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-9k5xp" podUID="78462b75-44da-4862-88c5-5cf892a91058" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: 
connect: connection refused" Mar 11 08:59:50 crc kubenswrapper[4840]: I0311 08:59:50.462722 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-zm7f9" podStartSLOduration=120.462705653 podStartE2EDuration="2m0.462705653s" podCreationTimestamp="2026-03-11 08:57:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:59:50.46187485 +0000 UTC m=+189.127544665" watchObservedRunningTime="2026-03-11 08:59:50.462705653 +0000 UTC m=+189.128375468" Mar 11 08:59:50 crc kubenswrapper[4840]: I0311 08:59:50.514751 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-j4zs8" event={"ID":"e7e0d2ae-153e-43e7-b23d-e60aaeabb85c","Type":"ContainerStarted","Data":"53c67103b7dd2609f6f2e8d42aec23d4a69402bd866e48cf8974a56bc5beb945"} Mar 11 08:59:50 crc kubenswrapper[4840]: I0311 08:59:50.517544 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-xkq7s" podStartSLOduration=120.51752438 podStartE2EDuration="2m0.51752438s" podCreationTimestamp="2026-03-11 08:57:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:59:50.513962138 +0000 UTC m=+189.179631953" watchObservedRunningTime="2026-03-11 08:59:50.51752438 +0000 UTC m=+189.183194195" Mar 11 08:59:50 crc kubenswrapper[4840]: I0311 08:59:50.532086 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-2gglt" event={"ID":"d62191a1-c0bf-4b73-8a5b-9a084143772c","Type":"ContainerStarted","Data":"b0495c5599bd392eaa4c4ccdb3305d443b5d99662073a04c92bb6e5479f4ef08"} Mar 11 08:59:50 crc kubenswrapper[4840]: I0311 08:59:50.532137 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-dns-operator/dns-operator-744455d44c-2gglt" event={"ID":"d62191a1-c0bf-4b73-8a5b-9a084143772c","Type":"ContainerStarted","Data":"c64b54b39bf60271d50e1254282e472c9dd6500789987be66068c75ed0fb7601"} Mar 11 08:59:50 crc kubenswrapper[4840]: I0311 08:59:50.541692 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pdrj8\" (UID: \"40a6df27-50b3-452a-940a-aab6b087cdb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-pdrj8" Mar 11 08:59:50 crc kubenswrapper[4840]: E0311 08:59:50.541987 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-11 08:59:51.041976239 +0000 UTC m=+189.707646054 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pdrj8" (UID: "40a6df27-50b3-452a-940a-aab6b087cdb2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:59:50 crc kubenswrapper[4840]: I0311 08:59:50.542729 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-gjgkz" podStartSLOduration=120.54271042 podStartE2EDuration="2m0.54271042s" podCreationTimestamp="2026-03-11 08:57:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:59:50.54026409 +0000 UTC m=+189.205933905" watchObservedRunningTime="2026-03-11 08:59:50.54271042 +0000 UTC m=+189.208380235" Mar 11 08:59:50 crc kubenswrapper[4840]: I0311 08:59:50.546301 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-2zpp2" event={"ID":"eccdf140-9410-4209-be5a-eb865704291a","Type":"ContainerStarted","Data":"27f2a260eb6790b8072fb96fad78babe15769f09dcdef12823ea3a57e5e9fa40"} Mar 11 08:59:50 crc kubenswrapper[4840]: I0311 08:59:50.562083 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-8rsqx" event={"ID":"c0687081-fb8b-4b87-945d-b107b2f86966","Type":"ContainerStarted","Data":"fcfa7a6066b84ee7bdc6314d7f3ed0f33d93805d5109e6fbfc75daa93bea1f1b"} Mar 11 08:59:50 crc kubenswrapper[4840]: I0311 08:59:50.563706 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-nh5h5" 
event={"ID":"b1d0c791-1d0c-4e11-91ce-bb352ce3fce1","Type":"ContainerStarted","Data":"5f5e44c91e5cf53d985356445754bd6f3813f0747b9851f4087cca63c768271e"} Mar 11 08:59:50 crc kubenswrapper[4840]: I0311 08:59:50.564443 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-nh5h5" Mar 11 08:59:50 crc kubenswrapper[4840]: I0311 08:59:50.565723 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-hgxgb" event={"ID":"b94670ce-123d-4562-b9ae-7a7fe898bff7","Type":"ContainerStarted","Data":"edaa5b93d8903b96acf4f090e32c1552391544b6cd1edcd56a20318dd5dd0bd2"} Mar 11 08:59:50 crc kubenswrapper[4840]: I0311 08:59:50.567181 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-cxgxb" event={"ID":"cdd63dfa-6787-4eca-937b-758358444ffc","Type":"ContainerStarted","Data":"0035aa726e2228f2cff01a5a91d7d85e29540a17c478b3de9741c2047c2d0ebc"} Mar 11 08:59:50 crc kubenswrapper[4840]: I0311 08:59:50.567591 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-cxgxb" Mar 11 08:59:50 crc kubenswrapper[4840]: I0311 08:59:50.568385 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lzd44" event={"ID":"7b145694-7add-48aa-9321-56703922b613","Type":"ContainerStarted","Data":"c537d7b99d81101c0962a538ed2bf490f755945cf5c5868ed33eaff91e86473d"} Mar 11 08:59:50 crc kubenswrapper[4840]: I0311 08:59:50.580989 4840 ???:1] "http: TLS handshake error from 192.168.126.11:47690: no serving certificate available for the kubelet" Mar 11 08:59:50 crc kubenswrapper[4840]: I0311 08:59:50.583899 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-mcz2j" 
event={"ID":"511c45a2-2ea8-4e84-9f1d-2901e39e7e36","Type":"ContainerStarted","Data":"8cdc84c4332411cb88f2dcc886904a3985fa15e947066e72d0024362c2f9b97d"} Mar 11 08:59:50 crc kubenswrapper[4840]: I0311 08:59:50.586521 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-td29c" event={"ID":"2b03405c-0fe1-4c7f-ad2a-dcd0db280109","Type":"ContainerStarted","Data":"d775be480d48c7842e249c6e4ab37dc71b61fa37a144303846c126cf0dfb38b3"} Mar 11 08:59:50 crc kubenswrapper[4840]: I0311 08:59:50.587073 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-td29c" Mar 11 08:59:50 crc kubenswrapper[4840]: I0311 08:59:50.589481 4840 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-td29c container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.11:6443/healthz\": dial tcp 10.217.0.11:6443: connect: connection refused" start-of-body= Mar 11 08:59:50 crc kubenswrapper[4840]: I0311 08:59:50.589528 4840 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-td29c" podUID="2b03405c-0fe1-4c7f-ad2a-dcd0db280109" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.11:6443/healthz\": dial tcp 10.217.0.11:6443: connect: connection refused" Mar 11 08:59:50 crc kubenswrapper[4840]: I0311 08:59:50.589900 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nwt6s" event={"ID":"a5e0e773-1dac-4bc9-ac4c-5d6db746fee9","Type":"ContainerStarted","Data":"80bb4f9648a2bd6b6767d541672928f67a911119fa9c238d8c30ef0fc8d88dae"} Mar 11 08:59:50 crc kubenswrapper[4840]: I0311 08:59:50.604231 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-dn5hs" 
event={"ID":"ed60e671-1c71-42fc-828d-51a4e85e3153","Type":"ContainerStarted","Data":"79797fec693fd3d17d83a85686dbac5d0729dd77d17097d04b1d5b2afe93ef2f"} Mar 11 08:59:50 crc kubenswrapper[4840]: I0311 08:59:50.604294 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-dn5hs" event={"ID":"ed60e671-1c71-42fc-828d-51a4e85e3153","Type":"ContainerStarted","Data":"06de138b4a663334b05b401a5c18f27c18e2a2763a16fa3a4f612809988c8aa9"} Mar 11 08:59:50 crc kubenswrapper[4840]: I0311 08:59:50.624973 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6b2bw" Mar 11 08:59:50 crc kubenswrapper[4840]: I0311 08:59:50.642064 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 11 08:59:50 crc kubenswrapper[4840]: E0311 08:59:50.644343 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-11 08:59:51.144324274 +0000 UTC m=+189.809994089 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:59:50 crc kubenswrapper[4840]: I0311 08:59:50.658943 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-9k5xp" podStartSLOduration=120.658921151 podStartE2EDuration="2m0.658921151s" podCreationTimestamp="2026-03-11 08:57:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:59:50.637027875 +0000 UTC m=+189.302697690" watchObservedRunningTime="2026-03-11 08:59:50.658921151 +0000 UTC m=+189.324590966" Mar 11 08:59:50 crc kubenswrapper[4840]: I0311 08:59:50.678876 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lzd44" Mar 11 08:59:50 crc kubenswrapper[4840]: I0311 08:59:50.747309 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pdrj8\" (UID: \"40a6df27-50b3-452a-940a-aab6b087cdb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-pdrj8" Mar 11 08:59:50 crc kubenswrapper[4840]: E0311 08:59:50.747825 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-03-11 08:59:51.247813711 +0000 UTC m=+189.913483516 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pdrj8" (UID: "40a6df27-50b3-452a-940a-aab6b087cdb2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:59:50 crc kubenswrapper[4840]: I0311 08:59:50.775964 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-nh5h5" podStartSLOduration=120.775931594 podStartE2EDuration="2m0.775931594s" podCreationTimestamp="2026-03-11 08:57:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:59:50.728819638 +0000 UTC m=+189.394489453" watchObservedRunningTime="2026-03-11 08:59:50.775931594 +0000 UTC m=+189.441601409" Mar 11 08:59:50 crc kubenswrapper[4840]: I0311 08:59:50.836526 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-j4zs8" podStartSLOduration=120.836505315 podStartE2EDuration="2m0.836505315s" podCreationTimestamp="2026-03-11 08:57:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:59:50.834299962 +0000 UTC m=+189.499969777" watchObservedRunningTime="2026-03-11 08:59:50.836505315 +0000 UTC m=+189.502175130" Mar 11 08:59:50 crc kubenswrapper[4840]: I0311 08:59:50.849421 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 11 08:59:50 crc kubenswrapper[4840]: E0311 08:59:50.849703 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-11 08:59:51.349678842 +0000 UTC m=+190.015348667 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:59:50 crc kubenswrapper[4840]: I0311 08:59:50.861011 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Mar 11 08:59:50 crc kubenswrapper[4840]: I0311 08:59:50.862443 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 11 08:59:50 crc kubenswrapper[4840]: I0311 08:59:50.875557 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Mar 11 08:59:50 crc kubenswrapper[4840]: I0311 08:59:50.875780 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Mar 11 08:59:50 crc kubenswrapper[4840]: I0311 08:59:50.889967 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nwt6s" podStartSLOduration=120.889945753 podStartE2EDuration="2m0.889945753s" podCreationTimestamp="2026-03-11 08:57:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:59:50.875337355 +0000 UTC m=+189.541007170" watchObservedRunningTime="2026-03-11 08:59:50.889945753 +0000 UTC m=+189.555615568" Mar 11 08:59:50 crc kubenswrapper[4840]: I0311 08:59:50.894363 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Mar 11 08:59:50 crc kubenswrapper[4840]: I0311 08:59:50.928369 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-mcz2j" podStartSLOduration=120.92835244 podStartE2EDuration="2m0.92835244s" podCreationTimestamp="2026-03-11 08:57:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:59:50.926049254 +0000 UTC m=+189.591719069" watchObservedRunningTime="2026-03-11 08:59:50.92835244 +0000 UTC m=+189.594022255" Mar 11 08:59:50 crc kubenswrapper[4840]: I0311 08:59:50.955509 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e88f7571-e0b3-4144-bec6-59c893a672d1-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"e88f7571-e0b3-4144-bec6-59c893a672d1\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 11 08:59:50 crc kubenswrapper[4840]: I0311 08:59:50.955576 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e88f7571-e0b3-4144-bec6-59c893a672d1-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"e88f7571-e0b3-4144-bec6-59c893a672d1\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 11 08:59:50 crc kubenswrapper[4840]: I0311 08:59:50.955609 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pdrj8\" (UID: \"40a6df27-50b3-452a-940a-aab6b087cdb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-pdrj8" Mar 11 08:59:50 crc kubenswrapper[4840]: E0311 08:59:50.955885 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-11 08:59:51.455873567 +0000 UTC m=+190.121543372 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pdrj8" (UID: "40a6df27-50b3-452a-940a-aab6b087cdb2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:59:50 crc kubenswrapper[4840]: I0311 08:59:50.981334 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-cxgxb" podStartSLOduration=9.981320844 podStartE2EDuration="9.981320844s" podCreationTimestamp="2026-03-11 08:59:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:59:50.980081838 +0000 UTC m=+189.645751653" watchObservedRunningTime="2026-03-11 08:59:50.981320844 +0000 UTC m=+189.646990659" Mar 11 08:59:51 crc kubenswrapper[4840]: I0311 08:59:51.012014 4840 patch_prober.go:28] interesting pod/router-default-5444994796-wwr6r container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 11 08:59:51 crc kubenswrapper[4840]: [-]has-synced failed: reason withheld Mar 11 08:59:51 crc kubenswrapper[4840]: [+]process-running ok Mar 11 08:59:51 crc kubenswrapper[4840]: healthz check failed Mar 11 08:59:51 crc kubenswrapper[4840]: I0311 08:59:51.012105 4840 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wwr6r" podUID="4c602adf-1ed4-4779-a4f5-5ff24d9ee648" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 11 08:59:51 crc kubenswrapper[4840]: I0311 08:59:51.058211 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 11 08:59:51 crc kubenswrapper[4840]: I0311 08:59:51.058366 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e88f7571-e0b3-4144-bec6-59c893a672d1-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"e88f7571-e0b3-4144-bec6-59c893a672d1\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 11 08:59:51 crc kubenswrapper[4840]: I0311 08:59:51.058422 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e88f7571-e0b3-4144-bec6-59c893a672d1-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"e88f7571-e0b3-4144-bec6-59c893a672d1\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 11 08:59:51 crc kubenswrapper[4840]: E0311 08:59:51.058730 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-11 08:59:51.558671854 +0000 UTC m=+190.224341719 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:59:51 crc kubenswrapper[4840]: I0311 08:59:51.058817 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e88f7571-e0b3-4144-bec6-59c893a672d1-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"e88f7571-e0b3-4144-bec6-59c893a672d1\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 11 08:59:51 crc kubenswrapper[4840]: I0311 08:59:51.134787 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-td29c" podStartSLOduration=121.134756428 podStartE2EDuration="2m1.134756428s" podCreationTimestamp="2026-03-11 08:57:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:59:51.091019559 +0000 UTC m=+189.756689364" watchObservedRunningTime="2026-03-11 08:59:51.134756428 +0000 UTC m=+189.800426253" Mar 11 08:59:51 crc kubenswrapper[4840]: I0311 08:59:51.137801 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-2zpp2" podStartSLOduration=10.137775555 podStartE2EDuration="10.137775555s" podCreationTimestamp="2026-03-11 08:59:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:59:51.135769927 +0000 UTC m=+189.801439742" watchObservedRunningTime="2026-03-11 08:59:51.137775555 +0000 UTC 
m=+189.803445370" Mar 11 08:59:51 crc kubenswrapper[4840]: I0311 08:59:51.145460 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e88f7571-e0b3-4144-bec6-59c893a672d1-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"e88f7571-e0b3-4144-bec6-59c893a672d1\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 11 08:59:51 crc kubenswrapper[4840]: I0311 08:59:51.165865 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pdrj8\" (UID: \"40a6df27-50b3-452a-940a-aab6b087cdb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-pdrj8" Mar 11 08:59:51 crc kubenswrapper[4840]: E0311 08:59:51.166415 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-11 08:59:51.666396123 +0000 UTC m=+190.332065938 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pdrj8" (UID: "40a6df27-50b3-452a-940a-aab6b087cdb2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:59:51 crc kubenswrapper[4840]: I0311 08:59:51.219892 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 11 08:59:51 crc kubenswrapper[4840]: I0311 08:59:51.240389 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-dn5hs" podStartSLOduration=121.240362656 podStartE2EDuration="2m1.240362656s" podCreationTimestamp="2026-03-11 08:57:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:59:51.187914167 +0000 UTC m=+189.853583982" watchObservedRunningTime="2026-03-11 08:59:51.240362656 +0000 UTC m=+189.906032471" Mar 11 08:59:51 crc kubenswrapper[4840]: I0311 08:59:51.271147 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 11 08:59:51 crc kubenswrapper[4840]: E0311 08:59:51.271810 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-11 08:59:51.771789034 +0000 UTC m=+190.437458849 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:59:51 crc kubenswrapper[4840]: I0311 08:59:51.316884 4840 ???:1] "http: TLS handshake error from 192.168.126.11:47706: no serving certificate available for the kubelet" Mar 11 08:59:51 crc kubenswrapper[4840]: I0311 08:59:51.342340 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-2gglt" podStartSLOduration=121.34232001 podStartE2EDuration="2m1.34232001s" podCreationTimestamp="2026-03-11 08:57:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:59:51.239814151 +0000 UTC m=+189.905483966" watchObservedRunningTime="2026-03-11 08:59:51.34232001 +0000 UTC m=+190.007989825" Mar 11 08:59:51 crc kubenswrapper[4840]: I0311 08:59:51.373289 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pdrj8\" (UID: \"40a6df27-50b3-452a-940a-aab6b087cdb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-pdrj8" Mar 11 08:59:51 crc kubenswrapper[4840]: E0311 08:59:51.373678 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-03-11 08:59:51.873664586 +0000 UTC m=+190.539334401 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pdrj8" (UID: "40a6df27-50b3-452a-940a-aab6b087cdb2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:59:51 crc kubenswrapper[4840]: I0311 08:59:51.474084 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 11 08:59:51 crc kubenswrapper[4840]: E0311 08:59:51.474375 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-11 08:59:51.974336442 +0000 UTC m=+190.640006257 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:59:51 crc kubenswrapper[4840]: I0311 08:59:51.474697 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pdrj8\" (UID: \"40a6df27-50b3-452a-940a-aab6b087cdb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-pdrj8" Mar 11 08:59:51 crc kubenswrapper[4840]: E0311 08:59:51.475077 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-11 08:59:51.975060643 +0000 UTC m=+190.640730458 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pdrj8" (UID: "40a6df27-50b3-452a-940a-aab6b087cdb2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:59:51 crc kubenswrapper[4840]: I0311 08:59:51.580853 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 11 08:59:51 crc kubenswrapper[4840]: E0311 08:59:51.581554 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-11 08:59:52.081536006 +0000 UTC m=+190.747205821 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:59:51 crc kubenswrapper[4840]: I0311 08:59:51.620790 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-d5xpl" event={"ID":"f2878f5c-d2e1-4561-acae-3ea4ed26a5c0","Type":"ContainerStarted","Data":"6af9abacd7a5ecbb9a828250ecb0e2ba4638dde8324ff02b8f18738b84a1928e"} Mar 11 08:59:51 crc kubenswrapper[4840]: I0311 08:59:51.641358 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-j4zs8" event={"ID":"e7e0d2ae-153e-43e7-b23d-e60aaeabb85c","Type":"ContainerStarted","Data":"89884cefbd3e777d546f115e55003fb598efb5a2d8dba4af607b7d025f154117"} Mar 11 08:59:51 crc kubenswrapper[4840]: I0311 08:59:51.663350 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-d5xpl" podStartSLOduration=121.663330803 podStartE2EDuration="2m1.663330803s" podCreationTimestamp="2026-03-11 08:57:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:59:51.66216849 +0000 UTC m=+190.327838305" watchObservedRunningTime="2026-03-11 08:59:51.663330803 +0000 UTC m=+190.329000618" Mar 11 08:59:51 crc kubenswrapper[4840]: I0311 08:59:51.663412 4840 generic.go:334] "Generic (PLEG): container finished" podID="a821fe36-3bdd-4b59-9dd4-004985404023" containerID="51416be935c7d35e2e945c8d54b076b719c0e25edf4fc27069b07ab19ca93b38" exitCode=0 Mar 11 08:59:51 crc kubenswrapper[4840]: I0311 
08:59:51.664289 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29553645-gw77z" event={"ID":"a821fe36-3bdd-4b59-9dd4-004985404023","Type":"ContainerDied","Data":"51416be935c7d35e2e945c8d54b076b719c0e25edf4fc27069b07ab19ca93b38"} Mar 11 08:59:51 crc kubenswrapper[4840]: I0311 08:59:51.671267 4840 patch_prober.go:28] interesting pod/downloads-7954f5f757-9k5xp container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body= Mar 11 08:59:51 crc kubenswrapper[4840]: I0311 08:59:51.671302 4840 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-9k5xp" podUID="78462b75-44da-4862-88c5-5cf892a91058" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" Mar 11 08:59:51 crc kubenswrapper[4840]: I0311 08:59:51.712798 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pdrj8\" (UID: \"40a6df27-50b3-452a-940a-aab6b087cdb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-pdrj8" Mar 11 08:59:51 crc kubenswrapper[4840]: E0311 08:59:51.730260 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-11 08:59:52.230237875 +0000 UTC m=+190.895907690 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pdrj8" (UID: "40a6df27-50b3-452a-940a-aab6b087cdb2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:59:51 crc kubenswrapper[4840]: I0311 08:59:51.807814 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Mar 11 08:59:51 crc kubenswrapper[4840]: I0311 08:59:51.814548 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 11 08:59:51 crc kubenswrapper[4840]: E0311 08:59:51.817562 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-11 08:59:52.317528 +0000 UTC m=+190.983197815 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:59:51 crc kubenswrapper[4840]: I0311 08:59:51.920357 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pdrj8\" (UID: \"40a6df27-50b3-452a-940a-aab6b087cdb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-pdrj8" Mar 11 08:59:51 crc kubenswrapper[4840]: E0311 08:59:51.921022 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-11 08:59:52.420993806 +0000 UTC m=+191.086663801 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pdrj8" (UID: "40a6df27-50b3-452a-940a-aab6b087cdb2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:59:51 crc kubenswrapper[4840]: I0311 08:59:51.970540 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-td29c" Mar 11 08:59:51 crc kubenswrapper[4840]: I0311 08:59:51.994685 4840 patch_prober.go:28] interesting pod/router-default-5444994796-wwr6r container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 11 08:59:51 crc kubenswrapper[4840]: [-]has-synced failed: reason withheld Mar 11 08:59:51 crc kubenswrapper[4840]: [+]process-running ok Mar 11 08:59:51 crc kubenswrapper[4840]: healthz check failed Mar 11 08:59:51 crc kubenswrapper[4840]: I0311 08:59:51.994761 4840 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wwr6r" podUID="4c602adf-1ed4-4779-a4f5-5ff24d9ee648" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 11 08:59:52 crc kubenswrapper[4840]: I0311 08:59:52.022243 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 11 08:59:52 crc kubenswrapper[4840]: E0311 08:59:52.022808 4840 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-11 08:59:52.522784905 +0000 UTC m=+191.188454720 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:59:52 crc kubenswrapper[4840]: I0311 08:59:52.077362 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6wxnb"] Mar 11 08:59:52 crc kubenswrapper[4840]: I0311 08:59:52.078566 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6wxnb" Mar 11 08:59:52 crc kubenswrapper[4840]: I0311 08:59:52.082445 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Mar 11 08:59:52 crc kubenswrapper[4840]: I0311 08:59:52.087623 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6wxnb"] Mar 11 08:59:52 crc kubenswrapper[4840]: I0311 08:59:52.124121 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/edba0e9a-0f5d-4aea-bc9c-7eff83b36a8f-utilities\") pod \"certified-operators-6wxnb\" (UID: \"edba0e9a-0f5d-4aea-bc9c-7eff83b36a8f\") " pod="openshift-marketplace/certified-operators-6wxnb" Mar 11 08:59:52 crc kubenswrapper[4840]: I0311 08:59:52.124220 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cv94k\" (UniqueName: \"kubernetes.io/projected/edba0e9a-0f5d-4aea-bc9c-7eff83b36a8f-kube-api-access-cv94k\") pod \"certified-operators-6wxnb\" (UID: \"edba0e9a-0f5d-4aea-bc9c-7eff83b36a8f\") " pod="openshift-marketplace/certified-operators-6wxnb" Mar 11 08:59:52 crc kubenswrapper[4840]: I0311 08:59:52.124245 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pdrj8\" (UID: \"40a6df27-50b3-452a-940a-aab6b087cdb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-pdrj8" Mar 11 08:59:52 crc kubenswrapper[4840]: I0311 08:59:52.124274 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/edba0e9a-0f5d-4aea-bc9c-7eff83b36a8f-catalog-content\") pod \"certified-operators-6wxnb\" (UID: \"edba0e9a-0f5d-4aea-bc9c-7eff83b36a8f\") " pod="openshift-marketplace/certified-operators-6wxnb" Mar 11 08:59:52 crc kubenswrapper[4840]: E0311 08:59:52.124688 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-11 08:59:52.624669807 +0000 UTC m=+191.290339622 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pdrj8" (UID: "40a6df27-50b3-452a-940a-aab6b087cdb2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:59:52 crc kubenswrapper[4840]: I0311 08:59:52.225553 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 11 08:59:52 crc kubenswrapper[4840]: I0311 08:59:52.225826 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/edba0e9a-0f5d-4aea-bc9c-7eff83b36a8f-utilities\") pod \"certified-operators-6wxnb\" (UID: \"edba0e9a-0f5d-4aea-bc9c-7eff83b36a8f\") " pod="openshift-marketplace/certified-operators-6wxnb" Mar 11 08:59:52 crc kubenswrapper[4840]: I0311 08:59:52.225871 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cv94k\" 
(UniqueName: \"kubernetes.io/projected/edba0e9a-0f5d-4aea-bc9c-7eff83b36a8f-kube-api-access-cv94k\") pod \"certified-operators-6wxnb\" (UID: \"edba0e9a-0f5d-4aea-bc9c-7eff83b36a8f\") " pod="openshift-marketplace/certified-operators-6wxnb" Mar 11 08:59:52 crc kubenswrapper[4840]: I0311 08:59:52.225907 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/edba0e9a-0f5d-4aea-bc9c-7eff83b36a8f-catalog-content\") pod \"certified-operators-6wxnb\" (UID: \"edba0e9a-0f5d-4aea-bc9c-7eff83b36a8f\") " pod="openshift-marketplace/certified-operators-6wxnb" Mar 11 08:59:52 crc kubenswrapper[4840]: I0311 08:59:52.226338 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/edba0e9a-0f5d-4aea-bc9c-7eff83b36a8f-catalog-content\") pod \"certified-operators-6wxnb\" (UID: \"edba0e9a-0f5d-4aea-bc9c-7eff83b36a8f\") " pod="openshift-marketplace/certified-operators-6wxnb" Mar 11 08:59:52 crc kubenswrapper[4840]: E0311 08:59:52.226417 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-11 08:59:52.726397174 +0000 UTC m=+191.392066989 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:59:52 crc kubenswrapper[4840]: I0311 08:59:52.226638 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/edba0e9a-0f5d-4aea-bc9c-7eff83b36a8f-utilities\") pod \"certified-operators-6wxnb\" (UID: \"edba0e9a-0f5d-4aea-bc9c-7eff83b36a8f\") " pod="openshift-marketplace/certified-operators-6wxnb" Mar 11 08:59:52 crc kubenswrapper[4840]: I0311 08:59:52.254504 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-h578g"] Mar 11 08:59:52 crc kubenswrapper[4840]: I0311 08:59:52.256643 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-h578g" Mar 11 08:59:52 crc kubenswrapper[4840]: I0311 08:59:52.260849 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Mar 11 08:59:52 crc kubenswrapper[4840]: I0311 08:59:52.280891 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cv94k\" (UniqueName: \"kubernetes.io/projected/edba0e9a-0f5d-4aea-bc9c-7eff83b36a8f-kube-api-access-cv94k\") pod \"certified-operators-6wxnb\" (UID: \"edba0e9a-0f5d-4aea-bc9c-7eff83b36a8f\") " pod="openshift-marketplace/certified-operators-6wxnb" Mar 11 08:59:52 crc kubenswrapper[4840]: I0311 08:59:52.289337 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-h578g"] Mar 11 08:59:52 crc kubenswrapper[4840]: I0311 08:59:52.327387 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b17f6fce-66c6-45f7-8e1b-be9ffe4f16a6-utilities\") pod \"community-operators-h578g\" (UID: \"b17f6fce-66c6-45f7-8e1b-be9ffe4f16a6\") " pod="openshift-marketplace/community-operators-h578g" Mar 11 08:59:52 crc kubenswrapper[4840]: I0311 08:59:52.327497 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pdrj8\" (UID: \"40a6df27-50b3-452a-940a-aab6b087cdb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-pdrj8" Mar 11 08:59:52 crc kubenswrapper[4840]: I0311 08:59:52.327550 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b17f6fce-66c6-45f7-8e1b-be9ffe4f16a6-catalog-content\") pod 
\"community-operators-h578g\" (UID: \"b17f6fce-66c6-45f7-8e1b-be9ffe4f16a6\") " pod="openshift-marketplace/community-operators-h578g" Mar 11 08:59:52 crc kubenswrapper[4840]: I0311 08:59:52.327571 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5f94\" (UniqueName: \"kubernetes.io/projected/b17f6fce-66c6-45f7-8e1b-be9ffe4f16a6-kube-api-access-v5f94\") pod \"community-operators-h578g\" (UID: \"b17f6fce-66c6-45f7-8e1b-be9ffe4f16a6\") " pod="openshift-marketplace/community-operators-h578g" Mar 11 08:59:52 crc kubenswrapper[4840]: E0311 08:59:52.327982 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-11 08:59:52.827965556 +0000 UTC m=+191.493635371 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pdrj8" (UID: "40a6df27-50b3-452a-940a-aab6b087cdb2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:59:52 crc kubenswrapper[4840]: I0311 08:59:52.403917 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-2p584" Mar 11 08:59:52 crc kubenswrapper[4840]: I0311 08:59:52.404763 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6wxnb" Mar 11 08:59:52 crc kubenswrapper[4840]: I0311 08:59:52.430270 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 11 08:59:52 crc kubenswrapper[4840]: I0311 08:59:52.430549 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b17f6fce-66c6-45f7-8e1b-be9ffe4f16a6-catalog-content\") pod \"community-operators-h578g\" (UID: \"b17f6fce-66c6-45f7-8e1b-be9ffe4f16a6\") " pod="openshift-marketplace/community-operators-h578g" Mar 11 08:59:52 crc kubenswrapper[4840]: I0311 08:59:52.430588 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5f94\" (UniqueName: \"kubernetes.io/projected/b17f6fce-66c6-45f7-8e1b-be9ffe4f16a6-kube-api-access-v5f94\") pod \"community-operators-h578g\" (UID: \"b17f6fce-66c6-45f7-8e1b-be9ffe4f16a6\") " pod="openshift-marketplace/community-operators-h578g" Mar 11 08:59:52 crc kubenswrapper[4840]: E0311 08:59:52.430711 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-11 08:59:52.930674141 +0000 UTC m=+191.596344136 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:59:52 crc kubenswrapper[4840]: I0311 08:59:52.430895 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b17f6fce-66c6-45f7-8e1b-be9ffe4f16a6-utilities\") pod \"community-operators-h578g\" (UID: \"b17f6fce-66c6-45f7-8e1b-be9ffe4f16a6\") " pod="openshift-marketplace/community-operators-h578g" Mar 11 08:59:52 crc kubenswrapper[4840]: I0311 08:59:52.431112 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pdrj8\" (UID: \"40a6df27-50b3-452a-940a-aab6b087cdb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-pdrj8" Mar 11 08:59:52 crc kubenswrapper[4840]: I0311 08:59:52.431328 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b17f6fce-66c6-45f7-8e1b-be9ffe4f16a6-catalog-content\") pod \"community-operators-h578g\" (UID: \"b17f6fce-66c6-45f7-8e1b-be9ffe4f16a6\") " pod="openshift-marketplace/community-operators-h578g" Mar 11 08:59:52 crc kubenswrapper[4840]: E0311 08:59:52.431577 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-03-11 08:59:52.931566417 +0000 UTC m=+191.597236232 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pdrj8" (UID: "40a6df27-50b3-452a-940a-aab6b087cdb2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:59:52 crc kubenswrapper[4840]: I0311 08:59:52.431607 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b17f6fce-66c6-45f7-8e1b-be9ffe4f16a6-utilities\") pod \"community-operators-h578g\" (UID: \"b17f6fce-66c6-45f7-8e1b-be9ffe4f16a6\") " pod="openshift-marketplace/community-operators-h578g" Mar 11 08:59:52 crc kubenswrapper[4840]: I0311 08:59:52.470569 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7w69s"] Mar 11 08:59:52 crc kubenswrapper[4840]: I0311 08:59:52.471602 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7w69s" Mar 11 08:59:52 crc kubenswrapper[4840]: I0311 08:59:52.489373 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5f94\" (UniqueName: \"kubernetes.io/projected/b17f6fce-66c6-45f7-8e1b-be9ffe4f16a6-kube-api-access-v5f94\") pod \"community-operators-h578g\" (UID: \"b17f6fce-66c6-45f7-8e1b-be9ffe4f16a6\") " pod="openshift-marketplace/community-operators-h578g" Mar 11 08:59:52 crc kubenswrapper[4840]: I0311 08:59:52.492599 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7w69s"] Mar 11 08:59:52 crc kubenswrapper[4840]: I0311 08:59:52.534657 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 11 08:59:52 crc kubenswrapper[4840]: I0311 08:59:52.534930 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6b9fv\" (UniqueName: \"kubernetes.io/projected/996d3b36-77c7-4e8f-a472-ac032aabd836-kube-api-access-6b9fv\") pod \"certified-operators-7w69s\" (UID: \"996d3b36-77c7-4e8f-a472-ac032aabd836\") " pod="openshift-marketplace/certified-operators-7w69s" Mar 11 08:59:52 crc kubenswrapper[4840]: I0311 08:59:52.534978 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/996d3b36-77c7-4e8f-a472-ac032aabd836-catalog-content\") pod \"certified-operators-7w69s\" (UID: \"996d3b36-77c7-4e8f-a472-ac032aabd836\") " pod="openshift-marketplace/certified-operators-7w69s" Mar 11 08:59:52 crc kubenswrapper[4840]: I0311 08:59:52.535015 4840 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/996d3b36-77c7-4e8f-a472-ac032aabd836-utilities\") pod \"certified-operators-7w69s\" (UID: \"996d3b36-77c7-4e8f-a472-ac032aabd836\") " pod="openshift-marketplace/certified-operators-7w69s" Mar 11 08:59:52 crc kubenswrapper[4840]: E0311 08:59:52.535126 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-11 08:59:53.035109906 +0000 UTC m=+191.700779721 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:59:52 crc kubenswrapper[4840]: I0311 08:59:52.617939 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-h578g" Mar 11 08:59:52 crc kubenswrapper[4840]: I0311 08:59:52.637357 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/996d3b36-77c7-4e8f-a472-ac032aabd836-catalog-content\") pod \"certified-operators-7w69s\" (UID: \"996d3b36-77c7-4e8f-a472-ac032aabd836\") " pod="openshift-marketplace/certified-operators-7w69s" Mar 11 08:59:52 crc kubenswrapper[4840]: I0311 08:59:52.637416 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/996d3b36-77c7-4e8f-a472-ac032aabd836-utilities\") pod \"certified-operators-7w69s\" (UID: \"996d3b36-77c7-4e8f-a472-ac032aabd836\") " pod="openshift-marketplace/certified-operators-7w69s" Mar 11 08:59:52 crc kubenswrapper[4840]: I0311 08:59:52.637929 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6b9fv\" (UniqueName: \"kubernetes.io/projected/996d3b36-77c7-4e8f-a472-ac032aabd836-kube-api-access-6b9fv\") pod \"certified-operators-7w69s\" (UID: \"996d3b36-77c7-4e8f-a472-ac032aabd836\") " pod="openshift-marketplace/certified-operators-7w69s" Mar 11 08:59:52 crc kubenswrapper[4840]: I0311 08:59:52.637958 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pdrj8\" (UID: \"40a6df27-50b3-452a-940a-aab6b087cdb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-pdrj8" Mar 11 08:59:52 crc kubenswrapper[4840]: E0311 08:59:52.639513 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-03-11 08:59:53.139499919 +0000 UTC m=+191.805169724 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pdrj8" (UID: "40a6df27-50b3-452a-940a-aab6b087cdb2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:59:52 crc kubenswrapper[4840]: I0311 08:59:52.643698 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/996d3b36-77c7-4e8f-a472-ac032aabd836-utilities\") pod \"certified-operators-7w69s\" (UID: \"996d3b36-77c7-4e8f-a472-ac032aabd836\") " pod="openshift-marketplace/certified-operators-7w69s" Mar 11 08:59:52 crc kubenswrapper[4840]: I0311 08:59:52.643749 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/996d3b36-77c7-4e8f-a472-ac032aabd836-catalog-content\") pod \"certified-operators-7w69s\" (UID: \"996d3b36-77c7-4e8f-a472-ac032aabd836\") " pod="openshift-marketplace/certified-operators-7w69s" Mar 11 08:59:52 crc kubenswrapper[4840]: I0311 08:59:52.670658 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-chbkv"] Mar 11 08:59:52 crc kubenswrapper[4840]: I0311 08:59:52.673795 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-chbkv" Mar 11 08:59:52 crc kubenswrapper[4840]: I0311 08:59:52.675584 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-chbkv"] Mar 11 08:59:52 crc kubenswrapper[4840]: I0311 08:59:52.687779 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6b9fv\" (UniqueName: \"kubernetes.io/projected/996d3b36-77c7-4e8f-a472-ac032aabd836-kube-api-access-6b9fv\") pod \"certified-operators-7w69s\" (UID: \"996d3b36-77c7-4e8f-a472-ac032aabd836\") " pod="openshift-marketplace/certified-operators-7w69s" Mar 11 08:59:52 crc kubenswrapper[4840]: I0311 08:59:52.729022 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-8rsqx" event={"ID":"c0687081-fb8b-4b87-945d-b107b2f86966","Type":"ContainerStarted","Data":"d341b4ceeba06be07f0bf59f83a92b3906bda595843ebbe4e0a0a84ff2a57696"} Mar 11 08:59:52 crc kubenswrapper[4840]: I0311 08:59:52.729064 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-8rsqx" event={"ID":"c0687081-fb8b-4b87-945d-b107b2f86966","Type":"ContainerStarted","Data":"60a70ebf22237472261fbd03e8ba82cc98cf0efbad3dab575bbc8ca4ef4fdb35"} Mar 11 08:59:52 crc kubenswrapper[4840]: I0311 08:59:52.775666 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 11 08:59:52 crc kubenswrapper[4840]: E0311 08:59:52.780701 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-03-11 08:59:53.280628102 +0000 UTC m=+191.946297927 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:59:52 crc kubenswrapper[4840]: I0311 08:59:52.792711 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ecd2f9dc-1f5f-4787-b66f-8caaaeb9dc9f-utilities\") pod \"community-operators-chbkv\" (UID: \"ecd2f9dc-1f5f-4787-b66f-8caaaeb9dc9f\") " pod="openshift-marketplace/community-operators-chbkv" Mar 11 08:59:52 crc kubenswrapper[4840]: I0311 08:59:52.793006 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pdrj8\" (UID: \"40a6df27-50b3-452a-940a-aab6b087cdb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-pdrj8" Mar 11 08:59:52 crc kubenswrapper[4840]: I0311 08:59:52.793174 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jcf7\" (UniqueName: \"kubernetes.io/projected/ecd2f9dc-1f5f-4787-b66f-8caaaeb9dc9f-kube-api-access-6jcf7\") pod \"community-operators-chbkv\" (UID: \"ecd2f9dc-1f5f-4787-b66f-8caaaeb9dc9f\") " pod="openshift-marketplace/community-operators-chbkv" Mar 11 08:59:52 crc kubenswrapper[4840]: I0311 08:59:52.793248 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ecd2f9dc-1f5f-4787-b66f-8caaaeb9dc9f-catalog-content\") pod \"community-operators-chbkv\" (UID: \"ecd2f9dc-1f5f-4787-b66f-8caaaeb9dc9f\") " pod="openshift-marketplace/community-operators-chbkv" Mar 11 08:59:52 crc kubenswrapper[4840]: E0311 08:59:52.794102 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-11 08:59:53.294080846 +0000 UTC m=+191.959750661 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pdrj8" (UID: "40a6df27-50b3-452a-940a-aab6b087cdb2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:59:52 crc kubenswrapper[4840]: I0311 08:59:52.795189 4840 ???:1] "http: TLS handshake error from 192.168.126.11:42508: no serving certificate available for the kubelet" Mar 11 08:59:52 crc kubenswrapper[4840]: I0311 08:59:52.846662 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7w69s" Mar 11 08:59:52 crc kubenswrapper[4840]: I0311 08:59:52.848021 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"e88f7571-e0b3-4144-bec6-59c893a672d1","Type":"ContainerStarted","Data":"de7d799f0da17d3979b1e89962fea23e890b64a99628cdef1f0cb18ddfc6de72"} Mar 11 08:59:52 crc kubenswrapper[4840]: I0311 08:59:52.848060 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"e88f7571-e0b3-4144-bec6-59c893a672d1","Type":"ContainerStarted","Data":"010eaef79796145c2131f501d0ec3ab6fe59b2918524c8822ae2b581c3bc8be1"} Mar 11 08:59:52 crc kubenswrapper[4840]: I0311 08:59:52.862116 4840 patch_prober.go:28] interesting pod/downloads-7954f5f757-9k5xp container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body= Mar 11 08:59:52 crc kubenswrapper[4840]: I0311 08:59:52.862194 4840 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-9k5xp" podUID="78462b75-44da-4862-88c5-5cf892a91058" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" Mar 11 08:59:52 crc kubenswrapper[4840]: I0311 08:59:52.882258 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-nh5h5" Mar 11 08:59:52 crc kubenswrapper[4840]: I0311 08:59:52.894378 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" 
(UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 11 08:59:52 crc kubenswrapper[4840]: I0311 08:59:52.895429 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ecd2f9dc-1f5f-4787-b66f-8caaaeb9dc9f-catalog-content\") pod \"community-operators-chbkv\" (UID: \"ecd2f9dc-1f5f-4787-b66f-8caaaeb9dc9f\") " pod="openshift-marketplace/community-operators-chbkv" Mar 11 08:59:52 crc kubenswrapper[4840]: I0311 08:59:52.899121 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ecd2f9dc-1f5f-4787-b66f-8caaaeb9dc9f-catalog-content\") pod \"community-operators-chbkv\" (UID: \"ecd2f9dc-1f5f-4787-b66f-8caaaeb9dc9f\") " pod="openshift-marketplace/community-operators-chbkv" Mar 11 08:59:52 crc kubenswrapper[4840]: E0311 08:59:52.901849 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-11 08:59:53.401801464 +0000 UTC m=+192.067471279 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:59:52 crc kubenswrapper[4840]: I0311 08:59:52.909780 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ecd2f9dc-1f5f-4787-b66f-8caaaeb9dc9f-utilities\") pod \"community-operators-chbkv\" (UID: \"ecd2f9dc-1f5f-4787-b66f-8caaaeb9dc9f\") " pod="openshift-marketplace/community-operators-chbkv" Mar 11 08:59:52 crc kubenswrapper[4840]: I0311 08:59:52.910584 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pdrj8\" (UID: \"40a6df27-50b3-452a-940a-aab6b087cdb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-pdrj8" Mar 11 08:59:52 crc kubenswrapper[4840]: I0311 08:59:52.910981 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jcf7\" (UniqueName: \"kubernetes.io/projected/ecd2f9dc-1f5f-4787-b66f-8caaaeb9dc9f-kube-api-access-6jcf7\") pod \"community-operators-chbkv\" (UID: \"ecd2f9dc-1f5f-4787-b66f-8caaaeb9dc9f\") " pod="openshift-marketplace/community-operators-chbkv" Mar 11 08:59:52 crc kubenswrapper[4840]: I0311 08:59:52.911077 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ecd2f9dc-1f5f-4787-b66f-8caaaeb9dc9f-utilities\") pod \"community-operators-chbkv\" (UID: 
\"ecd2f9dc-1f5f-4787-b66f-8caaaeb9dc9f\") " pod="openshift-marketplace/community-operators-chbkv" Mar 11 08:59:52 crc kubenswrapper[4840]: E0311 08:59:52.911454 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-11 08:59:53.411426849 +0000 UTC m=+192.077096664 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pdrj8" (UID: "40a6df27-50b3-452a-940a-aab6b087cdb2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:59:52 crc kubenswrapper[4840]: I0311 08:59:52.962445 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=2.962423707 podStartE2EDuration="2.962423707s" podCreationTimestamp="2026-03-11 08:59:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:59:52.952059011 +0000 UTC m=+191.617728826" watchObservedRunningTime="2026-03-11 08:59:52.962423707 +0000 UTC m=+191.628093522" Mar 11 08:59:53 crc kubenswrapper[4840]: I0311 08:59:53.005971 4840 patch_prober.go:28] interesting pod/router-default-5444994796-wwr6r container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 11 08:59:53 crc kubenswrapper[4840]: [-]has-synced failed: reason withheld Mar 11 08:59:53 crc kubenswrapper[4840]: [+]process-running ok Mar 11 08:59:53 crc kubenswrapper[4840]: 
healthz check failed Mar 11 08:59:53 crc kubenswrapper[4840]: I0311 08:59:53.006055 4840 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wwr6r" podUID="4c602adf-1ed4-4779-a4f5-5ff24d9ee648" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 11 08:59:53 crc kubenswrapper[4840]: I0311 08:59:53.018863 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 11 08:59:53 crc kubenswrapper[4840]: E0311 08:59:53.054087 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-11 08:59:53.554043295 +0000 UTC m=+192.219713110 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:59:53 crc kubenswrapper[4840]: I0311 08:59:53.055169 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pdrj8\" (UID: \"40a6df27-50b3-452a-940a-aab6b087cdb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-pdrj8" Mar 11 08:59:53 crc kubenswrapper[4840]: E0311 08:59:53.055860 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-11 08:59:53.555833206 +0000 UTC m=+192.221503221 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pdrj8" (UID: "40a6df27-50b3-452a-940a-aab6b087cdb2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:59:53 crc kubenswrapper[4840]: I0311 08:59:53.073332 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jcf7\" (UniqueName: \"kubernetes.io/projected/ecd2f9dc-1f5f-4787-b66f-8caaaeb9dc9f-kube-api-access-6jcf7\") pod \"community-operators-chbkv\" (UID: \"ecd2f9dc-1f5f-4787-b66f-8caaaeb9dc9f\") " pod="openshift-marketplace/community-operators-chbkv" Mar 11 08:59:53 crc kubenswrapper[4840]: I0311 08:59:53.096638 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6wxnb"] Mar 11 08:59:53 crc kubenswrapper[4840]: I0311 08:59:53.158716 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 11 08:59:53 crc kubenswrapper[4840]: E0311 08:59:53.159114 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-11 08:59:53.659094897 +0000 UTC m=+192.324764712 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:59:53 crc kubenswrapper[4840]: I0311 08:59:53.206426 4840 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Mar 11 08:59:53 crc kubenswrapper[4840]: I0311 08:59:53.255682 4840 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-03-11T08:59:53.206460041Z","Handler":null,"Name":""} Mar 11 08:59:53 crc kubenswrapper[4840]: I0311 08:59:53.260824 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pdrj8\" (UID: \"40a6df27-50b3-452a-940a-aab6b087cdb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-pdrj8" Mar 11 08:59:53 crc kubenswrapper[4840]: E0311 08:59:53.261229 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-11 08:59:53.761215975 +0000 UTC m=+192.426885790 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pdrj8" (UID: "40a6df27-50b3-452a-940a-aab6b087cdb2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:59:53 crc kubenswrapper[4840]: I0311 08:59:53.273291 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-6mpbx"] Mar 11 08:59:53 crc kubenswrapper[4840]: I0311 08:59:53.273619 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-6mpbx" podUID="db966398-cd84-4fb6-bedf-f1f13c670ce8" containerName="controller-manager" containerID="cri-o://8a7b64590ad588a1fd9b8fe50a9debb1e9ad38bfade3c36d0d9b8217fd2a169e" gracePeriod=30 Mar 11 08:59:53 crc kubenswrapper[4840]: I0311 08:59:53.277820 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Mar 11 08:59:53 crc kubenswrapper[4840]: I0311 08:59:53.278547 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 11 08:59:53 crc kubenswrapper[4840]: I0311 08:59:53.287215 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Mar 11 08:59:53 crc kubenswrapper[4840]: I0311 08:59:53.290732 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Mar 11 08:59:53 crc kubenswrapper[4840]: I0311 08:59:53.290916 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Mar 11 08:59:53 crc kubenswrapper[4840]: I0311 08:59:53.348883 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-h578g"] Mar 11 08:59:53 crc kubenswrapper[4840]: I0311 08:59:53.364065 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 11 08:59:53 crc kubenswrapper[4840]: I0311 08:59:53.364207 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a193fea6-fbc1-4fc6-8b65-c5a1928d8be1-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"a193fea6-fbc1-4fc6-8b65-c5a1928d8be1\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 11 08:59:53 crc kubenswrapper[4840]: I0311 08:59:53.364255 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a193fea6-fbc1-4fc6-8b65-c5a1928d8be1-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"a193fea6-fbc1-4fc6-8b65-c5a1928d8be1\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" 
Mar 11 08:59:53 crc kubenswrapper[4840]: E0311 08:59:53.364408 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-11 08:59:53.864385993 +0000 UTC m=+192.530055808 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:59:53 crc kubenswrapper[4840]: I0311 08:59:53.372731 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-chbkv" Mar 11 08:59:53 crc kubenswrapper[4840]: I0311 08:59:53.381701 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-lzd44"] Mar 11 08:59:53 crc kubenswrapper[4840]: I0311 08:59:53.473026 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a193fea6-fbc1-4fc6-8b65-c5a1928d8be1-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"a193fea6-fbc1-4fc6-8b65-c5a1928d8be1\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 11 08:59:53 crc kubenswrapper[4840]: I0311 08:59:53.474304 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a193fea6-fbc1-4fc6-8b65-c5a1928d8be1-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"a193fea6-fbc1-4fc6-8b65-c5a1928d8be1\") " 
pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 11 08:59:53 crc kubenswrapper[4840]: I0311 08:59:53.473080 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a193fea6-fbc1-4fc6-8b65-c5a1928d8be1-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"a193fea6-fbc1-4fc6-8b65-c5a1928d8be1\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 11 08:59:53 crc kubenswrapper[4840]: I0311 08:59:53.491201 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pdrj8\" (UID: \"40a6df27-50b3-452a-940a-aab6b087cdb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-pdrj8" Mar 11 08:59:53 crc kubenswrapper[4840]: E0311 08:59:53.497631 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-11 08:59:53.997611501 +0000 UTC m=+192.663281316 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pdrj8" (UID: "40a6df27-50b3-452a-940a-aab6b087cdb2") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 11 08:59:53 crc kubenswrapper[4840]: I0311 08:59:53.508649 4840 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Mar 11 08:59:53 crc kubenswrapper[4840]: I0311 08:59:53.508693 4840 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Mar 11 08:59:53 crc kubenswrapper[4840]: I0311 08:59:53.526170 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a193fea6-fbc1-4fc6-8b65-c5a1928d8be1-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"a193fea6-fbc1-4fc6-8b65-c5a1928d8be1\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 11 08:59:53 crc kubenswrapper[4840]: I0311 08:59:53.597189 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 11 08:59:53 crc kubenswrapper[4840]: I0311 08:59:53.612843 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: 
"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 11 08:59:53 crc kubenswrapper[4840]: I0311 08:59:53.620133 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29553645-gw77z" Mar 11 08:59:53 crc kubenswrapper[4840]: I0311 08:59:53.622660 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 11 08:59:53 crc kubenswrapper[4840]: I0311 08:59:53.704675 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f2hvx\" (UniqueName: \"kubernetes.io/projected/a821fe36-3bdd-4b59-9dd4-004985404023-kube-api-access-f2hvx\") pod \"a821fe36-3bdd-4b59-9dd4-004985404023\" (UID: \"a821fe36-3bdd-4b59-9dd4-004985404023\") " Mar 11 08:59:53 crc kubenswrapper[4840]: I0311 08:59:53.704810 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a821fe36-3bdd-4b59-9dd4-004985404023-secret-volume\") pod \"a821fe36-3bdd-4b59-9dd4-004985404023\" (UID: \"a821fe36-3bdd-4b59-9dd4-004985404023\") " Mar 11 08:59:53 crc kubenswrapper[4840]: I0311 08:59:53.704888 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a821fe36-3bdd-4b59-9dd4-004985404023-config-volume\") pod \"a821fe36-3bdd-4b59-9dd4-004985404023\" (UID: \"a821fe36-3bdd-4b59-9dd4-004985404023\") " Mar 11 08:59:53 crc kubenswrapper[4840]: I0311 08:59:53.705096 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pdrj8\" (UID: \"40a6df27-50b3-452a-940a-aab6b087cdb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-pdrj8" Mar 11 08:59:53 crc kubenswrapper[4840]: I0311 08:59:53.706353 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a821fe36-3bdd-4b59-9dd4-004985404023-config-volume" (OuterVolumeSpecName: "config-volume") pod "a821fe36-3bdd-4b59-9dd4-004985404023" (UID: "a821fe36-3bdd-4b59-9dd4-004985404023"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 08:59:53 crc kubenswrapper[4840]: I0311 08:59:53.708189 4840 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 11 08:59:53 crc kubenswrapper[4840]: I0311 08:59:53.708232 4840 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pdrj8\" (UID: \"40a6df27-50b3-452a-940a-aab6b087cdb2\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-pdrj8" Mar 11 08:59:53 crc kubenswrapper[4840]: I0311 08:59:53.713204 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a821fe36-3bdd-4b59-9dd4-004985404023-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a821fe36-3bdd-4b59-9dd4-004985404023" (UID: "a821fe36-3bdd-4b59-9dd4-004985404023"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 08:59:53 crc kubenswrapper[4840]: I0311 08:59:53.713846 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a821fe36-3bdd-4b59-9dd4-004985404023-kube-api-access-f2hvx" (OuterVolumeSpecName: "kube-api-access-f2hvx") pod "a821fe36-3bdd-4b59-9dd4-004985404023" (UID: "a821fe36-3bdd-4b59-9dd4-004985404023"). InnerVolumeSpecName "kube-api-access-f2hvx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:59:53 crc kubenswrapper[4840]: I0311 08:59:53.748916 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pdrj8\" (UID: \"40a6df27-50b3-452a-940a-aab6b087cdb2\") " pod="openshift-image-registry/image-registry-697d97f7c8-pdrj8" Mar 11 08:59:53 crc kubenswrapper[4840]: I0311 08:59:53.807632 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f2hvx\" (UniqueName: \"kubernetes.io/projected/a821fe36-3bdd-4b59-9dd4-004985404023-kube-api-access-f2hvx\") on node \"crc\" DevicePath \"\"" Mar 11 08:59:53 crc kubenswrapper[4840]: I0311 08:59:53.807669 4840 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a821fe36-3bdd-4b59-9dd4-004985404023-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 11 08:59:53 crc kubenswrapper[4840]: I0311 08:59:53.807715 4840 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a821fe36-3bdd-4b59-9dd4-004985404023-config-volume\") on node \"crc\" DevicePath \"\"" Mar 11 08:59:53 crc kubenswrapper[4840]: I0311 08:59:53.852129 4840 generic.go:334] "Generic (PLEG): container finished" podID="db966398-cd84-4fb6-bedf-f1f13c670ce8" 
containerID="8a7b64590ad588a1fd9b8fe50a9debb1e9ad38bfade3c36d0d9b8217fd2a169e" exitCode=0 Mar 11 08:59:53 crc kubenswrapper[4840]: I0311 08:59:53.852171 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-6mpbx" event={"ID":"db966398-cd84-4fb6-bedf-f1f13c670ce8","Type":"ContainerDied","Data":"8a7b64590ad588a1fd9b8fe50a9debb1e9ad38bfade3c36d0d9b8217fd2a169e"} Mar 11 08:59:53 crc kubenswrapper[4840]: I0311 08:59:53.852244 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-6mpbx" event={"ID":"db966398-cd84-4fb6-bedf-f1f13c670ce8","Type":"ContainerDied","Data":"d16bca9f0091a0051d01207a0a26194086fa3bf31d89d82e0795e93ad7f8c571"} Mar 11 08:59:53 crc kubenswrapper[4840]: I0311 08:59:53.852260 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d16bca9f0091a0051d01207a0a26194086fa3bf31d89d82e0795e93ad7f8c571" Mar 11 08:59:53 crc kubenswrapper[4840]: I0311 08:59:53.853028 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-6mpbx" Mar 11 08:59:53 crc kubenswrapper[4840]: I0311 08:59:53.854803 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-8rsqx" event={"ID":"c0687081-fb8b-4b87-945d-b107b2f86966","Type":"ContainerStarted","Data":"96a8e6244c7499788151c95dfc6a97d3889f63ad079c80154b3f4b324b189907"} Mar 11 08:59:53 crc kubenswrapper[4840]: I0311 08:59:53.857390 4840 generic.go:334] "Generic (PLEG): container finished" podID="b17f6fce-66c6-45f7-8e1b-be9ffe4f16a6" containerID="fa6e02aef95fb10e00731edf653a60f202f9475e549f77c7efbb9387c6643dc2" exitCode=0 Mar 11 08:59:53 crc kubenswrapper[4840]: I0311 08:59:53.857537 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h578g" event={"ID":"b17f6fce-66c6-45f7-8e1b-be9ffe4f16a6","Type":"ContainerDied","Data":"fa6e02aef95fb10e00731edf653a60f202f9475e549f77c7efbb9387c6643dc2"} Mar 11 08:59:53 crc kubenswrapper[4840]: I0311 08:59:53.858086 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h578g" event={"ID":"b17f6fce-66c6-45f7-8e1b-be9ffe4f16a6","Type":"ContainerStarted","Data":"22fe673c28b4500ddf3524d1832af01b112ce56d18e6ff7c1c3f49669895e39f"} Mar 11 08:59:53 crc kubenswrapper[4840]: I0311 08:59:53.861568 4840 generic.go:334] "Generic (PLEG): container finished" podID="e88f7571-e0b3-4144-bec6-59c893a672d1" containerID="de7d799f0da17d3979b1e89962fea23e890b64a99628cdef1f0cb18ddfc6de72" exitCode=0 Mar 11 08:59:53 crc kubenswrapper[4840]: I0311 08:59:53.861804 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"e88f7571-e0b3-4144-bec6-59c893a672d1","Type":"ContainerDied","Data":"de7d799f0da17d3979b1e89962fea23e890b64a99628cdef1f0cb18ddfc6de72"} Mar 11 08:59:53 crc kubenswrapper[4840]: I0311 08:59:53.870785 4840 provider.go:102] Refreshing 
cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 11 08:59:53 crc kubenswrapper[4840]: I0311 08:59:53.872236 4840 generic.go:334] "Generic (PLEG): container finished" podID="edba0e9a-0f5d-4aea-bc9c-7eff83b36a8f" containerID="38d28ebda72bb5a8472b48825bb52b45d0a5f3768bb0fd02b11dde128850e8d4" exitCode=0 Mar 11 08:59:53 crc kubenswrapper[4840]: I0311 08:59:53.872428 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6wxnb" event={"ID":"edba0e9a-0f5d-4aea-bc9c-7eff83b36a8f","Type":"ContainerDied","Data":"38d28ebda72bb5a8472b48825bb52b45d0a5f3768bb0fd02b11dde128850e8d4"} Mar 11 08:59:53 crc kubenswrapper[4840]: I0311 08:59:53.872522 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6wxnb" event={"ID":"edba0e9a-0f5d-4aea-bc9c-7eff83b36a8f","Type":"ContainerStarted","Data":"a20fbd43217c6b1aacfa1bc6805a43077b1833ce908d8f4cd0cd2409121bf442"} Mar 11 08:59:53 crc kubenswrapper[4840]: I0311 08:59:53.887516 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29553645-gw77z" event={"ID":"a821fe36-3bdd-4b59-9dd4-004985404023","Type":"ContainerDied","Data":"f160e0f23d0641448e40fc76aba0686e03a04ac4c3c34959a82863737355a95b"} Mar 11 08:59:53 crc kubenswrapper[4840]: I0311 08:59:53.887595 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29553645-gw77z" Mar 11 08:59:53 crc kubenswrapper[4840]: I0311 08:59:53.887975 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lzd44" podUID="7b145694-7add-48aa-9321-56703922b613" containerName="route-controller-manager" containerID="cri-o://c537d7b99d81101c0962a538ed2bf490f755945cf5c5868ed33eaff91e86473d" gracePeriod=30 Mar 11 08:59:53 crc kubenswrapper[4840]: I0311 08:59:53.888250 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f160e0f23d0641448e40fc76aba0686e03a04ac4c3c34959a82863737355a95b" Mar 11 08:59:53 crc kubenswrapper[4840]: I0311 08:59:53.903982 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7w69s"] Mar 11 08:59:53 crc kubenswrapper[4840]: I0311 08:59:53.908969 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/db966398-cd84-4fb6-bedf-f1f13c670ce8-serving-cert\") pod \"db966398-cd84-4fb6-bedf-f1f13c670ce8\" (UID: \"db966398-cd84-4fb6-bedf-f1f13c670ce8\") " Mar 11 08:59:53 crc kubenswrapper[4840]: I0311 08:59:53.909073 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k9cjl\" (UniqueName: \"kubernetes.io/projected/db966398-cd84-4fb6-bedf-f1f13c670ce8-kube-api-access-k9cjl\") pod \"db966398-cd84-4fb6-bedf-f1f13c670ce8\" (UID: \"db966398-cd84-4fb6-bedf-f1f13c670ce8\") " Mar 11 08:59:53 crc kubenswrapper[4840]: I0311 08:59:53.909290 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db966398-cd84-4fb6-bedf-f1f13c670ce8-config\") pod \"db966398-cd84-4fb6-bedf-f1f13c670ce8\" (UID: \"db966398-cd84-4fb6-bedf-f1f13c670ce8\") " Mar 11 08:59:53 crc kubenswrapper[4840]: 
I0311 08:59:53.909316 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/db966398-cd84-4fb6-bedf-f1f13c670ce8-client-ca\") pod \"db966398-cd84-4fb6-bedf-f1f13c670ce8\" (UID: \"db966398-cd84-4fb6-bedf-f1f13c670ce8\") "
Mar 11 08:59:53 crc kubenswrapper[4840]: I0311 08:59:53.909353 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/db966398-cd84-4fb6-bedf-f1f13c670ce8-proxy-ca-bundles\") pod \"db966398-cd84-4fb6-bedf-f1f13c670ce8\" (UID: \"db966398-cd84-4fb6-bedf-f1f13c670ce8\") "
Mar 11 08:59:53 crc kubenswrapper[4840]: I0311 08:59:53.916105 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db966398-cd84-4fb6-bedf-f1f13c670ce8-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "db966398-cd84-4fb6-bedf-f1f13c670ce8" (UID: "db966398-cd84-4fb6-bedf-f1f13c670ce8"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 11 08:59:53 crc kubenswrapper[4840]: I0311 08:59:53.916163 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-chbkv"]
Mar 11 08:59:53 crc kubenswrapper[4840]: I0311 08:59:53.916985 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db966398-cd84-4fb6-bedf-f1f13c670ce8-config" (OuterVolumeSpecName: "config") pod "db966398-cd84-4fb6-bedf-f1f13c670ce8" (UID: "db966398-cd84-4fb6-bedf-f1f13c670ce8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 11 08:59:53 crc kubenswrapper[4840]: W0311 08:59:53.924131 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod996d3b36_77c7_4e8f_a472_ac032aabd836.slice/crio-c2b63b61f60149a264e01e5162c3dbef6c3c932d73c0ed7f6b94acca5a532c56 WatchSource:0}: Error finding container c2b63b61f60149a264e01e5162c3dbef6c3c932d73c0ed7f6b94acca5a532c56: Status 404 returned error can't find the container with id c2b63b61f60149a264e01e5162c3dbef6c3c932d73c0ed7f6b94acca5a532c56
Mar 11 08:59:53 crc kubenswrapper[4840]: I0311 08:59:53.927129 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db966398-cd84-4fb6-bedf-f1f13c670ce8-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "db966398-cd84-4fb6-bedf-f1f13c670ce8" (UID: "db966398-cd84-4fb6-bedf-f1f13c670ce8"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 11 08:59:53 crc kubenswrapper[4840]: I0311 08:59:53.930794 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db966398-cd84-4fb6-bedf-f1f13c670ce8-client-ca" (OuterVolumeSpecName: "client-ca") pod "db966398-cd84-4fb6-bedf-f1f13c670ce8" (UID: "db966398-cd84-4fb6-bedf-f1f13c670ce8"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 11 08:59:53 crc kubenswrapper[4840]: I0311 08:59:53.937957 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-pdrj8"
Mar 11 08:59:53 crc kubenswrapper[4840]: W0311 08:59:53.957763 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podecd2f9dc_1f5f_4787_b66f_8caaaeb9dc9f.slice/crio-59363b6776f0a73a945a87a03a4f555643cef31e0e17a2064779925d90d35070 WatchSource:0}: Error finding container 59363b6776f0a73a945a87a03a4f555643cef31e0e17a2064779925d90d35070: Status 404 returned error can't find the container with id 59363b6776f0a73a945a87a03a4f555643cef31e0e17a2064779925d90d35070
Mar 11 08:59:53 crc kubenswrapper[4840]: I0311 08:59:53.959506 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db966398-cd84-4fb6-bedf-f1f13c670ce8-kube-api-access-k9cjl" (OuterVolumeSpecName: "kube-api-access-k9cjl") pod "db966398-cd84-4fb6-bedf-f1f13c670ce8" (UID: "db966398-cd84-4fb6-bedf-f1f13c670ce8"). InnerVolumeSpecName "kube-api-access-k9cjl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 11 08:59:53 crc kubenswrapper[4840]: I0311 08:59:53.959639 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-8rsqx" podStartSLOduration=12.959617003 podStartE2EDuration="12.959617003s" podCreationTimestamp="2026-03-11 08:59:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:59:53.959051987 +0000 UTC m=+192.624721802" watchObservedRunningTime="2026-03-11 08:59:53.959617003 +0000 UTC m=+192.625286818"
Mar 11 08:59:53 crc kubenswrapper[4840]: I0311 08:59:53.990766 4840 patch_prober.go:28] interesting pod/router-default-5444994796-wwr6r container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 11 08:59:53 crc kubenswrapper[4840]: [-]has-synced failed: reason withheld
Mar 11 08:59:53 crc kubenswrapper[4840]: [+]process-running ok
Mar 11 08:59:53 crc kubenswrapper[4840]: healthz check failed
Mar 11 08:59:53 crc kubenswrapper[4840]: I0311 08:59:53.991244 4840 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wwr6r" podUID="4c602adf-1ed4-4779-a4f5-5ff24d9ee648" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 11 08:59:54 crc kubenswrapper[4840]: I0311 08:59:54.018351 4840 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/db966398-cd84-4fb6-bedf-f1f13c670ce8-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 11 08:59:54 crc kubenswrapper[4840]: I0311 08:59:54.018383 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k9cjl\" (UniqueName: \"kubernetes.io/projected/db966398-cd84-4fb6-bedf-f1f13c670ce8-kube-api-access-k9cjl\") on node \"crc\" DevicePath \"\""
Mar 11 08:59:54 crc kubenswrapper[4840]: I0311 08:59:54.018394 4840 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db966398-cd84-4fb6-bedf-f1f13c670ce8-config\") on node \"crc\" DevicePath \"\""
Mar 11 08:59:54 crc kubenswrapper[4840]: I0311 08:59:54.018404 4840 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/db966398-cd84-4fb6-bedf-f1f13c670ce8-client-ca\") on node \"crc\" DevicePath \"\""
Mar 11 08:59:54 crc kubenswrapper[4840]: I0311 08:59:54.018412 4840 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/db966398-cd84-4fb6-bedf-f1f13c670ce8-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Mar 11 08:59:54 crc kubenswrapper[4840]: I0311 08:59:54.057678 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-swqkn"]
Mar 11 08:59:54 crc kubenswrapper[4840]: E0311 08:59:54.058115 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a821fe36-3bdd-4b59-9dd4-004985404023" containerName="collect-profiles"
Mar 11 08:59:54 crc kubenswrapper[4840]: I0311 08:59:54.058127 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="a821fe36-3bdd-4b59-9dd4-004985404023" containerName="collect-profiles"
Mar 11 08:59:54 crc kubenswrapper[4840]: E0311 08:59:54.058139 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db966398-cd84-4fb6-bedf-f1f13c670ce8" containerName="controller-manager"
Mar 11 08:59:54 crc kubenswrapper[4840]: I0311 08:59:54.058145 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="db966398-cd84-4fb6-bedf-f1f13c670ce8" containerName="controller-manager"
Mar 11 08:59:54 crc kubenswrapper[4840]: I0311 08:59:54.058248 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="a821fe36-3bdd-4b59-9dd4-004985404023" containerName="collect-profiles"
Mar 11 08:59:54 crc kubenswrapper[4840]: I0311 08:59:54.058263 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="db966398-cd84-4fb6-bedf-f1f13c670ce8" containerName="controller-manager"
Mar 11 08:59:54 crc kubenswrapper[4840]: I0311 08:59:54.059326 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-swqkn"
Mar 11 08:59:54 crc kubenswrapper[4840]: I0311 08:59:54.060174 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Mar 11 08:59:54 crc kubenswrapper[4840]: I0311 08:59:54.064834 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Mar 11 08:59:54 crc kubenswrapper[4840]: I0311 08:59:54.074093 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes"
Mar 11 08:59:54 crc kubenswrapper[4840]: I0311 08:59:54.074996 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-swqkn"]
Mar 11 08:59:54 crc kubenswrapper[4840]: I0311 08:59:54.092983 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-df868c598-zf4cd"]
Mar 11 08:59:54 crc kubenswrapper[4840]: I0311 08:59:54.094020 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-df868c598-zf4cd"
Mar 11 08:59:54 crc kubenswrapper[4840]: I0311 08:59:54.119363 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ea6e709-84f7-4603-bcda-6d336d3a96fc-utilities\") pod \"redhat-marketplace-swqkn\" (UID: \"3ea6e709-84f7-4603-bcda-6d336d3a96fc\") " pod="openshift-marketplace/redhat-marketplace-swqkn"
Mar 11 08:59:54 crc kubenswrapper[4840]: I0311 08:59:54.119405 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ea6e709-84f7-4603-bcda-6d336d3a96fc-catalog-content\") pod \"redhat-marketplace-swqkn\" (UID: \"3ea6e709-84f7-4603-bcda-6d336d3a96fc\") " pod="openshift-marketplace/redhat-marketplace-swqkn"
Mar 11 08:59:54 crc kubenswrapper[4840]: I0311 08:59:54.119435 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvlbz\" (UniqueName: \"kubernetes.io/projected/3ea6e709-84f7-4603-bcda-6d336d3a96fc-kube-api-access-mvlbz\") pod \"redhat-marketplace-swqkn\" (UID: \"3ea6e709-84f7-4603-bcda-6d336d3a96fc\") " pod="openshift-marketplace/redhat-marketplace-swqkn"
Mar 11 08:59:54 crc kubenswrapper[4840]: I0311 08:59:54.121818 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-df868c598-zf4cd"]
Mar 11 08:59:54 crc kubenswrapper[4840]: I0311 08:59:54.220759 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dc370bb3-d2c0-44f7-ab92-ab317db166ee-serving-cert\") pod \"controller-manager-df868c598-zf4cd\" (UID: \"dc370bb3-d2c0-44f7-ab92-ab317db166ee\") " pod="openshift-controller-manager/controller-manager-df868c598-zf4cd"
Mar 11 08:59:54 crc kubenswrapper[4840]: I0311 08:59:54.220855 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc370bb3-d2c0-44f7-ab92-ab317db166ee-config\") pod \"controller-manager-df868c598-zf4cd\" (UID: \"dc370bb3-d2c0-44f7-ab92-ab317db166ee\") " pod="openshift-controller-manager/controller-manager-df868c598-zf4cd"
Mar 11 08:59:54 crc kubenswrapper[4840]: I0311 08:59:54.220919 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dc370bb3-d2c0-44f7-ab92-ab317db166ee-proxy-ca-bundles\") pod \"controller-manager-df868c598-zf4cd\" (UID: \"dc370bb3-d2c0-44f7-ab92-ab317db166ee\") " pod="openshift-controller-manager/controller-manager-df868c598-zf4cd"
Mar 11 08:59:54 crc kubenswrapper[4840]: I0311 08:59:54.220954 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ea6e709-84f7-4603-bcda-6d336d3a96fc-utilities\") pod \"redhat-marketplace-swqkn\" (UID: \"3ea6e709-84f7-4603-bcda-6d336d3a96fc\") " pod="openshift-marketplace/redhat-marketplace-swqkn"
Mar 11 08:59:54 crc kubenswrapper[4840]: I0311 08:59:54.220988 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dc370bb3-d2c0-44f7-ab92-ab317db166ee-client-ca\") pod \"controller-manager-df868c598-zf4cd\" (UID: \"dc370bb3-d2c0-44f7-ab92-ab317db166ee\") " pod="openshift-controller-manager/controller-manager-df868c598-zf4cd"
Mar 11 08:59:54 crc kubenswrapper[4840]: I0311 08:59:54.221012 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ea6e709-84f7-4603-bcda-6d336d3a96fc-catalog-content\") pod \"redhat-marketplace-swqkn\" (UID: \"3ea6e709-84f7-4603-bcda-6d336d3a96fc\") " pod="openshift-marketplace/redhat-marketplace-swqkn"
Mar 11 08:59:54 crc kubenswrapper[4840]: I0311 08:59:54.221053 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvlbz\" (UniqueName: \"kubernetes.io/projected/3ea6e709-84f7-4603-bcda-6d336d3a96fc-kube-api-access-mvlbz\") pod \"redhat-marketplace-swqkn\" (UID: \"3ea6e709-84f7-4603-bcda-6d336d3a96fc\") " pod="openshift-marketplace/redhat-marketplace-swqkn"
Mar 11 08:59:54 crc kubenswrapper[4840]: I0311 08:59:54.221103 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6lqm\" (UniqueName: \"kubernetes.io/projected/dc370bb3-d2c0-44f7-ab92-ab317db166ee-kube-api-access-b6lqm\") pod \"controller-manager-df868c598-zf4cd\" (UID: \"dc370bb3-d2c0-44f7-ab92-ab317db166ee\") " pod="openshift-controller-manager/controller-manager-df868c598-zf4cd"
Mar 11 08:59:54 crc kubenswrapper[4840]: I0311 08:59:54.222200 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ea6e709-84f7-4603-bcda-6d336d3a96fc-utilities\") pod \"redhat-marketplace-swqkn\" (UID: \"3ea6e709-84f7-4603-bcda-6d336d3a96fc\") " pod="openshift-marketplace/redhat-marketplace-swqkn"
Mar 11 08:59:54 crc kubenswrapper[4840]: I0311 08:59:54.222326 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ea6e709-84f7-4603-bcda-6d336d3a96fc-catalog-content\") pod \"redhat-marketplace-swqkn\" (UID: \"3ea6e709-84f7-4603-bcda-6d336d3a96fc\") " pod="openshift-marketplace/redhat-marketplace-swqkn"
Mar 11 08:59:54 crc kubenswrapper[4840]: I0311 08:59:54.263356 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvlbz\" (UniqueName: \"kubernetes.io/projected/3ea6e709-84f7-4603-bcda-6d336d3a96fc-kube-api-access-mvlbz\") pod \"redhat-marketplace-swqkn\" (UID: \"3ea6e709-84f7-4603-bcda-6d336d3a96fc\") " pod="openshift-marketplace/redhat-marketplace-swqkn"
Mar 11 08:59:54 crc kubenswrapper[4840]: I0311 08:59:54.324231 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dc370bb3-d2c0-44f7-ab92-ab317db166ee-client-ca\") pod \"controller-manager-df868c598-zf4cd\" (UID: \"dc370bb3-d2c0-44f7-ab92-ab317db166ee\") " pod="openshift-controller-manager/controller-manager-df868c598-zf4cd"
Mar 11 08:59:54 crc kubenswrapper[4840]: I0311 08:59:54.324431 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6lqm\" (UniqueName: \"kubernetes.io/projected/dc370bb3-d2c0-44f7-ab92-ab317db166ee-kube-api-access-b6lqm\") pod \"controller-manager-df868c598-zf4cd\" (UID: \"dc370bb3-d2c0-44f7-ab92-ab317db166ee\") " pod="openshift-controller-manager/controller-manager-df868c598-zf4cd"
Mar 11 08:59:54 crc kubenswrapper[4840]: I0311 08:59:54.324799 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dc370bb3-d2c0-44f7-ab92-ab317db166ee-serving-cert\") pod \"controller-manager-df868c598-zf4cd\" (UID: \"dc370bb3-d2c0-44f7-ab92-ab317db166ee\") " pod="openshift-controller-manager/controller-manager-df868c598-zf4cd"
Mar 11 08:59:54 crc kubenswrapper[4840]: I0311 08:59:54.324863 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc370bb3-d2c0-44f7-ab92-ab317db166ee-config\") pod \"controller-manager-df868c598-zf4cd\" (UID: \"dc370bb3-d2c0-44f7-ab92-ab317db166ee\") " pod="openshift-controller-manager/controller-manager-df868c598-zf4cd"
Mar 11 08:59:54 crc kubenswrapper[4840]: I0311 08:59:54.324937 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dc370bb3-d2c0-44f7-ab92-ab317db166ee-proxy-ca-bundles\") pod \"controller-manager-df868c598-zf4cd\" (UID: \"dc370bb3-d2c0-44f7-ab92-ab317db166ee\") " pod="openshift-controller-manager/controller-manager-df868c598-zf4cd"
Mar 11 08:59:54 crc kubenswrapper[4840]: I0311 08:59:54.327030 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc370bb3-d2c0-44f7-ab92-ab317db166ee-config\") pod \"controller-manager-df868c598-zf4cd\" (UID: \"dc370bb3-d2c0-44f7-ab92-ab317db166ee\") " pod="openshift-controller-manager/controller-manager-df868c598-zf4cd"
Mar 11 08:59:54 crc kubenswrapper[4840]: I0311 08:59:54.329593 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dc370bb3-d2c0-44f7-ab92-ab317db166ee-client-ca\") pod \"controller-manager-df868c598-zf4cd\" (UID: \"dc370bb3-d2c0-44f7-ab92-ab317db166ee\") " pod="openshift-controller-manager/controller-manager-df868c598-zf4cd"
Mar 11 08:59:54 crc kubenswrapper[4840]: I0311 08:59:54.338898 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dc370bb3-d2c0-44f7-ab92-ab317db166ee-serving-cert\") pod \"controller-manager-df868c598-zf4cd\" (UID: \"dc370bb3-d2c0-44f7-ab92-ab317db166ee\") " pod="openshift-controller-manager/controller-manager-df868c598-zf4cd"
Mar 11 08:59:54 crc kubenswrapper[4840]: I0311 08:59:54.339917 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dc370bb3-d2c0-44f7-ab92-ab317db166ee-proxy-ca-bundles\") pod \"controller-manager-df868c598-zf4cd\" (UID: \"dc370bb3-d2c0-44f7-ab92-ab317db166ee\") " pod="openshift-controller-manager/controller-manager-df868c598-zf4cd"
Mar 11 08:59:54 crc kubenswrapper[4840]: I0311 08:59:54.351050 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6lqm\" (UniqueName: \"kubernetes.io/projected/dc370bb3-d2c0-44f7-ab92-ab317db166ee-kube-api-access-b6lqm\") pod \"controller-manager-df868c598-zf4cd\" (UID: \"dc370bb3-d2c0-44f7-ab92-ab317db166ee\") " pod="openshift-controller-manager/controller-manager-df868c598-zf4cd"
Mar 11 08:59:54 crc kubenswrapper[4840]: I0311 08:59:54.391434 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-swqkn"
Mar 11 08:59:54 crc kubenswrapper[4840]: I0311 08:59:54.427619 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-df868c598-zf4cd"
Mar 11 08:59:54 crc kubenswrapper[4840]: I0311 08:59:54.446561 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-v5s8n"]
Mar 11 08:59:54 crc kubenswrapper[4840]: I0311 08:59:54.449601 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v5s8n"
Mar 11 08:59:54 crc kubenswrapper[4840]: I0311 08:59:54.451880 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-v5s8n"]
Mar 11 08:59:54 crc kubenswrapper[4840]: I0311 08:59:54.461004 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lzd44"
Mar 11 08:59:54 crc kubenswrapper[4840]: I0311 08:59:54.531139 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gvklz\" (UniqueName: \"kubernetes.io/projected/7b145694-7add-48aa-9321-56703922b613-kube-api-access-gvklz\") pod \"7b145694-7add-48aa-9321-56703922b613\" (UID: \"7b145694-7add-48aa-9321-56703922b613\") "
Mar 11 08:59:54 crc kubenswrapper[4840]: I0311 08:59:54.534064 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b145694-7add-48aa-9321-56703922b613-config\") pod \"7b145694-7add-48aa-9321-56703922b613\" (UID: \"7b145694-7add-48aa-9321-56703922b613\") "
Mar 11 08:59:54 crc kubenswrapper[4840]: I0311 08:59:54.534160 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7b145694-7add-48aa-9321-56703922b613-client-ca\") pod \"7b145694-7add-48aa-9321-56703922b613\" (UID: \"7b145694-7add-48aa-9321-56703922b613\") "
Mar 11 08:59:54 crc kubenswrapper[4840]: I0311 08:59:54.534210 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7b145694-7add-48aa-9321-56703922b613-serving-cert\") pod \"7b145694-7add-48aa-9321-56703922b613\" (UID: \"7b145694-7add-48aa-9321-56703922b613\") "
Mar 11 08:59:54 crc kubenswrapper[4840]: I0311 08:59:54.534762 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dcd6c58d-5984-4122-826c-18ecfe7dde26-catalog-content\") pod \"redhat-marketplace-v5s8n\" (UID: \"dcd6c58d-5984-4122-826c-18ecfe7dde26\") " pod="openshift-marketplace/redhat-marketplace-v5s8n"
Mar 11 08:59:54 crc kubenswrapper[4840]: I0311 08:59:54.534808 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kszv8\" (UniqueName: \"kubernetes.io/projected/dcd6c58d-5984-4122-826c-18ecfe7dde26-kube-api-access-kszv8\") pod \"redhat-marketplace-v5s8n\" (UID: \"dcd6c58d-5984-4122-826c-18ecfe7dde26\") " pod="openshift-marketplace/redhat-marketplace-v5s8n"
Mar 11 08:59:54 crc kubenswrapper[4840]: I0311 08:59:54.534850 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dcd6c58d-5984-4122-826c-18ecfe7dde26-utilities\") pod \"redhat-marketplace-v5s8n\" (UID: \"dcd6c58d-5984-4122-826c-18ecfe7dde26\") " pod="openshift-marketplace/redhat-marketplace-v5s8n"
Mar 11 08:59:54 crc kubenswrapper[4840]: I0311 08:59:54.535592 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b145694-7add-48aa-9321-56703922b613-config" (OuterVolumeSpecName: "config") pod "7b145694-7add-48aa-9321-56703922b613" (UID: "7b145694-7add-48aa-9321-56703922b613"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 11 08:59:54 crc kubenswrapper[4840]: I0311 08:59:54.535900 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b145694-7add-48aa-9321-56703922b613-client-ca" (OuterVolumeSpecName: "client-ca") pod "7b145694-7add-48aa-9321-56703922b613" (UID: "7b145694-7add-48aa-9321-56703922b613"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 11 08:59:54 crc kubenswrapper[4840]: I0311 08:59:54.541003 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b145694-7add-48aa-9321-56703922b613-kube-api-access-gvklz" (OuterVolumeSpecName: "kube-api-access-gvklz") pod "7b145694-7add-48aa-9321-56703922b613" (UID: "7b145694-7add-48aa-9321-56703922b613"). InnerVolumeSpecName "kube-api-access-gvklz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 11 08:59:54 crc kubenswrapper[4840]: I0311 08:59:54.543622 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b145694-7add-48aa-9321-56703922b613-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7b145694-7add-48aa-9321-56703922b613" (UID: "7b145694-7add-48aa-9321-56703922b613"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 11 08:59:54 crc kubenswrapper[4840]: I0311 08:59:54.550386 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-pdrj8"]
Mar 11 08:59:54 crc kubenswrapper[4840]: W0311 08:59:54.633397 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod40a6df27_50b3_452a_940a_aab6b087cdb2.slice/crio-5eb488ed9a6e7b08a3cadc609fdd28d7556b05b193a59010f63eb8b1437abd3c WatchSource:0}: Error finding container 5eb488ed9a6e7b08a3cadc609fdd28d7556b05b193a59010f63eb8b1437abd3c: Status 404 returned error can't find the container with id 5eb488ed9a6e7b08a3cadc609fdd28d7556b05b193a59010f63eb8b1437abd3c
Mar 11 08:59:54 crc kubenswrapper[4840]: I0311 08:59:54.635628 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kszv8\" (UniqueName: \"kubernetes.io/projected/dcd6c58d-5984-4122-826c-18ecfe7dde26-kube-api-access-kszv8\") pod \"redhat-marketplace-v5s8n\" (UID: \"dcd6c58d-5984-4122-826c-18ecfe7dde26\") " pod="openshift-marketplace/redhat-marketplace-v5s8n"
Mar 11 08:59:54 crc kubenswrapper[4840]: I0311 08:59:54.635688 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dcd6c58d-5984-4122-826c-18ecfe7dde26-utilities\") pod \"redhat-marketplace-v5s8n\" (UID: \"dcd6c58d-5984-4122-826c-18ecfe7dde26\") " pod="openshift-marketplace/redhat-marketplace-v5s8n"
Mar 11 08:59:54 crc kubenswrapper[4840]: I0311 08:59:54.635806 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dcd6c58d-5984-4122-826c-18ecfe7dde26-catalog-content\") pod \"redhat-marketplace-v5s8n\" (UID: \"dcd6c58d-5984-4122-826c-18ecfe7dde26\") " pod="openshift-marketplace/redhat-marketplace-v5s8n"
Mar 11 08:59:54 crc kubenswrapper[4840]: I0311 08:59:54.635866 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gvklz\" (UniqueName: \"kubernetes.io/projected/7b145694-7add-48aa-9321-56703922b613-kube-api-access-gvklz\") on node \"crc\" DevicePath \"\""
Mar 11 08:59:54 crc kubenswrapper[4840]: I0311 08:59:54.635884 4840 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b145694-7add-48aa-9321-56703922b613-config\") on node \"crc\" DevicePath \"\""
Mar 11 08:59:54 crc kubenswrapper[4840]: I0311 08:59:54.635896 4840 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7b145694-7add-48aa-9321-56703922b613-client-ca\") on node \"crc\" DevicePath \"\""
Mar 11 08:59:54 crc kubenswrapper[4840]: I0311 08:59:54.635906 4840 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7b145694-7add-48aa-9321-56703922b613-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 11 08:59:54 crc kubenswrapper[4840]: I0311 08:59:54.636282 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dcd6c58d-5984-4122-826c-18ecfe7dde26-utilities\") pod \"redhat-marketplace-v5s8n\" (UID: \"dcd6c58d-5984-4122-826c-18ecfe7dde26\") " pod="openshift-marketplace/redhat-marketplace-v5s8n"
Mar 11 08:59:54 crc kubenswrapper[4840]: I0311 08:59:54.636393 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dcd6c58d-5984-4122-826c-18ecfe7dde26-catalog-content\") pod \"redhat-marketplace-v5s8n\" (UID: \"dcd6c58d-5984-4122-826c-18ecfe7dde26\") " pod="openshift-marketplace/redhat-marketplace-v5s8n"
Mar 11 08:59:54 crc kubenswrapper[4840]: I0311 08:59:54.656520 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kszv8\" (UniqueName: \"kubernetes.io/projected/dcd6c58d-5984-4122-826c-18ecfe7dde26-kube-api-access-kszv8\") pod \"redhat-marketplace-v5s8n\" (UID: \"dcd6c58d-5984-4122-826c-18ecfe7dde26\") " pod="openshift-marketplace/redhat-marketplace-v5s8n"
Mar 11 08:59:54 crc kubenswrapper[4840]: I0311 08:59:54.775354 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v5s8n"
Mar 11 08:59:54 crc kubenswrapper[4840]: I0311 08:59:54.892477 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-df868c598-zf4cd"]
Mar 11 08:59:54 crc kubenswrapper[4840]: I0311 08:59:54.908535 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-swqkn"]
Mar 11 08:59:54 crc kubenswrapper[4840]: I0311 08:59:54.923454 4840 generic.go:334] "Generic (PLEG): container finished" podID="996d3b36-77c7-4e8f-a472-ac032aabd836" containerID="978d43f66002b007cd3eece14909369d3be344d57a9bd5b545750330f671634e" exitCode=0
Mar 11 08:59:54 crc kubenswrapper[4840]: I0311 08:59:54.923618 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7w69s" event={"ID":"996d3b36-77c7-4e8f-a472-ac032aabd836","Type":"ContainerDied","Data":"978d43f66002b007cd3eece14909369d3be344d57a9bd5b545750330f671634e"}
Mar 11 08:59:54 crc kubenswrapper[4840]: I0311 08:59:54.923647 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7w69s" event={"ID":"996d3b36-77c7-4e8f-a472-ac032aabd836","Type":"ContainerStarted","Data":"c2b63b61f60149a264e01e5162c3dbef6c3c932d73c0ed7f6b94acca5a532c56"}
Mar 11 08:59:54 crc kubenswrapper[4840]: I0311 08:59:54.929748 4840 generic.go:334] "Generic (PLEG): container finished" podID="ecd2f9dc-1f5f-4787-b66f-8caaaeb9dc9f" containerID="c4a214cd1ff05433fb5237befb1450c2e433df5f0c408e78530a28bbb13d5faf" exitCode=0
Mar 11 08:59:54 crc kubenswrapper[4840]: I0311 08:59:54.929805 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-chbkv" event={"ID":"ecd2f9dc-1f5f-4787-b66f-8caaaeb9dc9f","Type":"ContainerDied","Data":"c4a214cd1ff05433fb5237befb1450c2e433df5f0c408e78530a28bbb13d5faf"}
Mar 11 08:59:54 crc kubenswrapper[4840]: I0311 08:59:54.929829 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-chbkv" event={"ID":"ecd2f9dc-1f5f-4787-b66f-8caaaeb9dc9f","Type":"ContainerStarted","Data":"59363b6776f0a73a945a87a03a4f555643cef31e0e17a2064779925d90d35070"}
Mar 11 08:59:54 crc kubenswrapper[4840]: I0311 08:59:54.934127 4840 generic.go:334] "Generic (PLEG): container finished" podID="7b145694-7add-48aa-9321-56703922b613" containerID="c537d7b99d81101c0962a538ed2bf490f755945cf5c5868ed33eaff91e86473d" exitCode=0
Mar 11 08:59:54 crc kubenswrapper[4840]: I0311 08:59:54.934177 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lzd44" event={"ID":"7b145694-7add-48aa-9321-56703922b613","Type":"ContainerDied","Data":"c537d7b99d81101c0962a538ed2bf490f755945cf5c5868ed33eaff91e86473d"}
Mar 11 08:59:54 crc kubenswrapper[4840]: I0311 08:59:54.935076 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lzd44" event={"ID":"7b145694-7add-48aa-9321-56703922b613","Type":"ContainerDied","Data":"c334ec72f8fe37526a35446d89e4ea8dfbafde226f2ffda5cfc5f3410f4358fa"}
Mar 11 08:59:54 crc kubenswrapper[4840]: I0311 08:59:54.935102 4840 scope.go:117] "RemoveContainer" containerID="c537d7b99d81101c0962a538ed2bf490f755945cf5c5868ed33eaff91e86473d"
Mar 11 08:59:54 crc kubenswrapper[4840]: I0311 08:59:54.935584 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lzd44"
Mar 11 08:59:54 crc kubenswrapper[4840]: I0311 08:59:54.949837 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"a193fea6-fbc1-4fc6-8b65-c5a1928d8be1","Type":"ContainerStarted","Data":"4fa4ef808cfc5c5d8a69c0e64c86d24cc973d2c9c5dc0afa4fbb2dcaaa54cdef"}
Mar 11 08:59:54 crc kubenswrapper[4840]: I0311 08:59:54.949890 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"a193fea6-fbc1-4fc6-8b65-c5a1928d8be1","Type":"ContainerStarted","Data":"726d4d26556c84f10052c81679db2aa4c7f6c357066737cc48113c2cd816c524"}
Mar 11 08:59:54 crc kubenswrapper[4840]: W0311 08:59:54.962536 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3ea6e709_84f7_4603_bcda_6d336d3a96fc.slice/crio-2e4d1a9ae85704eee3ed25d49fb086d2544ecf6c3399ceb0af6164e349ba453d WatchSource:0}: Error finding container 2e4d1a9ae85704eee3ed25d49fb086d2544ecf6c3399ceb0af6164e349ba453d: Status 404 returned error can't find the container with id 2e4d1a9ae85704eee3ed25d49fb086d2544ecf6c3399ceb0af6164e349ba453d
Mar 11 08:59:54 crc kubenswrapper[4840]: I0311 08:59:54.966968 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-6mpbx"
Mar 11 08:59:54 crc kubenswrapper[4840]: I0311 08:59:54.967584 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-pdrj8" event={"ID":"40a6df27-50b3-452a-940a-aab6b087cdb2","Type":"ContainerStarted","Data":"08e6ad061b588a2322ac9ccb3f32b85a9c7b4219c2bd0b146ba49c64e0a75a65"}
Mar 11 08:59:54 crc kubenswrapper[4840]: I0311 08:59:54.967661 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-pdrj8" event={"ID":"40a6df27-50b3-452a-940a-aab6b087cdb2","Type":"ContainerStarted","Data":"5eb488ed9a6e7b08a3cadc609fdd28d7556b05b193a59010f63eb8b1437abd3c"}
Mar 11 08:59:54 crc kubenswrapper[4840]: I0311 08:59:54.976392 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-wwr6r"
Mar 11 08:59:54 crc kubenswrapper[4840]: I0311 08:59:54.981960 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-v5s8n"]
Mar 11 08:59:54 crc kubenswrapper[4840]: I0311 08:59:54.993247 4840 patch_prober.go:28] interesting pod/router-default-5444994796-wwr6r container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 11 08:59:54 crc kubenswrapper[4840]: [-]has-synced failed: reason withheld
Mar 11 08:59:54 crc kubenswrapper[4840]: [+]process-running ok
Mar 11 08:59:54 crc kubenswrapper[4840]: healthz check failed
Mar 11 08:59:54 crc kubenswrapper[4840]: I0311 08:59:54.993297 4840 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wwr6r" podUID="4c602adf-1ed4-4779-a4f5-5ff24d9ee648" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 11 08:59:55 crc kubenswrapper[4840]: I0311 08:59:55.004250 4840 scope.go:117] "RemoveContainer" containerID="c537d7b99d81101c0962a538ed2bf490f755945cf5c5868ed33eaff91e86473d"
Mar 11 08:59:55 crc kubenswrapper[4840]: I0311 08:59:55.004374 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-lzd44"]
Mar 11 08:59:55 crc kubenswrapper[4840]: E0311 08:59:55.005630 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c537d7b99d81101c0962a538ed2bf490f755945cf5c5868ed33eaff91e86473d\": container with ID starting with c537d7b99d81101c0962a538ed2bf490f755945cf5c5868ed33eaff91e86473d not found: ID does not exist" containerID="c537d7b99d81101c0962a538ed2bf490f755945cf5c5868ed33eaff91e86473d"
Mar 11 08:59:55 crc kubenswrapper[4840]: I0311 08:59:55.005670 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c537d7b99d81101c0962a538ed2bf490f755945cf5c5868ed33eaff91e86473d"} err="failed to get container status \"c537d7b99d81101c0962a538ed2bf490f755945cf5c5868ed33eaff91e86473d\": rpc error: code = NotFound desc = could not find container \"c537d7b99d81101c0962a538ed2bf490f755945cf5c5868ed33eaff91e86473d\": container with ID starting with c537d7b99d81101c0962a538ed2bf490f755945cf5c5868ed33eaff91e86473d not found: ID does not exist"
Mar 11 08:59:55 crc kubenswrapper[4840]: I0311 08:59:55.010877 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-lzd44"]
Mar 11 08:59:55 crc kubenswrapper[4840]: I0311 08:59:55.020372 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-pdrj8" podStartSLOduration=125.020354964 podStartE2EDuration="2m5.020354964s" podCreationTimestamp="2026-03-11 08:57:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:59:55.017843072 +0000 UTC m=+193.683512887" watchObservedRunningTime="2026-03-11 08:59:55.020354964 +0000 UTC m=+193.686024779"
Mar 11 08:59:55 crc kubenswrapper[4840]: I0311 08:59:55.039902 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=2.039880152 podStartE2EDuration="2.039880152s" podCreationTimestamp="2026-03-11 08:59:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:59:55.035667032 +0000 UTC m=+193.701336857" watchObservedRunningTime="2026-03-11 08:59:55.039880152 +0000 UTC m=+193.705549977"
Mar 11 08:59:55 crc kubenswrapper[4840]: I0311 08:59:55.083088 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-6mpbx"]
Mar 11 08:59:55 crc kubenswrapper[4840]: I0311 08:59:55.085944 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-6mpbx"]
Mar 11 08:59:55 crc kubenswrapper[4840]: I0311 08:59:55.332901 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-j4zs8"
Mar 11 08:59:55 crc kubenswrapper[4840]: I0311 08:59:55.332959 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-j4zs8"
Mar 11 08:59:55 crc kubenswrapper[4840]: I0311 08:59:55.339564 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-j4zs8"
Mar 11 08:59:55 crc kubenswrapper[4840]: I0311 08:59:55.409707 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-d5xpl"
Mar 11 08:59:55 crc kubenswrapper[4840]: I0311 08:59:55.409746 4840
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-d5xpl" Mar 11 08:59:55 crc kubenswrapper[4840]: I0311 08:59:55.418954 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-d5xpl" Mar 11 08:59:55 crc kubenswrapper[4840]: I0311 08:59:55.439177 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 11 08:59:55 crc kubenswrapper[4840]: I0311 08:59:55.449036 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-fp456"] Mar 11 08:59:55 crc kubenswrapper[4840]: E0311 08:59:55.449285 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b145694-7add-48aa-9321-56703922b613" containerName="route-controller-manager" Mar 11 08:59:55 crc kubenswrapper[4840]: I0311 08:59:55.449307 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b145694-7add-48aa-9321-56703922b613" containerName="route-controller-manager" Mar 11 08:59:55 crc kubenswrapper[4840]: E0311 08:59:55.449332 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e88f7571-e0b3-4144-bec6-59c893a672d1" containerName="pruner" Mar 11 08:59:55 crc kubenswrapper[4840]: I0311 08:59:55.449341 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="e88f7571-e0b3-4144-bec6-59c893a672d1" containerName="pruner" Mar 11 08:59:55 crc kubenswrapper[4840]: I0311 08:59:55.449456 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="e88f7571-e0b3-4144-bec6-59c893a672d1" containerName="pruner" Mar 11 08:59:55 crc kubenswrapper[4840]: I0311 08:59:55.449511 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b145694-7add-48aa-9321-56703922b613" containerName="route-controller-manager" Mar 11 08:59:55 crc kubenswrapper[4840]: I0311 08:59:55.451557 4840 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fp456" Mar 11 08:59:55 crc kubenswrapper[4840]: I0311 08:59:55.458854 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Mar 11 08:59:55 crc kubenswrapper[4840]: I0311 08:59:55.466333 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fp456"] Mar 11 08:59:55 crc kubenswrapper[4840]: I0311 08:59:55.473246 4840 ???:1] "http: TLS handshake error from 192.168.126.11:42510: no serving certificate available for the kubelet" Mar 11 08:59:55 crc kubenswrapper[4840]: I0311 08:59:55.550855 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e88f7571-e0b3-4144-bec6-59c893a672d1-kube-api-access\") pod \"e88f7571-e0b3-4144-bec6-59c893a672d1\" (UID: \"e88f7571-e0b3-4144-bec6-59c893a672d1\") " Mar 11 08:59:55 crc kubenswrapper[4840]: I0311 08:59:55.550914 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e88f7571-e0b3-4144-bec6-59c893a672d1-kubelet-dir\") pod \"e88f7571-e0b3-4144-bec6-59c893a672d1\" (UID: \"e88f7571-e0b3-4144-bec6-59c893a672d1\") " Mar 11 08:59:55 crc kubenswrapper[4840]: I0311 08:59:55.551066 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e88f7571-e0b3-4144-bec6-59c893a672d1-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "e88f7571-e0b3-4144-bec6-59c893a672d1" (UID: "e88f7571-e0b3-4144-bec6-59c893a672d1"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 11 08:59:55 crc kubenswrapper[4840]: I0311 08:59:55.551149 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g25d6\" (UniqueName: \"kubernetes.io/projected/5be5a417-86c5-43f6-b238-9c0c498028ab-kube-api-access-g25d6\") pod \"redhat-operators-fp456\" (UID: \"5be5a417-86c5-43f6-b238-9c0c498028ab\") " pod="openshift-marketplace/redhat-operators-fp456" Mar 11 08:59:55 crc kubenswrapper[4840]: I0311 08:59:55.551184 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5be5a417-86c5-43f6-b238-9c0c498028ab-catalog-content\") pod \"redhat-operators-fp456\" (UID: \"5be5a417-86c5-43f6-b238-9c0c498028ab\") " pod="openshift-marketplace/redhat-operators-fp456" Mar 11 08:59:55 crc kubenswrapper[4840]: I0311 08:59:55.551269 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5be5a417-86c5-43f6-b238-9c0c498028ab-utilities\") pod \"redhat-operators-fp456\" (UID: \"5be5a417-86c5-43f6-b238-9c0c498028ab\") " pod="openshift-marketplace/redhat-operators-fp456" Mar 11 08:59:55 crc kubenswrapper[4840]: I0311 08:59:55.551339 4840 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e88f7571-e0b3-4144-bec6-59c893a672d1-kubelet-dir\") on node \"crc\" DevicePath \"\"" Mar 11 08:59:55 crc kubenswrapper[4840]: I0311 08:59:55.558762 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e88f7571-e0b3-4144-bec6-59c893a672d1-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e88f7571-e0b3-4144-bec6-59c893a672d1" (UID: "e88f7571-e0b3-4144-bec6-59c893a672d1"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:59:55 crc kubenswrapper[4840]: I0311 08:59:55.634370 4840 patch_prober.go:28] interesting pod/downloads-7954f5f757-9k5xp container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body= Mar 11 08:59:55 crc kubenswrapper[4840]: I0311 08:59:55.634436 4840 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-9k5xp" podUID="78462b75-44da-4862-88c5-5cf892a91058" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" Mar 11 08:59:55 crc kubenswrapper[4840]: I0311 08:59:55.634689 4840 patch_prober.go:28] interesting pod/downloads-7954f5f757-9k5xp container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body= Mar 11 08:59:55 crc kubenswrapper[4840]: I0311 08:59:55.634744 4840 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-9k5xp" podUID="78462b75-44da-4862-88c5-5cf892a91058" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" Mar 11 08:59:55 crc kubenswrapper[4840]: I0311 08:59:55.652975 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5be5a417-86c5-43f6-b238-9c0c498028ab-catalog-content\") pod \"redhat-operators-fp456\" (UID: \"5be5a417-86c5-43f6-b238-9c0c498028ab\") " pod="openshift-marketplace/redhat-operators-fp456" Mar 11 08:59:55 crc kubenswrapper[4840]: I0311 08:59:55.653086 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5be5a417-86c5-43f6-b238-9c0c498028ab-utilities\") pod \"redhat-operators-fp456\" (UID: \"5be5a417-86c5-43f6-b238-9c0c498028ab\") " pod="openshift-marketplace/redhat-operators-fp456" Mar 11 08:59:55 crc kubenswrapper[4840]: I0311 08:59:55.653169 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g25d6\" (UniqueName: \"kubernetes.io/projected/5be5a417-86c5-43f6-b238-9c0c498028ab-kube-api-access-g25d6\") pod \"redhat-operators-fp456\" (UID: \"5be5a417-86c5-43f6-b238-9c0c498028ab\") " pod="openshift-marketplace/redhat-operators-fp456" Mar 11 08:59:55 crc kubenswrapper[4840]: I0311 08:59:55.653216 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e88f7571-e0b3-4144-bec6-59c893a672d1-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 11 08:59:55 crc kubenswrapper[4840]: I0311 08:59:55.653990 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5be5a417-86c5-43f6-b238-9c0c498028ab-catalog-content\") pod \"redhat-operators-fp456\" (UID: \"5be5a417-86c5-43f6-b238-9c0c498028ab\") " pod="openshift-marketplace/redhat-operators-fp456" Mar 11 08:59:55 crc kubenswrapper[4840]: I0311 08:59:55.654246 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5be5a417-86c5-43f6-b238-9c0c498028ab-utilities\") pod \"redhat-operators-fp456\" (UID: \"5be5a417-86c5-43f6-b238-9c0c498028ab\") " pod="openshift-marketplace/redhat-operators-fp456" Mar 11 08:59:55 crc kubenswrapper[4840]: I0311 08:59:55.671174 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g25d6\" (UniqueName: \"kubernetes.io/projected/5be5a417-86c5-43f6-b238-9c0c498028ab-kube-api-access-g25d6\") pod \"redhat-operators-fp456\" (UID: 
\"5be5a417-86c5-43f6-b238-9c0c498028ab\") " pod="openshift-marketplace/redhat-operators-fp456" Mar 11 08:59:55 crc kubenswrapper[4840]: I0311 08:59:55.679606 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-xkq7s" Mar 11 08:59:55 crc kubenswrapper[4840]: I0311 08:59:55.680654 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-xkq7s" Mar 11 08:59:55 crc kubenswrapper[4840]: I0311 08:59:55.681697 4840 patch_prober.go:28] interesting pod/console-f9d7485db-xkq7s container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Mar 11 08:59:55 crc kubenswrapper[4840]: I0311 08:59:55.681776 4840 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-xkq7s" podUID="5dc5ef77-d18a-4474-a523-473f27166095" containerName="console" probeResult="failure" output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused" Mar 11 08:59:55 crc kubenswrapper[4840]: I0311 08:59:55.773217 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fp456" Mar 11 08:59:55 crc kubenswrapper[4840]: I0311 08:59:55.853624 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-blqnh"] Mar 11 08:59:55 crc kubenswrapper[4840]: I0311 08:59:55.855265 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-blqnh" Mar 11 08:59:55 crc kubenswrapper[4840]: I0311 08:59:55.862331 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-blqnh"] Mar 11 08:59:55 crc kubenswrapper[4840]: I0311 08:59:55.969844 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k25b6\" (UniqueName: \"kubernetes.io/projected/75259694-613a-407e-aea2-fb828fe927b9-kube-api-access-k25b6\") pod \"redhat-operators-blqnh\" (UID: \"75259694-613a-407e-aea2-fb828fe927b9\") " pod="openshift-marketplace/redhat-operators-blqnh" Mar 11 08:59:55 crc kubenswrapper[4840]: I0311 08:59:55.970425 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75259694-613a-407e-aea2-fb828fe927b9-catalog-content\") pod \"redhat-operators-blqnh\" (UID: \"75259694-613a-407e-aea2-fb828fe927b9\") " pod="openshift-marketplace/redhat-operators-blqnh" Mar 11 08:59:55 crc kubenswrapper[4840]: I0311 08:59:55.970624 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75259694-613a-407e-aea2-fb828fe927b9-utilities\") pod \"redhat-operators-blqnh\" (UID: \"75259694-613a-407e-aea2-fb828fe927b9\") " pod="openshift-marketplace/redhat-operators-blqnh" Mar 11 08:59:55 crc kubenswrapper[4840]: I0311 08:59:55.985633 4840 patch_prober.go:28] interesting pod/router-default-5444994796-wwr6r container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 11 08:59:55 crc kubenswrapper[4840]: [-]has-synced failed: reason withheld Mar 11 08:59:55 crc kubenswrapper[4840]: [+]process-running ok Mar 11 08:59:55 crc kubenswrapper[4840]: healthz check failed Mar 11 
08:59:55 crc kubenswrapper[4840]: I0311 08:59:55.985697 4840 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wwr6r" podUID="4c602adf-1ed4-4779-a4f5-5ff24d9ee648" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 11 08:59:55 crc kubenswrapper[4840]: I0311 08:59:55.987407 4840 generic.go:334] "Generic (PLEG): container finished" podID="dcd6c58d-5984-4122-826c-18ecfe7dde26" containerID="5045fc7e473933ff3dbd287dc324c77eff4af4edab2d305e3b27e7f497553c95" exitCode=0 Mar 11 08:59:55 crc kubenswrapper[4840]: I0311 08:59:55.987649 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v5s8n" event={"ID":"dcd6c58d-5984-4122-826c-18ecfe7dde26","Type":"ContainerDied","Data":"5045fc7e473933ff3dbd287dc324c77eff4af4edab2d305e3b27e7f497553c95"} Mar 11 08:59:55 crc kubenswrapper[4840]: I0311 08:59:55.987728 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v5s8n" event={"ID":"dcd6c58d-5984-4122-826c-18ecfe7dde26","Type":"ContainerStarted","Data":"9e4329a7ed3f60f898ef504f1a7280249b3a2eeec028f6e171fb68ff7a68904f"} Mar 11 08:59:55 crc kubenswrapper[4840]: I0311 08:59:55.990773 4840 generic.go:334] "Generic (PLEG): container finished" podID="a193fea6-fbc1-4fc6-8b65-c5a1928d8be1" containerID="4fa4ef808cfc5c5d8a69c0e64c86d24cc973d2c9c5dc0afa4fbb2dcaaa54cdef" exitCode=0 Mar 11 08:59:55 crc kubenswrapper[4840]: I0311 08:59:55.990825 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"a193fea6-fbc1-4fc6-8b65-c5a1928d8be1","Type":"ContainerDied","Data":"4fa4ef808cfc5c5d8a69c0e64c86d24cc973d2c9c5dc0afa4fbb2dcaaa54cdef"} Mar 11 08:59:56 crc kubenswrapper[4840]: I0311 08:59:56.010962 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f947d96f4-qbmwh"] Mar 11 08:59:56 crc 
kubenswrapper[4840]: I0311 08:59:56.023832 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f947d96f4-qbmwh"] Mar 11 08:59:56 crc kubenswrapper[4840]: I0311 08:59:56.023980 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-f947d96f4-qbmwh" Mar 11 08:59:56 crc kubenswrapper[4840]: I0311 08:59:56.031994 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"e88f7571-e0b3-4144-bec6-59c893a672d1","Type":"ContainerDied","Data":"010eaef79796145c2131f501d0ec3ab6fe59b2918524c8822ae2b581c3bc8be1"} Mar 11 08:59:56 crc kubenswrapper[4840]: I0311 08:59:56.032075 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="010eaef79796145c2131f501d0ec3ab6fe59b2918524c8822ae2b581c3bc8be1" Mar 11 08:59:56 crc kubenswrapper[4840]: I0311 08:59:56.032206 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 11 08:59:56 crc kubenswrapper[4840]: I0311 08:59:56.033234 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 11 08:59:56 crc kubenswrapper[4840]: I0311 08:59:56.033246 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 11 08:59:56 crc kubenswrapper[4840]: I0311 08:59:56.033983 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 11 08:59:56 crc kubenswrapper[4840]: I0311 08:59:56.034530 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 11 08:59:56 crc kubenswrapper[4840]: I0311 08:59:56.033714 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 11 08:59:56 crc kubenswrapper[4840]: I0311 08:59:56.035966 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 11 08:59:56 crc kubenswrapper[4840]: I0311 08:59:56.054691 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-df868c598-zf4cd" event={"ID":"dc370bb3-d2c0-44f7-ab92-ab317db166ee","Type":"ContainerStarted","Data":"44b691d39a38027d262d8d4ec83a1108755e1aa07ea83f3750e99ac390fa64c8"} Mar 11 08:59:56 crc kubenswrapper[4840]: I0311 08:59:56.054757 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-df868c598-zf4cd" event={"ID":"dc370bb3-d2c0-44f7-ab92-ab317db166ee","Type":"ContainerStarted","Data":"1bbdd69419a1c71146d8450d644c6a57f735b68fc65d28f272d80b7b0726a60e"} Mar 11 08:59:56 crc kubenswrapper[4840]: I0311 08:59:56.055183 4840 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-df868c598-zf4cd" Mar 11 08:59:56 crc kubenswrapper[4840]: I0311 08:59:56.068928 4840 generic.go:334] "Generic (PLEG): container finished" podID="3ea6e709-84f7-4603-bcda-6d336d3a96fc" containerID="379399e647d567d2ed8e4696ccef7ecd2c7302d130df8a8eb89b6aa012e12595" exitCode=0 Mar 11 08:59:56 crc kubenswrapper[4840]: I0311 08:59:56.084695 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cf06411b-1fa1-4613-ad76-d979d0909964-serving-cert\") pod \"route-controller-manager-f947d96f4-qbmwh\" (UID: \"cf06411b-1fa1-4613-ad76-d979d0909964\") " pod="openshift-route-controller-manager/route-controller-manager-f947d96f4-qbmwh" Mar 11 08:59:56 crc kubenswrapper[4840]: I0311 08:59:56.084744 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wg6pc\" (UniqueName: \"kubernetes.io/projected/cf06411b-1fa1-4613-ad76-d979d0909964-kube-api-access-wg6pc\") pod \"route-controller-manager-f947d96f4-qbmwh\" (UID: \"cf06411b-1fa1-4613-ad76-d979d0909964\") " pod="openshift-route-controller-manager/route-controller-manager-f947d96f4-qbmwh" Mar 11 08:59:56 crc kubenswrapper[4840]: I0311 08:59:56.084770 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75259694-613a-407e-aea2-fb828fe927b9-utilities\") pod \"redhat-operators-blqnh\" (UID: \"75259694-613a-407e-aea2-fb828fe927b9\") " pod="openshift-marketplace/redhat-operators-blqnh" Mar 11 08:59:56 crc kubenswrapper[4840]: I0311 08:59:56.085100 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k25b6\" (UniqueName: \"kubernetes.io/projected/75259694-613a-407e-aea2-fb828fe927b9-kube-api-access-k25b6\") pod \"redhat-operators-blqnh\" (UID: 
\"75259694-613a-407e-aea2-fb828fe927b9\") " pod="openshift-marketplace/redhat-operators-blqnh" Mar 11 08:59:56 crc kubenswrapper[4840]: I0311 08:59:56.085194 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75259694-613a-407e-aea2-fb828fe927b9-utilities\") pod \"redhat-operators-blqnh\" (UID: \"75259694-613a-407e-aea2-fb828fe927b9\") " pod="openshift-marketplace/redhat-operators-blqnh" Mar 11 08:59:56 crc kubenswrapper[4840]: I0311 08:59:56.085211 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cf06411b-1fa1-4613-ad76-d979d0909964-client-ca\") pod \"route-controller-manager-f947d96f4-qbmwh\" (UID: \"cf06411b-1fa1-4613-ad76-d979d0909964\") " pod="openshift-route-controller-manager/route-controller-manager-f947d96f4-qbmwh" Mar 11 08:59:56 crc kubenswrapper[4840]: I0311 08:59:56.085383 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75259694-613a-407e-aea2-fb828fe927b9-catalog-content\") pod \"redhat-operators-blqnh\" (UID: \"75259694-613a-407e-aea2-fb828fe927b9\") " pod="openshift-marketplace/redhat-operators-blqnh" Mar 11 08:59:56 crc kubenswrapper[4840]: I0311 08:59:56.085456 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf06411b-1fa1-4613-ad76-d979d0909964-config\") pod \"route-controller-manager-f947d96f4-qbmwh\" (UID: \"cf06411b-1fa1-4613-ad76-d979d0909964\") " pod="openshift-route-controller-manager/route-controller-manager-f947d96f4-qbmwh" Mar 11 08:59:56 crc kubenswrapper[4840]: I0311 08:59:56.086452 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75259694-613a-407e-aea2-fb828fe927b9-catalog-content\") pod 
\"redhat-operators-blqnh\" (UID: \"75259694-613a-407e-aea2-fb828fe927b9\") " pod="openshift-marketplace/redhat-operators-blqnh" Mar 11 08:59:56 crc kubenswrapper[4840]: I0311 08:59:56.088690 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b145694-7add-48aa-9321-56703922b613" path="/var/lib/kubelet/pods/7b145694-7add-48aa-9321-56703922b613/volumes" Mar 11 08:59:56 crc kubenswrapper[4840]: I0311 08:59:56.090033 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db966398-cd84-4fb6-bedf-f1f13c670ce8" path="/var/lib/kubelet/pods/db966398-cd84-4fb6-bedf-f1f13c670ce8/volumes" Mar 11 08:59:56 crc kubenswrapper[4840]: I0311 08:59:56.090635 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fp456"] Mar 11 08:59:56 crc kubenswrapper[4840]: I0311 08:59:56.090698 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-df868c598-zf4cd" Mar 11 08:59:56 crc kubenswrapper[4840]: I0311 08:59:56.090713 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-pdrj8" Mar 11 08:59:56 crc kubenswrapper[4840]: I0311 08:59:56.090751 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-d5xpl" Mar 11 08:59:56 crc kubenswrapper[4840]: I0311 08:59:56.090770 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-swqkn" event={"ID":"3ea6e709-84f7-4603-bcda-6d336d3a96fc","Type":"ContainerDied","Data":"379399e647d567d2ed8e4696ccef7ecd2c7302d130df8a8eb89b6aa012e12595"} Mar 11 08:59:56 crc kubenswrapper[4840]: I0311 08:59:56.090802 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-swqkn" 
event={"ID":"3ea6e709-84f7-4603-bcda-6d336d3a96fc","Type":"ContainerStarted","Data":"2e4d1a9ae85704eee3ed25d49fb086d2544ecf6c3399ceb0af6164e349ba453d"} Mar 11 08:59:56 crc kubenswrapper[4840]: I0311 08:59:56.092729 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-j4zs8" Mar 11 08:59:56 crc kubenswrapper[4840]: I0311 08:59:56.104361 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-df868c598-zf4cd" podStartSLOduration=2.10432645 podStartE2EDuration="2.10432645s" podCreationTimestamp="2026-03-11 08:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:59:56.100867552 +0000 UTC m=+194.766537377" watchObservedRunningTime="2026-03-11 08:59:56.10432645 +0000 UTC m=+194.769996265" Mar 11 08:59:56 crc kubenswrapper[4840]: I0311 08:59:56.104538 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k25b6\" (UniqueName: \"kubernetes.io/projected/75259694-613a-407e-aea2-fb828fe927b9-kube-api-access-k25b6\") pod \"redhat-operators-blqnh\" (UID: \"75259694-613a-407e-aea2-fb828fe927b9\") " pod="openshift-marketplace/redhat-operators-blqnh" Mar 11 08:59:56 crc kubenswrapper[4840]: I0311 08:59:56.188837 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-blqnh" Mar 11 08:59:56 crc kubenswrapper[4840]: I0311 08:59:56.193297 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cf06411b-1fa1-4613-ad76-d979d0909964-serving-cert\") pod \"route-controller-manager-f947d96f4-qbmwh\" (UID: \"cf06411b-1fa1-4613-ad76-d979d0909964\") " pod="openshift-route-controller-manager/route-controller-manager-f947d96f4-qbmwh" Mar 11 08:59:56 crc kubenswrapper[4840]: I0311 08:59:56.193363 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wg6pc\" (UniqueName: \"kubernetes.io/projected/cf06411b-1fa1-4613-ad76-d979d0909964-kube-api-access-wg6pc\") pod \"route-controller-manager-f947d96f4-qbmwh\" (UID: \"cf06411b-1fa1-4613-ad76-d979d0909964\") " pod="openshift-route-controller-manager/route-controller-manager-f947d96f4-qbmwh" Mar 11 08:59:56 crc kubenswrapper[4840]: I0311 08:59:56.202762 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cf06411b-1fa1-4613-ad76-d979d0909964-client-ca\") pod \"route-controller-manager-f947d96f4-qbmwh\" (UID: \"cf06411b-1fa1-4613-ad76-d979d0909964\") " pod="openshift-route-controller-manager/route-controller-manager-f947d96f4-qbmwh" Mar 11 08:59:56 crc kubenswrapper[4840]: I0311 08:59:56.202912 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf06411b-1fa1-4613-ad76-d979d0909964-config\") pod \"route-controller-manager-f947d96f4-qbmwh\" (UID: \"cf06411b-1fa1-4613-ad76-d979d0909964\") " pod="openshift-route-controller-manager/route-controller-manager-f947d96f4-qbmwh" Mar 11 08:59:56 crc kubenswrapper[4840]: I0311 08:59:56.205978 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/cf06411b-1fa1-4613-ad76-d979d0909964-client-ca\") pod \"route-controller-manager-f947d96f4-qbmwh\" (UID: \"cf06411b-1fa1-4613-ad76-d979d0909964\") " pod="openshift-route-controller-manager/route-controller-manager-f947d96f4-qbmwh" Mar 11 08:59:56 crc kubenswrapper[4840]: I0311 08:59:56.207750 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf06411b-1fa1-4613-ad76-d979d0909964-config\") pod \"route-controller-manager-f947d96f4-qbmwh\" (UID: \"cf06411b-1fa1-4613-ad76-d979d0909964\") " pod="openshift-route-controller-manager/route-controller-manager-f947d96f4-qbmwh" Mar 11 08:59:56 crc kubenswrapper[4840]: I0311 08:59:56.218017 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cf06411b-1fa1-4613-ad76-d979d0909964-serving-cert\") pod \"route-controller-manager-f947d96f4-qbmwh\" (UID: \"cf06411b-1fa1-4613-ad76-d979d0909964\") " pod="openshift-route-controller-manager/route-controller-manager-f947d96f4-qbmwh" Mar 11 08:59:56 crc kubenswrapper[4840]: I0311 08:59:56.245617 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wg6pc\" (UniqueName: \"kubernetes.io/projected/cf06411b-1fa1-4613-ad76-d979d0909964-kube-api-access-wg6pc\") pod \"route-controller-manager-f947d96f4-qbmwh\" (UID: \"cf06411b-1fa1-4613-ad76-d979d0909964\") " pod="openshift-route-controller-manager/route-controller-manager-f947d96f4-qbmwh" Mar 11 08:59:56 crc kubenswrapper[4840]: I0311 08:59:56.365488 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-f947d96f4-qbmwh" Mar 11 08:59:56 crc kubenswrapper[4840]: I0311 08:59:56.665841 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-blqnh"] Mar 11 08:59:56 crc kubenswrapper[4840]: W0311 08:59:56.729187 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod75259694_613a_407e_aea2_fb828fe927b9.slice/crio-f44fc1ecbf5c96bed3f9583fa74b3b31930e27cdb857843ea7465ed74d051d4d WatchSource:0}: Error finding container f44fc1ecbf5c96bed3f9583fa74b3b31930e27cdb857843ea7465ed74d051d4d: Status 404 returned error can't find the container with id f44fc1ecbf5c96bed3f9583fa74b3b31930e27cdb857843ea7465ed74d051d4d Mar 11 08:59:56 crc kubenswrapper[4840]: I0311 08:59:56.768427 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f947d96f4-qbmwh"] Mar 11 08:59:56 crc kubenswrapper[4840]: I0311 08:59:56.986579 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-wwr6r" Mar 11 08:59:56 crc kubenswrapper[4840]: I0311 08:59:56.989600 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-wwr6r" Mar 11 08:59:57 crc kubenswrapper[4840]: I0311 08:59:57.094680 4840 generic.go:334] "Generic (PLEG): container finished" podID="5be5a417-86c5-43f6-b238-9c0c498028ab" containerID="f93c210b1ad4467215d496c7791e287e76bf65713b6265a6966346dffb07b48c" exitCode=0 Mar 11 08:59:57 crc kubenswrapper[4840]: I0311 08:59:57.095646 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fp456" event={"ID":"5be5a417-86c5-43f6-b238-9c0c498028ab","Type":"ContainerDied","Data":"f93c210b1ad4467215d496c7791e287e76bf65713b6265a6966346dffb07b48c"} Mar 11 08:59:57 crc 
kubenswrapper[4840]: I0311 08:59:57.095674 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fp456" event={"ID":"5be5a417-86c5-43f6-b238-9c0c498028ab","Type":"ContainerStarted","Data":"028b505f1510d8a6f4a82817145efd22de810d59525cfec258a7d75f7917315c"} Mar 11 08:59:57 crc kubenswrapper[4840]: I0311 08:59:57.122747 4840 generic.go:334] "Generic (PLEG): container finished" podID="75259694-613a-407e-aea2-fb828fe927b9" containerID="59890ad7678c874d5f21170b3e018913928b7b26d0b0f7e4d60db020991ff843" exitCode=0 Mar 11 08:59:57 crc kubenswrapper[4840]: I0311 08:59:57.122857 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-blqnh" event={"ID":"75259694-613a-407e-aea2-fb828fe927b9","Type":"ContainerDied","Data":"59890ad7678c874d5f21170b3e018913928b7b26d0b0f7e4d60db020991ff843"} Mar 11 08:59:57 crc kubenswrapper[4840]: I0311 08:59:57.122885 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-blqnh" event={"ID":"75259694-613a-407e-aea2-fb828fe927b9","Type":"ContainerStarted","Data":"f44fc1ecbf5c96bed3f9583fa74b3b31930e27cdb857843ea7465ed74d051d4d"} Mar 11 08:59:57 crc kubenswrapper[4840]: I0311 08:59:57.154515 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-f947d96f4-qbmwh" event={"ID":"cf06411b-1fa1-4613-ad76-d979d0909964","Type":"ContainerStarted","Data":"c496c06db4ff8715005fdaf8e25eac41956290769f6c9d1c20df0a4ca02b5ef5"} Mar 11 08:59:57 crc kubenswrapper[4840]: I0311 08:59:57.155012 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-f947d96f4-qbmwh" event={"ID":"cf06411b-1fa1-4613-ad76-d979d0909964","Type":"ContainerStarted","Data":"d5bd5edc5847b6471165fec111149761736c1f0cfceef453b053c0b93a39cf12"} Mar 11 08:59:57 crc kubenswrapper[4840]: I0311 08:59:57.176299 4840 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-f947d96f4-qbmwh" podStartSLOduration=3.176277733 podStartE2EDuration="3.176277733s" podCreationTimestamp="2026-03-11 08:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 08:59:57.173536145 +0000 UTC m=+195.839205960" watchObservedRunningTime="2026-03-11 08:59:57.176277733 +0000 UTC m=+195.841947548" Mar 11 08:59:57 crc kubenswrapper[4840]: I0311 08:59:57.500912 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 11 08:59:57 crc kubenswrapper[4840]: I0311 08:59:57.537525 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a193fea6-fbc1-4fc6-8b65-c5a1928d8be1-kube-api-access\") pod \"a193fea6-fbc1-4fc6-8b65-c5a1928d8be1\" (UID: \"a193fea6-fbc1-4fc6-8b65-c5a1928d8be1\") " Mar 11 08:59:57 crc kubenswrapper[4840]: I0311 08:59:57.537590 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a193fea6-fbc1-4fc6-8b65-c5a1928d8be1-kubelet-dir\") pod \"a193fea6-fbc1-4fc6-8b65-c5a1928d8be1\" (UID: \"a193fea6-fbc1-4fc6-8b65-c5a1928d8be1\") " Mar 11 08:59:57 crc kubenswrapper[4840]: I0311 08:59:57.539007 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a193fea6-fbc1-4fc6-8b65-c5a1928d8be1-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "a193fea6-fbc1-4fc6-8b65-c5a1928d8be1" (UID: "a193fea6-fbc1-4fc6-8b65-c5a1928d8be1"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 11 08:59:57 crc kubenswrapper[4840]: I0311 08:59:57.551876 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a193fea6-fbc1-4fc6-8b65-c5a1928d8be1-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "a193fea6-fbc1-4fc6-8b65-c5a1928d8be1" (UID: "a193fea6-fbc1-4fc6-8b65-c5a1928d8be1"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 08:59:57 crc kubenswrapper[4840]: I0311 08:59:57.640148 4840 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a193fea6-fbc1-4fc6-8b65-c5a1928d8be1-kubelet-dir\") on node \"crc\" DevicePath \"\"" Mar 11 08:59:57 crc kubenswrapper[4840]: I0311 08:59:57.640202 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a193fea6-fbc1-4fc6-8b65-c5a1928d8be1-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 11 08:59:57 crc kubenswrapper[4840]: I0311 08:59:57.907877 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" Mar 11 08:59:58 crc kubenswrapper[4840]: I0311 08:59:58.195852 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 11 08:59:58 crc kubenswrapper[4840]: I0311 08:59:58.196268 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"a193fea6-fbc1-4fc6-8b65-c5a1928d8be1","Type":"ContainerDied","Data":"726d4d26556c84f10052c81679db2aa4c7f6c357066737cc48113c2cd816c524"} Mar 11 08:59:58 crc kubenswrapper[4840]: I0311 08:59:58.196297 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="726d4d26556c84f10052c81679db2aa4c7f6c357066737cc48113c2cd816c524" Mar 11 08:59:58 crc kubenswrapper[4840]: I0311 08:59:58.196319 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-f947d96f4-qbmwh" Mar 11 08:59:58 crc kubenswrapper[4840]: I0311 08:59:58.205514 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-f947d96f4-qbmwh" Mar 11 08:59:58 crc kubenswrapper[4840]: I0311 08:59:58.451978 4840 ???:1] "http: TLS handshake error from 192.168.126.11:42514: no serving certificate available for the kubelet" Mar 11 09:00:00 crc kubenswrapper[4840]: I0311 09:00:00.147786 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29553660-t75r2"] Mar 11 09:00:00 crc kubenswrapper[4840]: E0311 09:00:00.148027 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a193fea6-fbc1-4fc6-8b65-c5a1928d8be1" containerName="pruner" Mar 11 09:00:00 crc kubenswrapper[4840]: I0311 09:00:00.148040 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="a193fea6-fbc1-4fc6-8b65-c5a1928d8be1" containerName="pruner" Mar 11 09:00:00 crc kubenswrapper[4840]: I0311 09:00:00.148137 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="a193fea6-fbc1-4fc6-8b65-c5a1928d8be1" containerName="pruner" Mar 11 09:00:00 crc 
kubenswrapper[4840]: I0311 09:00:00.148502 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553660-t75r2" Mar 11 09:00:00 crc kubenswrapper[4840]: I0311 09:00:00.154913 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 11 09:00:00 crc kubenswrapper[4840]: I0311 09:00:00.155023 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-q6lwc" Mar 11 09:00:00 crc kubenswrapper[4840]: I0311 09:00:00.155196 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 11 09:00:00 crc kubenswrapper[4840]: I0311 09:00:00.156024 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29553660-ftnls"] Mar 11 09:00:00 crc kubenswrapper[4840]: I0311 09:00:00.157417 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29553660-ftnls" Mar 11 09:00:00 crc kubenswrapper[4840]: I0311 09:00:00.158949 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 11 09:00:00 crc kubenswrapper[4840]: I0311 09:00:00.159420 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 11 09:00:00 crc kubenswrapper[4840]: I0311 09:00:00.160084 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553660-t75r2"] Mar 11 09:00:00 crc kubenswrapper[4840]: I0311 09:00:00.162654 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29553660-ftnls"] Mar 11 09:00:00 crc kubenswrapper[4840]: I0311 09:00:00.191414 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e8567275-d6d0-4690-a442-76581bbcf793-secret-volume\") pod \"collect-profiles-29553660-ftnls\" (UID: \"e8567275-d6d0-4690-a442-76581bbcf793\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553660-ftnls" Mar 11 09:00:00 crc kubenswrapper[4840]: I0311 09:00:00.191532 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sq2dh\" (UniqueName: \"kubernetes.io/projected/c4172593-e97b-48a1-b064-8051cd6aef46-kube-api-access-sq2dh\") pod \"auto-csr-approver-29553660-t75r2\" (UID: \"c4172593-e97b-48a1-b064-8051cd6aef46\") " pod="openshift-infra/auto-csr-approver-29553660-t75r2" Mar 11 09:00:00 crc kubenswrapper[4840]: I0311 09:00:00.191636 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmjbx\" (UniqueName: 
\"kubernetes.io/projected/e8567275-d6d0-4690-a442-76581bbcf793-kube-api-access-pmjbx\") pod \"collect-profiles-29553660-ftnls\" (UID: \"e8567275-d6d0-4690-a442-76581bbcf793\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553660-ftnls" Mar 11 09:00:00 crc kubenswrapper[4840]: I0311 09:00:00.191662 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e8567275-d6d0-4690-a442-76581bbcf793-config-volume\") pod \"collect-profiles-29553660-ftnls\" (UID: \"e8567275-d6d0-4690-a442-76581bbcf793\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553660-ftnls" Mar 11 09:00:00 crc kubenswrapper[4840]: I0311 09:00:00.292762 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e8567275-d6d0-4690-a442-76581bbcf793-secret-volume\") pod \"collect-profiles-29553660-ftnls\" (UID: \"e8567275-d6d0-4690-a442-76581bbcf793\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553660-ftnls" Mar 11 09:00:00 crc kubenswrapper[4840]: I0311 09:00:00.292829 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sq2dh\" (UniqueName: \"kubernetes.io/projected/c4172593-e97b-48a1-b064-8051cd6aef46-kube-api-access-sq2dh\") pod \"auto-csr-approver-29553660-t75r2\" (UID: \"c4172593-e97b-48a1-b064-8051cd6aef46\") " pod="openshift-infra/auto-csr-approver-29553660-t75r2" Mar 11 09:00:00 crc kubenswrapper[4840]: I0311 09:00:00.292936 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e8567275-d6d0-4690-a442-76581bbcf793-config-volume\") pod \"collect-profiles-29553660-ftnls\" (UID: \"e8567275-d6d0-4690-a442-76581bbcf793\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553660-ftnls" Mar 11 09:00:00 crc kubenswrapper[4840]: I0311 
09:00:00.292965 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmjbx\" (UniqueName: \"kubernetes.io/projected/e8567275-d6d0-4690-a442-76581bbcf793-kube-api-access-pmjbx\") pod \"collect-profiles-29553660-ftnls\" (UID: \"e8567275-d6d0-4690-a442-76581bbcf793\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553660-ftnls" Mar 11 09:00:00 crc kubenswrapper[4840]: I0311 09:00:00.296532 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e8567275-d6d0-4690-a442-76581bbcf793-config-volume\") pod \"collect-profiles-29553660-ftnls\" (UID: \"e8567275-d6d0-4690-a442-76581bbcf793\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553660-ftnls" Mar 11 09:00:00 crc kubenswrapper[4840]: I0311 09:00:00.304993 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e8567275-d6d0-4690-a442-76581bbcf793-secret-volume\") pod \"collect-profiles-29553660-ftnls\" (UID: \"e8567275-d6d0-4690-a442-76581bbcf793\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553660-ftnls" Mar 11 09:00:00 crc kubenswrapper[4840]: I0311 09:00:00.311171 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmjbx\" (UniqueName: \"kubernetes.io/projected/e8567275-d6d0-4690-a442-76581bbcf793-kube-api-access-pmjbx\") pod \"collect-profiles-29553660-ftnls\" (UID: \"e8567275-d6d0-4690-a442-76581bbcf793\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553660-ftnls" Mar 11 09:00:00 crc kubenswrapper[4840]: I0311 09:00:00.317664 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sq2dh\" (UniqueName: \"kubernetes.io/projected/c4172593-e97b-48a1-b064-8051cd6aef46-kube-api-access-sq2dh\") pod \"auto-csr-approver-29553660-t75r2\" (UID: \"c4172593-e97b-48a1-b064-8051cd6aef46\") " 
pod="openshift-infra/auto-csr-approver-29553660-t75r2" Mar 11 09:00:00 crc kubenswrapper[4840]: I0311 09:00:00.437312 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-cxgxb" Mar 11 09:00:00 crc kubenswrapper[4840]: I0311 09:00:00.488920 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553660-t75r2" Mar 11 09:00:00 crc kubenswrapper[4840]: I0311 09:00:00.500152 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29553660-ftnls" Mar 11 09:00:00 crc kubenswrapper[4840]: I0311 09:00:00.623401 4840 ???:1] "http: TLS handshake error from 192.168.126.11:42522: no serving certificate available for the kubelet" Mar 11 09:00:01 crc kubenswrapper[4840]: I0311 09:00:01.178440 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29553660-ftnls"] Mar 11 09:00:01 crc kubenswrapper[4840]: W0311 09:00:01.200638 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode8567275_d6d0_4690_a442_76581bbcf793.slice/crio-7b60ae0d8e199f34eb9a6b3feb9a731f3d124d8b451319252203214ae70f9472 WatchSource:0}: Error finding container 7b60ae0d8e199f34eb9a6b3feb9a731f3d124d8b451319252203214ae70f9472: Status 404 returned error can't find the container with id 7b60ae0d8e199f34eb9a6b3feb9a731f3d124d8b451319252203214ae70f9472 Mar 11 09:00:01 crc kubenswrapper[4840]: I0311 09:00:01.251231 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29553660-ftnls" event={"ID":"e8567275-d6d0-4690-a442-76581bbcf793","Type":"ContainerStarted","Data":"7b60ae0d8e199f34eb9a6b3feb9a731f3d124d8b451319252203214ae70f9472"} Mar 11 09:00:01 crc kubenswrapper[4840]: I0311 09:00:01.261208 4840 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-infra/auto-csr-approver-29553660-t75r2"] Mar 11 09:00:01 crc kubenswrapper[4840]: W0311 09:00:01.310020 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc4172593_e97b_48a1_b064_8051cd6aef46.slice/crio-3120a6d63dff40053fc154b7a41ef936ea0f61605cb7f79cbe4540b8f2be04a4 WatchSource:0}: Error finding container 3120a6d63dff40053fc154b7a41ef936ea0f61605cb7f79cbe4540b8f2be04a4: Status 404 returned error can't find the container with id 3120a6d63dff40053fc154b7a41ef936ea0f61605cb7f79cbe4540b8f2be04a4 Mar 11 09:00:02 crc kubenswrapper[4840]: I0311 09:00:02.275036 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553660-t75r2" event={"ID":"c4172593-e97b-48a1-b064-8051cd6aef46","Type":"ContainerStarted","Data":"3120a6d63dff40053fc154b7a41ef936ea0f61605cb7f79cbe4540b8f2be04a4"} Mar 11 09:00:02 crc kubenswrapper[4840]: I0311 09:00:02.282087 4840 generic.go:334] "Generic (PLEG): container finished" podID="e8567275-d6d0-4690-a442-76581bbcf793" containerID="390e185053bf789329840250d721b084ea7b2658b30b610c5944f1c0387b6ead" exitCode=0 Mar 11 09:00:02 crc kubenswrapper[4840]: I0311 09:00:02.282128 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29553660-ftnls" event={"ID":"e8567275-d6d0-4690-a442-76581bbcf793","Type":"ContainerDied","Data":"390e185053bf789329840250d721b084ea7b2658b30b610c5944f1c0387b6ead"} Mar 11 09:00:05 crc kubenswrapper[4840]: I0311 09:00:05.639591 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-9k5xp" Mar 11 09:00:05 crc kubenswrapper[4840]: I0311 09:00:05.684239 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-xkq7s" Mar 11 09:00:05 crc kubenswrapper[4840]: I0311 09:00:05.688621 4840 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-xkq7s" Mar 11 09:00:10 crc kubenswrapper[4840]: I0311 09:00:10.106069 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29553660-ftnls" Mar 11 09:00:10 crc kubenswrapper[4840]: I0311 09:00:10.159943 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pmjbx\" (UniqueName: \"kubernetes.io/projected/e8567275-d6d0-4690-a442-76581bbcf793-kube-api-access-pmjbx\") pod \"e8567275-d6d0-4690-a442-76581bbcf793\" (UID: \"e8567275-d6d0-4690-a442-76581bbcf793\") " Mar 11 09:00:10 crc kubenswrapper[4840]: I0311 09:00:10.160052 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e8567275-d6d0-4690-a442-76581bbcf793-secret-volume\") pod \"e8567275-d6d0-4690-a442-76581bbcf793\" (UID: \"e8567275-d6d0-4690-a442-76581bbcf793\") " Mar 11 09:00:10 crc kubenswrapper[4840]: I0311 09:00:10.160114 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e8567275-d6d0-4690-a442-76581bbcf793-config-volume\") pod \"e8567275-d6d0-4690-a442-76581bbcf793\" (UID: \"e8567275-d6d0-4690-a442-76581bbcf793\") " Mar 11 09:00:10 crc kubenswrapper[4840]: I0311 09:00:10.161876 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8567275-d6d0-4690-a442-76581bbcf793-config-volume" (OuterVolumeSpecName: "config-volume") pod "e8567275-d6d0-4690-a442-76581bbcf793" (UID: "e8567275-d6d0-4690-a442-76581bbcf793"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:00:10 crc kubenswrapper[4840]: I0311 09:00:10.167479 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8567275-d6d0-4690-a442-76581bbcf793-kube-api-access-pmjbx" (OuterVolumeSpecName: "kube-api-access-pmjbx") pod "e8567275-d6d0-4690-a442-76581bbcf793" (UID: "e8567275-d6d0-4690-a442-76581bbcf793"). InnerVolumeSpecName "kube-api-access-pmjbx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:00:10 crc kubenswrapper[4840]: I0311 09:00:10.172967 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8567275-d6d0-4690-a442-76581bbcf793-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e8567275-d6d0-4690-a442-76581bbcf793" (UID: "e8567275-d6d0-4690-a442-76581bbcf793"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:00:10 crc kubenswrapper[4840]: I0311 09:00:10.262215 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pmjbx\" (UniqueName: \"kubernetes.io/projected/e8567275-d6d0-4690-a442-76581bbcf793-kube-api-access-pmjbx\") on node \"crc\" DevicePath \"\"" Mar 11 09:00:10 crc kubenswrapper[4840]: I0311 09:00:10.262621 4840 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e8567275-d6d0-4690-a442-76581bbcf793-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 11 09:00:10 crc kubenswrapper[4840]: I0311 09:00:10.262638 4840 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e8567275-d6d0-4690-a442-76581bbcf793-config-volume\") on node \"crc\" DevicePath \"\"" Mar 11 09:00:10 crc kubenswrapper[4840]: I0311 09:00:10.374057 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29553660-ftnls" 
event={"ID":"e8567275-d6d0-4690-a442-76581bbcf793","Type":"ContainerDied","Data":"7b60ae0d8e199f34eb9a6b3feb9a731f3d124d8b451319252203214ae70f9472"} Mar 11 09:00:10 crc kubenswrapper[4840]: I0311 09:00:10.374112 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7b60ae0d8e199f34eb9a6b3feb9a731f3d124d8b451319252203214ae70f9472" Mar 11 09:00:10 crc kubenswrapper[4840]: I0311 09:00:10.374189 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29553660-ftnls" Mar 11 09:00:12 crc kubenswrapper[4840]: I0311 09:00:12.563933 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-df868c598-zf4cd"] Mar 11 09:00:12 crc kubenswrapper[4840]: I0311 09:00:12.564156 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-df868c598-zf4cd" podUID="dc370bb3-d2c0-44f7-ab92-ab317db166ee" containerName="controller-manager" containerID="cri-o://44b691d39a38027d262d8d4ec83a1108755e1aa07ea83f3750e99ac390fa64c8" gracePeriod=30 Mar 11 09:00:12 crc kubenswrapper[4840]: I0311 09:00:12.603062 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f947d96f4-qbmwh"] Mar 11 09:00:12 crc kubenswrapper[4840]: I0311 09:00:12.603334 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-f947d96f4-qbmwh" podUID="cf06411b-1fa1-4613-ad76-d979d0909964" containerName="route-controller-manager" containerID="cri-o://c496c06db4ff8715005fdaf8e25eac41956290769f6c9d1c20df0a4ca02b5ef5" gracePeriod=30 Mar 11 09:00:13 crc kubenswrapper[4840]: I0311 09:00:13.410023 4840 generic.go:334] "Generic (PLEG): container finished" podID="cf06411b-1fa1-4613-ad76-d979d0909964" 
containerID="c496c06db4ff8715005fdaf8e25eac41956290769f6c9d1c20df0a4ca02b5ef5" exitCode=0 Mar 11 09:00:13 crc kubenswrapper[4840]: I0311 09:00:13.410104 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-f947d96f4-qbmwh" event={"ID":"cf06411b-1fa1-4613-ad76-d979d0909964","Type":"ContainerDied","Data":"c496c06db4ff8715005fdaf8e25eac41956290769f6c9d1c20df0a4ca02b5ef5"} Mar 11 09:00:13 crc kubenswrapper[4840]: I0311 09:00:13.416909 4840 generic.go:334] "Generic (PLEG): container finished" podID="dc370bb3-d2c0-44f7-ab92-ab317db166ee" containerID="44b691d39a38027d262d8d4ec83a1108755e1aa07ea83f3750e99ac390fa64c8" exitCode=0 Mar 11 09:00:13 crc kubenswrapper[4840]: I0311 09:00:13.416974 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-df868c598-zf4cd" event={"ID":"dc370bb3-d2c0-44f7-ab92-ab317db166ee","Type":"ContainerDied","Data":"44b691d39a38027d262d8d4ec83a1108755e1aa07ea83f3750e99ac390fa64c8"} Mar 11 09:00:13 crc kubenswrapper[4840]: I0311 09:00:13.949097 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-pdrj8" Mar 11 09:00:14 crc kubenswrapper[4840]: I0311 09:00:14.428677 4840 patch_prober.go:28] interesting pod/controller-manager-df868c598-zf4cd container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.51:8443/healthz\": dial tcp 10.217.0.51:8443: connect: connection refused" start-of-body= Mar 11 09:00:14 crc kubenswrapper[4840]: I0311 09:00:14.428781 4840 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-df868c598-zf4cd" podUID="dc370bb3-d2c0-44f7-ab92-ab317db166ee" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.51:8443/healthz\": dial tcp 10.217.0.51:8443: connect: connection refused" Mar 11 
09:00:16 crc kubenswrapper[4840]: I0311 09:00:16.367373 4840 patch_prober.go:28] interesting pod/route-controller-manager-f947d96f4-qbmwh container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.55:8443/healthz\": dial tcp 10.217.0.55:8443: connect: connection refused" start-of-body= Mar 11 09:00:16 crc kubenswrapper[4840]: I0311 09:00:16.367990 4840 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-f947d96f4-qbmwh" podUID="cf06411b-1fa1-4613-ad76-d979d0909964" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.55:8443/healthz\": dial tcp 10.217.0.55:8443: connect: connection refused" Mar 11 09:00:16 crc kubenswrapper[4840]: I0311 09:00:16.981952 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 09:00:16 crc kubenswrapper[4840]: I0311 09:00:16.982088 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 09:00:16 crc kubenswrapper[4840]: I0311 09:00:16.986402 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Mar 11 09:00:16 crc kubenswrapper[4840]: I0311 09:00:16.986540 4840 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-network-console"/"networking-console-plugin" Mar 11 09:00:17 crc kubenswrapper[4840]: I0311 09:00:17.003029 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 09:00:17 crc kubenswrapper[4840]: I0311 09:00:17.011651 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 09:00:17 crc kubenswrapper[4840]: I0311 09:00:17.083557 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 09:00:17 crc kubenswrapper[4840]: I0311 09:00:17.084080 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 09:00:17 crc kubenswrapper[4840]: I0311 09:00:17.085868 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Mar 11 09:00:17 crc 
kubenswrapper[4840]: I0311 09:00:17.096447 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Mar 11 09:00:17 crc kubenswrapper[4840]: I0311 09:00:17.109757 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 09:00:17 crc kubenswrapper[4840]: I0311 09:00:17.114353 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 09:00:17 crc kubenswrapper[4840]: I0311 09:00:17.189503 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 11 09:00:17 crc kubenswrapper[4840]: I0311 09:00:17.203508 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 09:00:17 crc kubenswrapper[4840]: I0311 09:00:17.236727 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 11 09:00:21 crc kubenswrapper[4840]: I0311 09:00:21.131952 4840 ???:1] "http: TLS handshake error from 192.168.126.11:55688: no serving certificate available for the kubelet" Mar 11 09:00:21 crc kubenswrapper[4840]: E0311 09:00:21.831141 4840 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Mar 11 09:00:21 crc kubenswrapper[4840]: E0311 09:00:21.831457 4840 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kszv8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:ni
l,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-v5s8n_openshift-marketplace(dcd6c58d-5984-4122-826c-18ecfe7dde26): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 11 09:00:21 crc kubenswrapper[4840]: E0311 09:00:21.833371 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-v5s8n" podUID="dcd6c58d-5984-4122-826c-18ecfe7dde26" Mar 11 09:00:25 crc kubenswrapper[4840]: I0311 09:00:25.013780 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2kbzn" Mar 11 09:00:25 crc kubenswrapper[4840]: I0311 09:00:25.428405 4840 patch_prober.go:28] interesting pod/controller-manager-df868c598-zf4cd container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.51:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 11 09:00:25 crc kubenswrapper[4840]: I0311 09:00:25.428494 4840 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-df868c598-zf4cd" podUID="dc370bb3-d2c0-44f7-ab92-ab317db166ee" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.51:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 11 09:00:25 
crc kubenswrapper[4840]: E0311 09:00:25.918571 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-v5s8n" podUID="dcd6c58d-5984-4122-826c-18ecfe7dde26" Mar 11 09:00:25 crc kubenswrapper[4840]: E0311 09:00:25.993435 4840 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Mar 11 09:00:25 crc kubenswrapper[4840]: E0311 09:00:25.993641 4840 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k25b6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-blqnh_openshift-marketplace(75259694-613a-407e-aea2-fb828fe927b9): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 11 09:00:25 crc kubenswrapper[4840]: E0311 09:00:25.995636 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-blqnh" podUID="75259694-613a-407e-aea2-fb828fe927b9" Mar 11 09:00:26 crc 
kubenswrapper[4840]: I0311 09:00:26.581783 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Mar 11 09:00:26 crc kubenswrapper[4840]: E0311 09:00:26.582054 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8567275-d6d0-4690-a442-76581bbcf793" containerName="collect-profiles" Mar 11 09:00:26 crc kubenswrapper[4840]: I0311 09:00:26.582070 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8567275-d6d0-4690-a442-76581bbcf793" containerName="collect-profiles" Mar 11 09:00:26 crc kubenswrapper[4840]: I0311 09:00:26.582208 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8567275-d6d0-4690-a442-76581bbcf793" containerName="collect-profiles" Mar 11 09:00:26 crc kubenswrapper[4840]: I0311 09:00:26.582671 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 11 09:00:26 crc kubenswrapper[4840]: I0311 09:00:26.584994 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Mar 11 09:00:26 crc kubenswrapper[4840]: I0311 09:00:26.585197 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Mar 11 09:00:26 crc kubenswrapper[4840]: I0311 09:00:26.601094 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Mar 11 09:00:26 crc kubenswrapper[4840]: I0311 09:00:26.636096 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9424d118-b912-4307-a8af-18be13d29880-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"9424d118-b912-4307-a8af-18be13d29880\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 11 09:00:26 crc kubenswrapper[4840]: I0311 09:00:26.636158 4840 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9424d118-b912-4307-a8af-18be13d29880-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"9424d118-b912-4307-a8af-18be13d29880\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 11 09:00:26 crc kubenswrapper[4840]: I0311 09:00:26.737366 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9424d118-b912-4307-a8af-18be13d29880-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"9424d118-b912-4307-a8af-18be13d29880\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 11 09:00:26 crc kubenswrapper[4840]: I0311 09:00:26.737432 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9424d118-b912-4307-a8af-18be13d29880-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"9424d118-b912-4307-a8af-18be13d29880\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 11 09:00:26 crc kubenswrapper[4840]: I0311 09:00:26.737534 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9424d118-b912-4307-a8af-18be13d29880-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"9424d118-b912-4307-a8af-18be13d29880\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 11 09:00:26 crc kubenswrapper[4840]: I0311 09:00:26.762163 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9424d118-b912-4307-a8af-18be13d29880-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"9424d118-b912-4307-a8af-18be13d29880\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 11 09:00:26 crc kubenswrapper[4840]: I0311 09:00:26.940218 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 11 09:00:27 crc kubenswrapper[4840]: E0311 09:00:27.351127 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-blqnh" podUID="75259694-613a-407e-aea2-fb828fe927b9" Mar 11 09:00:27 crc kubenswrapper[4840]: I0311 09:00:27.368785 4840 patch_prober.go:28] interesting pod/route-controller-manager-f947d96f4-qbmwh container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.55:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 11 09:00:27 crc kubenswrapper[4840]: I0311 09:00:27.368847 4840 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-f947d96f4-qbmwh" podUID="cf06411b-1fa1-4613-ad76-d979d0909964" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.55:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 11 09:00:27 crc kubenswrapper[4840]: I0311 09:00:27.410252 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-f947d96f4-qbmwh" Mar 11 09:00:27 crc kubenswrapper[4840]: I0311 09:00:27.416287 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-df868c598-zf4cd" Mar 11 09:00:27 crc kubenswrapper[4840]: E0311 09:00:27.426918 4840 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Mar 11 09:00:27 crc kubenswrapper[4840]: E0311 09:00:27.427069 4840 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v5f94,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,
ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-h578g_openshift-marketplace(b17f6fce-66c6-45f7-8e1b-be9ffe4f16a6): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 11 09:00:27 crc kubenswrapper[4840]: E0311 09:00:27.429784 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-h578g" podUID="b17f6fce-66c6-45f7-8e1b-be9ffe4f16a6" Mar 11 09:00:27 crc kubenswrapper[4840]: E0311 09:00:27.439888 4840 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Mar 11 09:00:27 crc kubenswrapper[4840]: E0311 09:00:27.440073 4840 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mvlbz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-swqkn_openshift-marketplace(3ea6e709-84f7-4603-bcda-6d336d3a96fc): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 11 09:00:27 crc kubenswrapper[4840]: E0311 09:00:27.441260 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-swqkn" podUID="3ea6e709-84f7-4603-bcda-6d336d3a96fc" Mar 11 09:00:27 crc 
kubenswrapper[4840]: I0311 09:00:27.448065 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cf06411b-1fa1-4613-ad76-d979d0909964-client-ca\") pod \"cf06411b-1fa1-4613-ad76-d979d0909964\" (UID: \"cf06411b-1fa1-4613-ad76-d979d0909964\") " Mar 11 09:00:27 crc kubenswrapper[4840]: I0311 09:00:27.448165 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cf06411b-1fa1-4613-ad76-d979d0909964-serving-cert\") pod \"cf06411b-1fa1-4613-ad76-d979d0909964\" (UID: \"cf06411b-1fa1-4613-ad76-d979d0909964\") " Mar 11 09:00:27 crc kubenswrapper[4840]: I0311 09:00:27.448185 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf06411b-1fa1-4613-ad76-d979d0909964-config\") pod \"cf06411b-1fa1-4613-ad76-d979d0909964\" (UID: \"cf06411b-1fa1-4613-ad76-d979d0909964\") " Mar 11 09:00:27 crc kubenswrapper[4840]: I0311 09:00:27.448190 4840 patch_prober.go:28] interesting pod/machine-config-daemon-brtht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 11 09:00:27 crc kubenswrapper[4840]: I0311 09:00:27.448230 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wg6pc\" (UniqueName: \"kubernetes.io/projected/cf06411b-1fa1-4613-ad76-d979d0909964-kube-api-access-wg6pc\") pod \"cf06411b-1fa1-4613-ad76-d979d0909964\" (UID: \"cf06411b-1fa1-4613-ad76-d979d0909964\") " Mar 11 09:00:27 crc kubenswrapper[4840]: I0311 09:00:27.448228 4840 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 11 09:00:27 crc kubenswrapper[4840]: I0311 09:00:27.449427 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf06411b-1fa1-4613-ad76-d979d0909964-client-ca" (OuterVolumeSpecName: "client-ca") pod "cf06411b-1fa1-4613-ad76-d979d0909964" (UID: "cf06411b-1fa1-4613-ad76-d979d0909964"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:00:27 crc kubenswrapper[4840]: I0311 09:00:27.449528 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67f47dd54b-l98jh"] Mar 11 09:00:27 crc kubenswrapper[4840]: E0311 09:00:27.449791 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc370bb3-d2c0-44f7-ab92-ab317db166ee" containerName="controller-manager" Mar 11 09:00:27 crc kubenswrapper[4840]: I0311 09:00:27.449806 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc370bb3-d2c0-44f7-ab92-ab317db166ee" containerName="controller-manager" Mar 11 09:00:27 crc kubenswrapper[4840]: E0311 09:00:27.449825 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf06411b-1fa1-4613-ad76-d979d0909964" containerName="route-controller-manager" Mar 11 09:00:27 crc kubenswrapper[4840]: I0311 09:00:27.449834 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf06411b-1fa1-4613-ad76-d979d0909964" containerName="route-controller-manager" Mar 11 09:00:27 crc kubenswrapper[4840]: I0311 09:00:27.449961 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc370bb3-d2c0-44f7-ab92-ab317db166ee" containerName="controller-manager" Mar 11 09:00:27 crc kubenswrapper[4840]: I0311 09:00:27.449978 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf06411b-1fa1-4613-ad76-d979d0909964" 
containerName="route-controller-manager" Mar 11 09:00:27 crc kubenswrapper[4840]: I0311 09:00:27.451757 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf06411b-1fa1-4613-ad76-d979d0909964-config" (OuterVolumeSpecName: "config") pod "cf06411b-1fa1-4613-ad76-d979d0909964" (UID: "cf06411b-1fa1-4613-ad76-d979d0909964"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:00:27 crc kubenswrapper[4840]: I0311 09:00:27.452676 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-67f47dd54b-l98jh" Mar 11 09:00:27 crc kubenswrapper[4840]: I0311 09:00:27.452722 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf06411b-1fa1-4613-ad76-d979d0909964-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "cf06411b-1fa1-4613-ad76-d979d0909964" (UID: "cf06411b-1fa1-4613-ad76-d979d0909964"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:00:27 crc kubenswrapper[4840]: I0311 09:00:27.453454 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67f47dd54b-l98jh"] Mar 11 09:00:27 crc kubenswrapper[4840]: I0311 09:00:27.454911 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf06411b-1fa1-4613-ad76-d979d0909964-kube-api-access-wg6pc" (OuterVolumeSpecName: "kube-api-access-wg6pc") pod "cf06411b-1fa1-4613-ad76-d979d0909964" (UID: "cf06411b-1fa1-4613-ad76-d979d0909964"). InnerVolumeSpecName "kube-api-access-wg6pc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:00:27 crc kubenswrapper[4840]: E0311 09:00:27.483112 4840 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Mar 11 09:00:27 crc kubenswrapper[4840]: E0311 09:00:27.483267 4840 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6jcf7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},
RestartPolicy:nil,} start failed in pod community-operators-chbkv_openshift-marketplace(ecd2f9dc-1f5f-4787-b66f-8caaaeb9dc9f): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 11 09:00:27 crc kubenswrapper[4840]: E0311 09:00:27.485026 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-chbkv" podUID="ecd2f9dc-1f5f-4787-b66f-8caaaeb9dc9f" Mar 11 09:00:27 crc kubenswrapper[4840]: I0311 09:00:27.506141 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-df868c598-zf4cd" Mar 11 09:00:27 crc kubenswrapper[4840]: I0311 09:00:27.506136 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-df868c598-zf4cd" event={"ID":"dc370bb3-d2c0-44f7-ab92-ab317db166ee","Type":"ContainerDied","Data":"1bbdd69419a1c71146d8450d644c6a57f735b68fc65d28f272d80b7b0726a60e"} Mar 11 09:00:27 crc kubenswrapper[4840]: I0311 09:00:27.506355 4840 scope.go:117] "RemoveContainer" containerID="44b691d39a38027d262d8d4ec83a1108755e1aa07ea83f3750e99ac390fa64c8" Mar 11 09:00:27 crc kubenswrapper[4840]: I0311 09:00:27.511630 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-f947d96f4-qbmwh" event={"ID":"cf06411b-1fa1-4613-ad76-d979d0909964","Type":"ContainerDied","Data":"d5bd5edc5847b6471165fec111149761736c1f0cfceef453b053c0b93a39cf12"} Mar 11 09:00:27 crc kubenswrapper[4840]: I0311 09:00:27.511703 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-f947d96f4-qbmwh" Mar 11 09:00:27 crc kubenswrapper[4840]: E0311 09:00:27.522166 4840 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Mar 11 09:00:27 crc kubenswrapper[4840]: E0311 09:00:27.522322 4840 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g25d6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProb
e:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-fp456_openshift-marketplace(5be5a417-86c5-43f6-b238-9c0c498028ab): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 11 09:00:27 crc kubenswrapper[4840]: E0311 09:00:27.523515 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-fp456" podUID="5be5a417-86c5-43f6-b238-9c0c498028ab" Mar 11 09:00:27 crc kubenswrapper[4840]: I0311 09:00:27.549253 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b6lqm\" (UniqueName: \"kubernetes.io/projected/dc370bb3-d2c0-44f7-ab92-ab317db166ee-kube-api-access-b6lqm\") pod \"dc370bb3-d2c0-44f7-ab92-ab317db166ee\" (UID: \"dc370bb3-d2c0-44f7-ab92-ab317db166ee\") " Mar 11 09:00:27 crc kubenswrapper[4840]: I0311 09:00:27.550524 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc370bb3-d2c0-44f7-ab92-ab317db166ee-config\") pod \"dc370bb3-d2c0-44f7-ab92-ab317db166ee\" (UID: \"dc370bb3-d2c0-44f7-ab92-ab317db166ee\") " Mar 11 09:00:27 crc kubenswrapper[4840]: I0311 09:00:27.550589 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dc370bb3-d2c0-44f7-ab92-ab317db166ee-proxy-ca-bundles\") pod \"dc370bb3-d2c0-44f7-ab92-ab317db166ee\" (UID: \"dc370bb3-d2c0-44f7-ab92-ab317db166ee\") " Mar 11 09:00:27 crc kubenswrapper[4840]: I0311 09:00:27.550615 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/dc370bb3-d2c0-44f7-ab92-ab317db166ee-client-ca\") pod \"dc370bb3-d2c0-44f7-ab92-ab317db166ee\" (UID: \"dc370bb3-d2c0-44f7-ab92-ab317db166ee\") " Mar 11 09:00:27 crc kubenswrapper[4840]: I0311 09:00:27.550657 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dc370bb3-d2c0-44f7-ab92-ab317db166ee-serving-cert\") pod \"dc370bb3-d2c0-44f7-ab92-ab317db166ee\" (UID: \"dc370bb3-d2c0-44f7-ab92-ab317db166ee\") " Mar 11 09:00:27 crc kubenswrapper[4840]: I0311 09:00:27.550967 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/441fe3ee-8864-48fb-9cd6-1ef72967fd60-serving-cert\") pod \"route-controller-manager-67f47dd54b-l98jh\" (UID: \"441fe3ee-8864-48fb-9cd6-1ef72967fd60\") " pod="openshift-route-controller-manager/route-controller-manager-67f47dd54b-l98jh" Mar 11 09:00:27 crc kubenswrapper[4840]: I0311 09:00:27.550997 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/441fe3ee-8864-48fb-9cd6-1ef72967fd60-client-ca\") pod \"route-controller-manager-67f47dd54b-l98jh\" (UID: \"441fe3ee-8864-48fb-9cd6-1ef72967fd60\") " pod="openshift-route-controller-manager/route-controller-manager-67f47dd54b-l98jh" Mar 11 09:00:27 crc kubenswrapper[4840]: I0311 09:00:27.551017 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/441fe3ee-8864-48fb-9cd6-1ef72967fd60-config\") pod \"route-controller-manager-67f47dd54b-l98jh\" (UID: \"441fe3ee-8864-48fb-9cd6-1ef72967fd60\") " pod="openshift-route-controller-manager/route-controller-manager-67f47dd54b-l98jh" Mar 11 09:00:27 crc kubenswrapper[4840]: I0311 09:00:27.551104 4840 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9sxrl\" (UniqueName: \"kubernetes.io/projected/441fe3ee-8864-48fb-9cd6-1ef72967fd60-kube-api-access-9sxrl\") pod \"route-controller-manager-67f47dd54b-l98jh\" (UID: \"441fe3ee-8864-48fb-9cd6-1ef72967fd60\") " pod="openshift-route-controller-manager/route-controller-manager-67f47dd54b-l98jh" Mar 11 09:00:27 crc kubenswrapper[4840]: I0311 09:00:27.551163 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wg6pc\" (UniqueName: \"kubernetes.io/projected/cf06411b-1fa1-4613-ad76-d979d0909964-kube-api-access-wg6pc\") on node \"crc\" DevicePath \"\"" Mar 11 09:00:27 crc kubenswrapper[4840]: I0311 09:00:27.551276 4840 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cf06411b-1fa1-4613-ad76-d979d0909964-client-ca\") on node \"crc\" DevicePath \"\"" Mar 11 09:00:27 crc kubenswrapper[4840]: I0311 09:00:27.551289 4840 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cf06411b-1fa1-4613-ad76-d979d0909964-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 11 09:00:27 crc kubenswrapper[4840]: I0311 09:00:27.551300 4840 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf06411b-1fa1-4613-ad76-d979d0909964-config\") on node \"crc\" DevicePath \"\"" Mar 11 09:00:27 crc kubenswrapper[4840]: I0311 09:00:27.551622 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc370bb3-d2c0-44f7-ab92-ab317db166ee-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "dc370bb3-d2c0-44f7-ab92-ab317db166ee" (UID: "dc370bb3-d2c0-44f7-ab92-ab317db166ee"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:00:27 crc kubenswrapper[4840]: I0311 09:00:27.552218 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc370bb3-d2c0-44f7-ab92-ab317db166ee-config" (OuterVolumeSpecName: "config") pod "dc370bb3-d2c0-44f7-ab92-ab317db166ee" (UID: "dc370bb3-d2c0-44f7-ab92-ab317db166ee"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:00:27 crc kubenswrapper[4840]: I0311 09:00:27.552606 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc370bb3-d2c0-44f7-ab92-ab317db166ee-client-ca" (OuterVolumeSpecName: "client-ca") pod "dc370bb3-d2c0-44f7-ab92-ab317db166ee" (UID: "dc370bb3-d2c0-44f7-ab92-ab317db166ee"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:00:27 crc kubenswrapper[4840]: I0311 09:00:27.573546 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc370bb3-d2c0-44f7-ab92-ab317db166ee-kube-api-access-b6lqm" (OuterVolumeSpecName: "kube-api-access-b6lqm") pod "dc370bb3-d2c0-44f7-ab92-ab317db166ee" (UID: "dc370bb3-d2c0-44f7-ab92-ab317db166ee"). InnerVolumeSpecName "kube-api-access-b6lqm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:00:27 crc kubenswrapper[4840]: I0311 09:00:27.583127 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f947d96f4-qbmwh"] Mar 11 09:00:27 crc kubenswrapper[4840]: I0311 09:00:27.586266 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f947d96f4-qbmwh"] Mar 11 09:00:27 crc kubenswrapper[4840]: I0311 09:00:27.587299 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc370bb3-d2c0-44f7-ab92-ab317db166ee-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "dc370bb3-d2c0-44f7-ab92-ab317db166ee" (UID: "dc370bb3-d2c0-44f7-ab92-ab317db166ee"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:00:27 crc kubenswrapper[4840]: I0311 09:00:27.652762 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/441fe3ee-8864-48fb-9cd6-1ef72967fd60-serving-cert\") pod \"route-controller-manager-67f47dd54b-l98jh\" (UID: \"441fe3ee-8864-48fb-9cd6-1ef72967fd60\") " pod="openshift-route-controller-manager/route-controller-manager-67f47dd54b-l98jh" Mar 11 09:00:27 crc kubenswrapper[4840]: I0311 09:00:27.652812 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/441fe3ee-8864-48fb-9cd6-1ef72967fd60-client-ca\") pod \"route-controller-manager-67f47dd54b-l98jh\" (UID: \"441fe3ee-8864-48fb-9cd6-1ef72967fd60\") " pod="openshift-route-controller-manager/route-controller-manager-67f47dd54b-l98jh" Mar 11 09:00:27 crc kubenswrapper[4840]: I0311 09:00:27.652834 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/441fe3ee-8864-48fb-9cd6-1ef72967fd60-config\") pod 
\"route-controller-manager-67f47dd54b-l98jh\" (UID: \"441fe3ee-8864-48fb-9cd6-1ef72967fd60\") " pod="openshift-route-controller-manager/route-controller-manager-67f47dd54b-l98jh" Mar 11 09:00:27 crc kubenswrapper[4840]: I0311 09:00:27.652887 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9sxrl\" (UniqueName: \"kubernetes.io/projected/441fe3ee-8864-48fb-9cd6-1ef72967fd60-kube-api-access-9sxrl\") pod \"route-controller-manager-67f47dd54b-l98jh\" (UID: \"441fe3ee-8864-48fb-9cd6-1ef72967fd60\") " pod="openshift-route-controller-manager/route-controller-manager-67f47dd54b-l98jh" Mar 11 09:00:27 crc kubenswrapper[4840]: I0311 09:00:27.652956 4840 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dc370bb3-d2c0-44f7-ab92-ab317db166ee-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 11 09:00:27 crc kubenswrapper[4840]: I0311 09:00:27.652970 4840 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dc370bb3-d2c0-44f7-ab92-ab317db166ee-client-ca\") on node \"crc\" DevicePath \"\"" Mar 11 09:00:27 crc kubenswrapper[4840]: I0311 09:00:27.652978 4840 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dc370bb3-d2c0-44f7-ab92-ab317db166ee-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 11 09:00:27 crc kubenswrapper[4840]: I0311 09:00:27.652987 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b6lqm\" (UniqueName: \"kubernetes.io/projected/dc370bb3-d2c0-44f7-ab92-ab317db166ee-kube-api-access-b6lqm\") on node \"crc\" DevicePath \"\"" Mar 11 09:00:27 crc kubenswrapper[4840]: I0311 09:00:27.652997 4840 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc370bb3-d2c0-44f7-ab92-ab317db166ee-config\") on node \"crc\" DevicePath \"\"" Mar 11 09:00:27 crc kubenswrapper[4840]: 
I0311 09:00:27.653795 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/441fe3ee-8864-48fb-9cd6-1ef72967fd60-client-ca\") pod \"route-controller-manager-67f47dd54b-l98jh\" (UID: \"441fe3ee-8864-48fb-9cd6-1ef72967fd60\") " pod="openshift-route-controller-manager/route-controller-manager-67f47dd54b-l98jh" Mar 11 09:00:27 crc kubenswrapper[4840]: I0311 09:00:27.654359 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/441fe3ee-8864-48fb-9cd6-1ef72967fd60-config\") pod \"route-controller-manager-67f47dd54b-l98jh\" (UID: \"441fe3ee-8864-48fb-9cd6-1ef72967fd60\") " pod="openshift-route-controller-manager/route-controller-manager-67f47dd54b-l98jh" Mar 11 09:00:27 crc kubenswrapper[4840]: I0311 09:00:27.657306 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/441fe3ee-8864-48fb-9cd6-1ef72967fd60-serving-cert\") pod \"route-controller-manager-67f47dd54b-l98jh\" (UID: \"441fe3ee-8864-48fb-9cd6-1ef72967fd60\") " pod="openshift-route-controller-manager/route-controller-manager-67f47dd54b-l98jh" Mar 11 09:00:27 crc kubenswrapper[4840]: I0311 09:00:27.670039 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9sxrl\" (UniqueName: \"kubernetes.io/projected/441fe3ee-8864-48fb-9cd6-1ef72967fd60-kube-api-access-9sxrl\") pod \"route-controller-manager-67f47dd54b-l98jh\" (UID: \"441fe3ee-8864-48fb-9cd6-1ef72967fd60\") " pod="openshift-route-controller-manager/route-controller-manager-67f47dd54b-l98jh" Mar 11 09:00:27 crc kubenswrapper[4840]: I0311 09:00:27.804149 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-67f47dd54b-l98jh" Mar 11 09:00:27 crc kubenswrapper[4840]: I0311 09:00:27.835331 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-df868c598-zf4cd"] Mar 11 09:00:27 crc kubenswrapper[4840]: I0311 09:00:27.838097 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-df868c598-zf4cd"] Mar 11 09:00:28 crc kubenswrapper[4840]: I0311 09:00:28.067745 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf06411b-1fa1-4613-ad76-d979d0909964" path="/var/lib/kubelet/pods/cf06411b-1fa1-4613-ad76-d979d0909964/volumes" Mar 11 09:00:28 crc kubenswrapper[4840]: I0311 09:00:28.068745 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc370bb3-d2c0-44f7-ab92-ab317db166ee" path="/var/lib/kubelet/pods/dc370bb3-d2c0-44f7-ab92-ab317db166ee/volumes" Mar 11 09:00:29 crc kubenswrapper[4840]: E0311 09:00:29.613077 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-fp456" podUID="5be5a417-86c5-43f6-b238-9c0c498028ab" Mar 11 09:00:29 crc kubenswrapper[4840]: E0311 09:00:29.613156 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-h578g" podUID="b17f6fce-66c6-45f7-8e1b-be9ffe4f16a6" Mar 11 09:00:29 crc kubenswrapper[4840]: E0311 09:00:29.613365 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-chbkv" podUID="ecd2f9dc-1f5f-4787-b66f-8caaaeb9dc9f" Mar 11 09:00:29 crc kubenswrapper[4840]: E0311 09:00:29.613602 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-swqkn" podUID="3ea6e709-84f7-4603-bcda-6d336d3a96fc" Mar 11 09:00:30 crc kubenswrapper[4840]: I0311 09:00:30.034014 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6d58fc4977-8c7r2"] Mar 11 09:00:30 crc kubenswrapper[4840]: I0311 09:00:30.035710 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6d58fc4977-8c7r2" Mar 11 09:00:30 crc kubenswrapper[4840]: I0311 09:00:30.039942 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 11 09:00:30 crc kubenswrapper[4840]: I0311 09:00:30.040761 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6d58fc4977-8c7r2"] Mar 11 09:00:30 crc kubenswrapper[4840]: I0311 09:00:30.040924 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 11 09:00:30 crc kubenswrapper[4840]: I0311 09:00:30.041521 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Mar 11 09:00:30 crc kubenswrapper[4840]: I0311 09:00:30.041533 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 11 09:00:30 crc kubenswrapper[4840]: I0311 09:00:30.042157 4840 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 11 09:00:30 crc kubenswrapper[4840]: I0311 09:00:30.044843 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 11 09:00:30 crc kubenswrapper[4840]: I0311 09:00:30.048775 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 11 09:00:30 crc kubenswrapper[4840]: I0311 09:00:30.091568 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5d4cd1a5-8344-4f34-8862-73f9659c0924-proxy-ca-bundles\") pod \"controller-manager-6d58fc4977-8c7r2\" (UID: \"5d4cd1a5-8344-4f34-8862-73f9659c0924\") " pod="openshift-controller-manager/controller-manager-6d58fc4977-8c7r2" Mar 11 09:00:30 crc kubenswrapper[4840]: I0311 09:00:30.091670 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5d4cd1a5-8344-4f34-8862-73f9659c0924-client-ca\") pod \"controller-manager-6d58fc4977-8c7r2\" (UID: \"5d4cd1a5-8344-4f34-8862-73f9659c0924\") " pod="openshift-controller-manager/controller-manager-6d58fc4977-8c7r2" Mar 11 09:00:30 crc kubenswrapper[4840]: I0311 09:00:30.091756 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2457f\" (UniqueName: \"kubernetes.io/projected/5d4cd1a5-8344-4f34-8862-73f9659c0924-kube-api-access-2457f\") pod \"controller-manager-6d58fc4977-8c7r2\" (UID: \"5d4cd1a5-8344-4f34-8862-73f9659c0924\") " pod="openshift-controller-manager/controller-manager-6d58fc4977-8c7r2" Mar 11 09:00:30 crc kubenswrapper[4840]: I0311 09:00:30.091824 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/5d4cd1a5-8344-4f34-8862-73f9659c0924-config\") pod \"controller-manager-6d58fc4977-8c7r2\" (UID: \"5d4cd1a5-8344-4f34-8862-73f9659c0924\") " pod="openshift-controller-manager/controller-manager-6d58fc4977-8c7r2" Mar 11 09:00:30 crc kubenswrapper[4840]: I0311 09:00:30.091999 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5d4cd1a5-8344-4f34-8862-73f9659c0924-serving-cert\") pod \"controller-manager-6d58fc4977-8c7r2\" (UID: \"5d4cd1a5-8344-4f34-8862-73f9659c0924\") " pod="openshift-controller-manager/controller-manager-6d58fc4977-8c7r2" Mar 11 09:00:30 crc kubenswrapper[4840]: I0311 09:00:30.193827 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5d4cd1a5-8344-4f34-8862-73f9659c0924-serving-cert\") pod \"controller-manager-6d58fc4977-8c7r2\" (UID: \"5d4cd1a5-8344-4f34-8862-73f9659c0924\") " pod="openshift-controller-manager/controller-manager-6d58fc4977-8c7r2" Mar 11 09:00:30 crc kubenswrapper[4840]: I0311 09:00:30.193904 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5d4cd1a5-8344-4f34-8862-73f9659c0924-proxy-ca-bundles\") pod \"controller-manager-6d58fc4977-8c7r2\" (UID: \"5d4cd1a5-8344-4f34-8862-73f9659c0924\") " pod="openshift-controller-manager/controller-manager-6d58fc4977-8c7r2" Mar 11 09:00:30 crc kubenswrapper[4840]: I0311 09:00:30.193930 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5d4cd1a5-8344-4f34-8862-73f9659c0924-client-ca\") pod \"controller-manager-6d58fc4977-8c7r2\" (UID: \"5d4cd1a5-8344-4f34-8862-73f9659c0924\") " pod="openshift-controller-manager/controller-manager-6d58fc4977-8c7r2" Mar 11 09:00:30 crc kubenswrapper[4840]: I0311 09:00:30.193969 4840 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2457f\" (UniqueName: \"kubernetes.io/projected/5d4cd1a5-8344-4f34-8862-73f9659c0924-kube-api-access-2457f\") pod \"controller-manager-6d58fc4977-8c7r2\" (UID: \"5d4cd1a5-8344-4f34-8862-73f9659c0924\") " pod="openshift-controller-manager/controller-manager-6d58fc4977-8c7r2" Mar 11 09:00:30 crc kubenswrapper[4840]: I0311 09:00:30.194010 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d4cd1a5-8344-4f34-8862-73f9659c0924-config\") pod \"controller-manager-6d58fc4977-8c7r2\" (UID: \"5d4cd1a5-8344-4f34-8862-73f9659c0924\") " pod="openshift-controller-manager/controller-manager-6d58fc4977-8c7r2" Mar 11 09:00:30 crc kubenswrapper[4840]: I0311 09:00:30.195771 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5d4cd1a5-8344-4f34-8862-73f9659c0924-proxy-ca-bundles\") pod \"controller-manager-6d58fc4977-8c7r2\" (UID: \"5d4cd1a5-8344-4f34-8862-73f9659c0924\") " pod="openshift-controller-manager/controller-manager-6d58fc4977-8c7r2" Mar 11 09:00:30 crc kubenswrapper[4840]: I0311 09:00:30.195961 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d4cd1a5-8344-4f34-8862-73f9659c0924-config\") pod \"controller-manager-6d58fc4977-8c7r2\" (UID: \"5d4cd1a5-8344-4f34-8862-73f9659c0924\") " pod="openshift-controller-manager/controller-manager-6d58fc4977-8c7r2" Mar 11 09:00:30 crc kubenswrapper[4840]: I0311 09:00:30.197052 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5d4cd1a5-8344-4f34-8862-73f9659c0924-client-ca\") pod \"controller-manager-6d58fc4977-8c7r2\" (UID: \"5d4cd1a5-8344-4f34-8862-73f9659c0924\") " pod="openshift-controller-manager/controller-manager-6d58fc4977-8c7r2" Mar 
11 09:00:30 crc kubenswrapper[4840]: I0311 09:00:30.210998 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2457f\" (UniqueName: \"kubernetes.io/projected/5d4cd1a5-8344-4f34-8862-73f9659c0924-kube-api-access-2457f\") pod \"controller-manager-6d58fc4977-8c7r2\" (UID: \"5d4cd1a5-8344-4f34-8862-73f9659c0924\") " pod="openshift-controller-manager/controller-manager-6d58fc4977-8c7r2" Mar 11 09:00:30 crc kubenswrapper[4840]: I0311 09:00:30.214991 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5d4cd1a5-8344-4f34-8862-73f9659c0924-serving-cert\") pod \"controller-manager-6d58fc4977-8c7r2\" (UID: \"5d4cd1a5-8344-4f34-8862-73f9659c0924\") " pod="openshift-controller-manager/controller-manager-6d58fc4977-8c7r2" Mar 11 09:00:30 crc kubenswrapper[4840]: I0311 09:00:30.358044 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6d58fc4977-8c7r2" Mar 11 09:00:30 crc kubenswrapper[4840]: I0311 09:00:30.996507 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Mar 11 09:00:30 crc kubenswrapper[4840]: I0311 09:00:30.998169 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Mar 11 09:00:31 crc kubenswrapper[4840]: I0311 09:00:31.002995 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Mar 11 09:00:31 crc kubenswrapper[4840]: I0311 09:00:31.106844 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b2771978-d705-4ef2-a98b-5b980e717c99-kubelet-dir\") pod \"installer-9-crc\" (UID: \"b2771978-d705-4ef2-a98b-5b980e717c99\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 11 09:00:31 crc kubenswrapper[4840]: I0311 09:00:31.107043 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b2771978-d705-4ef2-a98b-5b980e717c99-var-lock\") pod \"installer-9-crc\" (UID: \"b2771978-d705-4ef2-a98b-5b980e717c99\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 11 09:00:31 crc kubenswrapper[4840]: I0311 09:00:31.107145 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b2771978-d705-4ef2-a98b-5b980e717c99-kube-api-access\") pod \"installer-9-crc\" (UID: \"b2771978-d705-4ef2-a98b-5b980e717c99\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 11 09:00:31 crc kubenswrapper[4840]: I0311 09:00:31.208592 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b2771978-d705-4ef2-a98b-5b980e717c99-kubelet-dir\") pod \"installer-9-crc\" (UID: \"b2771978-d705-4ef2-a98b-5b980e717c99\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 11 09:00:31 crc kubenswrapper[4840]: I0311 09:00:31.208648 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: 
\"kubernetes.io/host-path/b2771978-d705-4ef2-a98b-5b980e717c99-var-lock\") pod \"installer-9-crc\" (UID: \"b2771978-d705-4ef2-a98b-5b980e717c99\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 11 09:00:31 crc kubenswrapper[4840]: I0311 09:00:31.208678 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b2771978-d705-4ef2-a98b-5b980e717c99-kube-api-access\") pod \"installer-9-crc\" (UID: \"b2771978-d705-4ef2-a98b-5b980e717c99\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 11 09:00:31 crc kubenswrapper[4840]: I0311 09:00:31.208736 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b2771978-d705-4ef2-a98b-5b980e717c99-kubelet-dir\") pod \"installer-9-crc\" (UID: \"b2771978-d705-4ef2-a98b-5b980e717c99\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 11 09:00:31 crc kubenswrapper[4840]: I0311 09:00:31.208820 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b2771978-d705-4ef2-a98b-5b980e717c99-var-lock\") pod \"installer-9-crc\" (UID: \"b2771978-d705-4ef2-a98b-5b980e717c99\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 11 09:00:31 crc kubenswrapper[4840]: I0311 09:00:31.227946 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b2771978-d705-4ef2-a98b-5b980e717c99-kube-api-access\") pod \"installer-9-crc\" (UID: \"b2771978-d705-4ef2-a98b-5b980e717c99\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 11 09:00:31 crc kubenswrapper[4840]: I0311 09:00:31.318026 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Mar 11 09:00:31 crc kubenswrapper[4840]: E0311 09:00:31.619099 4840 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Mar 11 09:00:31 crc kubenswrapper[4840]: E0311 09:00:31.619674 4840 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6b9fv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]Containe
rResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-7w69s_openshift-marketplace(996d3b36-77c7-4e8f-a472-ac032aabd836): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 11 09:00:31 crc kubenswrapper[4840]: E0311 09:00:31.620845 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-7w69s" podUID="996d3b36-77c7-4e8f-a472-ac032aabd836" Mar 11 09:00:31 crc kubenswrapper[4840]: E0311 09:00:31.829693 4840 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Mar 11 09:00:31 crc kubenswrapper[4840]: E0311 09:00:31.829866 4840 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cv94k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-6wxnb_openshift-marketplace(edba0e9a-0f5d-4aea-bc9c-7eff83b36a8f): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 11 09:00:31 crc kubenswrapper[4840]: E0311 09:00:31.831078 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-6wxnb" podUID="edba0e9a-0f5d-4aea-bc9c-7eff83b36a8f" Mar 11 09:00:31 crc 
kubenswrapper[4840]: I0311 09:00:31.979818 4840 scope.go:117] "RemoveContainer" containerID="c496c06db4ff8715005fdaf8e25eac41956290769f6c9d1c20df0a4ca02b5ef5" Mar 11 09:00:32 crc kubenswrapper[4840]: I0311 09:00:32.593574 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6d58fc4977-8c7r2"] Mar 11 09:00:32 crc kubenswrapper[4840]: I0311 09:00:32.683128 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67f47dd54b-l98jh"] Mar 11 09:00:32 crc kubenswrapper[4840]: E0311 09:00:32.961059 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-7w69s" podUID="996d3b36-77c7-4e8f-a472-ac032aabd836" Mar 11 09:00:32 crc kubenswrapper[4840]: E0311 09:00:32.963603 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-6wxnb" podUID="edba0e9a-0f5d-4aea-bc9c-7eff83b36a8f" Mar 11 09:00:33 crc kubenswrapper[4840]: W0311 09:00:33.503400 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-b22df49b640dd07035da745f9d87d2783cdf12fc68a996eed69115fbf8204755 WatchSource:0}: Error finding container b22df49b640dd07035da745f9d87d2783cdf12fc68a996eed69115fbf8204755: Status 404 returned error can't find the container with id b22df49b640dd07035da745f9d87d2783cdf12fc68a996eed69115fbf8204755 Mar 11 09:00:33 crc kubenswrapper[4840]: I0311 09:00:33.545977 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-infra/auto-csr-approver-29553660-t75r2" event={"ID":"c4172593-e97b-48a1-b064-8051cd6aef46","Type":"ContainerStarted","Data":"cc56fc852306a16d14ebb28a025ff3e7385553056d053b394158b4fc4fc52a44"} Mar 11 09:00:33 crc kubenswrapper[4840]: I0311 09:00:33.552820 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"cbf64931a23d1cd81535f0f5db09a213de99357422d3659fdf8ba2043542ce0f"} Mar 11 09:00:33 crc kubenswrapper[4840]: I0311 09:00:33.560031 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"b22df49b640dd07035da745f9d87d2783cdf12fc68a996eed69115fbf8204755"} Mar 11 09:00:33 crc kubenswrapper[4840]: I0311 09:00:33.569752 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29553660-t75r2" podStartSLOduration=1.81187942 podStartE2EDuration="33.56973221s" podCreationTimestamp="2026-03-11 09:00:00 +0000 UTC" firstStartedPulling="2026-03-11 09:00:01.314842157 +0000 UTC m=+199.980511972" lastFinishedPulling="2026-03-11 09:00:33.072694947 +0000 UTC m=+231.738364762" observedRunningTime="2026-03-11 09:00:33.563389368 +0000 UTC m=+232.229059183" watchObservedRunningTime="2026-03-11 09:00:33.56973221 +0000 UTC m=+232.235402025" Mar 11 09:00:33 crc kubenswrapper[4840]: W0311 09:00:33.616239 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-19923e0c89a6247a208cc926b33493a8628a78ab5222a205b92648dca4bb0828 WatchSource:0}: Error finding container 19923e0c89a6247a208cc926b33493a8628a78ab5222a205b92648dca4bb0828: Status 404 returned error can't find the container with id 
19923e0c89a6247a208cc926b33493a8628a78ab5222a205b92648dca4bb0828 Mar 11 09:00:33 crc kubenswrapper[4840]: I0311 09:00:33.647961 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6d58fc4977-8c7r2"] Mar 11 09:00:33 crc kubenswrapper[4840]: I0311 09:00:33.670812 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Mar 11 09:00:33 crc kubenswrapper[4840]: I0311 09:00:33.677216 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Mar 11 09:00:33 crc kubenswrapper[4840]: I0311 09:00:33.680970 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67f47dd54b-l98jh"] Mar 11 09:00:33 crc kubenswrapper[4840]: W0311 09:00:33.686925 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod9424d118_b912_4307_a8af_18be13d29880.slice/crio-44cc0f39d39aa74090c2a54f32e6cf08538426c2b82d2ef6c0a4e4bf76e8c1ff WatchSource:0}: Error finding container 44cc0f39d39aa74090c2a54f32e6cf08538426c2b82d2ef6c0a4e4bf76e8c1ff: Status 404 returned error can't find the container with id 44cc0f39d39aa74090c2a54f32e6cf08538426c2b82d2ef6c0a4e4bf76e8c1ff Mar 11 09:00:33 crc kubenswrapper[4840]: W0311 09:00:33.691450 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod441fe3ee_8864_48fb_9cd6_1ef72967fd60.slice/crio-d504477be11cfc6763ca280e597922242f64fec0dcbd5fc68b3cb00a51d8d829 WatchSource:0}: Error finding container d504477be11cfc6763ca280e597922242f64fec0dcbd5fc68b3cb00a51d8d829: Status 404 returned error can't find the container with id d504477be11cfc6763ca280e597922242f64fec0dcbd5fc68b3cb00a51d8d829 Mar 11 09:00:33 crc kubenswrapper[4840]: I0311 09:00:33.934350 4840 csr.go:261] certificate signing request csr-rwzwc is approved, waiting to be issued Mar 11 
09:00:33 crc kubenswrapper[4840]: I0311 09:00:33.942721 4840 csr.go:257] certificate signing request csr-rwzwc is issued Mar 11 09:00:34 crc kubenswrapper[4840]: I0311 09:00:34.566046 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"59c112de1570bcf6551f0e65fb930933ed2e06911a70fcdcf4e9f71750733af8"} Mar 11 09:00:34 crc kubenswrapper[4840]: I0311 09:00:34.568760 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-67f47dd54b-l98jh" event={"ID":"441fe3ee-8864-48fb-9cd6-1ef72967fd60","Type":"ContainerStarted","Data":"c2042fa0a3c4a7385e234f939ca52eaa6f11617ce5c55071eac181c71ae14664"} Mar 11 09:00:34 crc kubenswrapper[4840]: I0311 09:00:34.568792 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-67f47dd54b-l98jh" event={"ID":"441fe3ee-8864-48fb-9cd6-1ef72967fd60","Type":"ContainerStarted","Data":"d504477be11cfc6763ca280e597922242f64fec0dcbd5fc68b3cb00a51d8d829"} Mar 11 09:00:34 crc kubenswrapper[4840]: I0311 09:00:34.568876 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-67f47dd54b-l98jh" podUID="441fe3ee-8864-48fb-9cd6-1ef72967fd60" containerName="route-controller-manager" containerID="cri-o://c2042fa0a3c4a7385e234f939ca52eaa6f11617ce5c55071eac181c71ae14664" gracePeriod=30 Mar 11 09:00:34 crc kubenswrapper[4840]: I0311 09:00:34.569896 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-67f47dd54b-l98jh" Mar 11 09:00:34 crc kubenswrapper[4840]: I0311 09:00:34.574993 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6d58fc4977-8c7r2" 
event={"ID":"5d4cd1a5-8344-4f34-8862-73f9659c0924","Type":"ContainerStarted","Data":"15a572bd6fbe1aaf37ad30b77bfc59cefd51bb49e2e629e5993a00e0cba7e192"} Mar 11 09:00:34 crc kubenswrapper[4840]: I0311 09:00:34.575213 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6d58fc4977-8c7r2" Mar 11 09:00:34 crc kubenswrapper[4840]: I0311 09:00:34.575340 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6d58fc4977-8c7r2" event={"ID":"5d4cd1a5-8344-4f34-8862-73f9659c0924","Type":"ContainerStarted","Data":"a3b45eb6f4752df9bec40e880b2cd8ced36c02806f91db67310187b37b97107a"} Mar 11 09:00:34 crc kubenswrapper[4840]: I0311 09:00:34.575108 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6d58fc4977-8c7r2" podUID="5d4cd1a5-8344-4f34-8862-73f9659c0924" containerName="controller-manager" containerID="cri-o://15a572bd6fbe1aaf37ad30b77bfc59cefd51bb49e2e629e5993a00e0cba7e192" gracePeriod=30 Mar 11 09:00:34 crc kubenswrapper[4840]: I0311 09:00:34.575658 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-67f47dd54b-l98jh" Mar 11 09:00:34 crc kubenswrapper[4840]: I0311 09:00:34.577197 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"b2771978-d705-4ef2-a98b-5b980e717c99","Type":"ContainerStarted","Data":"bbed5bf119a8c1a638090e3d865c4454144a37574fcd08054b060fc18cfd642f"} Mar 11 09:00:34 crc kubenswrapper[4840]: I0311 09:00:34.577237 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"b2771978-d705-4ef2-a98b-5b980e717c99","Type":"ContainerStarted","Data":"b14cccf60f1e8ed5f37b886068b16f94149f3ceb7944d8bbc8dec5160a56e238"} Mar 11 09:00:34 crc kubenswrapper[4840]: I0311 
09:00:34.584348 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"79b4035757104bb4ca348a292b2eb7958eebd8a70593a1d950f18c130cff1fe4"} Mar 11 09:00:34 crc kubenswrapper[4840]: I0311 09:00:34.585377 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 09:00:34 crc kubenswrapper[4840]: I0311 09:00:34.602795 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"26525a4721f040a78f9982d8d2cc271f485f64508a66132b05e33524327b6816"} Mar 11 09:00:34 crc kubenswrapper[4840]: I0311 09:00:34.602862 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"19923e0c89a6247a208cc926b33493a8628a78ab5222a205b92648dca4bb0828"} Mar 11 09:00:34 crc kubenswrapper[4840]: I0311 09:00:34.606858 4840 patch_prober.go:28] interesting pod/controller-manager-6d58fc4977-8c7r2 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.61:8443/healthz\": EOF" start-of-body= Mar 11 09:00:34 crc kubenswrapper[4840]: I0311 09:00:34.607016 4840 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-6d58fc4977-8c7r2" podUID="5d4cd1a5-8344-4f34-8862-73f9659c0924" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.61:8443/healthz\": EOF" Mar 11 09:00:34 crc kubenswrapper[4840]: I0311 09:00:34.607402 4840 generic.go:334] "Generic (PLEG): container finished" podID="c4172593-e97b-48a1-b064-8051cd6aef46" 
containerID="cc56fc852306a16d14ebb28a025ff3e7385553056d053b394158b4fc4fc52a44" exitCode=0 Mar 11 09:00:34 crc kubenswrapper[4840]: I0311 09:00:34.607496 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553660-t75r2" event={"ID":"c4172593-e97b-48a1-b064-8051cd6aef46","Type":"ContainerDied","Data":"cc56fc852306a16d14ebb28a025ff3e7385553056d053b394158b4fc4fc52a44"} Mar 11 09:00:34 crc kubenswrapper[4840]: I0311 09:00:34.609582 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"9424d118-b912-4307-a8af-18be13d29880","Type":"ContainerStarted","Data":"b8203ebda6f7481b5e20d20db1fe2c7b00707b395799b5dc5ae2110f925040c4"} Mar 11 09:00:34 crc kubenswrapper[4840]: I0311 09:00:34.609612 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"9424d118-b912-4307-a8af-18be13d29880","Type":"ContainerStarted","Data":"44cc0f39d39aa74090c2a54f32e6cf08538426c2b82d2ef6c0a4e4bf76e8c1ff"} Mar 11 09:00:34 crc kubenswrapper[4840]: I0311 09:00:34.626724 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=4.626704974 podStartE2EDuration="4.626704974s" podCreationTimestamp="2026-03-11 09:00:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:00:34.625642424 +0000 UTC m=+233.291312229" watchObservedRunningTime="2026-03-11 09:00:34.626704974 +0000 UTC m=+233.292374789" Mar 11 09:00:34 crc kubenswrapper[4840]: I0311 09:00:34.628449 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-67f47dd54b-l98jh" podStartSLOduration=22.628443394 podStartE2EDuration="22.628443394s" podCreationTimestamp="2026-03-11 09:00:12 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:00:34.613494677 +0000 UTC m=+233.279164492" watchObservedRunningTime="2026-03-11 09:00:34.628443394 +0000 UTC m=+233.294113209" Mar 11 09:00:34 crc kubenswrapper[4840]: I0311 09:00:34.652921 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6d58fc4977-8c7r2" podStartSLOduration=22.652894053 podStartE2EDuration="22.652894053s" podCreationTimestamp="2026-03-11 09:00:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:00:34.65000453 +0000 UTC m=+233.315674345" watchObservedRunningTime="2026-03-11 09:00:34.652894053 +0000 UTC m=+233.318563868" Mar 11 09:00:34 crc kubenswrapper[4840]: I0311 09:00:34.701678 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=8.701660696 podStartE2EDuration="8.701660696s" podCreationTimestamp="2026-03-11 09:00:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:00:34.697497037 +0000 UTC m=+233.363166852" watchObservedRunningTime="2026-03-11 09:00:34.701660696 +0000 UTC m=+233.367330511" Mar 11 09:00:34 crc kubenswrapper[4840]: I0311 09:00:34.944299 4840 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2026-11-20 07:20:10.11014988 +0000 UTC Mar 11 09:00:34 crc kubenswrapper[4840]: I0311 09:00:34.944414 4840 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6094h19m35.165739587s for next certificate rotation Mar 11 09:00:35 crc kubenswrapper[4840]: I0311 09:00:35.035576 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-67f47dd54b-l98jh" Mar 11 09:00:35 crc kubenswrapper[4840]: I0311 09:00:35.039531 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6d58fc4977-8c7r2" Mar 11 09:00:35 crc kubenswrapper[4840]: I0311 09:00:35.073774 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6999b6d5db-jnf9q"] Mar 11 09:00:35 crc kubenswrapper[4840]: E0311 09:00:35.074003 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="441fe3ee-8864-48fb-9cd6-1ef72967fd60" containerName="route-controller-manager" Mar 11 09:00:35 crc kubenswrapper[4840]: I0311 09:00:35.074018 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="441fe3ee-8864-48fb-9cd6-1ef72967fd60" containerName="route-controller-manager" Mar 11 09:00:35 crc kubenswrapper[4840]: E0311 09:00:35.074034 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d4cd1a5-8344-4f34-8862-73f9659c0924" containerName="controller-manager" Mar 11 09:00:35 crc kubenswrapper[4840]: I0311 09:00:35.074042 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d4cd1a5-8344-4f34-8862-73f9659c0924" containerName="controller-manager" Mar 11 09:00:35 crc kubenswrapper[4840]: I0311 09:00:35.074144 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="441fe3ee-8864-48fb-9cd6-1ef72967fd60" containerName="route-controller-manager" Mar 11 09:00:35 crc kubenswrapper[4840]: I0311 09:00:35.074160 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d4cd1a5-8344-4f34-8862-73f9659c0924" containerName="controller-manager" Mar 11 09:00:35 crc kubenswrapper[4840]: I0311 09:00:35.074537 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6999b6d5db-jnf9q" Mar 11 09:00:35 crc kubenswrapper[4840]: I0311 09:00:35.078988 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5d4cd1a5-8344-4f34-8862-73f9659c0924-client-ca\") pod \"5d4cd1a5-8344-4f34-8862-73f9659c0924\" (UID: \"5d4cd1a5-8344-4f34-8862-73f9659c0924\") " Mar 11 09:00:35 crc kubenswrapper[4840]: I0311 09:00:35.079035 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d4cd1a5-8344-4f34-8862-73f9659c0924-config\") pod \"5d4cd1a5-8344-4f34-8862-73f9659c0924\" (UID: \"5d4cd1a5-8344-4f34-8862-73f9659c0924\") " Mar 11 09:00:35 crc kubenswrapper[4840]: I0311 09:00:35.079083 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/441fe3ee-8864-48fb-9cd6-1ef72967fd60-serving-cert\") pod \"441fe3ee-8864-48fb-9cd6-1ef72967fd60\" (UID: \"441fe3ee-8864-48fb-9cd6-1ef72967fd60\") " Mar 11 09:00:35 crc kubenswrapper[4840]: I0311 09:00:35.079137 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5d4cd1a5-8344-4f34-8862-73f9659c0924-proxy-ca-bundles\") pod \"5d4cd1a5-8344-4f34-8862-73f9659c0924\" (UID: \"5d4cd1a5-8344-4f34-8862-73f9659c0924\") " Mar 11 09:00:35 crc kubenswrapper[4840]: I0311 09:00:35.079162 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2457f\" (UniqueName: \"kubernetes.io/projected/5d4cd1a5-8344-4f34-8862-73f9659c0924-kube-api-access-2457f\") pod \"5d4cd1a5-8344-4f34-8862-73f9659c0924\" (UID: \"5d4cd1a5-8344-4f34-8862-73f9659c0924\") " Mar 11 09:00:35 crc kubenswrapper[4840]: I0311 09:00:35.079190 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/441fe3ee-8864-48fb-9cd6-1ef72967fd60-client-ca\") pod \"441fe3ee-8864-48fb-9cd6-1ef72967fd60\" (UID: \"441fe3ee-8864-48fb-9cd6-1ef72967fd60\") " Mar 11 09:00:35 crc kubenswrapper[4840]: I0311 09:00:35.079212 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5d4cd1a5-8344-4f34-8862-73f9659c0924-serving-cert\") pod \"5d4cd1a5-8344-4f34-8862-73f9659c0924\" (UID: \"5d4cd1a5-8344-4f34-8862-73f9659c0924\") " Mar 11 09:00:35 crc kubenswrapper[4840]: I0311 09:00:35.079448 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9sxrl\" (UniqueName: \"kubernetes.io/projected/441fe3ee-8864-48fb-9cd6-1ef72967fd60-kube-api-access-9sxrl\") pod \"441fe3ee-8864-48fb-9cd6-1ef72967fd60\" (UID: \"441fe3ee-8864-48fb-9cd6-1ef72967fd60\") " Mar 11 09:00:35 crc kubenswrapper[4840]: I0311 09:00:35.079509 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/441fe3ee-8864-48fb-9cd6-1ef72967fd60-config\") pod \"441fe3ee-8864-48fb-9cd6-1ef72967fd60\" (UID: \"441fe3ee-8864-48fb-9cd6-1ef72967fd60\") " Mar 11 09:00:35 crc kubenswrapper[4840]: I0311 09:00:35.080890 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d4cd1a5-8344-4f34-8862-73f9659c0924-client-ca" (OuterVolumeSpecName: "client-ca") pod "5d4cd1a5-8344-4f34-8862-73f9659c0924" (UID: "5d4cd1a5-8344-4f34-8862-73f9659c0924"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:00:35 crc kubenswrapper[4840]: I0311 09:00:35.081191 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/441fe3ee-8864-48fb-9cd6-1ef72967fd60-client-ca" (OuterVolumeSpecName: "client-ca") pod "441fe3ee-8864-48fb-9cd6-1ef72967fd60" (UID: "441fe3ee-8864-48fb-9cd6-1ef72967fd60"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:00:35 crc kubenswrapper[4840]: I0311 09:00:35.081621 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d4cd1a5-8344-4f34-8862-73f9659c0924-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "5d4cd1a5-8344-4f34-8862-73f9659c0924" (UID: "5d4cd1a5-8344-4f34-8862-73f9659c0924"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:00:35 crc kubenswrapper[4840]: I0311 09:00:35.081718 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d4cd1a5-8344-4f34-8862-73f9659c0924-config" (OuterVolumeSpecName: "config") pod "5d4cd1a5-8344-4f34-8862-73f9659c0924" (UID: "5d4cd1a5-8344-4f34-8862-73f9659c0924"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:00:35 crc kubenswrapper[4840]: I0311 09:00:35.081742 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/441fe3ee-8864-48fb-9cd6-1ef72967fd60-config" (OuterVolumeSpecName: "config") pod "441fe3ee-8864-48fb-9cd6-1ef72967fd60" (UID: "441fe3ee-8864-48fb-9cd6-1ef72967fd60"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:00:35 crc kubenswrapper[4840]: I0311 09:00:35.088713 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/441fe3ee-8864-48fb-9cd6-1ef72967fd60-kube-api-access-9sxrl" (OuterVolumeSpecName: "kube-api-access-9sxrl") pod "441fe3ee-8864-48fb-9cd6-1ef72967fd60" (UID: "441fe3ee-8864-48fb-9cd6-1ef72967fd60"). InnerVolumeSpecName "kube-api-access-9sxrl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:00:35 crc kubenswrapper[4840]: I0311 09:00:35.089638 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d4cd1a5-8344-4f34-8862-73f9659c0924-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5d4cd1a5-8344-4f34-8862-73f9659c0924" (UID: "5d4cd1a5-8344-4f34-8862-73f9659c0924"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:00:35 crc kubenswrapper[4840]: I0311 09:00:35.095435 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d4cd1a5-8344-4f34-8862-73f9659c0924-kube-api-access-2457f" (OuterVolumeSpecName: "kube-api-access-2457f") pod "5d4cd1a5-8344-4f34-8862-73f9659c0924" (UID: "5d4cd1a5-8344-4f34-8862-73f9659c0924"). InnerVolumeSpecName "kube-api-access-2457f". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:00:35 crc kubenswrapper[4840]: I0311 09:00:35.099460 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6999b6d5db-jnf9q"] Mar 11 09:00:35 crc kubenswrapper[4840]: I0311 09:00:35.102348 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/441fe3ee-8864-48fb-9cd6-1ef72967fd60-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "441fe3ee-8864-48fb-9cd6-1ef72967fd60" (UID: "441fe3ee-8864-48fb-9cd6-1ef72967fd60"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:00:35 crc kubenswrapper[4840]: I0311 09:00:35.180682 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r899n\" (UniqueName: \"kubernetes.io/projected/8363ed47-d2dd-42f0-a4fd-22029a4bc1e4-kube-api-access-r899n\") pod \"route-controller-manager-6999b6d5db-jnf9q\" (UID: \"8363ed47-d2dd-42f0-a4fd-22029a4bc1e4\") " pod="openshift-route-controller-manager/route-controller-manager-6999b6d5db-jnf9q" Mar 11 09:00:35 crc kubenswrapper[4840]: I0311 09:00:35.180786 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8363ed47-d2dd-42f0-a4fd-22029a4bc1e4-serving-cert\") pod \"route-controller-manager-6999b6d5db-jnf9q\" (UID: \"8363ed47-d2dd-42f0-a4fd-22029a4bc1e4\") " pod="openshift-route-controller-manager/route-controller-manager-6999b6d5db-jnf9q" Mar 11 09:00:35 crc kubenswrapper[4840]: I0311 09:00:35.180839 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8363ed47-d2dd-42f0-a4fd-22029a4bc1e4-config\") pod \"route-controller-manager-6999b6d5db-jnf9q\" (UID: \"8363ed47-d2dd-42f0-a4fd-22029a4bc1e4\") " pod="openshift-route-controller-manager/route-controller-manager-6999b6d5db-jnf9q" Mar 11 09:00:35 crc kubenswrapper[4840]: I0311 09:00:35.180961 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8363ed47-d2dd-42f0-a4fd-22029a4bc1e4-client-ca\") pod \"route-controller-manager-6999b6d5db-jnf9q\" (UID: \"8363ed47-d2dd-42f0-a4fd-22029a4bc1e4\") " pod="openshift-route-controller-manager/route-controller-manager-6999b6d5db-jnf9q" Mar 11 09:00:35 crc kubenswrapper[4840]: I0311 09:00:35.181418 4840 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-9sxrl\" (UniqueName: \"kubernetes.io/projected/441fe3ee-8864-48fb-9cd6-1ef72967fd60-kube-api-access-9sxrl\") on node \"crc\" DevicePath \"\"" Mar 11 09:00:35 crc kubenswrapper[4840]: I0311 09:00:35.181445 4840 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/441fe3ee-8864-48fb-9cd6-1ef72967fd60-config\") on node \"crc\" DevicePath \"\"" Mar 11 09:00:35 crc kubenswrapper[4840]: I0311 09:00:35.181456 4840 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5d4cd1a5-8344-4f34-8862-73f9659c0924-client-ca\") on node \"crc\" DevicePath \"\"" Mar 11 09:00:35 crc kubenswrapper[4840]: I0311 09:00:35.181492 4840 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d4cd1a5-8344-4f34-8862-73f9659c0924-config\") on node \"crc\" DevicePath \"\"" Mar 11 09:00:35 crc kubenswrapper[4840]: I0311 09:00:35.181505 4840 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/441fe3ee-8864-48fb-9cd6-1ef72967fd60-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 11 09:00:35 crc kubenswrapper[4840]: I0311 09:00:35.181515 4840 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5d4cd1a5-8344-4f34-8862-73f9659c0924-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 11 09:00:35 crc kubenswrapper[4840]: I0311 09:00:35.181526 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2457f\" (UniqueName: \"kubernetes.io/projected/5d4cd1a5-8344-4f34-8862-73f9659c0924-kube-api-access-2457f\") on node \"crc\" DevicePath \"\"" Mar 11 09:00:35 crc kubenswrapper[4840]: I0311 09:00:35.181534 4840 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/441fe3ee-8864-48fb-9cd6-1ef72967fd60-client-ca\") on node \"crc\" DevicePath \"\"" Mar 
11 09:00:35 crc kubenswrapper[4840]: I0311 09:00:35.181542 4840 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5d4cd1a5-8344-4f34-8862-73f9659c0924-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 11 09:00:35 crc kubenswrapper[4840]: I0311 09:00:35.283126 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r899n\" (UniqueName: \"kubernetes.io/projected/8363ed47-d2dd-42f0-a4fd-22029a4bc1e4-kube-api-access-r899n\") pod \"route-controller-manager-6999b6d5db-jnf9q\" (UID: \"8363ed47-d2dd-42f0-a4fd-22029a4bc1e4\") " pod="openshift-route-controller-manager/route-controller-manager-6999b6d5db-jnf9q" Mar 11 09:00:35 crc kubenswrapper[4840]: I0311 09:00:35.283647 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8363ed47-d2dd-42f0-a4fd-22029a4bc1e4-serving-cert\") pod \"route-controller-manager-6999b6d5db-jnf9q\" (UID: \"8363ed47-d2dd-42f0-a4fd-22029a4bc1e4\") " pod="openshift-route-controller-manager/route-controller-manager-6999b6d5db-jnf9q" Mar 11 09:00:35 crc kubenswrapper[4840]: I0311 09:00:35.283694 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8363ed47-d2dd-42f0-a4fd-22029a4bc1e4-config\") pod \"route-controller-manager-6999b6d5db-jnf9q\" (UID: \"8363ed47-d2dd-42f0-a4fd-22029a4bc1e4\") " pod="openshift-route-controller-manager/route-controller-manager-6999b6d5db-jnf9q" Mar 11 09:00:35 crc kubenswrapper[4840]: I0311 09:00:35.283712 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8363ed47-d2dd-42f0-a4fd-22029a4bc1e4-client-ca\") pod \"route-controller-manager-6999b6d5db-jnf9q\" (UID: \"8363ed47-d2dd-42f0-a4fd-22029a4bc1e4\") " pod="openshift-route-controller-manager/route-controller-manager-6999b6d5db-jnf9q" 
Mar 11 09:00:35 crc kubenswrapper[4840]: I0311 09:00:35.284821 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8363ed47-d2dd-42f0-a4fd-22029a4bc1e4-client-ca\") pod \"route-controller-manager-6999b6d5db-jnf9q\" (UID: \"8363ed47-d2dd-42f0-a4fd-22029a4bc1e4\") " pod="openshift-route-controller-manager/route-controller-manager-6999b6d5db-jnf9q" Mar 11 09:00:35 crc kubenswrapper[4840]: I0311 09:00:35.284940 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8363ed47-d2dd-42f0-a4fd-22029a4bc1e4-config\") pod \"route-controller-manager-6999b6d5db-jnf9q\" (UID: \"8363ed47-d2dd-42f0-a4fd-22029a4bc1e4\") " pod="openshift-route-controller-manager/route-controller-manager-6999b6d5db-jnf9q" Mar 11 09:00:35 crc kubenswrapper[4840]: I0311 09:00:35.287899 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8363ed47-d2dd-42f0-a4fd-22029a4bc1e4-serving-cert\") pod \"route-controller-manager-6999b6d5db-jnf9q\" (UID: \"8363ed47-d2dd-42f0-a4fd-22029a4bc1e4\") " pod="openshift-route-controller-manager/route-controller-manager-6999b6d5db-jnf9q" Mar 11 09:00:35 crc kubenswrapper[4840]: I0311 09:00:35.313411 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r899n\" (UniqueName: \"kubernetes.io/projected/8363ed47-d2dd-42f0-a4fd-22029a4bc1e4-kube-api-access-r899n\") pod \"route-controller-manager-6999b6d5db-jnf9q\" (UID: \"8363ed47-d2dd-42f0-a4fd-22029a4bc1e4\") " pod="openshift-route-controller-manager/route-controller-manager-6999b6d5db-jnf9q" Mar 11 09:00:35 crc kubenswrapper[4840]: I0311 09:00:35.430290 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6999b6d5db-jnf9q" Mar 11 09:00:35 crc kubenswrapper[4840]: I0311 09:00:35.638424 4840 generic.go:334] "Generic (PLEG): container finished" podID="5d4cd1a5-8344-4f34-8862-73f9659c0924" containerID="15a572bd6fbe1aaf37ad30b77bfc59cefd51bb49e2e629e5993a00e0cba7e192" exitCode=0 Mar 11 09:00:35 crc kubenswrapper[4840]: I0311 09:00:35.638513 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6d58fc4977-8c7r2" event={"ID":"5d4cd1a5-8344-4f34-8862-73f9659c0924","Type":"ContainerDied","Data":"15a572bd6fbe1aaf37ad30b77bfc59cefd51bb49e2e629e5993a00e0cba7e192"} Mar 11 09:00:35 crc kubenswrapper[4840]: I0311 09:00:35.638558 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6d58fc4977-8c7r2" event={"ID":"5d4cd1a5-8344-4f34-8862-73f9659c0924","Type":"ContainerDied","Data":"a3b45eb6f4752df9bec40e880b2cd8ced36c02806f91db67310187b37b97107a"} Mar 11 09:00:35 crc kubenswrapper[4840]: I0311 09:00:35.638579 4840 scope.go:117] "RemoveContainer" containerID="15a572bd6fbe1aaf37ad30b77bfc59cefd51bb49e2e629e5993a00e0cba7e192" Mar 11 09:00:35 crc kubenswrapper[4840]: I0311 09:00:35.638734 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6d58fc4977-8c7r2" Mar 11 09:00:35 crc kubenswrapper[4840]: I0311 09:00:35.643429 4840 generic.go:334] "Generic (PLEG): container finished" podID="9424d118-b912-4307-a8af-18be13d29880" containerID="b8203ebda6f7481b5e20d20db1fe2c7b00707b395799b5dc5ae2110f925040c4" exitCode=0 Mar 11 09:00:35 crc kubenswrapper[4840]: I0311 09:00:35.643537 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"9424d118-b912-4307-a8af-18be13d29880","Type":"ContainerDied","Data":"b8203ebda6f7481b5e20d20db1fe2c7b00707b395799b5dc5ae2110f925040c4"} Mar 11 09:00:35 crc kubenswrapper[4840]: I0311 09:00:35.647577 4840 generic.go:334] "Generic (PLEG): container finished" podID="441fe3ee-8864-48fb-9cd6-1ef72967fd60" containerID="c2042fa0a3c4a7385e234f939ca52eaa6f11617ce5c55071eac181c71ae14664" exitCode=0 Mar 11 09:00:35 crc kubenswrapper[4840]: I0311 09:00:35.648098 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-67f47dd54b-l98jh" event={"ID":"441fe3ee-8864-48fb-9cd6-1ef72967fd60","Type":"ContainerDied","Data":"c2042fa0a3c4a7385e234f939ca52eaa6f11617ce5c55071eac181c71ae14664"} Mar 11 09:00:35 crc kubenswrapper[4840]: I0311 09:00:35.648146 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-67f47dd54b-l98jh" event={"ID":"441fe3ee-8864-48fb-9cd6-1ef72967fd60","Type":"ContainerDied","Data":"d504477be11cfc6763ca280e597922242f64fec0dcbd5fc68b3cb00a51d8d829"} Mar 11 09:00:35 crc kubenswrapper[4840]: I0311 09:00:35.648210 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-67f47dd54b-l98jh" Mar 11 09:00:35 crc kubenswrapper[4840]: I0311 09:00:35.677779 4840 scope.go:117] "RemoveContainer" containerID="15a572bd6fbe1aaf37ad30b77bfc59cefd51bb49e2e629e5993a00e0cba7e192" Mar 11 09:00:35 crc kubenswrapper[4840]: E0311 09:00:35.678698 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15a572bd6fbe1aaf37ad30b77bfc59cefd51bb49e2e629e5993a00e0cba7e192\": container with ID starting with 15a572bd6fbe1aaf37ad30b77bfc59cefd51bb49e2e629e5993a00e0cba7e192 not found: ID does not exist" containerID="15a572bd6fbe1aaf37ad30b77bfc59cefd51bb49e2e629e5993a00e0cba7e192" Mar 11 09:00:35 crc kubenswrapper[4840]: I0311 09:00:35.678732 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15a572bd6fbe1aaf37ad30b77bfc59cefd51bb49e2e629e5993a00e0cba7e192"} err="failed to get container status \"15a572bd6fbe1aaf37ad30b77bfc59cefd51bb49e2e629e5993a00e0cba7e192\": rpc error: code = NotFound desc = could not find container \"15a572bd6fbe1aaf37ad30b77bfc59cefd51bb49e2e629e5993a00e0cba7e192\": container with ID starting with 15a572bd6fbe1aaf37ad30b77bfc59cefd51bb49e2e629e5993a00e0cba7e192 not found: ID does not exist" Mar 11 09:00:35 crc kubenswrapper[4840]: I0311 09:00:35.678752 4840 scope.go:117] "RemoveContainer" containerID="c2042fa0a3c4a7385e234f939ca52eaa6f11617ce5c55071eac181c71ae14664" Mar 11 09:00:35 crc kubenswrapper[4840]: I0311 09:00:35.688299 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6d58fc4977-8c7r2"] Mar 11 09:00:35 crc kubenswrapper[4840]: I0311 09:00:35.692151 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6d58fc4977-8c7r2"] Mar 11 09:00:35 crc kubenswrapper[4840]: I0311 09:00:35.705234 4840 scope.go:117] 
"RemoveContainer" containerID="c2042fa0a3c4a7385e234f939ca52eaa6f11617ce5c55071eac181c71ae14664" Mar 11 09:00:35 crc kubenswrapper[4840]: E0311 09:00:35.705766 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2042fa0a3c4a7385e234f939ca52eaa6f11617ce5c55071eac181c71ae14664\": container with ID starting with c2042fa0a3c4a7385e234f939ca52eaa6f11617ce5c55071eac181c71ae14664 not found: ID does not exist" containerID="c2042fa0a3c4a7385e234f939ca52eaa6f11617ce5c55071eac181c71ae14664" Mar 11 09:00:35 crc kubenswrapper[4840]: I0311 09:00:35.705815 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2042fa0a3c4a7385e234f939ca52eaa6f11617ce5c55071eac181c71ae14664"} err="failed to get container status \"c2042fa0a3c4a7385e234f939ca52eaa6f11617ce5c55071eac181c71ae14664\": rpc error: code = NotFound desc = could not find container \"c2042fa0a3c4a7385e234f939ca52eaa6f11617ce5c55071eac181c71ae14664\": container with ID starting with c2042fa0a3c4a7385e234f939ca52eaa6f11617ce5c55071eac181c71ae14664 not found: ID does not exist" Mar 11 09:00:35 crc kubenswrapper[4840]: I0311 09:00:35.707145 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67f47dd54b-l98jh"] Mar 11 09:00:35 crc kubenswrapper[4840]: I0311 09:00:35.719534 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67f47dd54b-l98jh"] Mar 11 09:00:35 crc kubenswrapper[4840]: I0311 09:00:35.862633 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553660-t75r2" Mar 11 09:00:35 crc kubenswrapper[4840]: I0311 09:00:35.916037 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6999b6d5db-jnf9q"] Mar 11 09:00:35 crc kubenswrapper[4840]: W0311 09:00:35.929018 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8363ed47_d2dd_42f0_a4fd_22029a4bc1e4.slice/crio-be3aa0d883030d328062b0e186945b3b0ba9979d0eed155e95ceb801fa2f9ead WatchSource:0}: Error finding container be3aa0d883030d328062b0e186945b3b0ba9979d0eed155e95ceb801fa2f9ead: Status 404 returned error can't find the container with id be3aa0d883030d328062b0e186945b3b0ba9979d0eed155e95ceb801fa2f9ead Mar 11 09:00:35 crc kubenswrapper[4840]: I0311 09:00:35.996306 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sq2dh\" (UniqueName: \"kubernetes.io/projected/c4172593-e97b-48a1-b064-8051cd6aef46-kube-api-access-sq2dh\") pod \"c4172593-e97b-48a1-b064-8051cd6aef46\" (UID: \"c4172593-e97b-48a1-b064-8051cd6aef46\") " Mar 11 09:00:36 crc kubenswrapper[4840]: I0311 09:00:36.003264 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4172593-e97b-48a1-b064-8051cd6aef46-kube-api-access-sq2dh" (OuterVolumeSpecName: "kube-api-access-sq2dh") pod "c4172593-e97b-48a1-b064-8051cd6aef46" (UID: "c4172593-e97b-48a1-b064-8051cd6aef46"). InnerVolumeSpecName "kube-api-access-sq2dh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:00:36 crc kubenswrapper[4840]: I0311 09:00:36.070457 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="441fe3ee-8864-48fb-9cd6-1ef72967fd60" path="/var/lib/kubelet/pods/441fe3ee-8864-48fb-9cd6-1ef72967fd60/volumes" Mar 11 09:00:36 crc kubenswrapper[4840]: I0311 09:00:36.071006 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d4cd1a5-8344-4f34-8862-73f9659c0924" path="/var/lib/kubelet/pods/5d4cd1a5-8344-4f34-8862-73f9659c0924/volumes" Mar 11 09:00:36 crc kubenswrapper[4840]: I0311 09:00:36.097980 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sq2dh\" (UniqueName: \"kubernetes.io/projected/c4172593-e97b-48a1-b064-8051cd6aef46-kube-api-access-sq2dh\") on node \"crc\" DevicePath \"\"" Mar 11 09:00:36 crc kubenswrapper[4840]: I0311 09:00:36.656245 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6999b6d5db-jnf9q" event={"ID":"8363ed47-d2dd-42f0-a4fd-22029a4bc1e4","Type":"ContainerStarted","Data":"9642d4cfb376c9d6d99720e1803625a4e0314489ef59ff0bd4c5c339110bd66e"} Mar 11 09:00:36 crc kubenswrapper[4840]: I0311 09:00:36.656704 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6999b6d5db-jnf9q" event={"ID":"8363ed47-d2dd-42f0-a4fd-22029a4bc1e4","Type":"ContainerStarted","Data":"be3aa0d883030d328062b0e186945b3b0ba9979d0eed155e95ceb801fa2f9ead"} Mar 11 09:00:36 crc kubenswrapper[4840]: I0311 09:00:36.656747 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6999b6d5db-jnf9q" Mar 11 09:00:36 crc kubenswrapper[4840]: I0311 09:00:36.660287 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553660-t75r2" 
event={"ID":"c4172593-e97b-48a1-b064-8051cd6aef46","Type":"ContainerDied","Data":"3120a6d63dff40053fc154b7a41ef936ea0f61605cb7f79cbe4540b8f2be04a4"} Mar 11 09:00:36 crc kubenswrapper[4840]: I0311 09:00:36.660328 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3120a6d63dff40053fc154b7a41ef936ea0f61605cb7f79cbe4540b8f2be04a4" Mar 11 09:00:36 crc kubenswrapper[4840]: I0311 09:00:36.660348 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553660-t75r2" Mar 11 09:00:36 crc kubenswrapper[4840]: I0311 09:00:36.664185 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6999b6d5db-jnf9q" Mar 11 09:00:36 crc kubenswrapper[4840]: I0311 09:00:36.676843 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6999b6d5db-jnf9q" podStartSLOduration=4.6768179199999995 podStartE2EDuration="4.67681792s" podCreationTimestamp="2026-03-11 09:00:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:00:36.671977041 +0000 UTC m=+235.337646876" watchObservedRunningTime="2026-03-11 09:00:36.67681792 +0000 UTC m=+235.342487755" Mar 11 09:00:36 crc kubenswrapper[4840]: I0311 09:00:36.929480 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 11 09:00:37 crc kubenswrapper[4840]: I0311 09:00:37.009758 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9424d118-b912-4307-a8af-18be13d29880-kubelet-dir\") pod \"9424d118-b912-4307-a8af-18be13d29880\" (UID: \"9424d118-b912-4307-a8af-18be13d29880\") " Mar 11 09:00:37 crc kubenswrapper[4840]: I0311 09:00:37.009838 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9424d118-b912-4307-a8af-18be13d29880-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "9424d118-b912-4307-a8af-18be13d29880" (UID: "9424d118-b912-4307-a8af-18be13d29880"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 11 09:00:37 crc kubenswrapper[4840]: I0311 09:00:37.009921 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9424d118-b912-4307-a8af-18be13d29880-kube-api-access\") pod \"9424d118-b912-4307-a8af-18be13d29880\" (UID: \"9424d118-b912-4307-a8af-18be13d29880\") " Mar 11 09:00:37 crc kubenswrapper[4840]: I0311 09:00:37.010145 4840 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9424d118-b912-4307-a8af-18be13d29880-kubelet-dir\") on node \"crc\" DevicePath \"\"" Mar 11 09:00:37 crc kubenswrapper[4840]: I0311 09:00:37.018794 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9424d118-b912-4307-a8af-18be13d29880-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "9424d118-b912-4307-a8af-18be13d29880" (UID: "9424d118-b912-4307-a8af-18be13d29880"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:00:37 crc kubenswrapper[4840]: I0311 09:00:37.111626 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9424d118-b912-4307-a8af-18be13d29880-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 11 09:00:37 crc kubenswrapper[4840]: I0311 09:00:37.686096 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"9424d118-b912-4307-a8af-18be13d29880","Type":"ContainerDied","Data":"44cc0f39d39aa74090c2a54f32e6cf08538426c2b82d2ef6c0a4e4bf76e8c1ff"} Mar 11 09:00:37 crc kubenswrapper[4840]: I0311 09:00:37.686636 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="44cc0f39d39aa74090c2a54f32e6cf08538426c2b82d2ef6c0a4e4bf76e8c1ff" Mar 11 09:00:37 crc kubenswrapper[4840]: I0311 09:00:37.687359 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 11 09:00:38 crc kubenswrapper[4840]: I0311 09:00:38.037550 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7df4b549f7-dgdzz"] Mar 11 09:00:38 crc kubenswrapper[4840]: E0311 09:00:38.038022 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4172593-e97b-48a1-b064-8051cd6aef46" containerName="oc" Mar 11 09:00:38 crc kubenswrapper[4840]: I0311 09:00:38.038052 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4172593-e97b-48a1-b064-8051cd6aef46" containerName="oc" Mar 11 09:00:38 crc kubenswrapper[4840]: E0311 09:00:38.038089 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9424d118-b912-4307-a8af-18be13d29880" containerName="pruner" Mar 11 09:00:38 crc kubenswrapper[4840]: I0311 09:00:38.038107 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="9424d118-b912-4307-a8af-18be13d29880" containerName="pruner" 
Mar 11 09:00:38 crc kubenswrapper[4840]: I0311 09:00:38.038368 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4172593-e97b-48a1-b064-8051cd6aef46" containerName="oc" Mar 11 09:00:38 crc kubenswrapper[4840]: I0311 09:00:38.038437 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="9424d118-b912-4307-a8af-18be13d29880" containerName="pruner" Mar 11 09:00:38 crc kubenswrapper[4840]: I0311 09:00:38.039366 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7df4b549f7-dgdzz" Mar 11 09:00:38 crc kubenswrapper[4840]: I0311 09:00:38.042175 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 11 09:00:38 crc kubenswrapper[4840]: I0311 09:00:38.042546 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 11 09:00:38 crc kubenswrapper[4840]: I0311 09:00:38.044590 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 11 09:00:38 crc kubenswrapper[4840]: I0311 09:00:38.044841 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Mar 11 09:00:38 crc kubenswrapper[4840]: I0311 09:00:38.045037 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 11 09:00:38 crc kubenswrapper[4840]: I0311 09:00:38.045296 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 11 09:00:38 crc kubenswrapper[4840]: I0311 09:00:38.050239 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7df4b549f7-dgdzz"] Mar 11 09:00:38 crc kubenswrapper[4840]: I0311 09:00:38.061879 4840 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 11 09:00:38 crc kubenswrapper[4840]: I0311 09:00:38.133651 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8547661a-f824-4f88-9c4e-d44727e4430a-proxy-ca-bundles\") pod \"controller-manager-7df4b549f7-dgdzz\" (UID: \"8547661a-f824-4f88-9c4e-d44727e4430a\") " pod="openshift-controller-manager/controller-manager-7df4b549f7-dgdzz" Mar 11 09:00:38 crc kubenswrapper[4840]: I0311 09:00:38.134424 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8547661a-f824-4f88-9c4e-d44727e4430a-config\") pod \"controller-manager-7df4b549f7-dgdzz\" (UID: \"8547661a-f824-4f88-9c4e-d44727e4430a\") " pod="openshift-controller-manager/controller-manager-7df4b549f7-dgdzz" Mar 11 09:00:38 crc kubenswrapper[4840]: I0311 09:00:38.134574 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8547661a-f824-4f88-9c4e-d44727e4430a-client-ca\") pod \"controller-manager-7df4b549f7-dgdzz\" (UID: \"8547661a-f824-4f88-9c4e-d44727e4430a\") " pod="openshift-controller-manager/controller-manager-7df4b549f7-dgdzz" Mar 11 09:00:38 crc kubenswrapper[4840]: I0311 09:00:38.134637 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8547661a-f824-4f88-9c4e-d44727e4430a-serving-cert\") pod \"controller-manager-7df4b549f7-dgdzz\" (UID: \"8547661a-f824-4f88-9c4e-d44727e4430a\") " pod="openshift-controller-manager/controller-manager-7df4b549f7-dgdzz" Mar 11 09:00:38 crc kubenswrapper[4840]: I0311 09:00:38.134873 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-qwtjb\" (UniqueName: \"kubernetes.io/projected/8547661a-f824-4f88-9c4e-d44727e4430a-kube-api-access-qwtjb\") pod \"controller-manager-7df4b549f7-dgdzz\" (UID: \"8547661a-f824-4f88-9c4e-d44727e4430a\") " pod="openshift-controller-manager/controller-manager-7df4b549f7-dgdzz" Mar 11 09:00:38 crc kubenswrapper[4840]: I0311 09:00:38.236070 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8547661a-f824-4f88-9c4e-d44727e4430a-proxy-ca-bundles\") pod \"controller-manager-7df4b549f7-dgdzz\" (UID: \"8547661a-f824-4f88-9c4e-d44727e4430a\") " pod="openshift-controller-manager/controller-manager-7df4b549f7-dgdzz" Mar 11 09:00:38 crc kubenswrapper[4840]: I0311 09:00:38.236161 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8547661a-f824-4f88-9c4e-d44727e4430a-config\") pod \"controller-manager-7df4b549f7-dgdzz\" (UID: \"8547661a-f824-4f88-9c4e-d44727e4430a\") " pod="openshift-controller-manager/controller-manager-7df4b549f7-dgdzz" Mar 11 09:00:38 crc kubenswrapper[4840]: I0311 09:00:38.236196 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8547661a-f824-4f88-9c4e-d44727e4430a-client-ca\") pod \"controller-manager-7df4b549f7-dgdzz\" (UID: \"8547661a-f824-4f88-9c4e-d44727e4430a\") " pod="openshift-controller-manager/controller-manager-7df4b549f7-dgdzz" Mar 11 09:00:38 crc kubenswrapper[4840]: I0311 09:00:38.236214 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8547661a-f824-4f88-9c4e-d44727e4430a-serving-cert\") pod \"controller-manager-7df4b549f7-dgdzz\" (UID: \"8547661a-f824-4f88-9c4e-d44727e4430a\") " pod="openshift-controller-manager/controller-manager-7df4b549f7-dgdzz" Mar 11 09:00:38 crc kubenswrapper[4840]: I0311 
09:00:38.236283 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwtjb\" (UniqueName: \"kubernetes.io/projected/8547661a-f824-4f88-9c4e-d44727e4430a-kube-api-access-qwtjb\") pod \"controller-manager-7df4b549f7-dgdzz\" (UID: \"8547661a-f824-4f88-9c4e-d44727e4430a\") " pod="openshift-controller-manager/controller-manager-7df4b549f7-dgdzz" Mar 11 09:00:38 crc kubenswrapper[4840]: I0311 09:00:38.237649 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8547661a-f824-4f88-9c4e-d44727e4430a-client-ca\") pod \"controller-manager-7df4b549f7-dgdzz\" (UID: \"8547661a-f824-4f88-9c4e-d44727e4430a\") " pod="openshift-controller-manager/controller-manager-7df4b549f7-dgdzz" Mar 11 09:00:38 crc kubenswrapper[4840]: I0311 09:00:38.237796 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8547661a-f824-4f88-9c4e-d44727e4430a-proxy-ca-bundles\") pod \"controller-manager-7df4b549f7-dgdzz\" (UID: \"8547661a-f824-4f88-9c4e-d44727e4430a\") " pod="openshift-controller-manager/controller-manager-7df4b549f7-dgdzz" Mar 11 09:00:38 crc kubenswrapper[4840]: I0311 09:00:38.254415 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8547661a-f824-4f88-9c4e-d44727e4430a-serving-cert\") pod \"controller-manager-7df4b549f7-dgdzz\" (UID: \"8547661a-f824-4f88-9c4e-d44727e4430a\") " pod="openshift-controller-manager/controller-manager-7df4b549f7-dgdzz" Mar 11 09:00:38 crc kubenswrapper[4840]: I0311 09:00:38.260806 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8547661a-f824-4f88-9c4e-d44727e4430a-config\") pod \"controller-manager-7df4b549f7-dgdzz\" (UID: \"8547661a-f824-4f88-9c4e-d44727e4430a\") " 
pod="openshift-controller-manager/controller-manager-7df4b549f7-dgdzz" Mar 11 09:00:38 crc kubenswrapper[4840]: I0311 09:00:38.273176 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwtjb\" (UniqueName: \"kubernetes.io/projected/8547661a-f824-4f88-9c4e-d44727e4430a-kube-api-access-qwtjb\") pod \"controller-manager-7df4b549f7-dgdzz\" (UID: \"8547661a-f824-4f88-9c4e-d44727e4430a\") " pod="openshift-controller-manager/controller-manager-7df4b549f7-dgdzz" Mar 11 09:00:38 crc kubenswrapper[4840]: I0311 09:00:38.360907 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7df4b549f7-dgdzz" Mar 11 09:00:38 crc kubenswrapper[4840]: I0311 09:00:38.590191 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7df4b549f7-dgdzz"] Mar 11 09:00:38 crc kubenswrapper[4840]: I0311 09:00:38.692692 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7df4b549f7-dgdzz" event={"ID":"8547661a-f824-4f88-9c4e-d44727e4430a","Type":"ContainerStarted","Data":"6cb96e8ee1e0f09446429ca6ba89f21bd2d6240de4ec5176ecb8722c3e67575d"} Mar 11 09:00:39 crc kubenswrapper[4840]: I0311 09:00:39.699885 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7df4b549f7-dgdzz" event={"ID":"8547661a-f824-4f88-9c4e-d44727e4430a","Type":"ContainerStarted","Data":"5388b1d9122b8fb92750b5e93c9fbf477091e36aab959880089f67c055bab8d4"} Mar 11 09:00:39 crc kubenswrapper[4840]: I0311 09:00:39.700929 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7df4b549f7-dgdzz" Mar 11 09:00:39 crc kubenswrapper[4840]: I0311 09:00:39.703913 4840 generic.go:334] "Generic (PLEG): container finished" podID="dcd6c58d-5984-4122-826c-18ecfe7dde26" 
containerID="b0f6aad5687e543eaedd17e9c5161c0b82a5dc52c861cea0de6aface10b94b02" exitCode=0 Mar 11 09:00:39 crc kubenswrapper[4840]: I0311 09:00:39.703985 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v5s8n" event={"ID":"dcd6c58d-5984-4122-826c-18ecfe7dde26","Type":"ContainerDied","Data":"b0f6aad5687e543eaedd17e9c5161c0b82a5dc52c861cea0de6aface10b94b02"} Mar 11 09:00:39 crc kubenswrapper[4840]: I0311 09:00:39.707051 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7df4b549f7-dgdzz" Mar 11 09:00:39 crc kubenswrapper[4840]: I0311 09:00:39.725639 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7df4b549f7-dgdzz" podStartSLOduration=7.725452657 podStartE2EDuration="7.725452657s" podCreationTimestamp="2026-03-11 09:00:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:00:39.718173629 +0000 UTC m=+238.383843434" watchObservedRunningTime="2026-03-11 09:00:39.725452657 +0000 UTC m=+238.391122472" Mar 11 09:00:40 crc kubenswrapper[4840]: I0311 09:00:40.712935 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v5s8n" event={"ID":"dcd6c58d-5984-4122-826c-18ecfe7dde26","Type":"ContainerStarted","Data":"762522a45ea7a7769f27466d4bf242c1a8eecbeb69b54c9eff19935afcae3c5b"} Mar 11 09:00:40 crc kubenswrapper[4840]: I0311 09:00:40.735209 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-v5s8n" podStartSLOduration=2.186234472 podStartE2EDuration="46.735183161s" podCreationTimestamp="2026-03-11 08:59:54 +0000 UTC" firstStartedPulling="2026-03-11 08:59:55.989899391 +0000 UTC m=+194.655569206" lastFinishedPulling="2026-03-11 09:00:40.53884808 +0000 UTC 
m=+239.204517895" observedRunningTime="2026-03-11 09:00:40.733320208 +0000 UTC m=+239.398990053" watchObservedRunningTime="2026-03-11 09:00:40.735183161 +0000 UTC m=+239.400852996" Mar 11 09:00:42 crc kubenswrapper[4840]: I0311 09:00:42.633974 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-td29c"] Mar 11 09:00:42 crc kubenswrapper[4840]: I0311 09:00:42.740901 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fp456" event={"ID":"5be5a417-86c5-43f6-b238-9c0c498028ab","Type":"ContainerStarted","Data":"d51dae7559949d4d8934f6238e8a989ed047203a5bdc3e1941018e1201b3c740"} Mar 11 09:00:43 crc kubenswrapper[4840]: I0311 09:00:43.748017 4840 generic.go:334] "Generic (PLEG): container finished" podID="5be5a417-86c5-43f6-b238-9c0c498028ab" containerID="d51dae7559949d4d8934f6238e8a989ed047203a5bdc3e1941018e1201b3c740" exitCode=0 Mar 11 09:00:43 crc kubenswrapper[4840]: I0311 09:00:43.748125 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fp456" event={"ID":"5be5a417-86c5-43f6-b238-9c0c498028ab","Type":"ContainerDied","Data":"d51dae7559949d4d8934f6238e8a989ed047203a5bdc3e1941018e1201b3c740"} Mar 11 09:00:43 crc kubenswrapper[4840]: I0311 09:00:43.753202 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-blqnh" event={"ID":"75259694-613a-407e-aea2-fb828fe927b9","Type":"ContainerStarted","Data":"a727096026fbe214f8cf780c7b1e3b46defdd1b8f41d04748ad871c3f1170242"} Mar 11 09:00:43 crc kubenswrapper[4840]: I0311 09:00:43.756783 4840 generic.go:334] "Generic (PLEG): container finished" podID="3ea6e709-84f7-4603-bcda-6d336d3a96fc" containerID="395edf4404c61a731826407dd96439dd27106b688a92b6a49b07bc2070071bcc" exitCode=0 Mar 11 09:00:43 crc kubenswrapper[4840]: I0311 09:00:43.756866 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-swqkn" event={"ID":"3ea6e709-84f7-4603-bcda-6d336d3a96fc","Type":"ContainerDied","Data":"395edf4404c61a731826407dd96439dd27106b688a92b6a49b07bc2070071bcc"} Mar 11 09:00:43 crc kubenswrapper[4840]: E0311 09:00:43.894032 4840 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod75259694_613a_407e_aea2_fb828fe927b9.slice/crio-conmon-a727096026fbe214f8cf780c7b1e3b46defdd1b8f41d04748ad871c3f1170242.scope\": RecentStats: unable to find data in memory cache]" Mar 11 09:00:44 crc kubenswrapper[4840]: I0311 09:00:44.765536 4840 generic.go:334] "Generic (PLEG): container finished" podID="75259694-613a-407e-aea2-fb828fe927b9" containerID="a727096026fbe214f8cf780c7b1e3b46defdd1b8f41d04748ad871c3f1170242" exitCode=0 Mar 11 09:00:44 crc kubenswrapper[4840]: I0311 09:00:44.765580 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-blqnh" event={"ID":"75259694-613a-407e-aea2-fb828fe927b9","Type":"ContainerDied","Data":"a727096026fbe214f8cf780c7b1e3b46defdd1b8f41d04748ad871c3f1170242"} Mar 11 09:00:44 crc kubenswrapper[4840]: I0311 09:00:44.776389 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-v5s8n" Mar 11 09:00:44 crc kubenswrapper[4840]: I0311 09:00:44.776422 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-v5s8n" Mar 11 09:00:45 crc kubenswrapper[4840]: I0311 09:00:45.595948 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-v5s8n" Mar 11 09:00:45 crc kubenswrapper[4840]: I0311 09:00:45.811655 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-v5s8n" Mar 11 09:00:46 crc kubenswrapper[4840]: I0311 
09:00:46.748271 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-v5s8n"] Mar 11 09:00:46 crc kubenswrapper[4840]: I0311 09:00:46.777969 4840 generic.go:334] "Generic (PLEG): container finished" podID="ecd2f9dc-1f5f-4787-b66f-8caaaeb9dc9f" containerID="1d1784697349e16a1ada97bac72b9b94c906b7b716e855c7d8151b121b0eda94" exitCode=0 Mar 11 09:00:46 crc kubenswrapper[4840]: I0311 09:00:46.778045 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-chbkv" event={"ID":"ecd2f9dc-1f5f-4787-b66f-8caaaeb9dc9f","Type":"ContainerDied","Data":"1d1784697349e16a1ada97bac72b9b94c906b7b716e855c7d8151b121b0eda94"} Mar 11 09:00:46 crc kubenswrapper[4840]: I0311 09:00:46.782000 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-swqkn" event={"ID":"3ea6e709-84f7-4603-bcda-6d336d3a96fc","Type":"ContainerStarted","Data":"b3810fa86559b7398625eb5bff30db6085d1ea88a8cd21fc4239ae7f999b7347"} Mar 11 09:00:46 crc kubenswrapper[4840]: I0311 09:00:46.786541 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fp456" event={"ID":"5be5a417-86c5-43f6-b238-9c0c498028ab","Type":"ContainerStarted","Data":"3690c91110a373b4c2061a5a2ab79f4ad4d340fb72b05374ad1956e1d66d4e5e"} Mar 11 09:00:46 crc kubenswrapper[4840]: I0311 09:00:46.789462 4840 generic.go:334] "Generic (PLEG): container finished" podID="b17f6fce-66c6-45f7-8e1b-be9ffe4f16a6" containerID="5f2e47e72f6aeae7930cc5143f650e751f0be708fd06c361b440211992a47026" exitCode=0 Mar 11 09:00:46 crc kubenswrapper[4840]: I0311 09:00:46.789510 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h578g" event={"ID":"b17f6fce-66c6-45f7-8e1b-be9ffe4f16a6","Type":"ContainerDied","Data":"5f2e47e72f6aeae7930cc5143f650e751f0be708fd06c361b440211992a47026"} Mar 11 09:00:46 crc kubenswrapper[4840]: I0311 09:00:46.841033 4840 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-fp456" podStartSLOduration=2.95059256 podStartE2EDuration="51.841017512s" podCreationTimestamp="2026-03-11 08:59:55 +0000 UTC" firstStartedPulling="2026-03-11 08:59:57.101801955 +0000 UTC m=+195.767471760" lastFinishedPulling="2026-03-11 09:00:45.992226897 +0000 UTC m=+244.657896712" observedRunningTime="2026-03-11 09:00:46.837579884 +0000 UTC m=+245.503249699" watchObservedRunningTime="2026-03-11 09:00:46.841017512 +0000 UTC m=+245.506687327" Mar 11 09:00:46 crc kubenswrapper[4840]: I0311 09:00:46.861996 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-swqkn" podStartSLOduration=2.958385867 podStartE2EDuration="52.861975291s" podCreationTimestamp="2026-03-11 08:59:54 +0000 UTC" firstStartedPulling="2026-03-11 08:59:56.084524655 +0000 UTC m=+194.750194470" lastFinishedPulling="2026-03-11 09:00:45.988114089 +0000 UTC m=+244.653783894" observedRunningTime="2026-03-11 09:00:46.856268108 +0000 UTC m=+245.521937933" watchObservedRunningTime="2026-03-11 09:00:46.861975291 +0000 UTC m=+245.527645106" Mar 11 09:00:47 crc kubenswrapper[4840]: I0311 09:00:47.799282 4840 generic.go:334] "Generic (PLEG): container finished" podID="996d3b36-77c7-4e8f-a472-ac032aabd836" containerID="dfb61453350707fe6372284817fe70c02004ca1bb80a9cfb3b6f3cbaafca3d6b" exitCode=0 Mar 11 09:00:47 crc kubenswrapper[4840]: I0311 09:00:47.800083 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-v5s8n" podUID="dcd6c58d-5984-4122-826c-18ecfe7dde26" containerName="registry-server" containerID="cri-o://762522a45ea7a7769f27466d4bf242c1a8eecbeb69b54c9eff19935afcae3c5b" gracePeriod=2 Mar 11 09:00:47 crc kubenswrapper[4840]: I0311 09:00:47.800231 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7w69s" 
event={"ID":"996d3b36-77c7-4e8f-a472-ac032aabd836","Type":"ContainerDied","Data":"dfb61453350707fe6372284817fe70c02004ca1bb80a9cfb3b6f3cbaafca3d6b"} Mar 11 09:00:48 crc kubenswrapper[4840]: I0311 09:00:48.826331 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-blqnh" event={"ID":"75259694-613a-407e-aea2-fb828fe927b9","Type":"ContainerStarted","Data":"6771e73e7005b64dc8b2d4943f7852ac7498329363b356b4e71d56aeb4d05da2"} Mar 11 09:00:48 crc kubenswrapper[4840]: I0311 09:00:48.831611 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-chbkv" event={"ID":"ecd2f9dc-1f5f-4787-b66f-8caaaeb9dc9f","Type":"ContainerStarted","Data":"a0abb392161b78cc6497a4f3f5188c863ac093889437254a5f9b6053c9d25813"} Mar 11 09:00:48 crc kubenswrapper[4840]: I0311 09:00:48.840029 4840 generic.go:334] "Generic (PLEG): container finished" podID="dcd6c58d-5984-4122-826c-18ecfe7dde26" containerID="762522a45ea7a7769f27466d4bf242c1a8eecbeb69b54c9eff19935afcae3c5b" exitCode=0 Mar 11 09:00:48 crc kubenswrapper[4840]: I0311 09:00:48.840081 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v5s8n" event={"ID":"dcd6c58d-5984-4122-826c-18ecfe7dde26","Type":"ContainerDied","Data":"762522a45ea7a7769f27466d4bf242c1a8eecbeb69b54c9eff19935afcae3c5b"} Mar 11 09:00:48 crc kubenswrapper[4840]: I0311 09:00:48.854938 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-blqnh" podStartSLOduration=2.546645817 podStartE2EDuration="53.854923702s" podCreationTimestamp="2026-03-11 08:59:55 +0000 UTC" firstStartedPulling="2026-03-11 08:59:57.139393229 +0000 UTC m=+195.805063044" lastFinishedPulling="2026-03-11 09:00:48.447671114 +0000 UTC m=+247.113340929" observedRunningTime="2026-03-11 09:00:48.849382343 +0000 UTC m=+247.515052158" watchObservedRunningTime="2026-03-11 09:00:48.854923702 +0000 UTC m=+247.520593517" 
Mar 11 09:00:48 crc kubenswrapper[4840]: I0311 09:00:48.875178 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-chbkv" podStartSLOduration=3.372869718 podStartE2EDuration="56.87514514s" podCreationTimestamp="2026-03-11 08:59:52 +0000 UTC" firstStartedPulling="2026-03-11 08:59:54.959960038 +0000 UTC m=+193.625629853" lastFinishedPulling="2026-03-11 09:00:48.46223546 +0000 UTC m=+247.127905275" observedRunningTime="2026-03-11 09:00:48.87200893 +0000 UTC m=+247.537678745" watchObservedRunningTime="2026-03-11 09:00:48.87514514 +0000 UTC m=+247.540814955" Mar 11 09:00:48 crc kubenswrapper[4840]: I0311 09:00:48.952740 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v5s8n" Mar 11 09:00:48 crc kubenswrapper[4840]: I0311 09:00:48.996879 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dcd6c58d-5984-4122-826c-18ecfe7dde26-catalog-content\") pod \"dcd6c58d-5984-4122-826c-18ecfe7dde26\" (UID: \"dcd6c58d-5984-4122-826c-18ecfe7dde26\") " Mar 11 09:00:48 crc kubenswrapper[4840]: I0311 09:00:48.997267 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kszv8\" (UniqueName: \"kubernetes.io/projected/dcd6c58d-5984-4122-826c-18ecfe7dde26-kube-api-access-kszv8\") pod \"dcd6c58d-5984-4122-826c-18ecfe7dde26\" (UID: \"dcd6c58d-5984-4122-826c-18ecfe7dde26\") " Mar 11 09:00:48 crc kubenswrapper[4840]: I0311 09:00:48.997312 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dcd6c58d-5984-4122-826c-18ecfe7dde26-utilities\") pod \"dcd6c58d-5984-4122-826c-18ecfe7dde26\" (UID: \"dcd6c58d-5984-4122-826c-18ecfe7dde26\") " Mar 11 09:00:48 crc kubenswrapper[4840]: I0311 09:00:48.998628 4840 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dcd6c58d-5984-4122-826c-18ecfe7dde26-utilities" (OuterVolumeSpecName: "utilities") pod "dcd6c58d-5984-4122-826c-18ecfe7dde26" (UID: "dcd6c58d-5984-4122-826c-18ecfe7dde26"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:00:49 crc kubenswrapper[4840]: I0311 09:00:49.004189 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcd6c58d-5984-4122-826c-18ecfe7dde26-kube-api-access-kszv8" (OuterVolumeSpecName: "kube-api-access-kszv8") pod "dcd6c58d-5984-4122-826c-18ecfe7dde26" (UID: "dcd6c58d-5984-4122-826c-18ecfe7dde26"). InnerVolumeSpecName "kube-api-access-kszv8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:00:49 crc kubenswrapper[4840]: I0311 09:00:49.046308 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dcd6c58d-5984-4122-826c-18ecfe7dde26-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dcd6c58d-5984-4122-826c-18ecfe7dde26" (UID: "dcd6c58d-5984-4122-826c-18ecfe7dde26"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:00:49 crc kubenswrapper[4840]: I0311 09:00:49.099302 4840 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dcd6c58d-5984-4122-826c-18ecfe7dde26-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 11 09:00:49 crc kubenswrapper[4840]: I0311 09:00:49.099363 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kszv8\" (UniqueName: \"kubernetes.io/projected/dcd6c58d-5984-4122-826c-18ecfe7dde26-kube-api-access-kszv8\") on node \"crc\" DevicePath \"\"" Mar 11 09:00:49 crc kubenswrapper[4840]: I0311 09:00:49.099382 4840 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dcd6c58d-5984-4122-826c-18ecfe7dde26-utilities\") on node \"crc\" DevicePath \"\"" Mar 11 09:00:49 crc kubenswrapper[4840]: I0311 09:00:49.848782 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v5s8n" event={"ID":"dcd6c58d-5984-4122-826c-18ecfe7dde26","Type":"ContainerDied","Data":"9e4329a7ed3f60f898ef504f1a7280249b3a2eeec028f6e171fb68ff7a68904f"} Mar 11 09:00:49 crc kubenswrapper[4840]: I0311 09:00:49.848859 4840 scope.go:117] "RemoveContainer" containerID="762522a45ea7a7769f27466d4bf242c1a8eecbeb69b54c9eff19935afcae3c5b" Mar 11 09:00:49 crc kubenswrapper[4840]: I0311 09:00:49.848805 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v5s8n" Mar 11 09:00:49 crc kubenswrapper[4840]: I0311 09:00:49.855580 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h578g" event={"ID":"b17f6fce-66c6-45f7-8e1b-be9ffe4f16a6","Type":"ContainerStarted","Data":"27ad3910a22bc8888e452aedc5441f68a7394e29f9ebebc7ce53b0fc45a30fba"} Mar 11 09:00:49 crc kubenswrapper[4840]: I0311 09:00:49.868679 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7w69s" event={"ID":"996d3b36-77c7-4e8f-a472-ac032aabd836","Type":"ContainerStarted","Data":"5f2d298083f37d57640ea24cb030fec7f4ad8d2dabad678095e607093a6ad04c"} Mar 11 09:00:49 crc kubenswrapper[4840]: I0311 09:00:49.873782 4840 generic.go:334] "Generic (PLEG): container finished" podID="edba0e9a-0f5d-4aea-bc9c-7eff83b36a8f" containerID="0d2f27125a8e6137d993fad1a035889c8d1a06a7394e97d1ed53eebc8057ef7b" exitCode=0 Mar 11 09:00:49 crc kubenswrapper[4840]: I0311 09:00:49.873847 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6wxnb" event={"ID":"edba0e9a-0f5d-4aea-bc9c-7eff83b36a8f","Type":"ContainerDied","Data":"0d2f27125a8e6137d993fad1a035889c8d1a06a7394e97d1ed53eebc8057ef7b"} Mar 11 09:00:49 crc kubenswrapper[4840]: I0311 09:00:49.875192 4840 scope.go:117] "RemoveContainer" containerID="b0f6aad5687e543eaedd17e9c5161c0b82a5dc52c861cea0de6aface10b94b02" Mar 11 09:00:49 crc kubenswrapper[4840]: I0311 09:00:49.888769 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-h578g" podStartSLOduration=3.045131524 podStartE2EDuration="57.888755995s" podCreationTimestamp="2026-03-11 08:59:52 +0000 UTC" firstStartedPulling="2026-03-11 08:59:53.87026542 +0000 UTC m=+192.535935235" lastFinishedPulling="2026-03-11 09:00:48.713889891 +0000 UTC m=+247.379559706" observedRunningTime="2026-03-11 
09:00:49.887841019 +0000 UTC m=+248.553510834" watchObservedRunningTime="2026-03-11 09:00:49.888755995 +0000 UTC m=+248.554425810" Mar 11 09:00:49 crc kubenswrapper[4840]: I0311 09:00:49.903547 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-v5s8n"] Mar 11 09:00:49 crc kubenswrapper[4840]: I0311 09:00:49.905694 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-v5s8n"] Mar 11 09:00:49 crc kubenswrapper[4840]: I0311 09:00:49.911355 4840 scope.go:117] "RemoveContainer" containerID="5045fc7e473933ff3dbd287dc324c77eff4af4edab2d305e3b27e7f497553c95" Mar 11 09:00:49 crc kubenswrapper[4840]: I0311 09:00:49.926706 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7w69s" podStartSLOduration=4.293637651 podStartE2EDuration="57.926684219s" podCreationTimestamp="2026-03-11 08:59:52 +0000 UTC" firstStartedPulling="2026-03-11 08:59:54.925600267 +0000 UTC m=+193.591270082" lastFinishedPulling="2026-03-11 09:00:48.558646835 +0000 UTC m=+247.224316650" observedRunningTime="2026-03-11 09:00:49.925856935 +0000 UTC m=+248.591526750" watchObservedRunningTime="2026-03-11 09:00:49.926684219 +0000 UTC m=+248.592354034" Mar 11 09:00:50 crc kubenswrapper[4840]: I0311 09:00:50.068657 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcd6c58d-5984-4122-826c-18ecfe7dde26" path="/var/lib/kubelet/pods/dcd6c58d-5984-4122-826c-18ecfe7dde26/volumes" Mar 11 09:00:52 crc kubenswrapper[4840]: I0311 09:00:52.619063 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-h578g" Mar 11 09:00:52 crc kubenswrapper[4840]: I0311 09:00:52.619592 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-h578g" Mar 11 09:00:52 crc kubenswrapper[4840]: I0311 09:00:52.672515 4840 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-h578g" Mar 11 09:00:52 crc kubenswrapper[4840]: I0311 09:00:52.848132 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7w69s" Mar 11 09:00:52 crc kubenswrapper[4840]: I0311 09:00:52.848202 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7w69s" Mar 11 09:00:52 crc kubenswrapper[4840]: I0311 09:00:52.896311 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6wxnb" event={"ID":"edba0e9a-0f5d-4aea-bc9c-7eff83b36a8f","Type":"ContainerStarted","Data":"1c3d69ee82022c3839bad445e555126a4e28c053d7b2e869ce008169c4fb2a7b"} Mar 11 09:00:52 crc kubenswrapper[4840]: I0311 09:00:52.898813 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7w69s" Mar 11 09:00:52 crc kubenswrapper[4840]: I0311 09:00:52.921719 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6wxnb" podStartSLOduration=3.009934088 podStartE2EDuration="1m0.921685855s" podCreationTimestamp="2026-03-11 08:59:52 +0000 UTC" firstStartedPulling="2026-03-11 08:59:53.882542461 +0000 UTC m=+192.548212276" lastFinishedPulling="2026-03-11 09:00:51.794294228 +0000 UTC m=+250.459964043" observedRunningTime="2026-03-11 09:00:52.917055592 +0000 UTC m=+251.582725417" watchObservedRunningTime="2026-03-11 09:00:52.921685855 +0000 UTC m=+251.587355670" Mar 11 09:00:53 crc kubenswrapper[4840]: I0311 09:00:53.373203 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-chbkv" Mar 11 09:00:53 crc kubenswrapper[4840]: I0311 09:00:53.373808 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/community-operators-chbkv" Mar 11 09:00:53 crc kubenswrapper[4840]: I0311 09:00:53.424365 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-chbkv" Mar 11 09:00:53 crc kubenswrapper[4840]: I0311 09:00:53.978045 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-chbkv" Mar 11 09:00:54 crc kubenswrapper[4840]: I0311 09:00:54.391834 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-swqkn" Mar 11 09:00:54 crc kubenswrapper[4840]: I0311 09:00:54.391933 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-swqkn" Mar 11 09:00:54 crc kubenswrapper[4840]: I0311 09:00:54.429509 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-swqkn" Mar 11 09:00:54 crc kubenswrapper[4840]: I0311 09:00:54.948227 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-swqkn" Mar 11 09:00:55 crc kubenswrapper[4840]: I0311 09:00:55.774117 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-fp456" Mar 11 09:00:55 crc kubenswrapper[4840]: I0311 09:00:55.774173 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-fp456" Mar 11 09:00:55 crc kubenswrapper[4840]: I0311 09:00:55.827016 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-fp456" Mar 11 09:00:55 crc kubenswrapper[4840]: I0311 09:00:55.955544 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-fp456" Mar 11 09:00:56 crc kubenswrapper[4840]: I0311 
09:00:56.189983 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-blqnh" Mar 11 09:00:56 crc kubenswrapper[4840]: I0311 09:00:56.190062 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-blqnh" Mar 11 09:00:56 crc kubenswrapper[4840]: I0311 09:00:56.229917 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-blqnh" Mar 11 09:00:56 crc kubenswrapper[4840]: I0311 09:00:56.958381 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-blqnh" Mar 11 09:00:57 crc kubenswrapper[4840]: I0311 09:00:57.148298 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-chbkv"] Mar 11 09:00:57 crc kubenswrapper[4840]: I0311 09:00:57.148602 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-chbkv" podUID="ecd2f9dc-1f5f-4787-b66f-8caaaeb9dc9f" containerName="registry-server" containerID="cri-o://a0abb392161b78cc6497a4f3f5188c863ac093889437254a5f9b6053c9d25813" gracePeriod=2 Mar 11 09:00:57 crc kubenswrapper[4840]: I0311 09:00:57.446044 4840 patch_prober.go:28] interesting pod/machine-config-daemon-brtht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 11 09:00:57 crc kubenswrapper[4840]: I0311 09:00:57.446131 4840 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 11 
09:00:57 crc kubenswrapper[4840]: I0311 09:00:57.926144 4840 generic.go:334] "Generic (PLEG): container finished" podID="ecd2f9dc-1f5f-4787-b66f-8caaaeb9dc9f" containerID="a0abb392161b78cc6497a4f3f5188c863ac093889437254a5f9b6053c9d25813" exitCode=0 Mar 11 09:00:57 crc kubenswrapper[4840]: I0311 09:00:57.926266 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-chbkv" event={"ID":"ecd2f9dc-1f5f-4787-b66f-8caaaeb9dc9f","Type":"ContainerDied","Data":"a0abb392161b78cc6497a4f3f5188c863ac093889437254a5f9b6053c9d25813"} Mar 11 09:00:58 crc kubenswrapper[4840]: I0311 09:00:58.146346 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-chbkv" Mar 11 09:00:58 crc kubenswrapper[4840]: I0311 09:00:58.235029 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ecd2f9dc-1f5f-4787-b66f-8caaaeb9dc9f-catalog-content\") pod \"ecd2f9dc-1f5f-4787-b66f-8caaaeb9dc9f\" (UID: \"ecd2f9dc-1f5f-4787-b66f-8caaaeb9dc9f\") " Mar 11 09:00:58 crc kubenswrapper[4840]: I0311 09:00:58.235232 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ecd2f9dc-1f5f-4787-b66f-8caaaeb9dc9f-utilities\") pod \"ecd2f9dc-1f5f-4787-b66f-8caaaeb9dc9f\" (UID: \"ecd2f9dc-1f5f-4787-b66f-8caaaeb9dc9f\") " Mar 11 09:00:58 crc kubenswrapper[4840]: I0311 09:00:58.235326 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6jcf7\" (UniqueName: \"kubernetes.io/projected/ecd2f9dc-1f5f-4787-b66f-8caaaeb9dc9f-kube-api-access-6jcf7\") pod \"ecd2f9dc-1f5f-4787-b66f-8caaaeb9dc9f\" (UID: \"ecd2f9dc-1f5f-4787-b66f-8caaaeb9dc9f\") " Mar 11 09:00:58 crc kubenswrapper[4840]: I0311 09:00:58.236129 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/ecd2f9dc-1f5f-4787-b66f-8caaaeb9dc9f-utilities" (OuterVolumeSpecName: "utilities") pod "ecd2f9dc-1f5f-4787-b66f-8caaaeb9dc9f" (UID: "ecd2f9dc-1f5f-4787-b66f-8caaaeb9dc9f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:00:58 crc kubenswrapper[4840]: I0311 09:00:58.246875 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ecd2f9dc-1f5f-4787-b66f-8caaaeb9dc9f-kube-api-access-6jcf7" (OuterVolumeSpecName: "kube-api-access-6jcf7") pod "ecd2f9dc-1f5f-4787-b66f-8caaaeb9dc9f" (UID: "ecd2f9dc-1f5f-4787-b66f-8caaaeb9dc9f"). InnerVolumeSpecName "kube-api-access-6jcf7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:00:58 crc kubenswrapper[4840]: I0311 09:00:58.284026 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ecd2f9dc-1f5f-4787-b66f-8caaaeb9dc9f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ecd2f9dc-1f5f-4787-b66f-8caaaeb9dc9f" (UID: "ecd2f9dc-1f5f-4787-b66f-8caaaeb9dc9f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:00:58 crc kubenswrapper[4840]: I0311 09:00:58.336716 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6jcf7\" (UniqueName: \"kubernetes.io/projected/ecd2f9dc-1f5f-4787-b66f-8caaaeb9dc9f-kube-api-access-6jcf7\") on node \"crc\" DevicePath \"\"" Mar 11 09:00:58 crc kubenswrapper[4840]: I0311 09:00:58.336749 4840 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ecd2f9dc-1f5f-4787-b66f-8caaaeb9dc9f-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 11 09:00:58 crc kubenswrapper[4840]: I0311 09:00:58.336761 4840 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ecd2f9dc-1f5f-4787-b66f-8caaaeb9dc9f-utilities\") on node \"crc\" DevicePath \"\"" Mar 11 09:00:58 crc kubenswrapper[4840]: I0311 09:00:58.935337 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-chbkv" event={"ID":"ecd2f9dc-1f5f-4787-b66f-8caaaeb9dc9f","Type":"ContainerDied","Data":"59363b6776f0a73a945a87a03a4f555643cef31e0e17a2064779925d90d35070"} Mar 11 09:00:58 crc kubenswrapper[4840]: I0311 09:00:58.935405 4840 scope.go:117] "RemoveContainer" containerID="a0abb392161b78cc6497a4f3f5188c863ac093889437254a5f9b6053c9d25813" Mar 11 09:00:58 crc kubenswrapper[4840]: I0311 09:00:58.935499 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-chbkv" Mar 11 09:00:58 crc kubenswrapper[4840]: I0311 09:00:58.950201 4840 scope.go:117] "RemoveContainer" containerID="1d1784697349e16a1ada97bac72b9b94c906b7b716e855c7d8151b121b0eda94" Mar 11 09:00:58 crc kubenswrapper[4840]: I0311 09:00:58.971513 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-chbkv"] Mar 11 09:00:58 crc kubenswrapper[4840]: I0311 09:00:58.972814 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-chbkv"] Mar 11 09:00:58 crc kubenswrapper[4840]: I0311 09:00:58.979761 4840 scope.go:117] "RemoveContainer" containerID="c4a214cd1ff05433fb5237befb1450c2e433df5f0c408e78530a28bbb13d5faf" Mar 11 09:00:59 crc kubenswrapper[4840]: I0311 09:00:59.549233 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-blqnh"] Mar 11 09:00:59 crc kubenswrapper[4840]: I0311 09:00:59.549564 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-blqnh" podUID="75259694-613a-407e-aea2-fb828fe927b9" containerName="registry-server" containerID="cri-o://6771e73e7005b64dc8b2d4943f7852ac7498329363b356b4e71d56aeb4d05da2" gracePeriod=2 Mar 11 09:01:00 crc kubenswrapper[4840]: I0311 09:01:00.217158 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ecd2f9dc-1f5f-4787-b66f-8caaaeb9dc9f" path="/var/lib/kubelet/pods/ecd2f9dc-1f5f-4787-b66f-8caaaeb9dc9f/volumes" Mar 11 09:01:01 crc kubenswrapper[4840]: I0311 09:01:01.011373 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-blqnh" Mar 11 09:01:01 crc kubenswrapper[4840]: I0311 09:01:01.084968 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75259694-613a-407e-aea2-fb828fe927b9-utilities\") pod \"75259694-613a-407e-aea2-fb828fe927b9\" (UID: \"75259694-613a-407e-aea2-fb828fe927b9\") " Mar 11 09:01:01 crc kubenswrapper[4840]: I0311 09:01:01.085023 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k25b6\" (UniqueName: \"kubernetes.io/projected/75259694-613a-407e-aea2-fb828fe927b9-kube-api-access-k25b6\") pod \"75259694-613a-407e-aea2-fb828fe927b9\" (UID: \"75259694-613a-407e-aea2-fb828fe927b9\") " Mar 11 09:01:01 crc kubenswrapper[4840]: I0311 09:01:01.085065 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75259694-613a-407e-aea2-fb828fe927b9-catalog-content\") pod \"75259694-613a-407e-aea2-fb828fe927b9\" (UID: \"75259694-613a-407e-aea2-fb828fe927b9\") " Mar 11 09:01:01 crc kubenswrapper[4840]: I0311 09:01:01.088394 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75259694-613a-407e-aea2-fb828fe927b9-utilities" (OuterVolumeSpecName: "utilities") pod "75259694-613a-407e-aea2-fb828fe927b9" (UID: "75259694-613a-407e-aea2-fb828fe927b9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:01:01 crc kubenswrapper[4840]: I0311 09:01:01.092962 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75259694-613a-407e-aea2-fb828fe927b9-kube-api-access-k25b6" (OuterVolumeSpecName: "kube-api-access-k25b6") pod "75259694-613a-407e-aea2-fb828fe927b9" (UID: "75259694-613a-407e-aea2-fb828fe927b9"). InnerVolumeSpecName "kube-api-access-k25b6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:01:01 crc kubenswrapper[4840]: I0311 09:01:01.186627 4840 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75259694-613a-407e-aea2-fb828fe927b9-utilities\") on node \"crc\" DevicePath \"\"" Mar 11 09:01:01 crc kubenswrapper[4840]: I0311 09:01:01.186668 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k25b6\" (UniqueName: \"kubernetes.io/projected/75259694-613a-407e-aea2-fb828fe927b9-kube-api-access-k25b6\") on node \"crc\" DevicePath \"\"" Mar 11 09:01:01 crc kubenswrapper[4840]: I0311 09:01:01.214415 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75259694-613a-407e-aea2-fb828fe927b9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "75259694-613a-407e-aea2-fb828fe927b9" (UID: "75259694-613a-407e-aea2-fb828fe927b9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:01:01 crc kubenswrapper[4840]: I0311 09:01:01.244865 4840 generic.go:334] "Generic (PLEG): container finished" podID="75259694-613a-407e-aea2-fb828fe927b9" containerID="6771e73e7005b64dc8b2d4943f7852ac7498329363b356b4e71d56aeb4d05da2" exitCode=0 Mar 11 09:01:01 crc kubenswrapper[4840]: I0311 09:01:01.244947 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-blqnh" Mar 11 09:01:01 crc kubenswrapper[4840]: I0311 09:01:01.244963 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-blqnh" event={"ID":"75259694-613a-407e-aea2-fb828fe927b9","Type":"ContainerDied","Data":"6771e73e7005b64dc8b2d4943f7852ac7498329363b356b4e71d56aeb4d05da2"} Mar 11 09:01:01 crc kubenswrapper[4840]: I0311 09:01:01.245038 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-blqnh" event={"ID":"75259694-613a-407e-aea2-fb828fe927b9","Type":"ContainerDied","Data":"f44fc1ecbf5c96bed3f9583fa74b3b31930e27cdb857843ea7465ed74d051d4d"} Mar 11 09:01:01 crc kubenswrapper[4840]: I0311 09:01:01.245087 4840 scope.go:117] "RemoveContainer" containerID="6771e73e7005b64dc8b2d4943f7852ac7498329363b356b4e71d56aeb4d05da2" Mar 11 09:01:01 crc kubenswrapper[4840]: I0311 09:01:01.276175 4840 scope.go:117] "RemoveContainer" containerID="a727096026fbe214f8cf780c7b1e3b46defdd1b8f41d04748ad871c3f1170242" Mar 11 09:01:01 crc kubenswrapper[4840]: I0311 09:01:01.276596 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-blqnh"] Mar 11 09:01:01 crc kubenswrapper[4840]: I0311 09:01:01.288522 4840 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75259694-613a-407e-aea2-fb828fe927b9-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 11 09:01:01 crc kubenswrapper[4840]: I0311 09:01:01.290133 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-blqnh"] Mar 11 09:01:01 crc kubenswrapper[4840]: I0311 09:01:01.301584 4840 scope.go:117] "RemoveContainer" containerID="59890ad7678c874d5f21170b3e018913928b7b26d0b0f7e4d60db020991ff843" Mar 11 09:01:01 crc kubenswrapper[4840]: I0311 09:01:01.325298 4840 scope.go:117] "RemoveContainer" 
containerID="6771e73e7005b64dc8b2d4943f7852ac7498329363b356b4e71d56aeb4d05da2" Mar 11 09:01:01 crc kubenswrapper[4840]: E0311 09:01:01.325875 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6771e73e7005b64dc8b2d4943f7852ac7498329363b356b4e71d56aeb4d05da2\": container with ID starting with 6771e73e7005b64dc8b2d4943f7852ac7498329363b356b4e71d56aeb4d05da2 not found: ID does not exist" containerID="6771e73e7005b64dc8b2d4943f7852ac7498329363b356b4e71d56aeb4d05da2" Mar 11 09:01:01 crc kubenswrapper[4840]: I0311 09:01:01.325915 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6771e73e7005b64dc8b2d4943f7852ac7498329363b356b4e71d56aeb4d05da2"} err="failed to get container status \"6771e73e7005b64dc8b2d4943f7852ac7498329363b356b4e71d56aeb4d05da2\": rpc error: code = NotFound desc = could not find container \"6771e73e7005b64dc8b2d4943f7852ac7498329363b356b4e71d56aeb4d05da2\": container with ID starting with 6771e73e7005b64dc8b2d4943f7852ac7498329363b356b4e71d56aeb4d05da2 not found: ID does not exist" Mar 11 09:01:01 crc kubenswrapper[4840]: I0311 09:01:01.325946 4840 scope.go:117] "RemoveContainer" containerID="a727096026fbe214f8cf780c7b1e3b46defdd1b8f41d04748ad871c3f1170242" Mar 11 09:01:01 crc kubenswrapper[4840]: E0311 09:01:01.326407 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a727096026fbe214f8cf780c7b1e3b46defdd1b8f41d04748ad871c3f1170242\": container with ID starting with a727096026fbe214f8cf780c7b1e3b46defdd1b8f41d04748ad871c3f1170242 not found: ID does not exist" containerID="a727096026fbe214f8cf780c7b1e3b46defdd1b8f41d04748ad871c3f1170242" Mar 11 09:01:01 crc kubenswrapper[4840]: I0311 09:01:01.326430 4840 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"a727096026fbe214f8cf780c7b1e3b46defdd1b8f41d04748ad871c3f1170242"} err="failed to get container status \"a727096026fbe214f8cf780c7b1e3b46defdd1b8f41d04748ad871c3f1170242\": rpc error: code = NotFound desc = could not find container \"a727096026fbe214f8cf780c7b1e3b46defdd1b8f41d04748ad871c3f1170242\": container with ID starting with a727096026fbe214f8cf780c7b1e3b46defdd1b8f41d04748ad871c3f1170242 not found: ID does not exist" Mar 11 09:01:01 crc kubenswrapper[4840]: I0311 09:01:01.326443 4840 scope.go:117] "RemoveContainer" containerID="59890ad7678c874d5f21170b3e018913928b7b26d0b0f7e4d60db020991ff843" Mar 11 09:01:01 crc kubenswrapper[4840]: E0311 09:01:01.326726 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59890ad7678c874d5f21170b3e018913928b7b26d0b0f7e4d60db020991ff843\": container with ID starting with 59890ad7678c874d5f21170b3e018913928b7b26d0b0f7e4d60db020991ff843 not found: ID does not exist" containerID="59890ad7678c874d5f21170b3e018913928b7b26d0b0f7e4d60db020991ff843" Mar 11 09:01:01 crc kubenswrapper[4840]: I0311 09:01:01.326753 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59890ad7678c874d5f21170b3e018913928b7b26d0b0f7e4d60db020991ff843"} err="failed to get container status \"59890ad7678c874d5f21170b3e018913928b7b26d0b0f7e4d60db020991ff843\": rpc error: code = NotFound desc = could not find container \"59890ad7678c874d5f21170b3e018913928b7b26d0b0f7e4d60db020991ff843\": container with ID starting with 59890ad7678c874d5f21170b3e018913928b7b26d0b0f7e4d60db020991ff843 not found: ID does not exist" Mar 11 09:01:02 crc kubenswrapper[4840]: I0311 09:01:02.069375 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75259694-613a-407e-aea2-fb828fe927b9" path="/var/lib/kubelet/pods/75259694-613a-407e-aea2-fb828fe927b9/volumes" Mar 11 09:01:02 crc kubenswrapper[4840]: I0311 
09:01:02.405657 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6wxnb" Mar 11 09:01:02 crc kubenswrapper[4840]: I0311 09:01:02.406133 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6wxnb" Mar 11 09:01:02 crc kubenswrapper[4840]: I0311 09:01:02.459307 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6wxnb" Mar 11 09:01:02 crc kubenswrapper[4840]: I0311 09:01:02.658345 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-h578g" Mar 11 09:01:02 crc kubenswrapper[4840]: I0311 09:01:02.901520 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7w69s" Mar 11 09:01:03 crc kubenswrapper[4840]: I0311 09:01:03.299047 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6wxnb" Mar 11 09:01:04 crc kubenswrapper[4840]: I0311 09:01:04.749280 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7w69s"] Mar 11 09:01:04 crc kubenswrapper[4840]: I0311 09:01:04.749601 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7w69s" podUID="996d3b36-77c7-4e8f-a472-ac032aabd836" containerName="registry-server" containerID="cri-o://5f2d298083f37d57640ea24cb030fec7f4ad8d2dabad678095e607093a6ad04c" gracePeriod=2 Mar 11 09:01:05 crc kubenswrapper[4840]: I0311 09:01:05.243205 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7w69s" Mar 11 09:01:05 crc kubenswrapper[4840]: I0311 09:01:05.271323 4840 generic.go:334] "Generic (PLEG): container finished" podID="996d3b36-77c7-4e8f-a472-ac032aabd836" containerID="5f2d298083f37d57640ea24cb030fec7f4ad8d2dabad678095e607093a6ad04c" exitCode=0 Mar 11 09:01:05 crc kubenswrapper[4840]: I0311 09:01:05.271384 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7w69s" event={"ID":"996d3b36-77c7-4e8f-a472-ac032aabd836","Type":"ContainerDied","Data":"5f2d298083f37d57640ea24cb030fec7f4ad8d2dabad678095e607093a6ad04c"} Mar 11 09:01:05 crc kubenswrapper[4840]: I0311 09:01:05.271425 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7w69s" event={"ID":"996d3b36-77c7-4e8f-a472-ac032aabd836","Type":"ContainerDied","Data":"c2b63b61f60149a264e01e5162c3dbef6c3c932d73c0ed7f6b94acca5a532c56"} Mar 11 09:01:05 crc kubenswrapper[4840]: I0311 09:01:05.271451 4840 scope.go:117] "RemoveContainer" containerID="5f2d298083f37d57640ea24cb030fec7f4ad8d2dabad678095e607093a6ad04c" Mar 11 09:01:05 crc kubenswrapper[4840]: I0311 09:01:05.271890 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7w69s" Mar 11 09:01:05 crc kubenswrapper[4840]: I0311 09:01:05.292850 4840 scope.go:117] "RemoveContainer" containerID="dfb61453350707fe6372284817fe70c02004ca1bb80a9cfb3b6f3cbaafca3d6b" Mar 11 09:01:05 crc kubenswrapper[4840]: I0311 09:01:05.315956 4840 scope.go:117] "RemoveContainer" containerID="978d43f66002b007cd3eece14909369d3be344d57a9bd5b545750330f671634e" Mar 11 09:01:05 crc kubenswrapper[4840]: I0311 09:01:05.339621 4840 scope.go:117] "RemoveContainer" containerID="5f2d298083f37d57640ea24cb030fec7f4ad8d2dabad678095e607093a6ad04c" Mar 11 09:01:05 crc kubenswrapper[4840]: E0311 09:01:05.341665 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f2d298083f37d57640ea24cb030fec7f4ad8d2dabad678095e607093a6ad04c\": container with ID starting with 5f2d298083f37d57640ea24cb030fec7f4ad8d2dabad678095e607093a6ad04c not found: ID does not exist" containerID="5f2d298083f37d57640ea24cb030fec7f4ad8d2dabad678095e607093a6ad04c" Mar 11 09:01:05 crc kubenswrapper[4840]: I0311 09:01:05.341716 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f2d298083f37d57640ea24cb030fec7f4ad8d2dabad678095e607093a6ad04c"} err="failed to get container status \"5f2d298083f37d57640ea24cb030fec7f4ad8d2dabad678095e607093a6ad04c\": rpc error: code = NotFound desc = could not find container \"5f2d298083f37d57640ea24cb030fec7f4ad8d2dabad678095e607093a6ad04c\": container with ID starting with 5f2d298083f37d57640ea24cb030fec7f4ad8d2dabad678095e607093a6ad04c not found: ID does not exist" Mar 11 09:01:05 crc kubenswrapper[4840]: I0311 09:01:05.341750 4840 scope.go:117] "RemoveContainer" containerID="dfb61453350707fe6372284817fe70c02004ca1bb80a9cfb3b6f3cbaafca3d6b" Mar 11 09:01:05 crc kubenswrapper[4840]: E0311 09:01:05.342332 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code 
= NotFound desc = could not find container \"dfb61453350707fe6372284817fe70c02004ca1bb80a9cfb3b6f3cbaafca3d6b\": container with ID starting with dfb61453350707fe6372284817fe70c02004ca1bb80a9cfb3b6f3cbaafca3d6b not found: ID does not exist" containerID="dfb61453350707fe6372284817fe70c02004ca1bb80a9cfb3b6f3cbaafca3d6b" Mar 11 09:01:05 crc kubenswrapper[4840]: I0311 09:01:05.342369 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dfb61453350707fe6372284817fe70c02004ca1bb80a9cfb3b6f3cbaafca3d6b"} err="failed to get container status \"dfb61453350707fe6372284817fe70c02004ca1bb80a9cfb3b6f3cbaafca3d6b\": rpc error: code = NotFound desc = could not find container \"dfb61453350707fe6372284817fe70c02004ca1bb80a9cfb3b6f3cbaafca3d6b\": container with ID starting with dfb61453350707fe6372284817fe70c02004ca1bb80a9cfb3b6f3cbaafca3d6b not found: ID does not exist" Mar 11 09:01:05 crc kubenswrapper[4840]: I0311 09:01:05.342388 4840 scope.go:117] "RemoveContainer" containerID="978d43f66002b007cd3eece14909369d3be344d57a9bd5b545750330f671634e" Mar 11 09:01:05 crc kubenswrapper[4840]: E0311 09:01:05.343126 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"978d43f66002b007cd3eece14909369d3be344d57a9bd5b545750330f671634e\": container with ID starting with 978d43f66002b007cd3eece14909369d3be344d57a9bd5b545750330f671634e not found: ID does not exist" containerID="978d43f66002b007cd3eece14909369d3be344d57a9bd5b545750330f671634e" Mar 11 09:01:05 crc kubenswrapper[4840]: I0311 09:01:05.343184 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"978d43f66002b007cd3eece14909369d3be344d57a9bd5b545750330f671634e"} err="failed to get container status \"978d43f66002b007cd3eece14909369d3be344d57a9bd5b545750330f671634e\": rpc error: code = NotFound desc = could not find container 
\"978d43f66002b007cd3eece14909369d3be344d57a9bd5b545750330f671634e\": container with ID starting with 978d43f66002b007cd3eece14909369d3be344d57a9bd5b545750330f671634e not found: ID does not exist" Mar 11 09:01:05 crc kubenswrapper[4840]: I0311 09:01:05.359046 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/996d3b36-77c7-4e8f-a472-ac032aabd836-catalog-content\") pod \"996d3b36-77c7-4e8f-a472-ac032aabd836\" (UID: \"996d3b36-77c7-4e8f-a472-ac032aabd836\") " Mar 11 09:01:05 crc kubenswrapper[4840]: I0311 09:01:05.359151 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/996d3b36-77c7-4e8f-a472-ac032aabd836-utilities\") pod \"996d3b36-77c7-4e8f-a472-ac032aabd836\" (UID: \"996d3b36-77c7-4e8f-a472-ac032aabd836\") " Mar 11 09:01:05 crc kubenswrapper[4840]: I0311 09:01:05.359981 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/996d3b36-77c7-4e8f-a472-ac032aabd836-utilities" (OuterVolumeSpecName: "utilities") pod "996d3b36-77c7-4e8f-a472-ac032aabd836" (UID: "996d3b36-77c7-4e8f-a472-ac032aabd836"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:01:05 crc kubenswrapper[4840]: I0311 09:01:05.360030 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6b9fv\" (UniqueName: \"kubernetes.io/projected/996d3b36-77c7-4e8f-a472-ac032aabd836-kube-api-access-6b9fv\") pod \"996d3b36-77c7-4e8f-a472-ac032aabd836\" (UID: \"996d3b36-77c7-4e8f-a472-ac032aabd836\") " Mar 11 09:01:05 crc kubenswrapper[4840]: I0311 09:01:05.360364 4840 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/996d3b36-77c7-4e8f-a472-ac032aabd836-utilities\") on node \"crc\" DevicePath \"\"" Mar 11 09:01:05 crc kubenswrapper[4840]: I0311 09:01:05.379520 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/996d3b36-77c7-4e8f-a472-ac032aabd836-kube-api-access-6b9fv" (OuterVolumeSpecName: "kube-api-access-6b9fv") pod "996d3b36-77c7-4e8f-a472-ac032aabd836" (UID: "996d3b36-77c7-4e8f-a472-ac032aabd836"). InnerVolumeSpecName "kube-api-access-6b9fv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:01:05 crc kubenswrapper[4840]: I0311 09:01:05.429294 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/996d3b36-77c7-4e8f-a472-ac032aabd836-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "996d3b36-77c7-4e8f-a472-ac032aabd836" (UID: "996d3b36-77c7-4e8f-a472-ac032aabd836"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:01:05 crc kubenswrapper[4840]: I0311 09:01:05.461386 4840 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/996d3b36-77c7-4e8f-a472-ac032aabd836-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 11 09:01:05 crc kubenswrapper[4840]: I0311 09:01:05.461421 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6b9fv\" (UniqueName: \"kubernetes.io/projected/996d3b36-77c7-4e8f-a472-ac032aabd836-kube-api-access-6b9fv\") on node \"crc\" DevicePath \"\"" Mar 11 09:01:05 crc kubenswrapper[4840]: I0311 09:01:05.600975 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7w69s"] Mar 11 09:01:05 crc kubenswrapper[4840]: I0311 09:01:05.608095 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7w69s"] Mar 11 09:01:06 crc kubenswrapper[4840]: I0311 09:01:06.070735 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="996d3b36-77c7-4e8f-a472-ac032aabd836" path="/var/lib/kubelet/pods/996d3b36-77c7-4e8f-a472-ac032aabd836/volumes" Mar 11 09:01:07 crc kubenswrapper[4840]: I0311 09:01:07.209670 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 11 09:01:07 crc kubenswrapper[4840]: I0311 09:01:07.675181 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-td29c" podUID="2b03405c-0fe1-4c7f-ad2a-dcd0db280109" containerName="oauth-openshift" containerID="cri-o://d775be480d48c7842e249c6e4ab37dc71b61fa37a144303846c126cf0dfb38b3" gracePeriod=15 Mar 11 09:01:08 crc kubenswrapper[4840]: I0311 09:01:08.150215 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-td29c" Mar 11 09:01:08 crc kubenswrapper[4840]: I0311 09:01:08.199934 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2b03405c-0fe1-4c7f-ad2a-dcd0db280109-v4-0-config-system-ocp-branding-template\") pod \"2b03405c-0fe1-4c7f-ad2a-dcd0db280109\" (UID: \"2b03405c-0fe1-4c7f-ad2a-dcd0db280109\") " Mar 11 09:01:08 crc kubenswrapper[4840]: I0311 09:01:08.199982 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2b03405c-0fe1-4c7f-ad2a-dcd0db280109-audit-policies\") pod \"2b03405c-0fe1-4c7f-ad2a-dcd0db280109\" (UID: \"2b03405c-0fe1-4c7f-ad2a-dcd0db280109\") " Mar 11 09:01:08 crc kubenswrapper[4840]: I0311 09:01:08.200012 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2b03405c-0fe1-4c7f-ad2a-dcd0db280109-v4-0-config-user-template-login\") pod \"2b03405c-0fe1-4c7f-ad2a-dcd0db280109\" (UID: \"2b03405c-0fe1-4c7f-ad2a-dcd0db280109\") " Mar 11 09:01:08 crc kubenswrapper[4840]: I0311 09:01:08.200028 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2b03405c-0fe1-4c7f-ad2a-dcd0db280109-v4-0-config-system-session\") pod \"2b03405c-0fe1-4c7f-ad2a-dcd0db280109\" (UID: \"2b03405c-0fe1-4c7f-ad2a-dcd0db280109\") " Mar 11 09:01:08 crc kubenswrapper[4840]: I0311 09:01:08.200060 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2b03405c-0fe1-4c7f-ad2a-dcd0db280109-v4-0-config-system-serving-cert\") pod \"2b03405c-0fe1-4c7f-ad2a-dcd0db280109\" (UID: \"2b03405c-0fe1-4c7f-ad2a-dcd0db280109\") " Mar 11 
09:01:08 crc kubenswrapper[4840]: I0311 09:01:08.200085 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2b03405c-0fe1-4c7f-ad2a-dcd0db280109-v4-0-config-user-template-provider-selection\") pod \"2b03405c-0fe1-4c7f-ad2a-dcd0db280109\" (UID: \"2b03405c-0fe1-4c7f-ad2a-dcd0db280109\") " Mar 11 09:01:08 crc kubenswrapper[4840]: I0311 09:01:08.200104 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2b03405c-0fe1-4c7f-ad2a-dcd0db280109-v4-0-config-user-template-error\") pod \"2b03405c-0fe1-4c7f-ad2a-dcd0db280109\" (UID: \"2b03405c-0fe1-4c7f-ad2a-dcd0db280109\") " Mar 11 09:01:08 crc kubenswrapper[4840]: I0311 09:01:08.200126 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b8kxm\" (UniqueName: \"kubernetes.io/projected/2b03405c-0fe1-4c7f-ad2a-dcd0db280109-kube-api-access-b8kxm\") pod \"2b03405c-0fe1-4c7f-ad2a-dcd0db280109\" (UID: \"2b03405c-0fe1-4c7f-ad2a-dcd0db280109\") " Mar 11 09:01:08 crc kubenswrapper[4840]: I0311 09:01:08.200149 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2b03405c-0fe1-4c7f-ad2a-dcd0db280109-v4-0-config-system-trusted-ca-bundle\") pod \"2b03405c-0fe1-4c7f-ad2a-dcd0db280109\" (UID: \"2b03405c-0fe1-4c7f-ad2a-dcd0db280109\") " Mar 11 09:01:08 crc kubenswrapper[4840]: I0311 09:01:08.200182 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2b03405c-0fe1-4c7f-ad2a-dcd0db280109-v4-0-config-system-service-ca\") pod \"2b03405c-0fe1-4c7f-ad2a-dcd0db280109\" (UID: \"2b03405c-0fe1-4c7f-ad2a-dcd0db280109\") " Mar 11 09:01:08 crc kubenswrapper[4840]: I0311 
09:01:08.201258 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b03405c-0fe1-4c7f-ad2a-dcd0db280109-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "2b03405c-0fe1-4c7f-ad2a-dcd0db280109" (UID: "2b03405c-0fe1-4c7f-ad2a-dcd0db280109"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:01:08 crc kubenswrapper[4840]: I0311 09:01:08.201338 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b03405c-0fe1-4c7f-ad2a-dcd0db280109-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "2b03405c-0fe1-4c7f-ad2a-dcd0db280109" (UID: "2b03405c-0fe1-4c7f-ad2a-dcd0db280109"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:01:08 crc kubenswrapper[4840]: I0311 09:01:08.201401 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2b03405c-0fe1-4c7f-ad2a-dcd0db280109-audit-dir\") pod \"2b03405c-0fe1-4c7f-ad2a-dcd0db280109\" (UID: \"2b03405c-0fe1-4c7f-ad2a-dcd0db280109\") " Mar 11 09:01:08 crc kubenswrapper[4840]: I0311 09:01:08.201423 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2b03405c-0fe1-4c7f-ad2a-dcd0db280109-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "2b03405c-0fe1-4c7f-ad2a-dcd0db280109" (UID: "2b03405c-0fe1-4c7f-ad2a-dcd0db280109"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 11 09:01:08 crc kubenswrapper[4840]: I0311 09:01:08.201447 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/2b03405c-0fe1-4c7f-ad2a-dcd0db280109-v4-0-config-user-idp-0-file-data\") pod \"2b03405c-0fe1-4c7f-ad2a-dcd0db280109\" (UID: \"2b03405c-0fe1-4c7f-ad2a-dcd0db280109\") " Mar 11 09:01:08 crc kubenswrapper[4840]: I0311 09:01:08.201491 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2b03405c-0fe1-4c7f-ad2a-dcd0db280109-v4-0-config-system-cliconfig\") pod \"2b03405c-0fe1-4c7f-ad2a-dcd0db280109\" (UID: \"2b03405c-0fe1-4c7f-ad2a-dcd0db280109\") " Mar 11 09:01:08 crc kubenswrapper[4840]: I0311 09:01:08.201530 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2b03405c-0fe1-4c7f-ad2a-dcd0db280109-v4-0-config-system-router-certs\") pod \"2b03405c-0fe1-4c7f-ad2a-dcd0db280109\" (UID: \"2b03405c-0fe1-4c7f-ad2a-dcd0db280109\") " Mar 11 09:01:08 crc kubenswrapper[4840]: I0311 09:01:08.201789 4840 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2b03405c-0fe1-4c7f-ad2a-dcd0db280109-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 09:01:08 crc kubenswrapper[4840]: I0311 09:01:08.201802 4840 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2b03405c-0fe1-4c7f-ad2a-dcd0db280109-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Mar 11 09:01:08 crc kubenswrapper[4840]: I0311 09:01:08.201815 4840 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/2b03405c-0fe1-4c7f-ad2a-dcd0db280109-audit-dir\") on node \"crc\" DevicePath \"\"" Mar 11 09:01:08 crc kubenswrapper[4840]: I0311 09:01:08.201964 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b03405c-0fe1-4c7f-ad2a-dcd0db280109-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "2b03405c-0fe1-4c7f-ad2a-dcd0db280109" (UID: "2b03405c-0fe1-4c7f-ad2a-dcd0db280109"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:01:08 crc kubenswrapper[4840]: I0311 09:01:08.202194 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b03405c-0fe1-4c7f-ad2a-dcd0db280109-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "2b03405c-0fe1-4c7f-ad2a-dcd0db280109" (UID: "2b03405c-0fe1-4c7f-ad2a-dcd0db280109"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:01:08 crc kubenswrapper[4840]: I0311 09:01:08.206615 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b03405c-0fe1-4c7f-ad2a-dcd0db280109-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "2b03405c-0fe1-4c7f-ad2a-dcd0db280109" (UID: "2b03405c-0fe1-4c7f-ad2a-dcd0db280109"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:01:08 crc kubenswrapper[4840]: I0311 09:01:08.208761 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b03405c-0fe1-4c7f-ad2a-dcd0db280109-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "2b03405c-0fe1-4c7f-ad2a-dcd0db280109" (UID: "2b03405c-0fe1-4c7f-ad2a-dcd0db280109"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:01:08 crc kubenswrapper[4840]: I0311 09:01:08.208803 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b03405c-0fe1-4c7f-ad2a-dcd0db280109-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "2b03405c-0fe1-4c7f-ad2a-dcd0db280109" (UID: "2b03405c-0fe1-4c7f-ad2a-dcd0db280109"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:01:08 crc kubenswrapper[4840]: I0311 09:01:08.209009 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b03405c-0fe1-4c7f-ad2a-dcd0db280109-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "2b03405c-0fe1-4c7f-ad2a-dcd0db280109" (UID: "2b03405c-0fe1-4c7f-ad2a-dcd0db280109"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:01:08 crc kubenswrapper[4840]: I0311 09:01:08.209283 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b03405c-0fe1-4c7f-ad2a-dcd0db280109-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "2b03405c-0fe1-4c7f-ad2a-dcd0db280109" (UID: "2b03405c-0fe1-4c7f-ad2a-dcd0db280109"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:01:08 crc kubenswrapper[4840]: I0311 09:01:08.209705 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b03405c-0fe1-4c7f-ad2a-dcd0db280109-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "2b03405c-0fe1-4c7f-ad2a-dcd0db280109" (UID: "2b03405c-0fe1-4c7f-ad2a-dcd0db280109"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:01:08 crc kubenswrapper[4840]: I0311 09:01:08.216603 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b03405c-0fe1-4c7f-ad2a-dcd0db280109-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "2b03405c-0fe1-4c7f-ad2a-dcd0db280109" (UID: "2b03405c-0fe1-4c7f-ad2a-dcd0db280109"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:01:08 crc kubenswrapper[4840]: I0311 09:01:08.216692 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b03405c-0fe1-4c7f-ad2a-dcd0db280109-kube-api-access-b8kxm" (OuterVolumeSpecName: "kube-api-access-b8kxm") pod "2b03405c-0fe1-4c7f-ad2a-dcd0db280109" (UID: "2b03405c-0fe1-4c7f-ad2a-dcd0db280109"). InnerVolumeSpecName "kube-api-access-b8kxm". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:01:08 crc kubenswrapper[4840]: I0311 09:01:08.217839 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b03405c-0fe1-4c7f-ad2a-dcd0db280109-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "2b03405c-0fe1-4c7f-ad2a-dcd0db280109" (UID: "2b03405c-0fe1-4c7f-ad2a-dcd0db280109"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:01:08 crc kubenswrapper[4840]: I0311 09:01:08.293951 4840 generic.go:334] "Generic (PLEG): container finished" podID="2b03405c-0fe1-4c7f-ad2a-dcd0db280109" containerID="d775be480d48c7842e249c6e4ab37dc71b61fa37a144303846c126cf0dfb38b3" exitCode=0 Mar 11 09:01:08 crc kubenswrapper[4840]: I0311 09:01:08.293997 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-td29c" event={"ID":"2b03405c-0fe1-4c7f-ad2a-dcd0db280109","Type":"ContainerDied","Data":"d775be480d48c7842e249c6e4ab37dc71b61fa37a144303846c126cf0dfb38b3"} Mar 11 09:01:08 crc kubenswrapper[4840]: I0311 09:01:08.294024 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-td29c" event={"ID":"2b03405c-0fe1-4c7f-ad2a-dcd0db280109","Type":"ContainerDied","Data":"30ce7f734186175713f8e79759d08450ecf8a6ef868c3d35255362a87b7816c9"} Mar 11 09:01:08 crc kubenswrapper[4840]: I0311 09:01:08.294082 4840 scope.go:117] "RemoveContainer" containerID="d775be480d48c7842e249c6e4ab37dc71b61fa37a144303846c126cf0dfb38b3" Mar 11 09:01:08 crc kubenswrapper[4840]: I0311 09:01:08.294674 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-td29c" Mar 11 09:01:08 crc kubenswrapper[4840]: I0311 09:01:08.302770 4840 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/2b03405c-0fe1-4c7f-ad2a-dcd0db280109-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Mar 11 09:01:08 crc kubenswrapper[4840]: I0311 09:01:08.302797 4840 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2b03405c-0fe1-4c7f-ad2a-dcd0db280109-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Mar 11 09:01:08 crc kubenswrapper[4840]: I0311 09:01:08.302806 4840 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2b03405c-0fe1-4c7f-ad2a-dcd0db280109-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Mar 11 09:01:08 crc kubenswrapper[4840]: I0311 09:01:08.302816 4840 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2b03405c-0fe1-4c7f-ad2a-dcd0db280109-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Mar 11 09:01:08 crc kubenswrapper[4840]: I0311 09:01:08.302826 4840 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2b03405c-0fe1-4c7f-ad2a-dcd0db280109-audit-policies\") on node \"crc\" DevicePath \"\"" Mar 11 09:01:08 crc kubenswrapper[4840]: I0311 09:01:08.302835 4840 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2b03405c-0fe1-4c7f-ad2a-dcd0db280109-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Mar 11 09:01:08 crc kubenswrapper[4840]: I0311 09:01:08.302848 4840 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2b03405c-0fe1-4c7f-ad2a-dcd0db280109-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Mar 11 09:01:08 crc kubenswrapper[4840]: I0311 09:01:08.302859 4840 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2b03405c-0fe1-4c7f-ad2a-dcd0db280109-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 11 09:01:08 crc kubenswrapper[4840]: I0311 09:01:08.302871 4840 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2b03405c-0fe1-4c7f-ad2a-dcd0db280109-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Mar 11 09:01:08 crc kubenswrapper[4840]: I0311 09:01:08.302883 4840 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2b03405c-0fe1-4c7f-ad2a-dcd0db280109-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Mar 11 09:01:08 crc kubenswrapper[4840]: I0311 09:01:08.302897 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b8kxm\" (UniqueName: \"kubernetes.io/projected/2b03405c-0fe1-4c7f-ad2a-dcd0db280109-kube-api-access-b8kxm\") on node \"crc\" DevicePath \"\"" Mar 11 09:01:08 crc kubenswrapper[4840]: I0311 09:01:08.311931 4840 scope.go:117] "RemoveContainer" containerID="d775be480d48c7842e249c6e4ab37dc71b61fa37a144303846c126cf0dfb38b3" Mar 11 09:01:08 crc kubenswrapper[4840]: E0311 09:01:08.312346 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d775be480d48c7842e249c6e4ab37dc71b61fa37a144303846c126cf0dfb38b3\": container with ID starting with d775be480d48c7842e249c6e4ab37dc71b61fa37a144303846c126cf0dfb38b3 not found: ID does not exist" 
containerID="d775be480d48c7842e249c6e4ab37dc71b61fa37a144303846c126cf0dfb38b3"
Mar 11 09:01:08 crc kubenswrapper[4840]: I0311 09:01:08.312393 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d775be480d48c7842e249c6e4ab37dc71b61fa37a144303846c126cf0dfb38b3"} err="failed to get container status \"d775be480d48c7842e249c6e4ab37dc71b61fa37a144303846c126cf0dfb38b3\": rpc error: code = NotFound desc = could not find container \"d775be480d48c7842e249c6e4ab37dc71b61fa37a144303846c126cf0dfb38b3\": container with ID starting with d775be480d48c7842e249c6e4ab37dc71b61fa37a144303846c126cf0dfb38b3 not found: ID does not exist"
Mar 11 09:01:08 crc kubenswrapper[4840]: I0311 09:01:08.323931 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-td29c"]
Mar 11 09:01:08 crc kubenswrapper[4840]: I0311 09:01:08.327306 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-td29c"]
Mar 11 09:01:10 crc kubenswrapper[4840]: I0311 09:01:10.068372 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b03405c-0fe1-4c7f-ad2a-dcd0db280109" path="/var/lib/kubelet/pods/2b03405c-0fe1-4c7f-ad2a-dcd0db280109/volumes"
Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.056767 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-6cfdf67ff-4mn7h"]
Mar 11 09:01:11 crc kubenswrapper[4840]: E0311 09:01:11.057401 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcd6c58d-5984-4122-826c-18ecfe7dde26" containerName="extract-content"
Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.057421 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcd6c58d-5984-4122-826c-18ecfe7dde26" containerName="extract-content"
Mar 11 09:01:11 crc kubenswrapper[4840]: E0311 09:01:11.057439 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecd2f9dc-1f5f-4787-b66f-8caaaeb9dc9f" containerName="extract-utilities"
Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.057447 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecd2f9dc-1f5f-4787-b66f-8caaaeb9dc9f" containerName="extract-utilities"
Mar 11 09:01:11 crc kubenswrapper[4840]: E0311 09:01:11.057455 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecd2f9dc-1f5f-4787-b66f-8caaaeb9dc9f" containerName="extract-content"
Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.057463 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecd2f9dc-1f5f-4787-b66f-8caaaeb9dc9f" containerName="extract-content"
Mar 11 09:01:11 crc kubenswrapper[4840]: E0311 09:01:11.057491 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75259694-613a-407e-aea2-fb828fe927b9" containerName="extract-utilities"
Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.057498 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="75259694-613a-407e-aea2-fb828fe927b9" containerName="extract-utilities"
Mar 11 09:01:11 crc kubenswrapper[4840]: E0311 09:01:11.057512 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcd6c58d-5984-4122-826c-18ecfe7dde26" containerName="extract-utilities"
Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.057518 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcd6c58d-5984-4122-826c-18ecfe7dde26" containerName="extract-utilities"
Mar 11 09:01:11 crc kubenswrapper[4840]: E0311 09:01:11.057526 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcd6c58d-5984-4122-826c-18ecfe7dde26" containerName="registry-server"
Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.057532 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcd6c58d-5984-4122-826c-18ecfe7dde26" containerName="registry-server"
Mar 11 09:01:11 crc kubenswrapper[4840]: E0311 09:01:11.057541 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75259694-613a-407e-aea2-fb828fe927b9" containerName="registry-server"
Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.057549 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="75259694-613a-407e-aea2-fb828fe927b9" containerName="registry-server"
Mar 11 09:01:11 crc kubenswrapper[4840]: E0311 09:01:11.057560 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="996d3b36-77c7-4e8f-a472-ac032aabd836" containerName="registry-server"
Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.057567 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="996d3b36-77c7-4e8f-a472-ac032aabd836" containerName="registry-server"
Mar 11 09:01:11 crc kubenswrapper[4840]: E0311 09:01:11.057579 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="996d3b36-77c7-4e8f-a472-ac032aabd836" containerName="extract-content"
Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.057586 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="996d3b36-77c7-4e8f-a472-ac032aabd836" containerName="extract-content"
Mar 11 09:01:11 crc kubenswrapper[4840]: E0311 09:01:11.057596 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b03405c-0fe1-4c7f-ad2a-dcd0db280109" containerName="oauth-openshift"
Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.057602 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b03405c-0fe1-4c7f-ad2a-dcd0db280109" containerName="oauth-openshift"
Mar 11 09:01:11 crc kubenswrapper[4840]: E0311 09:01:11.057795 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="996d3b36-77c7-4e8f-a472-ac032aabd836" containerName="extract-utilities"
Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.057801 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="996d3b36-77c7-4e8f-a472-ac032aabd836" containerName="extract-utilities"
Mar 11 09:01:11 crc kubenswrapper[4840]: E0311 09:01:11.057811 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75259694-613a-407e-aea2-fb828fe927b9" containerName="extract-content"
Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.057817 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="75259694-613a-407e-aea2-fb828fe927b9" containerName="extract-content"
Mar 11 09:01:11 crc kubenswrapper[4840]: E0311 09:01:11.057827 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecd2f9dc-1f5f-4787-b66f-8caaaeb9dc9f" containerName="registry-server"
Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.057833 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecd2f9dc-1f5f-4787-b66f-8caaaeb9dc9f" containerName="registry-server"
Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.057950 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b03405c-0fe1-4c7f-ad2a-dcd0db280109" containerName="oauth-openshift"
Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.057966 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="996d3b36-77c7-4e8f-a472-ac032aabd836" containerName="registry-server"
Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.057977 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="dcd6c58d-5984-4122-826c-18ecfe7dde26" containerName="registry-server"
Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.057988 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="75259694-613a-407e-aea2-fb828fe927b9" containerName="registry-server"
Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.057995 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecd2f9dc-1f5f-4787-b66f-8caaaeb9dc9f" containerName="registry-server"
Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.058513 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6cfdf67ff-4mn7h"
Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.061981 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.062385 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.062638 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.062891 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.063387 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.063647 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.063781 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.064129 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.064906 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.065457 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.065734 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.071330 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.072876 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.087682 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.090335 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6cfdf67ff-4mn7h"]
Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.095561 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.140097 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6018aeae-f9f1-4848-b8ea-695ddf794001-audit-policies\") pod \"oauth-openshift-6cfdf67ff-4mn7h\" (UID: \"6018aeae-f9f1-4848-b8ea-695ddf794001\") " pod="openshift-authentication/oauth-openshift-6cfdf67ff-4mn7h"
Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.140186 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6018aeae-f9f1-4848-b8ea-695ddf794001-v4-0-config-user-template-error\") pod \"oauth-openshift-6cfdf67ff-4mn7h\" (UID: \"6018aeae-f9f1-4848-b8ea-695ddf794001\") " pod="openshift-authentication/oauth-openshift-6cfdf67ff-4mn7h"
Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.140278 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6018aeae-f9f1-4848-b8ea-695ddf794001-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6cfdf67ff-4mn7h\" (UID: \"6018aeae-f9f1-4848-b8ea-695ddf794001\") " pod="openshift-authentication/oauth-openshift-6cfdf67ff-4mn7h"
Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.140326 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6018aeae-f9f1-4848-b8ea-695ddf794001-v4-0-config-system-router-certs\") pod \"oauth-openshift-6cfdf67ff-4mn7h\" (UID: \"6018aeae-f9f1-4848-b8ea-695ddf794001\") " pod="openshift-authentication/oauth-openshift-6cfdf67ff-4mn7h"
Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.140361 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6018aeae-f9f1-4848-b8ea-695ddf794001-v4-0-config-system-session\") pod \"oauth-openshift-6cfdf67ff-4mn7h\" (UID: \"6018aeae-f9f1-4848-b8ea-695ddf794001\") " pod="openshift-authentication/oauth-openshift-6cfdf67ff-4mn7h"
Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.140627 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tntdm\" (UniqueName: \"kubernetes.io/projected/6018aeae-f9f1-4848-b8ea-695ddf794001-kube-api-access-tntdm\") pod \"oauth-openshift-6cfdf67ff-4mn7h\" (UID: \"6018aeae-f9f1-4848-b8ea-695ddf794001\") " pod="openshift-authentication/oauth-openshift-6cfdf67ff-4mn7h"
Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.140706 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6018aeae-f9f1-4848-b8ea-695ddf794001-v4-0-config-user-template-login\") pod \"oauth-openshift-6cfdf67ff-4mn7h\" (UID: \"6018aeae-f9f1-4848-b8ea-695ddf794001\") " pod="openshift-authentication/oauth-openshift-6cfdf67ff-4mn7h"
Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.140797 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6018aeae-f9f1-4848-b8ea-695ddf794001-audit-dir\") pod \"oauth-openshift-6cfdf67ff-4mn7h\" (UID: \"6018aeae-f9f1-4848-b8ea-695ddf794001\") " pod="openshift-authentication/oauth-openshift-6cfdf67ff-4mn7h"
Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.140845 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6018aeae-f9f1-4848-b8ea-695ddf794001-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6cfdf67ff-4mn7h\" (UID: \"6018aeae-f9f1-4848-b8ea-695ddf794001\") " pod="openshift-authentication/oauth-openshift-6cfdf67ff-4mn7h"
Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.140872 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6018aeae-f9f1-4848-b8ea-695ddf794001-v4-0-config-system-service-ca\") pod \"oauth-openshift-6cfdf67ff-4mn7h\" (UID: \"6018aeae-f9f1-4848-b8ea-695ddf794001\") " pod="openshift-authentication/oauth-openshift-6cfdf67ff-4mn7h"
Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.140899 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6018aeae-f9f1-4848-b8ea-695ddf794001-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6cfdf67ff-4mn7h\" (UID: \"6018aeae-f9f1-4848-b8ea-695ddf794001\") " pod="openshift-authentication/oauth-openshift-6cfdf67ff-4mn7h"
Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.140962 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6018aeae-f9f1-4848-b8ea-695ddf794001-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6cfdf67ff-4mn7h\" (UID: \"6018aeae-f9f1-4848-b8ea-695ddf794001\") " pod="openshift-authentication/oauth-openshift-6cfdf67ff-4mn7h"
Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.141310 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6018aeae-f9f1-4848-b8ea-695ddf794001-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6cfdf67ff-4mn7h\" (UID: \"6018aeae-f9f1-4848-b8ea-695ddf794001\") " pod="openshift-authentication/oauth-openshift-6cfdf67ff-4mn7h"
Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.141428 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6018aeae-f9f1-4848-b8ea-695ddf794001-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6cfdf67ff-4mn7h\" (UID: \"6018aeae-f9f1-4848-b8ea-695ddf794001\") " pod="openshift-authentication/oauth-openshift-6cfdf67ff-4mn7h"
Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.242494 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6018aeae-f9f1-4848-b8ea-695ddf794001-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6cfdf67ff-4mn7h\" (UID: \"6018aeae-f9f1-4848-b8ea-695ddf794001\") " pod="openshift-authentication/oauth-openshift-6cfdf67ff-4mn7h"
Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.242557 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6018aeae-f9f1-4848-b8ea-695ddf794001-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6cfdf67ff-4mn7h\" (UID: \"6018aeae-f9f1-4848-b8ea-695ddf794001\") " pod="openshift-authentication/oauth-openshift-6cfdf67ff-4mn7h"
Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.242605 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6018aeae-f9f1-4848-b8ea-695ddf794001-audit-policies\") pod \"oauth-openshift-6cfdf67ff-4mn7h\" (UID: \"6018aeae-f9f1-4848-b8ea-695ddf794001\") " pod="openshift-authentication/oauth-openshift-6cfdf67ff-4mn7h"
Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.242625 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6018aeae-f9f1-4848-b8ea-695ddf794001-v4-0-config-user-template-error\") pod \"oauth-openshift-6cfdf67ff-4mn7h\" (UID: \"6018aeae-f9f1-4848-b8ea-695ddf794001\") " pod="openshift-authentication/oauth-openshift-6cfdf67ff-4mn7h"
Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.242644 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6018aeae-f9f1-4848-b8ea-695ddf794001-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6cfdf67ff-4mn7h\" (UID: \"6018aeae-f9f1-4848-b8ea-695ddf794001\") " pod="openshift-authentication/oauth-openshift-6cfdf67ff-4mn7h"
Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.242661 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6018aeae-f9f1-4848-b8ea-695ddf794001-v4-0-config-system-router-certs\") pod \"oauth-openshift-6cfdf67ff-4mn7h\" (UID: \"6018aeae-f9f1-4848-b8ea-695ddf794001\") " pod="openshift-authentication/oauth-openshift-6cfdf67ff-4mn7h"
Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.242676 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6018aeae-f9f1-4848-b8ea-695ddf794001-v4-0-config-system-session\") pod \"oauth-openshift-6cfdf67ff-4mn7h\" (UID: \"6018aeae-f9f1-4848-b8ea-695ddf794001\") " pod="openshift-authentication/oauth-openshift-6cfdf67ff-4mn7h"
Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.242702 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tntdm\" (UniqueName: \"kubernetes.io/projected/6018aeae-f9f1-4848-b8ea-695ddf794001-kube-api-access-tntdm\") pod \"oauth-openshift-6cfdf67ff-4mn7h\" (UID: \"6018aeae-f9f1-4848-b8ea-695ddf794001\") " pod="openshift-authentication/oauth-openshift-6cfdf67ff-4mn7h"
Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.242720 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6018aeae-f9f1-4848-b8ea-695ddf794001-v4-0-config-user-template-login\") pod \"oauth-openshift-6cfdf67ff-4mn7h\" (UID: \"6018aeae-f9f1-4848-b8ea-695ddf794001\") " pod="openshift-authentication/oauth-openshift-6cfdf67ff-4mn7h"
Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.242746 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6018aeae-f9f1-4848-b8ea-695ddf794001-audit-dir\") pod \"oauth-openshift-6cfdf67ff-4mn7h\" (UID: \"6018aeae-f9f1-4848-b8ea-695ddf794001\") " pod="openshift-authentication/oauth-openshift-6cfdf67ff-4mn7h"
Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.242766 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6018aeae-f9f1-4848-b8ea-695ddf794001-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6cfdf67ff-4mn7h\" (UID: \"6018aeae-f9f1-4848-b8ea-695ddf794001\") " pod="openshift-authentication/oauth-openshift-6cfdf67ff-4mn7h"
Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.242980 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6018aeae-f9f1-4848-b8ea-695ddf794001-v4-0-config-system-service-ca\") pod \"oauth-openshift-6cfdf67ff-4mn7h\" (UID: \"6018aeae-f9f1-4848-b8ea-695ddf794001\") " pod="openshift-authentication/oauth-openshift-6cfdf67ff-4mn7h"
Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.243001 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6018aeae-f9f1-4848-b8ea-695ddf794001-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6cfdf67ff-4mn7h\" (UID: \"6018aeae-f9f1-4848-b8ea-695ddf794001\") " pod="openshift-authentication/oauth-openshift-6cfdf67ff-4mn7h"
Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.243025 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6018aeae-f9f1-4848-b8ea-695ddf794001-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6cfdf67ff-4mn7h\" (UID: \"6018aeae-f9f1-4848-b8ea-695ddf794001\") " pod="openshift-authentication/oauth-openshift-6cfdf67ff-4mn7h"
Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.243317 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6018aeae-f9f1-4848-b8ea-695ddf794001-audit-dir\") pod \"oauth-openshift-6cfdf67ff-4mn7h\" (UID: \"6018aeae-f9f1-4848-b8ea-695ddf794001\") " pod="openshift-authentication/oauth-openshift-6cfdf67ff-4mn7h"
Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.244397 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6018aeae-f9f1-4848-b8ea-695ddf794001-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6cfdf67ff-4mn7h\" (UID: \"6018aeae-f9f1-4848-b8ea-695ddf794001\") " pod="openshift-authentication/oauth-openshift-6cfdf67ff-4mn7h"
Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.244765 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6018aeae-f9f1-4848-b8ea-695ddf794001-v4-0-config-system-service-ca\") pod \"oauth-openshift-6cfdf67ff-4mn7h\" (UID: \"6018aeae-f9f1-4848-b8ea-695ddf794001\") " pod="openshift-authentication/oauth-openshift-6cfdf67ff-4mn7h"
Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.245385 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6018aeae-f9f1-4848-b8ea-695ddf794001-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6cfdf67ff-4mn7h\" (UID: \"6018aeae-f9f1-4848-b8ea-695ddf794001\") " pod="openshift-authentication/oauth-openshift-6cfdf67ff-4mn7h"
Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.245748 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6018aeae-f9f1-4848-b8ea-695ddf794001-audit-policies\") pod \"oauth-openshift-6cfdf67ff-4mn7h\" (UID: \"6018aeae-f9f1-4848-b8ea-695ddf794001\") " pod="openshift-authentication/oauth-openshift-6cfdf67ff-4mn7h"
Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.252408 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6018aeae-f9f1-4848-b8ea-695ddf794001-v4-0-config-user-template-error\") pod \"oauth-openshift-6cfdf67ff-4mn7h\" (UID: \"6018aeae-f9f1-4848-b8ea-695ddf794001\") " pod="openshift-authentication/oauth-openshift-6cfdf67ff-4mn7h"
Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.253034 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6018aeae-f9f1-4848-b8ea-695ddf794001-v4-0-config-user-template-login\") pod \"oauth-openshift-6cfdf67ff-4mn7h\" (UID: \"6018aeae-f9f1-4848-b8ea-695ddf794001\") " pod="openshift-authentication/oauth-openshift-6cfdf67ff-4mn7h"
Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.255895 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6018aeae-f9f1-4848-b8ea-695ddf794001-v4-0-config-system-session\") pod \"oauth-openshift-6cfdf67ff-4mn7h\" (UID: \"6018aeae-f9f1-4848-b8ea-695ddf794001\") " pod="openshift-authentication/oauth-openshift-6cfdf67ff-4mn7h"
Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.256122 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6018aeae-f9f1-4848-b8ea-695ddf794001-v4-0-config-system-router-certs\") pod \"oauth-openshift-6cfdf67ff-4mn7h\" (UID: \"6018aeae-f9f1-4848-b8ea-695ddf794001\") " pod="openshift-authentication/oauth-openshift-6cfdf67ff-4mn7h"
Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.260041 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6018aeae-f9f1-4848-b8ea-695ddf794001-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6cfdf67ff-4mn7h\" (UID: \"6018aeae-f9f1-4848-b8ea-695ddf794001\") " pod="openshift-authentication/oauth-openshift-6cfdf67ff-4mn7h"
Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.262725 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tntdm\" (UniqueName: \"kubernetes.io/projected/6018aeae-f9f1-4848-b8ea-695ddf794001-kube-api-access-tntdm\") pod \"oauth-openshift-6cfdf67ff-4mn7h\" (UID: \"6018aeae-f9f1-4848-b8ea-695ddf794001\") " pod="openshift-authentication/oauth-openshift-6cfdf67ff-4mn7h"
Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.263774 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6018aeae-f9f1-4848-b8ea-695ddf794001-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6cfdf67ff-4mn7h\" (UID: \"6018aeae-f9f1-4848-b8ea-695ddf794001\") " pod="openshift-authentication/oauth-openshift-6cfdf67ff-4mn7h"
Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.263974 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6018aeae-f9f1-4848-b8ea-695ddf794001-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6cfdf67ff-4mn7h\" (UID: \"6018aeae-f9f1-4848-b8ea-695ddf794001\") " pod="openshift-authentication/oauth-openshift-6cfdf67ff-4mn7h"
Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.264523 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6018aeae-f9f1-4848-b8ea-695ddf794001-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6cfdf67ff-4mn7h\" (UID: \"6018aeae-f9f1-4848-b8ea-695ddf794001\") " pod="openshift-authentication/oauth-openshift-6cfdf67ff-4mn7h"
Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.389439 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6cfdf67ff-4mn7h"
Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.805065 4840 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.806126 4840 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.806280 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.806389 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://0ed311d75feec58f86d1d9f435c6115a463b4e7cd3003b6dff8447360271b6a2" gracePeriod=15
Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.806418 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://fe2e8e966f7348e19e40d9ac00cabea3da1a67e4d7b149c094505622bf5d6cbc" gracePeriod=15
Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.806531 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://33384f01914fa6428a8c359b3de0d20963b933f5c6d47519f059e48f85c9f4c7" gracePeriod=15
Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.806558 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://5bd25fbbac425c9ba1169b1106b9ac77a80739a003bd795033d691ee273e0d3e" gracePeriod=15
Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.806533 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://ca2279f323c0a6baf645b62c496c38d3d2ad4efc5033a0819ed1d58f4d862e10" gracePeriod=15
Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.807487 4840 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Mar 11 09:01:11 crc kubenswrapper[4840]: E0311 09:01:11.807619 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.807632 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Mar 11 09:01:11 crc kubenswrapper[4840]: E0311 09:01:11.807641 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup"
Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.807647 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup"
Mar 11 09:01:11 crc kubenswrapper[4840]: E0311 09:01:11.807653 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.807659 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Mar 11 09:01:11 crc kubenswrapper[4840]: E0311 09:01:11.807670 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.807675 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Mar 11 09:01:11 crc kubenswrapper[4840]: E0311 09:01:11.807683 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.807689 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Mar 11 09:01:11 crc kubenswrapper[4840]: E0311 09:01:11.807698 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.807704 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Mar 11 09:01:11 crc kubenswrapper[4840]: E0311 09:01:11.807712 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.807718 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Mar 11 09:01:11 crc kubenswrapper[4840]: E0311 09:01:11.807725 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.807731 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Mar 11 09:01:11 crc kubenswrapper[4840]: E0311 09:01:11.807737 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.807743 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.807827 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.807837 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.807845 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.807852 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.807860 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.807876 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.807885 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Mar 11 09:01:11 crc kubenswrapper[4840]: E0311 09:01:11.807976 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.807984 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.808071 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.808081 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.827405 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6cfdf67ff-4mn7h"]
Mar 11 09:01:11 crc kubenswrapper[4840]: E0311 09:01:11.847531 4840 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/events\": dial tcp 38.102.83.30:6443: connect: connection refused" event="&Event{ObjectMeta:{oauth-openshift-6cfdf67ff-4mn7h.189bbde5891ab762 openshift-authentication 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-authentication,Name:oauth-openshift-6cfdf67ff-4mn7h,UID:6018aeae-f9f1-4848-b8ea-695ddf794001,APIVersion:v1,ResourceVersion:29702,FieldPath:spec.containers{oauth-openshift},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 09:01:11.845812066 +0000 UTC m=+270.511481891,LastTimestamp:2026-03-11 09:01:11.845812066 +0000 UTC m=+270.511481891,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 11 09:01:11 crc kubenswrapper[4840]: E0311 09:01:11.849748 4840 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.30:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.857622 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.857666 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.857718 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.857752 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName:
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.857809 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.857838 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.857861 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.857880 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.959829 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.959906 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.959986 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.959993 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.960077 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.960074 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.960134 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.960160 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.960142 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.960195 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.960224 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: 
\"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.960261 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.960291 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.960312 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.960392 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 11 09:01:11 crc kubenswrapper[4840]: I0311 09:01:11.960422 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 11 09:01:12 crc kubenswrapper[4840]: I0311 09:01:12.063541 4840 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.30:6443: connect: connection refused" Mar 11 09:01:12 crc kubenswrapper[4840]: I0311 09:01:12.064768 4840 status_manager.go:851] "Failed to get status for pod" podUID="6018aeae-f9f1-4848-b8ea-695ddf794001" pod="openshift-authentication/oauth-openshift-6cfdf67ff-4mn7h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-6cfdf67ff-4mn7h\": dial tcp 38.102.83.30:6443: connect: connection refused" Mar 11 09:01:12 crc kubenswrapper[4840]: I0311 09:01:12.150917 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 11 09:01:12 crc kubenswrapper[4840]: W0311 09:01:12.175341 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-e321d4847b25d77009152a06df76defef49fa234bba53f49a744aa2a24848a22 WatchSource:0}: Error finding container e321d4847b25d77009152a06df76defef49fa234bba53f49a744aa2a24848a22: Status 404 returned error can't find the container with id e321d4847b25d77009152a06df76defef49fa234bba53f49a744aa2a24848a22 Mar 11 09:01:12 crc kubenswrapper[4840]: I0311 09:01:12.325509 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6cfdf67ff-4mn7h" event={"ID":"6018aeae-f9f1-4848-b8ea-695ddf794001","Type":"ContainerStarted","Data":"4a26b3a34308c0c933f7068b1baca61c15e4fafb2e2e87b655fa20a3ce580125"} Mar 11 09:01:12 crc kubenswrapper[4840]: I0311 
09:01:12.325578 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6cfdf67ff-4mn7h" event={"ID":"6018aeae-f9f1-4848-b8ea-695ddf794001","Type":"ContainerStarted","Data":"bba233ec48dad3369a3e9b4665e8958d108d35bebd3912fdca0f721c2c016143"} Mar 11 09:01:12 crc kubenswrapper[4840]: I0311 09:01:12.325726 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-6cfdf67ff-4mn7h" Mar 11 09:01:12 crc kubenswrapper[4840]: I0311 09:01:12.326406 4840 status_manager.go:851] "Failed to get status for pod" podUID="6018aeae-f9f1-4848-b8ea-695ddf794001" pod="openshift-authentication/oauth-openshift-6cfdf67ff-4mn7h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-6cfdf67ff-4mn7h\": dial tcp 38.102.83.30:6443: connect: connection refused" Mar 11 09:01:12 crc kubenswrapper[4840]: I0311 09:01:12.327532 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"e321d4847b25d77009152a06df76defef49fa234bba53f49a744aa2a24848a22"} Mar 11 09:01:12 crc kubenswrapper[4840]: I0311 09:01:12.329368 4840 generic.go:334] "Generic (PLEG): container finished" podID="b2771978-d705-4ef2-a98b-5b980e717c99" containerID="bbed5bf119a8c1a638090e3d865c4454144a37574fcd08054b060fc18cfd642f" exitCode=0 Mar 11 09:01:12 crc kubenswrapper[4840]: I0311 09:01:12.329425 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"b2771978-d705-4ef2-a98b-5b980e717c99","Type":"ContainerDied","Data":"bbed5bf119a8c1a638090e3d865c4454144a37574fcd08054b060fc18cfd642f"} Mar 11 09:01:12 crc kubenswrapper[4840]: I0311 09:01:12.330851 4840 status_manager.go:851] "Failed to get status for pod" podUID="6018aeae-f9f1-4848-b8ea-695ddf794001" 
pod="openshift-authentication/oauth-openshift-6cfdf67ff-4mn7h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-6cfdf67ff-4mn7h\": dial tcp 38.102.83.30:6443: connect: connection refused" Mar 11 09:01:12 crc kubenswrapper[4840]: I0311 09:01:12.331343 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Mar 11 09:01:12 crc kubenswrapper[4840]: I0311 09:01:12.331430 4840 status_manager.go:851] "Failed to get status for pod" podUID="b2771978-d705-4ef2-a98b-5b980e717c99" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.30:6443: connect: connection refused" Mar 11 09:01:12 crc kubenswrapper[4840]: I0311 09:01:12.332457 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Mar 11 09:01:12 crc kubenswrapper[4840]: I0311 09:01:12.333121 4840 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="fe2e8e966f7348e19e40d9ac00cabea3da1a67e4d7b149c094505622bf5d6cbc" exitCode=0 Mar 11 09:01:12 crc kubenswrapper[4840]: I0311 09:01:12.333140 4840 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="5bd25fbbac425c9ba1169b1106b9ac77a80739a003bd795033d691ee273e0d3e" exitCode=0 Mar 11 09:01:12 crc kubenswrapper[4840]: I0311 09:01:12.333148 4840 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="33384f01914fa6428a8c359b3de0d20963b933f5c6d47519f059e48f85c9f4c7" exitCode=0 Mar 11 09:01:12 crc kubenswrapper[4840]: I0311 09:01:12.333157 4840 generic.go:334] "Generic (PLEG): container finished" 
podID="f4b27818a5e8e43d0dc095d08835c792" containerID="ca2279f323c0a6baf645b62c496c38d3d2ad4efc5033a0819ed1d58f4d862e10" exitCode=2 Mar 11 09:01:12 crc kubenswrapper[4840]: I0311 09:01:12.333222 4840 scope.go:117] "RemoveContainer" containerID="58277efdbfe7c94ad785e7d31bb5ba7313d04bf930896d6ded66fd44dd6239b5" Mar 11 09:01:12 crc kubenswrapper[4840]: I0311 09:01:12.420283 4840 patch_prober.go:28] interesting pod/oauth-openshift-6cfdf67ff-4mn7h container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.65:6443/healthz\": read tcp 10.217.0.2:32788->10.217.0.65:6443: read: connection reset by peer" start-of-body= Mar 11 09:01:12 crc kubenswrapper[4840]: I0311 09:01:12.420352 4840 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-6cfdf67ff-4mn7h" podUID="6018aeae-f9f1-4848-b8ea-695ddf794001" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.65:6443/healthz\": read tcp 10.217.0.2:32788->10.217.0.65:6443: read: connection reset by peer" Mar 11 09:01:12 crc kubenswrapper[4840]: E0311 09:01:12.517970 4840 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.30:6443: connect: connection refused" Mar 11 09:01:12 crc kubenswrapper[4840]: E0311 09:01:12.518917 4840 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.30:6443: connect: connection refused" Mar 11 09:01:12 crc kubenswrapper[4840]: E0311 09:01:12.519743 4840 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.30:6443: connect: connection refused" Mar 11 09:01:12 crc 
kubenswrapper[4840]: E0311 09:01:12.520523 4840 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.30:6443: connect: connection refused" Mar 11 09:01:12 crc kubenswrapper[4840]: E0311 09:01:12.520876 4840 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.30:6443: connect: connection refused" Mar 11 09:01:12 crc kubenswrapper[4840]: I0311 09:01:12.520913 4840 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Mar 11 09:01:12 crc kubenswrapper[4840]: E0311 09:01:12.521227 4840 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.30:6443: connect: connection refused" interval="200ms" Mar 11 09:01:12 crc kubenswrapper[4840]: E0311 09:01:12.723171 4840 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.30:6443: connect: connection refused" interval="400ms" Mar 11 09:01:13 crc kubenswrapper[4840]: E0311 09:01:13.124847 4840 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.30:6443: connect: connection refused" interval="800ms" Mar 11 09:01:13 crc kubenswrapper[4840]: I0311 09:01:13.352825 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Mar 11 
09:01:13 crc kubenswrapper[4840]: I0311 09:01:13.357137 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-6cfdf67ff-4mn7h_6018aeae-f9f1-4848-b8ea-695ddf794001/oauth-openshift/0.log" Mar 11 09:01:13 crc kubenswrapper[4840]: I0311 09:01:13.357207 4840 generic.go:334] "Generic (PLEG): container finished" podID="6018aeae-f9f1-4848-b8ea-695ddf794001" containerID="4a26b3a34308c0c933f7068b1baca61c15e4fafb2e2e87b655fa20a3ce580125" exitCode=255 Mar 11 09:01:13 crc kubenswrapper[4840]: I0311 09:01:13.357324 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6cfdf67ff-4mn7h" event={"ID":"6018aeae-f9f1-4848-b8ea-695ddf794001","Type":"ContainerDied","Data":"4a26b3a34308c0c933f7068b1baca61c15e4fafb2e2e87b655fa20a3ce580125"} Mar 11 09:01:13 crc kubenswrapper[4840]: I0311 09:01:13.358098 4840 status_manager.go:851] "Failed to get status for pod" podUID="6018aeae-f9f1-4848-b8ea-695ddf794001" pod="openshift-authentication/oauth-openshift-6cfdf67ff-4mn7h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-6cfdf67ff-4mn7h\": dial tcp 38.102.83.30:6443: connect: connection refused" Mar 11 09:01:13 crc kubenswrapper[4840]: I0311 09:01:13.358147 4840 scope.go:117] "RemoveContainer" containerID="4a26b3a34308c0c933f7068b1baca61c15e4fafb2e2e87b655fa20a3ce580125" Mar 11 09:01:13 crc kubenswrapper[4840]: I0311 09:01:13.358407 4840 status_manager.go:851] "Failed to get status for pod" podUID="b2771978-d705-4ef2-a98b-5b980e717c99" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.30:6443: connect: connection refused" Mar 11 09:01:13 crc kubenswrapper[4840]: I0311 09:01:13.359514 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" 
event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"57ccb6dad24165bfb2ef04afb9ee882b32dc3c00113130426f3ca6282db5ffdd"} Mar 11 09:01:13 crc kubenswrapper[4840]: I0311 09:01:13.360183 4840 status_manager.go:851] "Failed to get status for pod" podUID="6018aeae-f9f1-4848-b8ea-695ddf794001" pod="openshift-authentication/oauth-openshift-6cfdf67ff-4mn7h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-6cfdf67ff-4mn7h\": dial tcp 38.102.83.30:6443: connect: connection refused" Mar 11 09:01:13 crc kubenswrapper[4840]: E0311 09:01:13.360262 4840 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.30:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 11 09:01:13 crc kubenswrapper[4840]: I0311 09:01:13.360460 4840 status_manager.go:851] "Failed to get status for pod" podUID="b2771978-d705-4ef2-a98b-5b980e717c99" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.30:6443: connect: connection refused" Mar 11 09:01:13 crc kubenswrapper[4840]: E0311 09:01:13.637162 4840 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T09:01:13Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T09:01:13Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T09:01:13Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T09:01:13Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.30:6443: connect: connection refused" Mar 11 09:01:13 crc kubenswrapper[4840]: E0311 09:01:13.637639 4840 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.30:6443: connect: connection refused" Mar 11 09:01:13 crc kubenswrapper[4840]: E0311 09:01:13.637861 4840 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.30:6443: connect: connection refused" Mar 11 09:01:13 crc kubenswrapper[4840]: E0311 09:01:13.638089 4840 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.30:6443: connect: connection refused" Mar 11 
09:01:13 crc kubenswrapper[4840]: E0311 09:01:13.638313 4840 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.30:6443: connect: connection refused" Mar 11 09:01:13 crc kubenswrapper[4840]: E0311 09:01:13.638341 4840 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 11 09:01:13 crc kubenswrapper[4840]: I0311 09:01:13.740178 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Mar 11 09:01:13 crc kubenswrapper[4840]: I0311 09:01:13.741523 4840 status_manager.go:851] "Failed to get status for pod" podUID="6018aeae-f9f1-4848-b8ea-695ddf794001" pod="openshift-authentication/oauth-openshift-6cfdf67ff-4mn7h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-6cfdf67ff-4mn7h\": dial tcp 38.102.83.30:6443: connect: connection refused" Mar 11 09:01:13 crc kubenswrapper[4840]: I0311 09:01:13.742043 4840 status_manager.go:851] "Failed to get status for pod" podUID="b2771978-d705-4ef2-a98b-5b980e717c99" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.30:6443: connect: connection refused" Mar 11 09:01:13 crc kubenswrapper[4840]: I0311 09:01:13.793969 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b2771978-d705-4ef2-a98b-5b980e717c99-kube-api-access\") pod \"b2771978-d705-4ef2-a98b-5b980e717c99\" (UID: \"b2771978-d705-4ef2-a98b-5b980e717c99\") " Mar 11 09:01:13 crc kubenswrapper[4840]: I0311 09:01:13.794092 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: 
\"kubernetes.io/host-path/b2771978-d705-4ef2-a98b-5b980e717c99-var-lock\") pod \"b2771978-d705-4ef2-a98b-5b980e717c99\" (UID: \"b2771978-d705-4ef2-a98b-5b980e717c99\") " Mar 11 09:01:13 crc kubenswrapper[4840]: I0311 09:01:13.794175 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b2771978-d705-4ef2-a98b-5b980e717c99-kubelet-dir\") pod \"b2771978-d705-4ef2-a98b-5b980e717c99\" (UID: \"b2771978-d705-4ef2-a98b-5b980e717c99\") " Mar 11 09:01:13 crc kubenswrapper[4840]: I0311 09:01:13.794208 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b2771978-d705-4ef2-a98b-5b980e717c99-var-lock" (OuterVolumeSpecName: "var-lock") pod "b2771978-d705-4ef2-a98b-5b980e717c99" (UID: "b2771978-d705-4ef2-a98b-5b980e717c99"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 11 09:01:13 crc kubenswrapper[4840]: I0311 09:01:13.794311 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b2771978-d705-4ef2-a98b-5b980e717c99-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "b2771978-d705-4ef2-a98b-5b980e717c99" (UID: "b2771978-d705-4ef2-a98b-5b980e717c99"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 11 09:01:13 crc kubenswrapper[4840]: I0311 09:01:13.794511 4840 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b2771978-d705-4ef2-a98b-5b980e717c99-kubelet-dir\") on node \"crc\" DevicePath \"\"" Mar 11 09:01:13 crc kubenswrapper[4840]: I0311 09:01:13.794532 4840 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b2771978-d705-4ef2-a98b-5b980e717c99-var-lock\") on node \"crc\" DevicePath \"\"" Mar 11 09:01:13 crc kubenswrapper[4840]: I0311 09:01:13.803291 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2771978-d705-4ef2-a98b-5b980e717c99-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "b2771978-d705-4ef2-a98b-5b980e717c99" (UID: "b2771978-d705-4ef2-a98b-5b980e717c99"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:01:13 crc kubenswrapper[4840]: I0311 09:01:13.904987 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b2771978-d705-4ef2-a98b-5b980e717c99-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 11 09:01:13 crc kubenswrapper[4840]: E0311 09:01:13.928839 4840 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.30:6443: connect: connection refused" interval="1.6s" Mar 11 09:01:14 crc kubenswrapper[4840]: E0311 09:01:14.302298 4840 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6018aeae_f9f1_4848_b8ea_695ddf794001.slice/crio-07e1cdc3292aa6461aa7de4285a6079362cb9cfa2d4e6dba931fb2af1d55a311.scope\": RecentStats: unable 
to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6018aeae_f9f1_4848_b8ea_695ddf794001.slice/crio-conmon-07e1cdc3292aa6461aa7de4285a6079362cb9cfa2d4e6dba931fb2af1d55a311.scope\": RecentStats: unable to find data in memory cache]" Mar 11 09:01:14 crc kubenswrapper[4840]: I0311 09:01:14.313148 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Mar 11 09:01:14 crc kubenswrapper[4840]: I0311 09:01:14.313987 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 11 09:01:14 crc kubenswrapper[4840]: I0311 09:01:14.314561 4840 status_manager.go:851] "Failed to get status for pod" podUID="6018aeae-f9f1-4848-b8ea-695ddf794001" pod="openshift-authentication/oauth-openshift-6cfdf67ff-4mn7h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-6cfdf67ff-4mn7h\": dial tcp 38.102.83.30:6443: connect: connection refused" Mar 11 09:01:14 crc kubenswrapper[4840]: I0311 09:01:14.314795 4840 status_manager.go:851] "Failed to get status for pod" podUID="b2771978-d705-4ef2-a98b-5b980e717c99" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.30:6443: connect: connection refused" Mar 11 09:01:14 crc kubenswrapper[4840]: I0311 09:01:14.315121 4840 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.30:6443: connect: connection refused" Mar 11 09:01:14 crc kubenswrapper[4840]: I0311 09:01:14.366088 4840 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"b2771978-d705-4ef2-a98b-5b980e717c99","Type":"ContainerDied","Data":"b14cccf60f1e8ed5f37b886068b16f94149f3ceb7944d8bbc8dec5160a56e238"} Mar 11 09:01:14 crc kubenswrapper[4840]: I0311 09:01:14.366165 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b14cccf60f1e8ed5f37b886068b16f94149f3ceb7944d8bbc8dec5160a56e238" Mar 11 09:01:14 crc kubenswrapper[4840]: I0311 09:01:14.366116 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Mar 11 09:01:14 crc kubenswrapper[4840]: I0311 09:01:14.368871 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Mar 11 09:01:14 crc kubenswrapper[4840]: I0311 09:01:14.370171 4840 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="0ed311d75feec58f86d1d9f435c6115a463b4e7cd3003b6dff8447360271b6a2" exitCode=0 Mar 11 09:01:14 crc kubenswrapper[4840]: I0311 09:01:14.370264 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 11 09:01:14 crc kubenswrapper[4840]: I0311 09:01:14.370280 4840 scope.go:117] "RemoveContainer" containerID="fe2e8e966f7348e19e40d9ac00cabea3da1a67e4d7b149c094505622bf5d6cbc" Mar 11 09:01:14 crc kubenswrapper[4840]: I0311 09:01:14.370646 4840 status_manager.go:851] "Failed to get status for pod" podUID="b2771978-d705-4ef2-a98b-5b980e717c99" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.30:6443: connect: connection refused" Mar 11 09:01:14 crc kubenswrapper[4840]: I0311 09:01:14.370865 4840 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.30:6443: connect: connection refused" Mar 11 09:01:14 crc kubenswrapper[4840]: I0311 09:01:14.371052 4840 status_manager.go:851] "Failed to get status for pod" podUID="6018aeae-f9f1-4848-b8ea-695ddf794001" pod="openshift-authentication/oauth-openshift-6cfdf67ff-4mn7h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-6cfdf67ff-4mn7h\": dial tcp 38.102.83.30:6443: connect: connection refused" Mar 11 09:01:14 crc kubenswrapper[4840]: I0311 09:01:14.372015 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-6cfdf67ff-4mn7h_6018aeae-f9f1-4848-b8ea-695ddf794001/oauth-openshift/1.log" Mar 11 09:01:14 crc kubenswrapper[4840]: I0311 09:01:14.372519 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-6cfdf67ff-4mn7h_6018aeae-f9f1-4848-b8ea-695ddf794001/oauth-openshift/0.log" Mar 11 09:01:14 crc kubenswrapper[4840]: I0311 09:01:14.372598 4840 
generic.go:334] "Generic (PLEG): container finished" podID="6018aeae-f9f1-4848-b8ea-695ddf794001" containerID="07e1cdc3292aa6461aa7de4285a6079362cb9cfa2d4e6dba931fb2af1d55a311" exitCode=255 Mar 11 09:01:14 crc kubenswrapper[4840]: I0311 09:01:14.372663 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6cfdf67ff-4mn7h" event={"ID":"6018aeae-f9f1-4848-b8ea-695ddf794001","Type":"ContainerDied","Data":"07e1cdc3292aa6461aa7de4285a6079362cb9cfa2d4e6dba931fb2af1d55a311"} Mar 11 09:01:14 crc kubenswrapper[4840]: I0311 09:01:14.373393 4840 scope.go:117] "RemoveContainer" containerID="07e1cdc3292aa6461aa7de4285a6079362cb9cfa2d4e6dba931fb2af1d55a311" Mar 11 09:01:14 crc kubenswrapper[4840]: I0311 09:01:14.373438 4840 status_manager.go:851] "Failed to get status for pod" podUID="6018aeae-f9f1-4848-b8ea-695ddf794001" pod="openshift-authentication/oauth-openshift-6cfdf67ff-4mn7h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-6cfdf67ff-4mn7h\": dial tcp 38.102.83.30:6443: connect: connection refused" Mar 11 09:01:14 crc kubenswrapper[4840]: E0311 09:01:14.373455 4840 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.30:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 11 09:01:14 crc kubenswrapper[4840]: E0311 09:01:14.373773 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oauth-openshift\" with CrashLoopBackOff: \"back-off 10s restarting failed container=oauth-openshift pod=oauth-openshift-6cfdf67ff-4mn7h_openshift-authentication(6018aeae-f9f1-4848-b8ea-695ddf794001)\"" pod="openshift-authentication/oauth-openshift-6cfdf67ff-4mn7h" podUID="6018aeae-f9f1-4848-b8ea-695ddf794001" Mar 11 09:01:14 crc kubenswrapper[4840]: I0311 09:01:14.374000 4840 
status_manager.go:851] "Failed to get status for pod" podUID="b2771978-d705-4ef2-a98b-5b980e717c99" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.30:6443: connect: connection refused" Mar 11 09:01:14 crc kubenswrapper[4840]: I0311 09:01:14.374542 4840 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.30:6443: connect: connection refused" Mar 11 09:01:14 crc kubenswrapper[4840]: I0311 09:01:14.385006 4840 scope.go:117] "RemoveContainer" containerID="5bd25fbbac425c9ba1169b1106b9ac77a80739a003bd795033d691ee273e0d3e" Mar 11 09:01:14 crc kubenswrapper[4840]: I0311 09:01:14.397638 4840 scope.go:117] "RemoveContainer" containerID="33384f01914fa6428a8c359b3de0d20963b933f5c6d47519f059e48f85c9f4c7" Mar 11 09:01:14 crc kubenswrapper[4840]: I0311 09:01:14.411892 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Mar 11 09:01:14 crc kubenswrapper[4840]: I0311 09:01:14.412005 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 11 09:01:14 crc kubenswrapper[4840]: I0311 09:01:14.412264 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Mar 11 09:01:14 crc kubenswrapper[4840]: I0311 09:01:14.412330 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 11 09:01:14 crc kubenswrapper[4840]: I0311 09:01:14.412430 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Mar 11 09:01:14 crc kubenswrapper[4840]: I0311 09:01:14.412503 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 11 09:01:14 crc kubenswrapper[4840]: I0311 09:01:14.412822 4840 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Mar 11 09:01:14 crc kubenswrapper[4840]: I0311 09:01:14.412845 4840 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Mar 11 09:01:14 crc kubenswrapper[4840]: I0311 09:01:14.412861 4840 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Mar 11 09:01:14 crc kubenswrapper[4840]: I0311 09:01:14.434874 4840 scope.go:117] "RemoveContainer" containerID="ca2279f323c0a6baf645b62c496c38d3d2ad4efc5033a0819ed1d58f4d862e10" Mar 11 09:01:14 crc kubenswrapper[4840]: I0311 09:01:14.452730 4840 scope.go:117] "RemoveContainer" containerID="0ed311d75feec58f86d1d9f435c6115a463b4e7cd3003b6dff8447360271b6a2" Mar 11 09:01:14 crc kubenswrapper[4840]: I0311 09:01:14.472704 4840 scope.go:117] "RemoveContainer" containerID="e27a2ab44a237582284cb5f55d2651f7b5d39c199fdb62a4a65be9921e86945c" Mar 11 09:01:14 crc kubenswrapper[4840]: I0311 09:01:14.494890 4840 scope.go:117] "RemoveContainer" containerID="fe2e8e966f7348e19e40d9ac00cabea3da1a67e4d7b149c094505622bf5d6cbc" Mar 11 09:01:14 crc kubenswrapper[4840]: E0311 09:01:14.495552 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe2e8e966f7348e19e40d9ac00cabea3da1a67e4d7b149c094505622bf5d6cbc\": container with ID starting with fe2e8e966f7348e19e40d9ac00cabea3da1a67e4d7b149c094505622bf5d6cbc not found: ID does not exist" 
containerID="fe2e8e966f7348e19e40d9ac00cabea3da1a67e4d7b149c094505622bf5d6cbc" Mar 11 09:01:14 crc kubenswrapper[4840]: I0311 09:01:14.495616 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe2e8e966f7348e19e40d9ac00cabea3da1a67e4d7b149c094505622bf5d6cbc"} err="failed to get container status \"fe2e8e966f7348e19e40d9ac00cabea3da1a67e4d7b149c094505622bf5d6cbc\": rpc error: code = NotFound desc = could not find container \"fe2e8e966f7348e19e40d9ac00cabea3da1a67e4d7b149c094505622bf5d6cbc\": container with ID starting with fe2e8e966f7348e19e40d9ac00cabea3da1a67e4d7b149c094505622bf5d6cbc not found: ID does not exist" Mar 11 09:01:14 crc kubenswrapper[4840]: I0311 09:01:14.495650 4840 scope.go:117] "RemoveContainer" containerID="5bd25fbbac425c9ba1169b1106b9ac77a80739a003bd795033d691ee273e0d3e" Mar 11 09:01:14 crc kubenswrapper[4840]: E0311 09:01:14.496332 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5bd25fbbac425c9ba1169b1106b9ac77a80739a003bd795033d691ee273e0d3e\": container with ID starting with 5bd25fbbac425c9ba1169b1106b9ac77a80739a003bd795033d691ee273e0d3e not found: ID does not exist" containerID="5bd25fbbac425c9ba1169b1106b9ac77a80739a003bd795033d691ee273e0d3e" Mar 11 09:01:14 crc kubenswrapper[4840]: I0311 09:01:14.496360 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5bd25fbbac425c9ba1169b1106b9ac77a80739a003bd795033d691ee273e0d3e"} err="failed to get container status \"5bd25fbbac425c9ba1169b1106b9ac77a80739a003bd795033d691ee273e0d3e\": rpc error: code = NotFound desc = could not find container \"5bd25fbbac425c9ba1169b1106b9ac77a80739a003bd795033d691ee273e0d3e\": container with ID starting with 5bd25fbbac425c9ba1169b1106b9ac77a80739a003bd795033d691ee273e0d3e not found: ID does not exist" Mar 11 09:01:14 crc kubenswrapper[4840]: I0311 09:01:14.496376 4840 scope.go:117] 
"RemoveContainer" containerID="33384f01914fa6428a8c359b3de0d20963b933f5c6d47519f059e48f85c9f4c7" Mar 11 09:01:14 crc kubenswrapper[4840]: E0311 09:01:14.496925 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33384f01914fa6428a8c359b3de0d20963b933f5c6d47519f059e48f85c9f4c7\": container with ID starting with 33384f01914fa6428a8c359b3de0d20963b933f5c6d47519f059e48f85c9f4c7 not found: ID does not exist" containerID="33384f01914fa6428a8c359b3de0d20963b933f5c6d47519f059e48f85c9f4c7" Mar 11 09:01:14 crc kubenswrapper[4840]: I0311 09:01:14.497016 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33384f01914fa6428a8c359b3de0d20963b933f5c6d47519f059e48f85c9f4c7"} err="failed to get container status \"33384f01914fa6428a8c359b3de0d20963b933f5c6d47519f059e48f85c9f4c7\": rpc error: code = NotFound desc = could not find container \"33384f01914fa6428a8c359b3de0d20963b933f5c6d47519f059e48f85c9f4c7\": container with ID starting with 33384f01914fa6428a8c359b3de0d20963b933f5c6d47519f059e48f85c9f4c7 not found: ID does not exist" Mar 11 09:01:14 crc kubenswrapper[4840]: I0311 09:01:14.497083 4840 scope.go:117] "RemoveContainer" containerID="ca2279f323c0a6baf645b62c496c38d3d2ad4efc5033a0819ed1d58f4d862e10" Mar 11 09:01:14 crc kubenswrapper[4840]: E0311 09:01:14.497579 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca2279f323c0a6baf645b62c496c38d3d2ad4efc5033a0819ed1d58f4d862e10\": container with ID starting with ca2279f323c0a6baf645b62c496c38d3d2ad4efc5033a0819ed1d58f4d862e10 not found: ID does not exist" containerID="ca2279f323c0a6baf645b62c496c38d3d2ad4efc5033a0819ed1d58f4d862e10" Mar 11 09:01:14 crc kubenswrapper[4840]: I0311 09:01:14.497616 4840 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"ca2279f323c0a6baf645b62c496c38d3d2ad4efc5033a0819ed1d58f4d862e10"} err="failed to get container status \"ca2279f323c0a6baf645b62c496c38d3d2ad4efc5033a0819ed1d58f4d862e10\": rpc error: code = NotFound desc = could not find container \"ca2279f323c0a6baf645b62c496c38d3d2ad4efc5033a0819ed1d58f4d862e10\": container with ID starting with ca2279f323c0a6baf645b62c496c38d3d2ad4efc5033a0819ed1d58f4d862e10 not found: ID does not exist" Mar 11 09:01:14 crc kubenswrapper[4840]: I0311 09:01:14.497640 4840 scope.go:117] "RemoveContainer" containerID="0ed311d75feec58f86d1d9f435c6115a463b4e7cd3003b6dff8447360271b6a2" Mar 11 09:01:14 crc kubenswrapper[4840]: E0311 09:01:14.498106 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ed311d75feec58f86d1d9f435c6115a463b4e7cd3003b6dff8447360271b6a2\": container with ID starting with 0ed311d75feec58f86d1d9f435c6115a463b4e7cd3003b6dff8447360271b6a2 not found: ID does not exist" containerID="0ed311d75feec58f86d1d9f435c6115a463b4e7cd3003b6dff8447360271b6a2" Mar 11 09:01:14 crc kubenswrapper[4840]: I0311 09:01:14.498137 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ed311d75feec58f86d1d9f435c6115a463b4e7cd3003b6dff8447360271b6a2"} err="failed to get container status \"0ed311d75feec58f86d1d9f435c6115a463b4e7cd3003b6dff8447360271b6a2\": rpc error: code = NotFound desc = could not find container \"0ed311d75feec58f86d1d9f435c6115a463b4e7cd3003b6dff8447360271b6a2\": container with ID starting with 0ed311d75feec58f86d1d9f435c6115a463b4e7cd3003b6dff8447360271b6a2 not found: ID does not exist" Mar 11 09:01:14 crc kubenswrapper[4840]: I0311 09:01:14.498151 4840 scope.go:117] "RemoveContainer" containerID="e27a2ab44a237582284cb5f55d2651f7b5d39c199fdb62a4a65be9921e86945c" Mar 11 09:01:14 crc kubenswrapper[4840]: E0311 09:01:14.498398 4840 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"e27a2ab44a237582284cb5f55d2651f7b5d39c199fdb62a4a65be9921e86945c\": container with ID starting with e27a2ab44a237582284cb5f55d2651f7b5d39c199fdb62a4a65be9921e86945c not found: ID does not exist" containerID="e27a2ab44a237582284cb5f55d2651f7b5d39c199fdb62a4a65be9921e86945c" Mar 11 09:01:14 crc kubenswrapper[4840]: I0311 09:01:14.498427 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e27a2ab44a237582284cb5f55d2651f7b5d39c199fdb62a4a65be9921e86945c"} err="failed to get container status \"e27a2ab44a237582284cb5f55d2651f7b5d39c199fdb62a4a65be9921e86945c\": rpc error: code = NotFound desc = could not find container \"e27a2ab44a237582284cb5f55d2651f7b5d39c199fdb62a4a65be9921e86945c\": container with ID starting with e27a2ab44a237582284cb5f55d2651f7b5d39c199fdb62a4a65be9921e86945c not found: ID does not exist" Mar 11 09:01:14 crc kubenswrapper[4840]: I0311 09:01:14.498444 4840 scope.go:117] "RemoveContainer" containerID="4a26b3a34308c0c933f7068b1baca61c15e4fafb2e2e87b655fa20a3ce580125" Mar 11 09:01:14 crc kubenswrapper[4840]: I0311 09:01:14.686697 4840 status_manager.go:851] "Failed to get status for pod" podUID="6018aeae-f9f1-4848-b8ea-695ddf794001" pod="openshift-authentication/oauth-openshift-6cfdf67ff-4mn7h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-6cfdf67ff-4mn7h\": dial tcp 38.102.83.30:6443: connect: connection refused" Mar 11 09:01:14 crc kubenswrapper[4840]: I0311 09:01:14.686952 4840 status_manager.go:851] "Failed to get status for pod" podUID="b2771978-d705-4ef2-a98b-5b980e717c99" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.30:6443: connect: connection refused" Mar 11 09:01:14 crc kubenswrapper[4840]: I0311 09:01:14.687366 4840 
status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.30:6443: connect: connection refused" Mar 11 09:01:15 crc kubenswrapper[4840]: I0311 09:01:15.383171 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-6cfdf67ff-4mn7h_6018aeae-f9f1-4848-b8ea-695ddf794001/oauth-openshift/1.log" Mar 11 09:01:15 crc kubenswrapper[4840]: I0311 09:01:15.383807 4840 scope.go:117] "RemoveContainer" containerID="07e1cdc3292aa6461aa7de4285a6079362cb9cfa2d4e6dba931fb2af1d55a311" Mar 11 09:01:15 crc kubenswrapper[4840]: E0311 09:01:15.384001 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oauth-openshift\" with CrashLoopBackOff: \"back-off 10s restarting failed container=oauth-openshift pod=oauth-openshift-6cfdf67ff-4mn7h_openshift-authentication(6018aeae-f9f1-4848-b8ea-695ddf794001)\"" pod="openshift-authentication/oauth-openshift-6cfdf67ff-4mn7h" podUID="6018aeae-f9f1-4848-b8ea-695ddf794001" Mar 11 09:01:15 crc kubenswrapper[4840]: I0311 09:01:15.384341 4840 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.30:6443: connect: connection refused" Mar 11 09:01:15 crc kubenswrapper[4840]: I0311 09:01:15.384806 4840 status_manager.go:851] "Failed to get status for pod" podUID="6018aeae-f9f1-4848-b8ea-695ddf794001" pod="openshift-authentication/oauth-openshift-6cfdf67ff-4mn7h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-6cfdf67ff-4mn7h\": dial tcp 38.102.83.30:6443: connect: 
connection refused" Mar 11 09:01:15 crc kubenswrapper[4840]: I0311 09:01:15.385223 4840 status_manager.go:851] "Failed to get status for pod" podUID="b2771978-d705-4ef2-a98b-5b980e717c99" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.30:6443: connect: connection refused" Mar 11 09:01:15 crc kubenswrapper[4840]: E0311 09:01:15.530552 4840 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.30:6443: connect: connection refused" interval="3.2s" Mar 11 09:01:16 crc kubenswrapper[4840]: I0311 09:01:16.067458 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Mar 11 09:01:16 crc kubenswrapper[4840]: E0311 09:01:16.318221 4840 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/events\": dial tcp 38.102.83.30:6443: connect: connection refused" event="&Event{ObjectMeta:{oauth-openshift-6cfdf67ff-4mn7h.189bbde5891ab762 openshift-authentication 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-authentication,Name:oauth-openshift-6cfdf67ff-4mn7h,UID:6018aeae-f9f1-4848-b8ea-695ddf794001,APIVersion:v1,ResourceVersion:29702,FieldPath:spec.containers{oauth-openshift},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-11 09:01:11.845812066 +0000 UTC m=+270.511481891,LastTimestamp:2026-03-11 
09:01:11.845812066 +0000 UTC m=+270.511481891,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 11 09:01:18 crc kubenswrapper[4840]: E0311 09:01:18.731876 4840 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.30:6443: connect: connection refused" interval="6.4s" Mar 11 09:01:21 crc kubenswrapper[4840]: I0311 09:01:21.390340 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-6cfdf67ff-4mn7h" Mar 11 09:01:21 crc kubenswrapper[4840]: I0311 09:01:21.390410 4840 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication/oauth-openshift-6cfdf67ff-4mn7h" Mar 11 09:01:21 crc kubenswrapper[4840]: I0311 09:01:21.391665 4840 scope.go:117] "RemoveContainer" containerID="07e1cdc3292aa6461aa7de4285a6079362cb9cfa2d4e6dba931fb2af1d55a311" Mar 11 09:01:21 crc kubenswrapper[4840]: E0311 09:01:21.392050 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oauth-openshift\" with CrashLoopBackOff: \"back-off 10s restarting failed container=oauth-openshift pod=oauth-openshift-6cfdf67ff-4mn7h_openshift-authentication(6018aeae-f9f1-4848-b8ea-695ddf794001)\"" pod="openshift-authentication/oauth-openshift-6cfdf67ff-4mn7h" podUID="6018aeae-f9f1-4848-b8ea-695ddf794001" Mar 11 09:01:22 crc kubenswrapper[4840]: I0311 09:01:22.063635 4840 status_manager.go:851] "Failed to get status for pod" podUID="6018aeae-f9f1-4848-b8ea-695ddf794001" pod="openshift-authentication/oauth-openshift-6cfdf67ff-4mn7h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-6cfdf67ff-4mn7h\": dial tcp 38.102.83.30:6443: connect: connection refused" Mar 11 09:01:22 
crc kubenswrapper[4840]: I0311 09:01:22.064450 4840 status_manager.go:851] "Failed to get status for pod" podUID="b2771978-d705-4ef2-a98b-5b980e717c99" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.30:6443: connect: connection refused" Mar 11 09:01:23 crc kubenswrapper[4840]: E0311 09:01:23.738937 4840 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T09:01:23Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T09:01:23Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T09:01:23Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-11T09:01:23Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.30:6443: connect: connection refused" Mar 11 09:01:23 crc kubenswrapper[4840]: E0311 09:01:23.740002 4840 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.30:6443: connect: connection refused" Mar 11 09:01:23 crc kubenswrapper[4840]: E0311 09:01:23.740572 4840 
kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.30:6443: connect: connection refused" Mar 11 09:01:23 crc kubenswrapper[4840]: E0311 09:01:23.741107 4840 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.30:6443: connect: connection refused" Mar 11 09:01:23 crc kubenswrapper[4840]: E0311 09:01:23.741376 4840 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.30:6443: connect: connection refused" Mar 11 09:01:23 crc kubenswrapper[4840]: E0311 09:01:23.741397 4840 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 11 09:01:24 crc kubenswrapper[4840]: I0311 09:01:24.059932 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 11 09:01:24 crc kubenswrapper[4840]: I0311 09:01:24.060893 4840 status_manager.go:851] "Failed to get status for pod" podUID="6018aeae-f9f1-4848-b8ea-695ddf794001" pod="openshift-authentication/oauth-openshift-6cfdf67ff-4mn7h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-6cfdf67ff-4mn7h\": dial tcp 38.102.83.30:6443: connect: connection refused" Mar 11 09:01:24 crc kubenswrapper[4840]: I0311 09:01:24.062291 4840 status_manager.go:851] "Failed to get status for pod" podUID="b2771978-d705-4ef2-a98b-5b980e717c99" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.30:6443: connect: connection refused" Mar 11 09:01:24 crc kubenswrapper[4840]: I0311 09:01:24.078944 4840 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3c3b2bde-8421-4e22-85ab-8b651c65bc9e" Mar 11 09:01:24 crc kubenswrapper[4840]: I0311 09:01:24.078990 4840 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3c3b2bde-8421-4e22-85ab-8b651c65bc9e" Mar 11 09:01:24 crc kubenswrapper[4840]: E0311 09:01:24.079603 4840 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.30:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 11 09:01:24 crc kubenswrapper[4840]: I0311 09:01:24.080268 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 11 09:01:24 crc kubenswrapper[4840]: E0311 09:01:24.128329 4840 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.30:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-pdrj8" volumeName="registry-storage" Mar 11 09:01:24 crc kubenswrapper[4840]: I0311 09:01:24.446347 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"7ad9717689928451ab5952e1cf58f11dfc69a6e6a9e0de53e7c034239b72a419"} Mar 11 09:01:24 crc kubenswrapper[4840]: I0311 09:01:24.446854 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"8d44d9e8f93b5bd131a2a93ba51874335fb905babf0451c97e6489bc19c2259b"} Mar 11 09:01:24 crc kubenswrapper[4840]: I0311 09:01:24.447199 4840 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3c3b2bde-8421-4e22-85ab-8b651c65bc9e" Mar 11 09:01:24 crc kubenswrapper[4840]: I0311 09:01:24.447217 4840 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3c3b2bde-8421-4e22-85ab-8b651c65bc9e" Mar 11 09:01:24 crc kubenswrapper[4840]: E0311 09:01:24.447618 4840 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.30:6443: connect: connection refused" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 11 09:01:24 crc kubenswrapper[4840]: I0311 09:01:24.447673 4840 status_manager.go:851] "Failed to get status for pod" podUID="6018aeae-f9f1-4848-b8ea-695ddf794001" pod="openshift-authentication/oauth-openshift-6cfdf67ff-4mn7h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-6cfdf67ff-4mn7h\": dial tcp 38.102.83.30:6443: connect: connection refused" Mar 11 09:01:24 crc kubenswrapper[4840]: I0311 09:01:24.447999 4840 status_manager.go:851] "Failed to get status for pod" podUID="b2771978-d705-4ef2-a98b-5b980e717c99" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.30:6443: connect: connection refused" Mar 11 09:01:25 crc kubenswrapper[4840]: E0311 09:01:25.133196 4840 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.30:6443: connect: connection refused" interval="7s" Mar 11 09:01:25 crc kubenswrapper[4840]: I0311 09:01:25.453922 4840 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="7ad9717689928451ab5952e1cf58f11dfc69a6e6a9e0de53e7c034239b72a419" exitCode=0 Mar 11 09:01:25 crc kubenswrapper[4840]: I0311 09:01:25.453989 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"7ad9717689928451ab5952e1cf58f11dfc69a6e6a9e0de53e7c034239b72a419"} Mar 11 09:01:25 crc kubenswrapper[4840]: I0311 09:01:25.454447 4840 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3c3b2bde-8421-4e22-85ab-8b651c65bc9e" Mar 11 09:01:25 crc kubenswrapper[4840]: I0311 
09:01:25.454501 4840 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3c3b2bde-8421-4e22-85ab-8b651c65bc9e" Mar 11 09:01:25 crc kubenswrapper[4840]: E0311 09:01:25.454832 4840 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.30:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 11 09:01:25 crc kubenswrapper[4840]: I0311 09:01:25.454838 4840 status_manager.go:851] "Failed to get status for pod" podUID="b2771978-d705-4ef2-a98b-5b980e717c99" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.30:6443: connect: connection refused" Mar 11 09:01:25 crc kubenswrapper[4840]: I0311 09:01:25.455405 4840 status_manager.go:851] "Failed to get status for pod" podUID="6018aeae-f9f1-4848-b8ea-695ddf794001" pod="openshift-authentication/oauth-openshift-6cfdf67ff-4mn7h" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-6cfdf67ff-4mn7h\": dial tcp 38.102.83.30:6443: connect: connection refused" Mar 11 09:01:26 crc kubenswrapper[4840]: I0311 09:01:26.464230 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/1.log" Mar 11 09:01:26 crc kubenswrapper[4840]: I0311 09:01:26.466501 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Mar 11 09:01:26 crc kubenswrapper[4840]: I0311 09:01:26.466557 4840 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" 
containerID="f8267a35ad5296c0a02a207b535f1fe5f48c725dfb4ab7254fc2833e84148eff" exitCode=1 Mar 11 09:01:26 crc kubenswrapper[4840]: I0311 09:01:26.466637 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"f8267a35ad5296c0a02a207b535f1fe5f48c725dfb4ab7254fc2833e84148eff"} Mar 11 09:01:26 crc kubenswrapper[4840]: I0311 09:01:26.467229 4840 scope.go:117] "RemoveContainer" containerID="f8267a35ad5296c0a02a207b535f1fe5f48c725dfb4ab7254fc2833e84148eff" Mar 11 09:01:26 crc kubenswrapper[4840]: I0311 09:01:26.480495 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"5c0eea0283717df6b741b86f7d6ef7e299b00b503bfb51c224f39ee2a3bee1e9"} Mar 11 09:01:26 crc kubenswrapper[4840]: I0311 09:01:26.480574 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"6202ae08bd059a2853e075378d891274e4e4c2390b82d87ad9bfab96c9b78506"} Mar 11 09:01:26 crc kubenswrapper[4840]: I0311 09:01:26.480591 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"065b31bfa9043d6c7aa47fb7f5b27da9d64876d5665be7e13124bb295416abf7"} Mar 11 09:01:26 crc kubenswrapper[4840]: I0311 09:01:26.480605 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"22c9bde9473465dcb4445001c8dab4bdcb11abc3b0ed8d8098f9912cefe9b4f9"} Mar 11 09:01:27 crc kubenswrapper[4840]: I0311 09:01:27.446635 4840 patch_prober.go:28] interesting pod/machine-config-daemon-brtht 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 11 09:01:27 crc kubenswrapper[4840]: I0311 09:01:27.446720 4840 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 11 09:01:27 crc kubenswrapper[4840]: I0311 09:01:27.446787 4840 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-brtht" Mar 11 09:01:27 crc kubenswrapper[4840]: I0311 09:01:27.447638 4840 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b0602373fe91019f9b20e701e042782a4eb5878ae2df86375738bc605412a803"} pod="openshift-machine-config-operator/machine-config-daemon-brtht" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 11 09:01:27 crc kubenswrapper[4840]: I0311 09:01:27.447704 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" containerName="machine-config-daemon" containerID="cri-o://b0602373fe91019f9b20e701e042782a4eb5878ae2df86375738bc605412a803" gracePeriod=600 Mar 11 09:01:27 crc kubenswrapper[4840]: I0311 09:01:27.491884 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/1.log" Mar 11 09:01:27 crc kubenswrapper[4840]: I0311 09:01:27.493430 4840 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Mar 11 09:01:27 crc kubenswrapper[4840]: I0311 09:01:27.493646 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"a00411101eda12c01aa09b937194e6b2966a8fbfb34fe7344f3008af2a2c2556"} Mar 11 09:01:27 crc kubenswrapper[4840]: I0311 09:01:27.497077 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"a5490b61fd6e0db19cc057cef05ffd00140880460c435c8f07514e5a20cbc341"} Mar 11 09:01:27 crc kubenswrapper[4840]: I0311 09:01:27.497353 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 11 09:01:27 crc kubenswrapper[4840]: I0311 09:01:27.497522 4840 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3c3b2bde-8421-4e22-85ab-8b651c65bc9e" Mar 11 09:01:27 crc kubenswrapper[4840]: I0311 09:01:27.497560 4840 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3c3b2bde-8421-4e22-85ab-8b651c65bc9e" Mar 11 09:01:28 crc kubenswrapper[4840]: I0311 09:01:28.505252 4840 generic.go:334] "Generic (PLEG): container finished" podID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" containerID="b0602373fe91019f9b20e701e042782a4eb5878ae2df86375738bc605412a803" exitCode=0 Mar 11 09:01:28 crc kubenswrapper[4840]: I0311 09:01:28.505309 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-brtht" 
event={"ID":"8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d","Type":"ContainerDied","Data":"b0602373fe91019f9b20e701e042782a4eb5878ae2df86375738bc605412a803"} Mar 11 09:01:28 crc kubenswrapper[4840]: I0311 09:01:28.505348 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-brtht" event={"ID":"8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d","Type":"ContainerStarted","Data":"39bfd2736dc9b7f94a3520f1fec1596fc21bf709c489bf4e66a4802a52f0ecba"} Mar 11 09:01:29 crc kubenswrapper[4840]: I0311 09:01:29.080548 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 11 09:01:29 crc kubenswrapper[4840]: I0311 09:01:29.080623 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 11 09:01:29 crc kubenswrapper[4840]: I0311 09:01:29.089159 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 11 09:01:31 crc kubenswrapper[4840]: I0311 09:01:31.946590 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 11 09:01:32 crc kubenswrapper[4840]: I0311 09:01:32.507114 4840 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 11 09:01:32 crc kubenswrapper[4840]: I0311 09:01:32.528261 4840 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3c3b2bde-8421-4e22-85ab-8b651c65bc9e" Mar 11 09:01:32 crc kubenswrapper[4840]: I0311 09:01:32.528293 4840 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3c3b2bde-8421-4e22-85ab-8b651c65bc9e" Mar 11 09:01:32 crc kubenswrapper[4840]: I0311 09:01:32.532541 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 11 09:01:32 crc kubenswrapper[4840]: I0311 09:01:32.535296 4840 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="aaa33c7c-d539-481e-a181-b2ee294b90d8" Mar 11 09:01:33 crc kubenswrapper[4840]: I0311 09:01:33.062434 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 11 09:01:33 crc kubenswrapper[4840]: I0311 09:01:33.066541 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 11 09:01:33 crc kubenswrapper[4840]: I0311 09:01:33.535968 4840 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3c3b2bde-8421-4e22-85ab-8b651c65bc9e" Mar 11 09:01:33 crc kubenswrapper[4840]: I0311 09:01:33.536031 4840 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3c3b2bde-8421-4e22-85ab-8b651c65bc9e" Mar 11 09:01:37 crc kubenswrapper[4840]: I0311 09:01:37.060257 4840 scope.go:117] "RemoveContainer" containerID="07e1cdc3292aa6461aa7de4285a6079362cb9cfa2d4e6dba931fb2af1d55a311" Mar 11 09:01:37 crc kubenswrapper[4840]: I0311 09:01:37.564967 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-6cfdf67ff-4mn7h_6018aeae-f9f1-4848-b8ea-695ddf794001/oauth-openshift/1.log" Mar 11 09:01:37 crc kubenswrapper[4840]: I0311 09:01:37.565384 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6cfdf67ff-4mn7h" event={"ID":"6018aeae-f9f1-4848-b8ea-695ddf794001","Type":"ContainerStarted","Data":"d83e41b53f0dc8d6ed26dc4267618743324ad7fd47ceb3173ef096de6d362984"} Mar 11 09:01:37 crc kubenswrapper[4840]: I0311 09:01:37.565947 4840 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-6cfdf67ff-4mn7h" Mar 11 09:01:37 crc kubenswrapper[4840]: I0311 09:01:37.975869 4840 patch_prober.go:28] interesting pod/oauth-openshift-6cfdf67ff-4mn7h container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.65:6443/healthz\": read tcp 10.217.0.2:43548->10.217.0.65:6443: read: connection reset by peer" start-of-body= Mar 11 09:01:37 crc kubenswrapper[4840]: I0311 09:01:37.975929 4840 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-6cfdf67ff-4mn7h" podUID="6018aeae-f9f1-4848-b8ea-695ddf794001" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.65:6443/healthz\": read tcp 10.217.0.2:43548->10.217.0.65:6443: read: connection reset by peer" Mar 11 09:01:38 crc kubenswrapper[4840]: I0311 09:01:38.574505 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-6cfdf67ff-4mn7h_6018aeae-f9f1-4848-b8ea-695ddf794001/oauth-openshift/2.log" Mar 11 09:01:38 crc kubenswrapper[4840]: I0311 09:01:38.577009 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-6cfdf67ff-4mn7h_6018aeae-f9f1-4848-b8ea-695ddf794001/oauth-openshift/1.log" Mar 11 09:01:38 crc kubenswrapper[4840]: I0311 09:01:38.577140 4840 generic.go:334] "Generic (PLEG): container finished" podID="6018aeae-f9f1-4848-b8ea-695ddf794001" containerID="d83e41b53f0dc8d6ed26dc4267618743324ad7fd47ceb3173ef096de6d362984" exitCode=255 Mar 11 09:01:38 crc kubenswrapper[4840]: I0311 09:01:38.577214 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6cfdf67ff-4mn7h" event={"ID":"6018aeae-f9f1-4848-b8ea-695ddf794001","Type":"ContainerDied","Data":"d83e41b53f0dc8d6ed26dc4267618743324ad7fd47ceb3173ef096de6d362984"} Mar 11 09:01:38 
crc kubenswrapper[4840]: I0311 09:01:38.577310 4840 scope.go:117] "RemoveContainer" containerID="07e1cdc3292aa6461aa7de4285a6079362cb9cfa2d4e6dba931fb2af1d55a311" Mar 11 09:01:38 crc kubenswrapper[4840]: I0311 09:01:38.578055 4840 scope.go:117] "RemoveContainer" containerID="d83e41b53f0dc8d6ed26dc4267618743324ad7fd47ceb3173ef096de6d362984" Mar 11 09:01:38 crc kubenswrapper[4840]: E0311 09:01:38.578607 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oauth-openshift\" with CrashLoopBackOff: \"back-off 20s restarting failed container=oauth-openshift pod=oauth-openshift-6cfdf67ff-4mn7h_openshift-authentication(6018aeae-f9f1-4848-b8ea-695ddf794001)\"" pod="openshift-authentication/oauth-openshift-6cfdf67ff-4mn7h" podUID="6018aeae-f9f1-4848-b8ea-695ddf794001" Mar 11 09:01:39 crc kubenswrapper[4840]: I0311 09:01:39.586958 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-6cfdf67ff-4mn7h_6018aeae-f9f1-4848-b8ea-695ddf794001/oauth-openshift/2.log" Mar 11 09:01:39 crc kubenswrapper[4840]: I0311 09:01:39.587582 4840 scope.go:117] "RemoveContainer" containerID="d83e41b53f0dc8d6ed26dc4267618743324ad7fd47ceb3173ef096de6d362984" Mar 11 09:01:39 crc kubenswrapper[4840]: E0311 09:01:39.587766 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oauth-openshift\" with CrashLoopBackOff: \"back-off 20s restarting failed container=oauth-openshift pod=oauth-openshift-6cfdf67ff-4mn7h_openshift-authentication(6018aeae-f9f1-4848-b8ea-695ddf794001)\"" pod="openshift-authentication/oauth-openshift-6cfdf67ff-4mn7h" podUID="6018aeae-f9f1-4848-b8ea-695ddf794001" Mar 11 09:01:41 crc kubenswrapper[4840]: I0311 09:01:41.390123 4840 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication/oauth-openshift-6cfdf67ff-4mn7h" Mar 11 09:01:41 crc kubenswrapper[4840]: I0311 09:01:41.390903 4840 scope.go:117] 
"RemoveContainer" containerID="d83e41b53f0dc8d6ed26dc4267618743324ad7fd47ceb3173ef096de6d362984" Mar 11 09:01:41 crc kubenswrapper[4840]: E0311 09:01:41.391311 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oauth-openshift\" with CrashLoopBackOff: \"back-off 20s restarting failed container=oauth-openshift pod=oauth-openshift-6cfdf67ff-4mn7h_openshift-authentication(6018aeae-f9f1-4848-b8ea-695ddf794001)\"" pod="openshift-authentication/oauth-openshift-6cfdf67ff-4mn7h" podUID="6018aeae-f9f1-4848-b8ea-695ddf794001" Mar 11 09:01:41 crc kubenswrapper[4840]: I0311 09:01:41.952298 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 11 09:01:42 crc kubenswrapper[4840]: I0311 09:01:42.087648 4840 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="aaa33c7c-d539-481e-a181-b2ee294b90d8" Mar 11 09:01:42 crc kubenswrapper[4840]: I0311 09:01:42.224596 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Mar 11 09:01:43 crc kubenswrapper[4840]: I0311 09:01:43.275682 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Mar 11 09:01:43 crc kubenswrapper[4840]: I0311 09:01:43.406837 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Mar 11 09:01:43 crc kubenswrapper[4840]: I0311 09:01:43.427074 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Mar 11 09:01:43 crc kubenswrapper[4840]: I0311 09:01:43.504653 4840 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-machine-api"/"kube-root-ca.crt" Mar 11 09:01:43 crc kubenswrapper[4840]: I0311 09:01:43.699798 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Mar 11 09:01:43 crc kubenswrapper[4840]: I0311 09:01:43.886024 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Mar 11 09:01:44 crc kubenswrapper[4840]: I0311 09:01:44.246713 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Mar 11 09:01:44 crc kubenswrapper[4840]: I0311 09:01:44.339600 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Mar 11 09:01:44 crc kubenswrapper[4840]: I0311 09:01:44.362605 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Mar 11 09:01:44 crc kubenswrapper[4840]: I0311 09:01:44.378265 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Mar 11 09:01:44 crc kubenswrapper[4840]: I0311 09:01:44.439804 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Mar 11 09:01:44 crc kubenswrapper[4840]: I0311 09:01:44.535109 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Mar 11 09:01:44 crc kubenswrapper[4840]: I0311 09:01:44.653638 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Mar 11 09:01:44 crc kubenswrapper[4840]: I0311 09:01:44.679249 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Mar 11 09:01:44 crc kubenswrapper[4840]: I0311 09:01:44.718598 4840 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-console-operator"/"trusted-ca" Mar 11 09:01:44 crc kubenswrapper[4840]: I0311 09:01:44.791586 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Mar 11 09:01:44 crc kubenswrapper[4840]: I0311 09:01:44.958729 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Mar 11 09:01:45 crc kubenswrapper[4840]: I0311 09:01:45.150943 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Mar 11 09:01:45 crc kubenswrapper[4840]: I0311 09:01:45.185433 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Mar 11 09:01:45 crc kubenswrapper[4840]: I0311 09:01:45.491740 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Mar 11 09:01:45 crc kubenswrapper[4840]: I0311 09:01:45.663523 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Mar 11 09:01:45 crc kubenswrapper[4840]: I0311 09:01:45.706903 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Mar 11 09:01:45 crc kubenswrapper[4840]: I0311 09:01:45.970676 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Mar 11 09:01:46 crc kubenswrapper[4840]: I0311 09:01:46.240710 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Mar 11 09:01:46 crc kubenswrapper[4840]: I0311 09:01:46.243518 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Mar 11 09:01:46 crc 
kubenswrapper[4840]: I0311 09:01:46.260252 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Mar 11 09:01:46 crc kubenswrapper[4840]: I0311 09:01:46.285698 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Mar 11 09:01:46 crc kubenswrapper[4840]: I0311 09:01:46.454440 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Mar 11 09:01:46 crc kubenswrapper[4840]: I0311 09:01:46.484792 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Mar 11 09:01:46 crc kubenswrapper[4840]: I0311 09:01:46.567663 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Mar 11 09:01:46 crc kubenswrapper[4840]: I0311 09:01:46.597183 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Mar 11 09:01:46 crc kubenswrapper[4840]: I0311 09:01:46.635075 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Mar 11 09:01:46 crc kubenswrapper[4840]: I0311 09:01:46.636433 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Mar 11 09:01:46 crc kubenswrapper[4840]: I0311 09:01:46.703637 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Mar 11 09:01:46 crc kubenswrapper[4840]: I0311 09:01:46.711739 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Mar 11 09:01:46 crc kubenswrapper[4840]: I0311 09:01:46.733999 4840 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Mar 11 09:01:46 crc kubenswrapper[4840]: I0311 09:01:46.761263 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Mar 11 09:01:46 crc kubenswrapper[4840]: I0311 09:01:46.774720 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Mar 11 09:01:46 crc kubenswrapper[4840]: I0311 09:01:46.845311 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Mar 11 09:01:46 crc kubenswrapper[4840]: I0311 09:01:46.868152 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Mar 11 09:01:46 crc kubenswrapper[4840]: I0311 09:01:46.894134 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Mar 11 09:01:46 crc kubenswrapper[4840]: I0311 09:01:46.946409 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Mar 11 09:01:46 crc kubenswrapper[4840]: I0311 09:01:46.976397 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Mar 11 09:01:47 crc kubenswrapper[4840]: I0311 09:01:47.109868 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Mar 11 09:01:47 crc kubenswrapper[4840]: I0311 09:01:47.110034 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Mar 11 09:01:47 crc kubenswrapper[4840]: I0311 09:01:47.169986 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Mar 11 09:01:47 crc kubenswrapper[4840]: I0311 09:01:47.264014 4840 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Mar 11 09:01:47 crc kubenswrapper[4840]: I0311 09:01:47.266666 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Mar 11 09:01:47 crc kubenswrapper[4840]: I0311 09:01:47.287066 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Mar 11 09:01:47 crc kubenswrapper[4840]: I0311 09:01:47.513812 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Mar 11 09:01:47 crc kubenswrapper[4840]: I0311 09:01:47.525127 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Mar 11 09:01:47 crc kubenswrapper[4840]: I0311 09:01:47.584914 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 11 09:01:47 crc kubenswrapper[4840]: I0311 09:01:47.602513 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Mar 11 09:01:47 crc kubenswrapper[4840]: I0311 09:01:47.663965 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Mar 11 09:01:47 crc kubenswrapper[4840]: I0311 09:01:47.730122 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Mar 11 09:01:47 crc kubenswrapper[4840]: I0311 09:01:47.735150 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Mar 11 09:01:47 crc kubenswrapper[4840]: I0311 09:01:47.764675 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Mar 11 09:01:47 crc kubenswrapper[4840]: I0311 
09:01:47.765269 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Mar 11 09:01:47 crc kubenswrapper[4840]: I0311 09:01:47.801087 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Mar 11 09:01:47 crc kubenswrapper[4840]: I0311 09:01:47.804681 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Mar 11 09:01:47 crc kubenswrapper[4840]: I0311 09:01:47.856207 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Mar 11 09:01:47 crc kubenswrapper[4840]: I0311 09:01:47.907092 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Mar 11 09:01:47 crc kubenswrapper[4840]: I0311 09:01:47.953354 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Mar 11 09:01:47 crc kubenswrapper[4840]: I0311 09:01:47.973095 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 11 09:01:48 crc kubenswrapper[4840]: I0311 09:01:48.063130 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Mar 11 09:01:48 crc kubenswrapper[4840]: I0311 09:01:48.219789 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Mar 11 09:01:48 crc kubenswrapper[4840]: I0311 09:01:48.322594 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Mar 11 09:01:48 crc kubenswrapper[4840]: I0311 09:01:48.349033 4840 reflector.go:368] Caches populated for *v1.Secret from 
object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Mar 11 09:01:48 crc kubenswrapper[4840]: I0311 09:01:48.410670 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Mar 11 09:01:48 crc kubenswrapper[4840]: I0311 09:01:48.419284 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Mar 11 09:01:48 crc kubenswrapper[4840]: I0311 09:01:48.469789 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Mar 11 09:01:48 crc kubenswrapper[4840]: I0311 09:01:48.539402 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Mar 11 09:01:48 crc kubenswrapper[4840]: I0311 09:01:48.590024 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Mar 11 09:01:48 crc kubenswrapper[4840]: I0311 09:01:48.608007 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Mar 11 09:01:48 crc kubenswrapper[4840]: I0311 09:01:48.661836 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Mar 11 09:01:48 crc kubenswrapper[4840]: I0311 09:01:48.684943 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Mar 11 09:01:48 crc kubenswrapper[4840]: I0311 09:01:48.694786 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Mar 11 09:01:48 crc kubenswrapper[4840]: I0311 09:01:48.718204 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Mar 11 09:01:49 crc kubenswrapper[4840]: I0311 09:01:49.070806 4840 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 11 09:01:49 crc kubenswrapper[4840]: I0311 09:01:49.139925 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Mar 11 09:01:49 crc kubenswrapper[4840]: I0311 09:01:49.187846 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 11 09:01:49 crc kubenswrapper[4840]: I0311 09:01:49.197263 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Mar 11 09:01:49 crc kubenswrapper[4840]: I0311 09:01:49.203238 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Mar 11 09:01:49 crc kubenswrapper[4840]: I0311 09:01:49.208148 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Mar 11 09:01:49 crc kubenswrapper[4840]: I0311 09:01:49.238155 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 11 09:01:49 crc kubenswrapper[4840]: I0311 09:01:49.325993 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 11 09:01:49 crc kubenswrapper[4840]: I0311 09:01:49.333258 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Mar 11 09:01:49 crc kubenswrapper[4840]: I0311 09:01:49.376279 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Mar 11 09:01:49 crc kubenswrapper[4840]: I0311 09:01:49.384225 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Mar 11 09:01:49 crc kubenswrapper[4840]: I0311 09:01:49.487227 
4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Mar 11 09:01:49 crc kubenswrapper[4840]: I0311 09:01:49.503770 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Mar 11 09:01:49 crc kubenswrapper[4840]: I0311 09:01:49.527746 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Mar 11 09:01:49 crc kubenswrapper[4840]: I0311 09:01:49.538413 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Mar 11 09:01:49 crc kubenswrapper[4840]: I0311 09:01:49.618020 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Mar 11 09:01:49 crc kubenswrapper[4840]: I0311 09:01:49.622619 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Mar 11 09:01:49 crc kubenswrapper[4840]: I0311 09:01:49.636853 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Mar 11 09:01:49 crc kubenswrapper[4840]: I0311 09:01:49.673760 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Mar 11 09:01:49 crc kubenswrapper[4840]: I0311 09:01:49.675357 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Mar 11 09:01:49 crc kubenswrapper[4840]: I0311 09:01:49.689161 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Mar 11 09:01:49 crc kubenswrapper[4840]: I0311 09:01:49.690092 4840 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 11 09:01:49 crc kubenswrapper[4840]: I0311 09:01:49.705348 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Mar 11 09:01:49 crc kubenswrapper[4840]: I0311 09:01:49.716706 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Mar 11 09:01:49 crc kubenswrapper[4840]: I0311 09:01:49.746103 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Mar 11 09:01:49 crc kubenswrapper[4840]: I0311 09:01:49.856721 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Mar 11 09:01:49 crc kubenswrapper[4840]: I0311 09:01:49.868231 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Mar 11 09:01:49 crc kubenswrapper[4840]: I0311 09:01:49.907892 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Mar 11 09:01:49 crc kubenswrapper[4840]: I0311 09:01:49.939596 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Mar 11 09:01:50 crc kubenswrapper[4840]: I0311 09:01:50.108516 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Mar 11 09:01:50 crc kubenswrapper[4840]: I0311 09:01:50.170301 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Mar 11 09:01:50 crc kubenswrapper[4840]: I0311 09:01:50.191412 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Mar 11 09:01:50 crc kubenswrapper[4840]: I0311 09:01:50.335418 4840 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 11 09:01:50 crc kubenswrapper[4840]: I0311 09:01:50.562137 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Mar 11 09:01:50 crc kubenswrapper[4840]: I0311 09:01:50.698507 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Mar 11 09:01:50 crc kubenswrapper[4840]: I0311 09:01:50.762895 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Mar 11 09:01:50 crc kubenswrapper[4840]: I0311 09:01:50.792074 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Mar 11 09:01:50 crc kubenswrapper[4840]: I0311 09:01:50.883786 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Mar 11 09:01:50 crc kubenswrapper[4840]: I0311 09:01:50.914033 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Mar 11 09:01:50 crc kubenswrapper[4840]: I0311 09:01:50.936776 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Mar 11 09:01:51 crc kubenswrapper[4840]: I0311 09:01:51.232507 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Mar 11 09:01:51 crc kubenswrapper[4840]: I0311 09:01:51.252677 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Mar 11 09:01:51 crc kubenswrapper[4840]: I0311 09:01:51.282371 4840 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Mar 11 09:01:51 crc kubenswrapper[4840]: I0311 09:01:51.341426 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Mar 11 09:01:51 crc kubenswrapper[4840]: I0311 09:01:51.493980 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Mar 11 09:01:51 crc kubenswrapper[4840]: I0311 09:01:51.586909 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Mar 11 09:01:51 crc kubenswrapper[4840]: I0311 09:01:51.595706 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Mar 11 09:01:51 crc kubenswrapper[4840]: I0311 09:01:51.631995 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Mar 11 09:01:51 crc kubenswrapper[4840]: I0311 09:01:51.639200 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Mar 11 09:01:51 crc kubenswrapper[4840]: I0311 09:01:51.641257 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Mar 11 09:01:51 crc kubenswrapper[4840]: I0311 09:01:51.649078 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Mar 11 09:01:51 crc kubenswrapper[4840]: I0311 09:01:51.649120 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 11 09:01:51 crc kubenswrapper[4840]: I0311 09:01:51.662218 4840 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-network-diagnostics"/"kube-root-ca.crt" Mar 11 09:01:51 crc kubenswrapper[4840]: I0311 09:01:51.744685 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Mar 11 09:01:51 crc kubenswrapper[4840]: I0311 09:01:51.844210 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Mar 11 09:01:51 crc kubenswrapper[4840]: I0311 09:01:51.851457 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Mar 11 09:01:52 crc kubenswrapper[4840]: I0311 09:01:52.129087 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Mar 11 09:01:52 crc kubenswrapper[4840]: I0311 09:01:52.169618 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Mar 11 09:01:52 crc kubenswrapper[4840]: I0311 09:01:52.174841 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Mar 11 09:01:52 crc kubenswrapper[4840]: I0311 09:01:52.178007 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Mar 11 09:01:52 crc kubenswrapper[4840]: I0311 09:01:52.284700 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Mar 11 09:01:52 crc kubenswrapper[4840]: I0311 09:01:52.307190 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Mar 11 09:01:52 crc kubenswrapper[4840]: I0311 09:01:52.348060 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Mar 11 09:01:52 crc kubenswrapper[4840]: I0311 09:01:52.401271 4840 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"mco-proxy-tls" Mar 11 09:01:52 crc kubenswrapper[4840]: I0311 09:01:52.407482 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Mar 11 09:01:52 crc kubenswrapper[4840]: I0311 09:01:52.436928 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Mar 11 09:01:52 crc kubenswrapper[4840]: I0311 09:01:52.467062 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Mar 11 09:01:52 crc kubenswrapper[4840]: I0311 09:01:52.491386 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Mar 11 09:01:52 crc kubenswrapper[4840]: I0311 09:01:52.533835 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Mar 11 09:01:52 crc kubenswrapper[4840]: I0311 09:01:52.538565 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Mar 11 09:01:52 crc kubenswrapper[4840]: I0311 09:01:52.553908 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Mar 11 09:01:52 crc kubenswrapper[4840]: I0311 09:01:52.566259 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Mar 11 09:01:52 crc kubenswrapper[4840]: I0311 09:01:52.585424 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 11 09:01:52 crc kubenswrapper[4840]: I0311 09:01:52.624577 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Mar 11 09:01:52 crc kubenswrapper[4840]: 
I0311 09:01:52.629846 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Mar 11 09:01:52 crc kubenswrapper[4840]: I0311 09:01:52.676986 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Mar 11 09:01:52 crc kubenswrapper[4840]: I0311 09:01:52.680064 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Mar 11 09:01:52 crc kubenswrapper[4840]: I0311 09:01:52.716739 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Mar 11 09:01:52 crc kubenswrapper[4840]: I0311 09:01:52.724747 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Mar 11 09:01:52 crc kubenswrapper[4840]: I0311 09:01:52.751978 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Mar 11 09:01:52 crc kubenswrapper[4840]: I0311 09:01:52.795129 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Mar 11 09:01:52 crc kubenswrapper[4840]: I0311 09:01:52.796765 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Mar 11 09:01:52 crc kubenswrapper[4840]: I0311 09:01:52.803346 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Mar 11 09:01:52 crc kubenswrapper[4840]: I0311 09:01:52.887955 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Mar 11 09:01:52 crc kubenswrapper[4840]: I0311 09:01:52.952247 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Mar 
11 09:01:52 crc kubenswrapper[4840]: I0311 09:01:52.967560 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Mar 11 09:01:53 crc kubenswrapper[4840]: I0311 09:01:53.232643 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Mar 11 09:01:53 crc kubenswrapper[4840]: I0311 09:01:53.290997 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Mar 11 09:01:53 crc kubenswrapper[4840]: I0311 09:01:53.339286 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Mar 11 09:01:53 crc kubenswrapper[4840]: I0311 09:01:53.356353 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Mar 11 09:01:53 crc kubenswrapper[4840]: I0311 09:01:53.375271 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Mar 11 09:01:53 crc kubenswrapper[4840]: I0311 09:01:53.400279 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Mar 11 09:01:53 crc kubenswrapper[4840]: I0311 09:01:53.429453 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Mar 11 09:01:53 crc kubenswrapper[4840]: I0311 09:01:53.449423 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Mar 11 09:01:53 crc kubenswrapper[4840]: I0311 09:01:53.516827 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Mar 11 09:01:53 crc kubenswrapper[4840]: I0311 09:01:53.558015 4840 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Mar 11 09:01:53 crc kubenswrapper[4840]: I0311 09:01:53.559061 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Mar 11 09:01:53 crc kubenswrapper[4840]: I0311 09:01:53.609198 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Mar 11 09:01:53 crc kubenswrapper[4840]: I0311 09:01:53.658150 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Mar 11 09:01:53 crc kubenswrapper[4840]: I0311 09:01:53.665777 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Mar 11 09:01:53 crc kubenswrapper[4840]: I0311 09:01:53.685586 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Mar 11 09:01:53 crc kubenswrapper[4840]: I0311 09:01:53.687334 4840 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Mar 11 09:01:53 crc kubenswrapper[4840]: I0311 09:01:53.691030 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Mar 11 09:01:53 crc kubenswrapper[4840]: I0311 09:01:53.744246 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Mar 11 09:01:53 crc kubenswrapper[4840]: I0311 09:01:53.779260 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Mar 11 09:01:53 crc kubenswrapper[4840]: I0311 09:01:53.806236 4840 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ingress-canary"/"canary-serving-cert" Mar 11 09:01:53 crc kubenswrapper[4840]: I0311 09:01:53.871878 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Mar 11 09:01:53 crc kubenswrapper[4840]: I0311 09:01:53.892711 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Mar 11 09:01:53 crc kubenswrapper[4840]: I0311 09:01:53.923972 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Mar 11 09:01:53 crc kubenswrapper[4840]: I0311 09:01:53.972839 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Mar 11 09:01:53 crc kubenswrapper[4840]: I0311 09:01:53.986629 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Mar 11 09:01:54 crc kubenswrapper[4840]: I0311 09:01:54.041639 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 11 09:01:54 crc kubenswrapper[4840]: I0311 09:01:54.060543 4840 scope.go:117] "RemoveContainer" containerID="d83e41b53f0dc8d6ed26dc4267618743324ad7fd47ceb3173ef096de6d362984" Mar 11 09:01:54 crc kubenswrapper[4840]: E0311 09:01:54.060960 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oauth-openshift\" with CrashLoopBackOff: \"back-off 20s restarting failed container=oauth-openshift pod=oauth-openshift-6cfdf67ff-4mn7h_openshift-authentication(6018aeae-f9f1-4848-b8ea-695ddf794001)\"" pod="openshift-authentication/oauth-openshift-6cfdf67ff-4mn7h" podUID="6018aeae-f9f1-4848-b8ea-695ddf794001" Mar 11 09:01:54 crc kubenswrapper[4840]: I0311 09:01:54.113411 4840 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-machine-api"/"machine-api-operator-images" Mar 11 09:01:54 crc kubenswrapper[4840]: I0311 09:01:54.157812 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Mar 11 09:01:54 crc kubenswrapper[4840]: I0311 09:01:54.210980 4840 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Mar 11 09:01:54 crc kubenswrapper[4840]: I0311 09:01:54.215644 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Mar 11 09:01:54 crc kubenswrapper[4840]: I0311 09:01:54.215696 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Mar 11 09:01:54 crc kubenswrapper[4840]: I0311 09:01:54.215717 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6999b6d5db-jnf9q","openshift-controller-manager/controller-manager-7df4b549f7-dgdzz"] Mar 11 09:01:54 crc kubenswrapper[4840]: I0311 09:01:54.215944 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7df4b549f7-dgdzz" podUID="8547661a-f824-4f88-9c4e-d44727e4430a" containerName="controller-manager" containerID="cri-o://5388b1d9122b8fb92750b5e93c9fbf477091e36aab959880089f67c055bab8d4" gracePeriod=30 Mar 11 09:01:54 crc kubenswrapper[4840]: I0311 09:01:54.216211 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6999b6d5db-jnf9q" podUID="8363ed47-d2dd-42f0-a4fd-22029a4bc1e4" containerName="route-controller-manager" containerID="cri-o://9642d4cfb376c9d6d99720e1803625a4e0314489ef59ff0bd4c5c339110bd66e" gracePeriod=30 Mar 11 09:01:54 crc kubenswrapper[4840]: I0311 09:01:54.225286 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Mar 11 
09:01:54 crc kubenswrapper[4840]: I0311 09:01:54.248692 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=22.248673932 podStartE2EDuration="22.248673932s" podCreationTimestamp="2026-03-11 09:01:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:01:54.247681217 +0000 UTC m=+312.913351052" watchObservedRunningTime="2026-03-11 09:01:54.248673932 +0000 UTC m=+312.914343747" Mar 11 09:01:54 crc kubenswrapper[4840]: I0311 09:01:54.269015 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Mar 11 09:01:54 crc kubenswrapper[4840]: I0311 09:01:54.312609 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Mar 11 09:01:54 crc kubenswrapper[4840]: I0311 09:01:54.336446 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Mar 11 09:01:54 crc kubenswrapper[4840]: I0311 09:01:54.343461 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Mar 11 09:01:54 crc kubenswrapper[4840]: I0311 09:01:54.520899 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 11 09:01:54 crc kubenswrapper[4840]: I0311 09:01:54.551412 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Mar 11 09:01:54 crc kubenswrapper[4840]: I0311 09:01:54.553455 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Mar 11 09:01:54 crc kubenswrapper[4840]: I0311 09:01:54.599213 4840 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Mar 11 09:01:54 crc kubenswrapper[4840]: I0311 09:01:54.679377 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7df4b549f7-dgdzz" Mar 11 09:01:54 crc kubenswrapper[4840]: I0311 09:01:54.686949 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6999b6d5db-jnf9q" Mar 11 09:01:54 crc kubenswrapper[4840]: I0311 09:01:54.698685 4840 generic.go:334] "Generic (PLEG): container finished" podID="8363ed47-d2dd-42f0-a4fd-22029a4bc1e4" containerID="9642d4cfb376c9d6d99720e1803625a4e0314489ef59ff0bd4c5c339110bd66e" exitCode=0 Mar 11 09:01:54 crc kubenswrapper[4840]: I0311 09:01:54.698758 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6999b6d5db-jnf9q" Mar 11 09:01:54 crc kubenswrapper[4840]: I0311 09:01:54.698817 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6999b6d5db-jnf9q" event={"ID":"8363ed47-d2dd-42f0-a4fd-22029a4bc1e4","Type":"ContainerDied","Data":"9642d4cfb376c9d6d99720e1803625a4e0314489ef59ff0bd4c5c339110bd66e"} Mar 11 09:01:54 crc kubenswrapper[4840]: I0311 09:01:54.698917 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6999b6d5db-jnf9q" event={"ID":"8363ed47-d2dd-42f0-a4fd-22029a4bc1e4","Type":"ContainerDied","Data":"be3aa0d883030d328062b0e186945b3b0ba9979d0eed155e95ceb801fa2f9ead"} Mar 11 09:01:54 crc kubenswrapper[4840]: I0311 09:01:54.698943 4840 scope.go:117] "RemoveContainer" containerID="9642d4cfb376c9d6d99720e1803625a4e0314489ef59ff0bd4c5c339110bd66e" Mar 11 09:01:54 crc kubenswrapper[4840]: I0311 09:01:54.703153 4840 generic.go:334] "Generic (PLEG): 
container finished" podID="8547661a-f824-4f88-9c4e-d44727e4430a" containerID="5388b1d9122b8fb92750b5e93c9fbf477091e36aab959880089f67c055bab8d4" exitCode=0 Mar 11 09:01:54 crc kubenswrapper[4840]: I0311 09:01:54.703200 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7df4b549f7-dgdzz" event={"ID":"8547661a-f824-4f88-9c4e-d44727e4430a","Type":"ContainerDied","Data":"5388b1d9122b8fb92750b5e93c9fbf477091e36aab959880089f67c055bab8d4"} Mar 11 09:01:54 crc kubenswrapper[4840]: I0311 09:01:54.703254 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7df4b549f7-dgdzz" event={"ID":"8547661a-f824-4f88-9c4e-d44727e4430a","Type":"ContainerDied","Data":"6cb96e8ee1e0f09446429ca6ba89f21bd2d6240de4ec5176ecb8722c3e67575d"} Mar 11 09:01:54 crc kubenswrapper[4840]: I0311 09:01:54.703323 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7df4b549f7-dgdzz" Mar 11 09:01:54 crc kubenswrapper[4840]: I0311 09:01:54.723160 4840 scope.go:117] "RemoveContainer" containerID="9642d4cfb376c9d6d99720e1803625a4e0314489ef59ff0bd4c5c339110bd66e" Mar 11 09:01:54 crc kubenswrapper[4840]: E0311 09:01:54.726879 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9642d4cfb376c9d6d99720e1803625a4e0314489ef59ff0bd4c5c339110bd66e\": container with ID starting with 9642d4cfb376c9d6d99720e1803625a4e0314489ef59ff0bd4c5c339110bd66e not found: ID does not exist" containerID="9642d4cfb376c9d6d99720e1803625a4e0314489ef59ff0bd4c5c339110bd66e" Mar 11 09:01:54 crc kubenswrapper[4840]: I0311 09:01:54.726935 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9642d4cfb376c9d6d99720e1803625a4e0314489ef59ff0bd4c5c339110bd66e"} err="failed to get container status 
\"9642d4cfb376c9d6d99720e1803625a4e0314489ef59ff0bd4c5c339110bd66e\": rpc error: code = NotFound desc = could not find container \"9642d4cfb376c9d6d99720e1803625a4e0314489ef59ff0bd4c5c339110bd66e\": container with ID starting with 9642d4cfb376c9d6d99720e1803625a4e0314489ef59ff0bd4c5c339110bd66e not found: ID does not exist" Mar 11 09:01:54 crc kubenswrapper[4840]: I0311 09:01:54.726960 4840 scope.go:117] "RemoveContainer" containerID="5388b1d9122b8fb92750b5e93c9fbf477091e36aab959880089f67c055bab8d4" Mar 11 09:01:54 crc kubenswrapper[4840]: I0311 09:01:54.743710 4840 scope.go:117] "RemoveContainer" containerID="5388b1d9122b8fb92750b5e93c9fbf477091e36aab959880089f67c055bab8d4" Mar 11 09:01:54 crc kubenswrapper[4840]: E0311 09:01:54.744275 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5388b1d9122b8fb92750b5e93c9fbf477091e36aab959880089f67c055bab8d4\": container with ID starting with 5388b1d9122b8fb92750b5e93c9fbf477091e36aab959880089f67c055bab8d4 not found: ID does not exist" containerID="5388b1d9122b8fb92750b5e93c9fbf477091e36aab959880089f67c055bab8d4" Mar 11 09:01:54 crc kubenswrapper[4840]: I0311 09:01:54.744325 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5388b1d9122b8fb92750b5e93c9fbf477091e36aab959880089f67c055bab8d4"} err="failed to get container status \"5388b1d9122b8fb92750b5e93c9fbf477091e36aab959880089f67c055bab8d4\": rpc error: code = NotFound desc = could not find container \"5388b1d9122b8fb92750b5e93c9fbf477091e36aab959880089f67c055bab8d4\": container with ID starting with 5388b1d9122b8fb92750b5e93c9fbf477091e36aab959880089f67c055bab8d4 not found: ID does not exist" Mar 11 09:01:54 crc kubenswrapper[4840]: I0311 09:01:54.751381 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Mar 11 09:01:54 crc kubenswrapper[4840]: I0311 
09:01:54.774101 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qwtjb\" (UniqueName: \"kubernetes.io/projected/8547661a-f824-4f88-9c4e-d44727e4430a-kube-api-access-qwtjb\") pod \"8547661a-f824-4f88-9c4e-d44727e4430a\" (UID: \"8547661a-f824-4f88-9c4e-d44727e4430a\") " Mar 11 09:01:54 crc kubenswrapper[4840]: I0311 09:01:54.774166 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8547661a-f824-4f88-9c4e-d44727e4430a-client-ca\") pod \"8547661a-f824-4f88-9c4e-d44727e4430a\" (UID: \"8547661a-f824-4f88-9c4e-d44727e4430a\") " Mar 11 09:01:54 crc kubenswrapper[4840]: I0311 09:01:54.774202 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r899n\" (UniqueName: \"kubernetes.io/projected/8363ed47-d2dd-42f0-a4fd-22029a4bc1e4-kube-api-access-r899n\") pod \"8363ed47-d2dd-42f0-a4fd-22029a4bc1e4\" (UID: \"8363ed47-d2dd-42f0-a4fd-22029a4bc1e4\") " Mar 11 09:01:54 crc kubenswrapper[4840]: I0311 09:01:54.774261 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8363ed47-d2dd-42f0-a4fd-22029a4bc1e4-config\") pod \"8363ed47-d2dd-42f0-a4fd-22029a4bc1e4\" (UID: \"8363ed47-d2dd-42f0-a4fd-22029a4bc1e4\") " Mar 11 09:01:54 crc kubenswrapper[4840]: I0311 09:01:54.774320 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8363ed47-d2dd-42f0-a4fd-22029a4bc1e4-serving-cert\") pod \"8363ed47-d2dd-42f0-a4fd-22029a4bc1e4\" (UID: \"8363ed47-d2dd-42f0-a4fd-22029a4bc1e4\") " Mar 11 09:01:54 crc kubenswrapper[4840]: I0311 09:01:54.774387 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8363ed47-d2dd-42f0-a4fd-22029a4bc1e4-client-ca\") pod 
\"8363ed47-d2dd-42f0-a4fd-22029a4bc1e4\" (UID: \"8363ed47-d2dd-42f0-a4fd-22029a4bc1e4\") " Mar 11 09:01:54 crc kubenswrapper[4840]: I0311 09:01:54.774443 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8547661a-f824-4f88-9c4e-d44727e4430a-serving-cert\") pod \"8547661a-f824-4f88-9c4e-d44727e4430a\" (UID: \"8547661a-f824-4f88-9c4e-d44727e4430a\") " Mar 11 09:01:54 crc kubenswrapper[4840]: I0311 09:01:54.775238 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8547661a-f824-4f88-9c4e-d44727e4430a-client-ca" (OuterVolumeSpecName: "client-ca") pod "8547661a-f824-4f88-9c4e-d44727e4430a" (UID: "8547661a-f824-4f88-9c4e-d44727e4430a"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:01:54 crc kubenswrapper[4840]: I0311 09:01:54.775256 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8363ed47-d2dd-42f0-a4fd-22029a4bc1e4-config" (OuterVolumeSpecName: "config") pod "8363ed47-d2dd-42f0-a4fd-22029a4bc1e4" (UID: "8363ed47-d2dd-42f0-a4fd-22029a4bc1e4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:01:54 crc kubenswrapper[4840]: I0311 09:01:54.775377 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8363ed47-d2dd-42f0-a4fd-22029a4bc1e4-client-ca" (OuterVolumeSpecName: "client-ca") pod "8363ed47-d2dd-42f0-a4fd-22029a4bc1e4" (UID: "8363ed47-d2dd-42f0-a4fd-22029a4bc1e4"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:01:54 crc kubenswrapper[4840]: I0311 09:01:54.775540 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8547661a-f824-4f88-9c4e-d44727e4430a-proxy-ca-bundles\") pod \"8547661a-f824-4f88-9c4e-d44727e4430a\" (UID: \"8547661a-f824-4f88-9c4e-d44727e4430a\") " Mar 11 09:01:54 crc kubenswrapper[4840]: I0311 09:01:54.776582 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8547661a-f824-4f88-9c4e-d44727e4430a-config" (OuterVolumeSpecName: "config") pod "8547661a-f824-4f88-9c4e-d44727e4430a" (UID: "8547661a-f824-4f88-9c4e-d44727e4430a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:01:54 crc kubenswrapper[4840]: I0311 09:01:54.776596 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8547661a-f824-4f88-9c4e-d44727e4430a-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "8547661a-f824-4f88-9c4e-d44727e4430a" (UID: "8547661a-f824-4f88-9c4e-d44727e4430a"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:01:54 crc kubenswrapper[4840]: I0311 09:01:54.775580 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8547661a-f824-4f88-9c4e-d44727e4430a-config\") pod \"8547661a-f824-4f88-9c4e-d44727e4430a\" (UID: \"8547661a-f824-4f88-9c4e-d44727e4430a\") " Mar 11 09:01:54 crc kubenswrapper[4840]: I0311 09:01:54.776998 4840 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8547661a-f824-4f88-9c4e-d44727e4430a-client-ca\") on node \"crc\" DevicePath \"\"" Mar 11 09:01:54 crc kubenswrapper[4840]: I0311 09:01:54.777039 4840 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8363ed47-d2dd-42f0-a4fd-22029a4bc1e4-config\") on node \"crc\" DevicePath \"\"" Mar 11 09:01:54 crc kubenswrapper[4840]: I0311 09:01:54.777054 4840 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8363ed47-d2dd-42f0-a4fd-22029a4bc1e4-client-ca\") on node \"crc\" DevicePath \"\"" Mar 11 09:01:54 crc kubenswrapper[4840]: I0311 09:01:54.777067 4840 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8547661a-f824-4f88-9c4e-d44727e4430a-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 11 09:01:54 crc kubenswrapper[4840]: I0311 09:01:54.777082 4840 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8547661a-f824-4f88-9c4e-d44727e4430a-config\") on node \"crc\" DevicePath \"\"" Mar 11 09:01:54 crc kubenswrapper[4840]: I0311 09:01:54.781240 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8363ed47-d2dd-42f0-a4fd-22029a4bc1e4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8363ed47-d2dd-42f0-a4fd-22029a4bc1e4" (UID: 
"8363ed47-d2dd-42f0-a4fd-22029a4bc1e4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:01:54 crc kubenswrapper[4840]: I0311 09:01:54.781299 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8363ed47-d2dd-42f0-a4fd-22029a4bc1e4-kube-api-access-r899n" (OuterVolumeSpecName: "kube-api-access-r899n") pod "8363ed47-d2dd-42f0-a4fd-22029a4bc1e4" (UID: "8363ed47-d2dd-42f0-a4fd-22029a4bc1e4"). InnerVolumeSpecName "kube-api-access-r899n". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:01:54 crc kubenswrapper[4840]: I0311 09:01:54.781350 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8547661a-f824-4f88-9c4e-d44727e4430a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8547661a-f824-4f88-9c4e-d44727e4430a" (UID: "8547661a-f824-4f88-9c4e-d44727e4430a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:01:54 crc kubenswrapper[4840]: I0311 09:01:54.782762 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8547661a-f824-4f88-9c4e-d44727e4430a-kube-api-access-qwtjb" (OuterVolumeSpecName: "kube-api-access-qwtjb") pod "8547661a-f824-4f88-9c4e-d44727e4430a" (UID: "8547661a-f824-4f88-9c4e-d44727e4430a"). InnerVolumeSpecName "kube-api-access-qwtjb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:01:54 crc kubenswrapper[4840]: I0311 09:01:54.803475 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Mar 11 09:01:54 crc kubenswrapper[4840]: I0311 09:01:54.854033 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Mar 11 09:01:54 crc kubenswrapper[4840]: I0311 09:01:54.878232 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qwtjb\" (UniqueName: \"kubernetes.io/projected/8547661a-f824-4f88-9c4e-d44727e4430a-kube-api-access-qwtjb\") on node \"crc\" DevicePath \"\"" Mar 11 09:01:54 crc kubenswrapper[4840]: I0311 09:01:54.878259 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r899n\" (UniqueName: \"kubernetes.io/projected/8363ed47-d2dd-42f0-a4fd-22029a4bc1e4-kube-api-access-r899n\") on node \"crc\" DevicePath \"\"" Mar 11 09:01:54 crc kubenswrapper[4840]: I0311 09:01:54.878270 4840 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8363ed47-d2dd-42f0-a4fd-22029a4bc1e4-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 11 09:01:54 crc kubenswrapper[4840]: I0311 09:01:54.878282 4840 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8547661a-f824-4f88-9c4e-d44727e4430a-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 11 09:01:55 crc kubenswrapper[4840]: I0311 09:01:55.000597 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Mar 11 09:01:55 crc kubenswrapper[4840]: I0311 09:01:55.033565 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6999b6d5db-jnf9q"] Mar 11 09:01:55 crc kubenswrapper[4840]: I0311 09:01:55.044076 4840 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6999b6d5db-jnf9q"] Mar 11 09:01:55 crc kubenswrapper[4840]: I0311 09:01:55.049824 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7df4b549f7-dgdzz"] Mar 11 09:01:55 crc kubenswrapper[4840]: I0311 09:01:55.052689 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7df4b549f7-dgdzz"] Mar 11 09:01:55 crc kubenswrapper[4840]: I0311 09:01:55.091411 4840 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Mar 11 09:01:55 crc kubenswrapper[4840]: I0311 09:01:55.634340 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57848b8fc5-rdf58"] Mar 11 09:01:55 crc kubenswrapper[4840]: E0311 09:01:55.634754 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8547661a-f824-4f88-9c4e-d44727e4430a" containerName="controller-manager" Mar 11 09:01:55 crc kubenswrapper[4840]: I0311 09:01:55.634828 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="8547661a-f824-4f88-9c4e-d44727e4430a" containerName="controller-manager" Mar 11 09:01:55 crc kubenswrapper[4840]: E0311 09:01:55.634865 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2771978-d705-4ef2-a98b-5b980e717c99" containerName="installer" Mar 11 09:01:55 crc kubenswrapper[4840]: I0311 09:01:55.634880 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2771978-d705-4ef2-a98b-5b980e717c99" containerName="installer" Mar 11 09:01:55 crc kubenswrapper[4840]: E0311 09:01:55.634901 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8363ed47-d2dd-42f0-a4fd-22029a4bc1e4" containerName="route-controller-manager" Mar 11 09:01:55 crc kubenswrapper[4840]: I0311 09:01:55.634915 4840 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="8363ed47-d2dd-42f0-a4fd-22029a4bc1e4" containerName="route-controller-manager" Mar 11 09:01:55 crc kubenswrapper[4840]: I0311 09:01:55.635096 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2771978-d705-4ef2-a98b-5b980e717c99" containerName="installer" Mar 11 09:01:55 crc kubenswrapper[4840]: I0311 09:01:55.635126 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="8547661a-f824-4f88-9c4e-d44727e4430a" containerName="controller-manager" Mar 11 09:01:55 crc kubenswrapper[4840]: I0311 09:01:55.635144 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="8363ed47-d2dd-42f0-a4fd-22029a4bc1e4" containerName="route-controller-manager" Mar 11 09:01:55 crc kubenswrapper[4840]: I0311 09:01:55.635819 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-57848b8fc5-rdf58" Mar 11 09:01:55 crc kubenswrapper[4840]: I0311 09:01:55.639187 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 11 09:01:55 crc kubenswrapper[4840]: I0311 09:01:55.639388 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 11 09:01:55 crc kubenswrapper[4840]: I0311 09:01:55.639500 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 11 09:01:55 crc kubenswrapper[4840]: I0311 09:01:55.639879 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 11 09:01:55 crc kubenswrapper[4840]: I0311 09:01:55.639974 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 11 09:01:55 crc kubenswrapper[4840]: I0311 09:01:55.642282 4840 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 11 09:01:55 crc kubenswrapper[4840]: I0311 09:01:55.650539 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Mar 11 09:01:55 crc kubenswrapper[4840]: I0311 09:01:55.652448 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57848b8fc5-rdf58"] Mar 11 09:01:55 crc kubenswrapper[4840]: I0311 09:01:55.757870 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Mar 11 09:01:55 crc kubenswrapper[4840]: I0311 09:01:55.789875 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7343cb44-cc30-46b3-b1a9-102dcff7c1bd-client-ca\") pod \"route-controller-manager-57848b8fc5-rdf58\" (UID: \"7343cb44-cc30-46b3-b1a9-102dcff7c1bd\") " pod="openshift-route-controller-manager/route-controller-manager-57848b8fc5-rdf58" Mar 11 09:01:55 crc kubenswrapper[4840]: I0311 09:01:55.790162 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7343cb44-cc30-46b3-b1a9-102dcff7c1bd-serving-cert\") pod \"route-controller-manager-57848b8fc5-rdf58\" (UID: \"7343cb44-cc30-46b3-b1a9-102dcff7c1bd\") " pod="openshift-route-controller-manager/route-controller-manager-57848b8fc5-rdf58" Mar 11 09:01:55 crc kubenswrapper[4840]: I0311 09:01:55.790317 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nvm4\" (UniqueName: \"kubernetes.io/projected/7343cb44-cc30-46b3-b1a9-102dcff7c1bd-kube-api-access-4nvm4\") pod \"route-controller-manager-57848b8fc5-rdf58\" (UID: \"7343cb44-cc30-46b3-b1a9-102dcff7c1bd\") " 
pod="openshift-route-controller-manager/route-controller-manager-57848b8fc5-rdf58" Mar 11 09:01:55 crc kubenswrapper[4840]: I0311 09:01:55.790392 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7343cb44-cc30-46b3-b1a9-102dcff7c1bd-config\") pod \"route-controller-manager-57848b8fc5-rdf58\" (UID: \"7343cb44-cc30-46b3-b1a9-102dcff7c1bd\") " pod="openshift-route-controller-manager/route-controller-manager-57848b8fc5-rdf58" Mar 11 09:01:55 crc kubenswrapper[4840]: I0311 09:01:55.830147 4840 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Mar 11 09:01:55 crc kubenswrapper[4840]: I0311 09:01:55.830664 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Mar 11 09:01:55 crc kubenswrapper[4840]: I0311 09:01:55.892402 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7343cb44-cc30-46b3-b1a9-102dcff7c1bd-client-ca\") pod \"route-controller-manager-57848b8fc5-rdf58\" (UID: \"7343cb44-cc30-46b3-b1a9-102dcff7c1bd\") " pod="openshift-route-controller-manager/route-controller-manager-57848b8fc5-rdf58" Mar 11 09:01:55 crc kubenswrapper[4840]: I0311 09:01:55.892527 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7343cb44-cc30-46b3-b1a9-102dcff7c1bd-serving-cert\") pod \"route-controller-manager-57848b8fc5-rdf58\" (UID: \"7343cb44-cc30-46b3-b1a9-102dcff7c1bd\") " pod="openshift-route-controller-manager/route-controller-manager-57848b8fc5-rdf58" Mar 11 09:01:55 crc kubenswrapper[4840]: I0311 09:01:55.892594 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nvm4\" (UniqueName: \"kubernetes.io/projected/7343cb44-cc30-46b3-b1a9-102dcff7c1bd-kube-api-access-4nvm4\") pod 
\"route-controller-manager-57848b8fc5-rdf58\" (UID: \"7343cb44-cc30-46b3-b1a9-102dcff7c1bd\") " pod="openshift-route-controller-manager/route-controller-manager-57848b8fc5-rdf58" Mar 11 09:01:55 crc kubenswrapper[4840]: I0311 09:01:55.892643 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7343cb44-cc30-46b3-b1a9-102dcff7c1bd-config\") pod \"route-controller-manager-57848b8fc5-rdf58\" (UID: \"7343cb44-cc30-46b3-b1a9-102dcff7c1bd\") " pod="openshift-route-controller-manager/route-controller-manager-57848b8fc5-rdf58" Mar 11 09:01:55 crc kubenswrapper[4840]: I0311 09:01:55.894280 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7343cb44-cc30-46b3-b1a9-102dcff7c1bd-client-ca\") pod \"route-controller-manager-57848b8fc5-rdf58\" (UID: \"7343cb44-cc30-46b3-b1a9-102dcff7c1bd\") " pod="openshift-route-controller-manager/route-controller-manager-57848b8fc5-rdf58" Mar 11 09:01:55 crc kubenswrapper[4840]: I0311 09:01:55.894380 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7343cb44-cc30-46b3-b1a9-102dcff7c1bd-config\") pod \"route-controller-manager-57848b8fc5-rdf58\" (UID: \"7343cb44-cc30-46b3-b1a9-102dcff7c1bd\") " pod="openshift-route-controller-manager/route-controller-manager-57848b8fc5-rdf58" Mar 11 09:01:55 crc kubenswrapper[4840]: I0311 09:01:55.903489 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7343cb44-cc30-46b3-b1a9-102dcff7c1bd-serving-cert\") pod \"route-controller-manager-57848b8fc5-rdf58\" (UID: \"7343cb44-cc30-46b3-b1a9-102dcff7c1bd\") " pod="openshift-route-controller-manager/route-controller-manager-57848b8fc5-rdf58" Mar 11 09:01:55 crc kubenswrapper[4840]: I0311 09:01:55.912711 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-4nvm4\" (UniqueName: \"kubernetes.io/projected/7343cb44-cc30-46b3-b1a9-102dcff7c1bd-kube-api-access-4nvm4\") pod \"route-controller-manager-57848b8fc5-rdf58\" (UID: \"7343cb44-cc30-46b3-b1a9-102dcff7c1bd\") " pod="openshift-route-controller-manager/route-controller-manager-57848b8fc5-rdf58" Mar 11 09:01:55 crc kubenswrapper[4840]: I0311 09:01:55.939668 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Mar 11 09:01:55 crc kubenswrapper[4840]: I0311 09:01:55.969375 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-57848b8fc5-rdf58" Mar 11 09:01:55 crc kubenswrapper[4840]: I0311 09:01:55.995711 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Mar 11 09:01:56 crc kubenswrapper[4840]: I0311 09:01:56.080340 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8363ed47-d2dd-42f0-a4fd-22029a4bc1e4" path="/var/lib/kubelet/pods/8363ed47-d2dd-42f0-a4fd-22029a4bc1e4/volumes" Mar 11 09:01:56 crc kubenswrapper[4840]: I0311 09:01:56.081025 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8547661a-f824-4f88-9c4e-d44727e4430a" path="/var/lib/kubelet/pods/8547661a-f824-4f88-9c4e-d44727e4430a/volumes" Mar 11 09:01:56 crc kubenswrapper[4840]: I0311 09:01:56.240844 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Mar 11 09:01:56 crc kubenswrapper[4840]: I0311 09:01:56.242069 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Mar 11 09:01:56 crc kubenswrapper[4840]: I0311 09:01:56.243277 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Mar 11 09:01:56 crc 
kubenswrapper[4840]: I0311 09:01:56.285998 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Mar 11 09:01:56 crc kubenswrapper[4840]: I0311 09:01:56.331592 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Mar 11 09:01:56 crc kubenswrapper[4840]: I0311 09:01:56.399588 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57848b8fc5-rdf58"] Mar 11 09:01:56 crc kubenswrapper[4840]: I0311 09:01:56.426436 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Mar 11 09:01:56 crc kubenswrapper[4840]: I0311 09:01:56.440307 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Mar 11 09:01:56 crc kubenswrapper[4840]: I0311 09:01:56.444529 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Mar 11 09:01:56 crc kubenswrapper[4840]: I0311 09:01:56.563766 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Mar 11 09:01:56 crc kubenswrapper[4840]: I0311 09:01:56.564911 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Mar 11 09:01:56 crc kubenswrapper[4840]: I0311 09:01:56.724421 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-57848b8fc5-rdf58" event={"ID":"7343cb44-cc30-46b3-b1a9-102dcff7c1bd","Type":"ContainerStarted","Data":"fd9e35c588bbd6a9b61582a3292b9e340da9446c1ad9239078cf2a0e386ea4f7"} Mar 11 09:01:56 crc kubenswrapper[4840]: I0311 09:01:56.724495 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-57848b8fc5-rdf58" event={"ID":"7343cb44-cc30-46b3-b1a9-102dcff7c1bd","Type":"ContainerStarted","Data":"c13225cfd451b70cd34f8fe906e8f7bc8c1271d3ca823f6eb17c3e4e5bce4a39"} Mar 11 09:01:56 crc kubenswrapper[4840]: I0311 09:01:56.724764 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-57848b8fc5-rdf58" Mar 11 09:01:56 crc kubenswrapper[4840]: I0311 09:01:56.748143 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-57848b8fc5-rdf58" podStartSLOduration=4.748110003 podStartE2EDuration="4.748110003s" podCreationTimestamp="2026-03-11 09:01:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:01:56.744691297 +0000 UTC m=+315.410361122" watchObservedRunningTime="2026-03-11 09:01:56.748110003 +0000 UTC m=+315.413779858" Mar 11 09:01:56 crc kubenswrapper[4840]: I0311 09:01:56.762740 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Mar 11 09:01:56 crc kubenswrapper[4840]: I0311 09:01:56.802811 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Mar 11 09:01:56 crc kubenswrapper[4840]: I0311 09:01:56.889695 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Mar 11 09:01:56 crc kubenswrapper[4840]: I0311 09:01:56.944346 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-57848b8fc5-rdf58" Mar 11 09:01:57 crc kubenswrapper[4840]: I0311 09:01:57.107654 4840 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-controller-manager/controller-manager-75c49644d7-pwrsq"] Mar 11 09:01:57 crc kubenswrapper[4840]: I0311 09:01:57.108535 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-75c49644d7-pwrsq" Mar 11 09:01:57 crc kubenswrapper[4840]: I0311 09:01:57.111827 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 11 09:01:57 crc kubenswrapper[4840]: I0311 09:01:57.112025 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 11 09:01:57 crc kubenswrapper[4840]: I0311 09:01:57.112828 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Mar 11 09:01:57 crc kubenswrapper[4840]: I0311 09:01:57.112839 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 11 09:01:57 crc kubenswrapper[4840]: I0311 09:01:57.112841 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 11 09:01:57 crc kubenswrapper[4840]: I0311 09:01:57.112913 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 11 09:01:57 crc kubenswrapper[4840]: I0311 09:01:57.121145 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 11 09:01:57 crc kubenswrapper[4840]: I0311 09:01:57.134165 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-75c49644d7-pwrsq"] Mar 11 09:01:57 crc kubenswrapper[4840]: I0311 09:01:57.136798 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Mar 11 09:01:57 crc kubenswrapper[4840]: I0311 
09:01:57.221971 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2b8a15b6-e285-481e-a580-17a4b6f77fb6-client-ca\") pod \"controller-manager-75c49644d7-pwrsq\" (UID: \"2b8a15b6-e285-481e-a580-17a4b6f77fb6\") " pod="openshift-controller-manager/controller-manager-75c49644d7-pwrsq" Mar 11 09:01:57 crc kubenswrapper[4840]: I0311 09:01:57.222104 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rcvf\" (UniqueName: \"kubernetes.io/projected/2b8a15b6-e285-481e-a580-17a4b6f77fb6-kube-api-access-6rcvf\") pod \"controller-manager-75c49644d7-pwrsq\" (UID: \"2b8a15b6-e285-481e-a580-17a4b6f77fb6\") " pod="openshift-controller-manager/controller-manager-75c49644d7-pwrsq" Mar 11 09:01:57 crc kubenswrapper[4840]: I0311 09:01:57.222209 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b8a15b6-e285-481e-a580-17a4b6f77fb6-config\") pod \"controller-manager-75c49644d7-pwrsq\" (UID: \"2b8a15b6-e285-481e-a580-17a4b6f77fb6\") " pod="openshift-controller-manager/controller-manager-75c49644d7-pwrsq" Mar 11 09:01:57 crc kubenswrapper[4840]: I0311 09:01:57.222289 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2b8a15b6-e285-481e-a580-17a4b6f77fb6-proxy-ca-bundles\") pod \"controller-manager-75c49644d7-pwrsq\" (UID: \"2b8a15b6-e285-481e-a580-17a4b6f77fb6\") " pod="openshift-controller-manager/controller-manager-75c49644d7-pwrsq" Mar 11 09:01:57 crc kubenswrapper[4840]: I0311 09:01:57.222336 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b8a15b6-e285-481e-a580-17a4b6f77fb6-serving-cert\") pod 
\"controller-manager-75c49644d7-pwrsq\" (UID: \"2b8a15b6-e285-481e-a580-17a4b6f77fb6\") " pod="openshift-controller-manager/controller-manager-75c49644d7-pwrsq" Mar 11 09:01:57 crc kubenswrapper[4840]: I0311 09:01:57.300845 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Mar 11 09:01:57 crc kubenswrapper[4840]: I0311 09:01:57.307319 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Mar 11 09:01:57 crc kubenswrapper[4840]: I0311 09:01:57.323459 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b8a15b6-e285-481e-a580-17a4b6f77fb6-serving-cert\") pod \"controller-manager-75c49644d7-pwrsq\" (UID: \"2b8a15b6-e285-481e-a580-17a4b6f77fb6\") " pod="openshift-controller-manager/controller-manager-75c49644d7-pwrsq" Mar 11 09:01:57 crc kubenswrapper[4840]: I0311 09:01:57.323731 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2b8a15b6-e285-481e-a580-17a4b6f77fb6-client-ca\") pod \"controller-manager-75c49644d7-pwrsq\" (UID: \"2b8a15b6-e285-481e-a580-17a4b6f77fb6\") " pod="openshift-controller-manager/controller-manager-75c49644d7-pwrsq" Mar 11 09:01:57 crc kubenswrapper[4840]: I0311 09:01:57.323788 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rcvf\" (UniqueName: \"kubernetes.io/projected/2b8a15b6-e285-481e-a580-17a4b6f77fb6-kube-api-access-6rcvf\") pod \"controller-manager-75c49644d7-pwrsq\" (UID: \"2b8a15b6-e285-481e-a580-17a4b6f77fb6\") " pod="openshift-controller-manager/controller-manager-75c49644d7-pwrsq" Mar 11 09:01:57 crc kubenswrapper[4840]: I0311 09:01:57.323855 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/2b8a15b6-e285-481e-a580-17a4b6f77fb6-config\") pod \"controller-manager-75c49644d7-pwrsq\" (UID: \"2b8a15b6-e285-481e-a580-17a4b6f77fb6\") " pod="openshift-controller-manager/controller-manager-75c49644d7-pwrsq" Mar 11 09:01:57 crc kubenswrapper[4840]: I0311 09:01:57.323921 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2b8a15b6-e285-481e-a580-17a4b6f77fb6-proxy-ca-bundles\") pod \"controller-manager-75c49644d7-pwrsq\" (UID: \"2b8a15b6-e285-481e-a580-17a4b6f77fb6\") " pod="openshift-controller-manager/controller-manager-75c49644d7-pwrsq" Mar 11 09:01:57 crc kubenswrapper[4840]: I0311 09:01:57.326102 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2b8a15b6-e285-481e-a580-17a4b6f77fb6-client-ca\") pod \"controller-manager-75c49644d7-pwrsq\" (UID: \"2b8a15b6-e285-481e-a580-17a4b6f77fb6\") " pod="openshift-controller-manager/controller-manager-75c49644d7-pwrsq" Mar 11 09:01:57 crc kubenswrapper[4840]: I0311 09:01:57.328998 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2b8a15b6-e285-481e-a580-17a4b6f77fb6-proxy-ca-bundles\") pod \"controller-manager-75c49644d7-pwrsq\" (UID: \"2b8a15b6-e285-481e-a580-17a4b6f77fb6\") " pod="openshift-controller-manager/controller-manager-75c49644d7-pwrsq" Mar 11 09:01:57 crc kubenswrapper[4840]: I0311 09:01:57.329849 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b8a15b6-e285-481e-a580-17a4b6f77fb6-config\") pod \"controller-manager-75c49644d7-pwrsq\" (UID: \"2b8a15b6-e285-481e-a580-17a4b6f77fb6\") " pod="openshift-controller-manager/controller-manager-75c49644d7-pwrsq" Mar 11 09:01:57 crc kubenswrapper[4840]: I0311 09:01:57.348619 4840 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Mar 11 09:01:57 crc kubenswrapper[4840]: I0311 09:01:57.351714 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b8a15b6-e285-481e-a580-17a4b6f77fb6-serving-cert\") pod \"controller-manager-75c49644d7-pwrsq\" (UID: \"2b8a15b6-e285-481e-a580-17a4b6f77fb6\") " pod="openshift-controller-manager/controller-manager-75c49644d7-pwrsq" Mar 11 09:01:57 crc kubenswrapper[4840]: I0311 09:01:57.357338 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Mar 11 09:01:57 crc kubenswrapper[4840]: I0311 09:01:57.357866 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rcvf\" (UniqueName: \"kubernetes.io/projected/2b8a15b6-e285-481e-a580-17a4b6f77fb6-kube-api-access-6rcvf\") pod \"controller-manager-75c49644d7-pwrsq\" (UID: \"2b8a15b6-e285-481e-a580-17a4b6f77fb6\") " pod="openshift-controller-manager/controller-manager-75c49644d7-pwrsq" Mar 11 09:01:57 crc kubenswrapper[4840]: I0311 09:01:57.389572 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Mar 11 09:01:57 crc kubenswrapper[4840]: I0311 09:01:57.423858 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-75c49644d7-pwrsq" Mar 11 09:01:57 crc kubenswrapper[4840]: I0311 09:01:57.635531 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Mar 11 09:01:57 crc kubenswrapper[4840]: I0311 09:01:57.640219 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-75c49644d7-pwrsq"] Mar 11 09:01:57 crc kubenswrapper[4840]: I0311 09:01:57.725713 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Mar 11 09:01:57 crc kubenswrapper[4840]: I0311 09:01:57.735010 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-75c49644d7-pwrsq" event={"ID":"2b8a15b6-e285-481e-a580-17a4b6f77fb6","Type":"ContainerStarted","Data":"f4001962d8e1c64111d55b9a706b7fb6eb954e7558d96efe11266d550ca19b8a"} Mar 11 09:01:57 crc kubenswrapper[4840]: I0311 09:01:57.847633 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Mar 11 09:01:57 crc kubenswrapper[4840]: I0311 09:01:57.934608 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Mar 11 09:01:58 crc kubenswrapper[4840]: I0311 09:01:58.040979 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Mar 11 09:01:58 crc kubenswrapper[4840]: I0311 09:01:58.240058 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Mar 11 09:01:58 crc kubenswrapper[4840]: I0311 09:01:58.329534 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Mar 11 09:01:58 crc kubenswrapper[4840]: I0311 09:01:58.402535 4840 reflector.go:368] 
Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Mar 11 09:01:58 crc kubenswrapper[4840]: I0311 09:01:58.566005 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Mar 11 09:01:58 crc kubenswrapper[4840]: I0311 09:01:58.647479 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Mar 11 09:01:58 crc kubenswrapper[4840]: I0311 09:01:58.667828 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Mar 11 09:01:58 crc kubenswrapper[4840]: I0311 09:01:58.743389 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-75c49644d7-pwrsq" event={"ID":"2b8a15b6-e285-481e-a580-17a4b6f77fb6","Type":"ContainerStarted","Data":"47cbdbcc2607d94eba7a73198fbee751082218c6ce0e0c8f81c73631b3440a20"} Mar 11 09:01:58 crc kubenswrapper[4840]: I0311 09:01:58.762933 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-75c49644d7-pwrsq" podStartSLOduration=6.76291057 podStartE2EDuration="6.76291057s" podCreationTimestamp="2026-03-11 09:01:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:01:58.761665539 +0000 UTC m=+317.427335344" watchObservedRunningTime="2026-03-11 09:01:58.76291057 +0000 UTC m=+317.428580385" Mar 11 09:01:58 crc kubenswrapper[4840]: I0311 09:01:58.927379 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Mar 11 09:01:58 crc kubenswrapper[4840]: I0311 09:01:58.949513 4840 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Mar 11 09:01:58 crc kubenswrapper[4840]: I0311 09:01:58.998121 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Mar 11 09:01:59 crc kubenswrapper[4840]: I0311 09:01:59.125184 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Mar 11 09:01:59 crc kubenswrapper[4840]: I0311 09:01:59.421377 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Mar 11 09:01:59 crc kubenswrapper[4840]: I0311 09:01:59.677281 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Mar 11 09:01:59 crc kubenswrapper[4840]: I0311 09:01:59.750145 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-75c49644d7-pwrsq" Mar 11 09:01:59 crc kubenswrapper[4840]: I0311 09:01:59.756220 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-75c49644d7-pwrsq" Mar 11 09:02:00 crc kubenswrapper[4840]: I0311 09:02:00.166688 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29553662-vnks9"] Mar 11 09:02:00 crc kubenswrapper[4840]: I0311 09:02:00.168069 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553662-vnks9" Mar 11 09:02:00 crc kubenswrapper[4840]: I0311 09:02:00.170827 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 11 09:02:00 crc kubenswrapper[4840]: I0311 09:02:00.171054 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 11 09:02:00 crc kubenswrapper[4840]: I0311 09:02:00.171704 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-q6lwc" Mar 11 09:02:00 crc kubenswrapper[4840]: I0311 09:02:00.175888 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553662-vnks9"] Mar 11 09:02:00 crc kubenswrapper[4840]: I0311 09:02:00.365794 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dx5wv\" (UniqueName: \"kubernetes.io/projected/fe10c40d-babc-4f67-831a-efc94c086e64-kube-api-access-dx5wv\") pod \"auto-csr-approver-29553662-vnks9\" (UID: \"fe10c40d-babc-4f67-831a-efc94c086e64\") " pod="openshift-infra/auto-csr-approver-29553662-vnks9" Mar 11 09:02:00 crc kubenswrapper[4840]: I0311 09:02:00.467448 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dx5wv\" (UniqueName: \"kubernetes.io/projected/fe10c40d-babc-4f67-831a-efc94c086e64-kube-api-access-dx5wv\") pod \"auto-csr-approver-29553662-vnks9\" (UID: \"fe10c40d-babc-4f67-831a-efc94c086e64\") " pod="openshift-infra/auto-csr-approver-29553662-vnks9" Mar 11 09:02:00 crc kubenswrapper[4840]: I0311 09:02:00.490532 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dx5wv\" (UniqueName: \"kubernetes.io/projected/fe10c40d-babc-4f67-831a-efc94c086e64-kube-api-access-dx5wv\") pod \"auto-csr-approver-29553662-vnks9\" (UID: \"fe10c40d-babc-4f67-831a-efc94c086e64\") " 
pod="openshift-infra/auto-csr-approver-29553662-vnks9" Mar 11 09:02:00 crc kubenswrapper[4840]: I0311 09:02:00.762305 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Mar 11 09:02:00 crc kubenswrapper[4840]: I0311 09:02:00.784179 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553662-vnks9" Mar 11 09:02:00 crc kubenswrapper[4840]: I0311 09:02:00.896399 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Mar 11 09:02:01 crc kubenswrapper[4840]: I0311 09:02:01.267290 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553662-vnks9"] Mar 11 09:02:01 crc kubenswrapper[4840]: W0311 09:02:01.272027 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfe10c40d_babc_4f67_831a_efc94c086e64.slice/crio-3690752ebb894d575a7952eed65963afc93c3b40e7a002f3aadacba71f00d6e9 WatchSource:0}: Error finding container 3690752ebb894d575a7952eed65963afc93c3b40e7a002f3aadacba71f00d6e9: Status 404 returned error can't find the container with id 3690752ebb894d575a7952eed65963afc93c3b40e7a002f3aadacba71f00d6e9 Mar 11 09:02:01 crc kubenswrapper[4840]: I0311 09:02:01.763204 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553662-vnks9" event={"ID":"fe10c40d-babc-4f67-831a-efc94c086e64","Type":"ContainerStarted","Data":"3690752ebb894d575a7952eed65963afc93c3b40e7a002f3aadacba71f00d6e9"} Mar 11 09:02:03 crc kubenswrapper[4840]: I0311 09:02:03.778613 4840 generic.go:334] "Generic (PLEG): container finished" podID="fe10c40d-babc-4f67-831a-efc94c086e64" containerID="e017eab064c5fac7fdf9d503edd02db88f5af22a912f0030c82136a9cdfe0713" exitCode=0 Mar 11 09:02:03 crc kubenswrapper[4840]: I0311 09:02:03.778732 
4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553662-vnks9" event={"ID":"fe10c40d-babc-4f67-831a-efc94c086e64","Type":"ContainerDied","Data":"e017eab064c5fac7fdf9d503edd02db88f5af22a912f0030c82136a9cdfe0713"} Mar 11 09:02:04 crc kubenswrapper[4840]: I0311 09:02:04.085049 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 11 09:02:05 crc kubenswrapper[4840]: I0311 09:02:05.175639 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553662-vnks9" Mar 11 09:02:05 crc kubenswrapper[4840]: I0311 09:02:05.307599 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dx5wv\" (UniqueName: \"kubernetes.io/projected/fe10c40d-babc-4f67-831a-efc94c086e64-kube-api-access-dx5wv\") pod \"fe10c40d-babc-4f67-831a-efc94c086e64\" (UID: \"fe10c40d-babc-4f67-831a-efc94c086e64\") " Mar 11 09:02:05 crc kubenswrapper[4840]: I0311 09:02:05.318107 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe10c40d-babc-4f67-831a-efc94c086e64-kube-api-access-dx5wv" (OuterVolumeSpecName: "kube-api-access-dx5wv") pod "fe10c40d-babc-4f67-831a-efc94c086e64" (UID: "fe10c40d-babc-4f67-831a-efc94c086e64"). InnerVolumeSpecName "kube-api-access-dx5wv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:02:05 crc kubenswrapper[4840]: I0311 09:02:05.409006 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dx5wv\" (UniqueName: \"kubernetes.io/projected/fe10c40d-babc-4f67-831a-efc94c086e64-kube-api-access-dx5wv\") on node \"crc\" DevicePath \"\"" Mar 11 09:02:05 crc kubenswrapper[4840]: I0311 09:02:05.793441 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553662-vnks9" event={"ID":"fe10c40d-babc-4f67-831a-efc94c086e64","Type":"ContainerDied","Data":"3690752ebb894d575a7952eed65963afc93c3b40e7a002f3aadacba71f00d6e9"} Mar 11 09:02:05 crc kubenswrapper[4840]: I0311 09:02:05.793787 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3690752ebb894d575a7952eed65963afc93c3b40e7a002f3aadacba71f00d6e9" Mar 11 09:02:05 crc kubenswrapper[4840]: I0311 09:02:05.793534 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553662-vnks9" Mar 11 09:02:06 crc kubenswrapper[4840]: I0311 09:02:06.130967 4840 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Mar 11 09:02:06 crc kubenswrapper[4840]: I0311 09:02:06.131324 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://57ccb6dad24165bfb2ef04afb9ee882b32dc3c00113130426f3ca6282db5ffdd" gracePeriod=5 Mar 11 09:02:08 crc kubenswrapper[4840]: I0311 09:02:08.059542 4840 scope.go:117] "RemoveContainer" containerID="d83e41b53f0dc8d6ed26dc4267618743324ad7fd47ceb3173ef096de6d362984" Mar 11 09:02:08 crc kubenswrapper[4840]: I0311 09:02:08.813636 4840 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-authentication_oauth-openshift-6cfdf67ff-4mn7h_6018aeae-f9f1-4848-b8ea-695ddf794001/oauth-openshift/2.log" Mar 11 09:02:08 crc kubenswrapper[4840]: I0311 09:02:08.814037 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6cfdf67ff-4mn7h" event={"ID":"6018aeae-f9f1-4848-b8ea-695ddf794001","Type":"ContainerStarted","Data":"ec57d5a4849f0fd9b7be3506a0857cf15518e52c9204c6654bb423c1feabc38c"} Mar 11 09:02:08 crc kubenswrapper[4840]: I0311 09:02:08.814651 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-6cfdf67ff-4mn7h" Mar 11 09:02:08 crc kubenswrapper[4840]: I0311 09:02:08.821647 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-6cfdf67ff-4mn7h" Mar 11 09:02:08 crc kubenswrapper[4840]: I0311 09:02:08.841120 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-6cfdf67ff-4mn7h" podStartSLOduration=86.841103664 podStartE2EDuration="1m26.841103664s" podCreationTimestamp="2026-03-11 09:00:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:01:37.598036223 +0000 UTC m=+296.263706038" watchObservedRunningTime="2026-03-11 09:02:08.841103664 +0000 UTC m=+327.506773479" Mar 11 09:02:11 crc kubenswrapper[4840]: I0311 09:02:11.749984 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Mar 11 09:02:11 crc kubenswrapper[4840]: I0311 09:02:11.750418 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 11 09:02:11 crc kubenswrapper[4840]: I0311 09:02:11.842297 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Mar 11 09:02:11 crc kubenswrapper[4840]: I0311 09:02:11.842364 4840 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="57ccb6dad24165bfb2ef04afb9ee882b32dc3c00113130426f3ca6282db5ffdd" exitCode=137 Mar 11 09:02:11 crc kubenswrapper[4840]: I0311 09:02:11.842425 4840 scope.go:117] "RemoveContainer" containerID="57ccb6dad24165bfb2ef04afb9ee882b32dc3c00113130426f3ca6282db5ffdd" Mar 11 09:02:11 crc kubenswrapper[4840]: I0311 09:02:11.842534 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 11 09:02:11 crc kubenswrapper[4840]: I0311 09:02:11.871886 4840 scope.go:117] "RemoveContainer" containerID="57ccb6dad24165bfb2ef04afb9ee882b32dc3c00113130426f3ca6282db5ffdd" Mar 11 09:02:11 crc kubenswrapper[4840]: E0311 09:02:11.872636 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57ccb6dad24165bfb2ef04afb9ee882b32dc3c00113130426f3ca6282db5ffdd\": container with ID starting with 57ccb6dad24165bfb2ef04afb9ee882b32dc3c00113130426f3ca6282db5ffdd not found: ID does not exist" containerID="57ccb6dad24165bfb2ef04afb9ee882b32dc3c00113130426f3ca6282db5ffdd" Mar 11 09:02:11 crc kubenswrapper[4840]: I0311 09:02:11.872741 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57ccb6dad24165bfb2ef04afb9ee882b32dc3c00113130426f3ca6282db5ffdd"} err="failed to get container status \"57ccb6dad24165bfb2ef04afb9ee882b32dc3c00113130426f3ca6282db5ffdd\": rpc error: code = NotFound desc = could 
not find container \"57ccb6dad24165bfb2ef04afb9ee882b32dc3c00113130426f3ca6282db5ffdd\": container with ID starting with 57ccb6dad24165bfb2ef04afb9ee882b32dc3c00113130426f3ca6282db5ffdd not found: ID does not exist" Mar 11 09:02:11 crc kubenswrapper[4840]: I0311 09:02:11.906283 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Mar 11 09:02:11 crc kubenswrapper[4840]: I0311 09:02:11.906375 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Mar 11 09:02:11 crc kubenswrapper[4840]: I0311 09:02:11.906429 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Mar 11 09:02:11 crc kubenswrapper[4840]: I0311 09:02:11.906440 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 11 09:02:11 crc kubenswrapper[4840]: I0311 09:02:11.906568 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Mar 11 09:02:11 crc kubenswrapper[4840]: I0311 09:02:11.906603 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 11 09:02:11 crc kubenswrapper[4840]: I0311 09:02:11.906602 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 11 09:02:11 crc kubenswrapper[4840]: I0311 09:02:11.906636 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Mar 11 09:02:11 crc kubenswrapper[4840]: I0311 09:02:11.906753 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 11 09:02:11 crc kubenswrapper[4840]: I0311 09:02:11.907196 4840 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Mar 11 09:02:11 crc kubenswrapper[4840]: I0311 09:02:11.907218 4840 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Mar 11 09:02:11 crc kubenswrapper[4840]: I0311 09:02:11.907229 4840 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Mar 11 09:02:11 crc kubenswrapper[4840]: I0311 09:02:11.907239 4840 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Mar 11 09:02:11 crc kubenswrapper[4840]: I0311 09:02:11.926975 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 11 09:02:12 crc kubenswrapper[4840]: I0311 09:02:12.009004 4840 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Mar 11 09:02:12 crc kubenswrapper[4840]: I0311 09:02:12.070031 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Mar 11 09:02:12 crc kubenswrapper[4840]: I0311 09:02:12.581902 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-75c49644d7-pwrsq"] Mar 11 09:02:12 crc kubenswrapper[4840]: I0311 09:02:12.583054 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-75c49644d7-pwrsq" podUID="2b8a15b6-e285-481e-a580-17a4b6f77fb6" containerName="controller-manager" containerID="cri-o://47cbdbcc2607d94eba7a73198fbee751082218c6ce0e0c8f81c73631b3440a20" gracePeriod=30 Mar 11 09:02:12 crc kubenswrapper[4840]: I0311 09:02:12.602222 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57848b8fc5-rdf58"] Mar 11 09:02:12 crc kubenswrapper[4840]: I0311 09:02:12.602555 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-57848b8fc5-rdf58" podUID="7343cb44-cc30-46b3-b1a9-102dcff7c1bd" containerName="route-controller-manager" containerID="cri-o://fd9e35c588bbd6a9b61582a3292b9e340da9446c1ad9239078cf2a0e386ea4f7" gracePeriod=30 Mar 11 09:02:12 crc kubenswrapper[4840]: I0311 09:02:12.850263 4840 generic.go:334] "Generic (PLEG): container finished" podID="2b8a15b6-e285-481e-a580-17a4b6f77fb6" 
containerID="47cbdbcc2607d94eba7a73198fbee751082218c6ce0e0c8f81c73631b3440a20" exitCode=0 Mar 11 09:02:12 crc kubenswrapper[4840]: I0311 09:02:12.850357 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-75c49644d7-pwrsq" event={"ID":"2b8a15b6-e285-481e-a580-17a4b6f77fb6","Type":"ContainerDied","Data":"47cbdbcc2607d94eba7a73198fbee751082218c6ce0e0c8f81c73631b3440a20"} Mar 11 09:02:12 crc kubenswrapper[4840]: I0311 09:02:12.852183 4840 generic.go:334] "Generic (PLEG): container finished" podID="7343cb44-cc30-46b3-b1a9-102dcff7c1bd" containerID="fd9e35c588bbd6a9b61582a3292b9e340da9446c1ad9239078cf2a0e386ea4f7" exitCode=0 Mar 11 09:02:12 crc kubenswrapper[4840]: I0311 09:02:12.852246 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-57848b8fc5-rdf58" event={"ID":"7343cb44-cc30-46b3-b1a9-102dcff7c1bd","Type":"ContainerDied","Data":"fd9e35c588bbd6a9b61582a3292b9e340da9446c1ad9239078cf2a0e386ea4f7"} Mar 11 09:02:13 crc kubenswrapper[4840]: I0311 09:02:13.128165 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-57848b8fc5-rdf58" Mar 11 09:02:13 crc kubenswrapper[4840]: I0311 09:02:13.229083 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-75c49644d7-pwrsq" Mar 11 09:02:13 crc kubenswrapper[4840]: I0311 09:02:13.328189 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7343cb44-cc30-46b3-b1a9-102dcff7c1bd-serving-cert\") pod \"7343cb44-cc30-46b3-b1a9-102dcff7c1bd\" (UID: \"7343cb44-cc30-46b3-b1a9-102dcff7c1bd\") " Mar 11 09:02:13 crc kubenswrapper[4840]: I0311 09:02:13.328306 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7343cb44-cc30-46b3-b1a9-102dcff7c1bd-client-ca\") pod \"7343cb44-cc30-46b3-b1a9-102dcff7c1bd\" (UID: \"7343cb44-cc30-46b3-b1a9-102dcff7c1bd\") " Mar 11 09:02:13 crc kubenswrapper[4840]: I0311 09:02:13.328410 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4nvm4\" (UniqueName: \"kubernetes.io/projected/7343cb44-cc30-46b3-b1a9-102dcff7c1bd-kube-api-access-4nvm4\") pod \"7343cb44-cc30-46b3-b1a9-102dcff7c1bd\" (UID: \"7343cb44-cc30-46b3-b1a9-102dcff7c1bd\") " Mar 11 09:02:13 crc kubenswrapper[4840]: I0311 09:02:13.328437 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7343cb44-cc30-46b3-b1a9-102dcff7c1bd-config\") pod \"7343cb44-cc30-46b3-b1a9-102dcff7c1bd\" (UID: \"7343cb44-cc30-46b3-b1a9-102dcff7c1bd\") " Mar 11 09:02:13 crc kubenswrapper[4840]: I0311 09:02:13.330017 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7343cb44-cc30-46b3-b1a9-102dcff7c1bd-config" (OuterVolumeSpecName: "config") pod "7343cb44-cc30-46b3-b1a9-102dcff7c1bd" (UID: "7343cb44-cc30-46b3-b1a9-102dcff7c1bd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:02:13 crc kubenswrapper[4840]: I0311 09:02:13.330301 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7343cb44-cc30-46b3-b1a9-102dcff7c1bd-client-ca" (OuterVolumeSpecName: "client-ca") pod "7343cb44-cc30-46b3-b1a9-102dcff7c1bd" (UID: "7343cb44-cc30-46b3-b1a9-102dcff7c1bd"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:02:13 crc kubenswrapper[4840]: I0311 09:02:13.334918 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7343cb44-cc30-46b3-b1a9-102dcff7c1bd-kube-api-access-4nvm4" (OuterVolumeSpecName: "kube-api-access-4nvm4") pod "7343cb44-cc30-46b3-b1a9-102dcff7c1bd" (UID: "7343cb44-cc30-46b3-b1a9-102dcff7c1bd"). InnerVolumeSpecName "kube-api-access-4nvm4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:02:13 crc kubenswrapper[4840]: I0311 09:02:13.335571 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7343cb44-cc30-46b3-b1a9-102dcff7c1bd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7343cb44-cc30-46b3-b1a9-102dcff7c1bd" (UID: "7343cb44-cc30-46b3-b1a9-102dcff7c1bd"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:02:13 crc kubenswrapper[4840]: I0311 09:02:13.430105 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2b8a15b6-e285-481e-a580-17a4b6f77fb6-proxy-ca-bundles\") pod \"2b8a15b6-e285-481e-a580-17a4b6f77fb6\" (UID: \"2b8a15b6-e285-481e-a580-17a4b6f77fb6\") " Mar 11 09:02:13 crc kubenswrapper[4840]: I0311 09:02:13.430181 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b8a15b6-e285-481e-a580-17a4b6f77fb6-config\") pod \"2b8a15b6-e285-481e-a580-17a4b6f77fb6\" (UID: \"2b8a15b6-e285-481e-a580-17a4b6f77fb6\") " Mar 11 09:02:13 crc kubenswrapper[4840]: I0311 09:02:13.430223 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b8a15b6-e285-481e-a580-17a4b6f77fb6-serving-cert\") pod \"2b8a15b6-e285-481e-a580-17a4b6f77fb6\" (UID: \"2b8a15b6-e285-481e-a580-17a4b6f77fb6\") " Mar 11 09:02:13 crc kubenswrapper[4840]: I0311 09:02:13.430299 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rcvf\" (UniqueName: \"kubernetes.io/projected/2b8a15b6-e285-481e-a580-17a4b6f77fb6-kube-api-access-6rcvf\") pod \"2b8a15b6-e285-481e-a580-17a4b6f77fb6\" (UID: \"2b8a15b6-e285-481e-a580-17a4b6f77fb6\") " Mar 11 09:02:13 crc kubenswrapper[4840]: I0311 09:02:13.430351 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2b8a15b6-e285-481e-a580-17a4b6f77fb6-client-ca\") pod \"2b8a15b6-e285-481e-a580-17a4b6f77fb6\" (UID: \"2b8a15b6-e285-481e-a580-17a4b6f77fb6\") " Mar 11 09:02:13 crc kubenswrapper[4840]: I0311 09:02:13.431249 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4nvm4\" (UniqueName: 
\"kubernetes.io/projected/7343cb44-cc30-46b3-b1a9-102dcff7c1bd-kube-api-access-4nvm4\") on node \"crc\" DevicePath \"\"" Mar 11 09:02:13 crc kubenswrapper[4840]: I0311 09:02:13.431303 4840 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7343cb44-cc30-46b3-b1a9-102dcff7c1bd-config\") on node \"crc\" DevicePath \"\"" Mar 11 09:02:13 crc kubenswrapper[4840]: I0311 09:02:13.431315 4840 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7343cb44-cc30-46b3-b1a9-102dcff7c1bd-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 11 09:02:13 crc kubenswrapper[4840]: I0311 09:02:13.431325 4840 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7343cb44-cc30-46b3-b1a9-102dcff7c1bd-client-ca\") on node \"crc\" DevicePath \"\"" Mar 11 09:02:13 crc kubenswrapper[4840]: I0311 09:02:13.431451 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b8a15b6-e285-481e-a580-17a4b6f77fb6-client-ca" (OuterVolumeSpecName: "client-ca") pod "2b8a15b6-e285-481e-a580-17a4b6f77fb6" (UID: "2b8a15b6-e285-481e-a580-17a4b6f77fb6"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:02:13 crc kubenswrapper[4840]: I0311 09:02:13.432147 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b8a15b6-e285-481e-a580-17a4b6f77fb6-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "2b8a15b6-e285-481e-a580-17a4b6f77fb6" (UID: "2b8a15b6-e285-481e-a580-17a4b6f77fb6"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:02:13 crc kubenswrapper[4840]: I0311 09:02:13.432555 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b8a15b6-e285-481e-a580-17a4b6f77fb6-config" (OuterVolumeSpecName: "config") pod "2b8a15b6-e285-481e-a580-17a4b6f77fb6" (UID: "2b8a15b6-e285-481e-a580-17a4b6f77fb6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:02:13 crc kubenswrapper[4840]: I0311 09:02:13.434164 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b8a15b6-e285-481e-a580-17a4b6f77fb6-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2b8a15b6-e285-481e-a580-17a4b6f77fb6" (UID: "2b8a15b6-e285-481e-a580-17a4b6f77fb6"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:02:13 crc kubenswrapper[4840]: I0311 09:02:13.434395 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b8a15b6-e285-481e-a580-17a4b6f77fb6-kube-api-access-6rcvf" (OuterVolumeSpecName: "kube-api-access-6rcvf") pod "2b8a15b6-e285-481e-a580-17a4b6f77fb6" (UID: "2b8a15b6-e285-481e-a580-17a4b6f77fb6"). InnerVolumeSpecName "kube-api-access-6rcvf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:02:13 crc kubenswrapper[4840]: I0311 09:02:13.532610 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6rcvf\" (UniqueName: \"kubernetes.io/projected/2b8a15b6-e285-481e-a580-17a4b6f77fb6-kube-api-access-6rcvf\") on node \"crc\" DevicePath \"\"" Mar 11 09:02:13 crc kubenswrapper[4840]: I0311 09:02:13.532658 4840 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2b8a15b6-e285-481e-a580-17a4b6f77fb6-client-ca\") on node \"crc\" DevicePath \"\"" Mar 11 09:02:13 crc kubenswrapper[4840]: I0311 09:02:13.532668 4840 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2b8a15b6-e285-481e-a580-17a4b6f77fb6-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 11 09:02:13 crc kubenswrapper[4840]: I0311 09:02:13.532677 4840 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b8a15b6-e285-481e-a580-17a4b6f77fb6-config\") on node \"crc\" DevicePath \"\"" Mar 11 09:02:13 crc kubenswrapper[4840]: I0311 09:02:13.532686 4840 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b8a15b6-e285-481e-a580-17a4b6f77fb6-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 11 09:02:13 crc kubenswrapper[4840]: I0311 09:02:13.864698 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-57848b8fc5-rdf58" event={"ID":"7343cb44-cc30-46b3-b1a9-102dcff7c1bd","Type":"ContainerDied","Data":"c13225cfd451b70cd34f8fe906e8f7bc8c1271d3ca823f6eb17c3e4e5bce4a39"} Mar 11 09:02:13 crc kubenswrapper[4840]: I0311 09:02:13.864747 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-57848b8fc5-rdf58" Mar 11 09:02:13 crc kubenswrapper[4840]: I0311 09:02:13.864791 4840 scope.go:117] "RemoveContainer" containerID="fd9e35c588bbd6a9b61582a3292b9e340da9446c1ad9239078cf2a0e386ea4f7" Mar 11 09:02:13 crc kubenswrapper[4840]: I0311 09:02:13.870206 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-75c49644d7-pwrsq" event={"ID":"2b8a15b6-e285-481e-a580-17a4b6f77fb6","Type":"ContainerDied","Data":"f4001962d8e1c64111d55b9a706b7fb6eb954e7558d96efe11266d550ca19b8a"} Mar 11 09:02:13 crc kubenswrapper[4840]: I0311 09:02:13.870268 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-75c49644d7-pwrsq" Mar 11 09:02:13 crc kubenswrapper[4840]: I0311 09:02:13.889939 4840 scope.go:117] "RemoveContainer" containerID="47cbdbcc2607d94eba7a73198fbee751082218c6ce0e0c8f81c73631b3440a20" Mar 11 09:02:13 crc kubenswrapper[4840]: I0311 09:02:13.915432 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-75c49644d7-pwrsq"] Mar 11 09:02:13 crc kubenswrapper[4840]: I0311 09:02:13.918364 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-75c49644d7-pwrsq"] Mar 11 09:02:13 crc kubenswrapper[4840]: I0311 09:02:13.929840 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57848b8fc5-rdf58"] Mar 11 09:02:13 crc kubenswrapper[4840]: I0311 09:02:13.934244 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-57848b8fc5-rdf58"] Mar 11 09:02:14 crc kubenswrapper[4840]: I0311 09:02:14.067742 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b8a15b6-e285-481e-a580-17a4b6f77fb6" 
path="/var/lib/kubelet/pods/2b8a15b6-e285-481e-a580-17a4b6f77fb6/volumes" Mar 11 09:02:14 crc kubenswrapper[4840]: I0311 09:02:14.068337 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7343cb44-cc30-46b3-b1a9-102dcff7c1bd" path="/var/lib/kubelet/pods/7343cb44-cc30-46b3-b1a9-102dcff7c1bd/volumes" Mar 11 09:02:14 crc kubenswrapper[4840]: I0311 09:02:14.121634 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f7cf44877-9npp2"] Mar 11 09:02:14 crc kubenswrapper[4840]: E0311 09:02:14.122102 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe10c40d-babc-4f67-831a-efc94c086e64" containerName="oc" Mar 11 09:02:14 crc kubenswrapper[4840]: I0311 09:02:14.122124 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe10c40d-babc-4f67-831a-efc94c086e64" containerName="oc" Mar 11 09:02:14 crc kubenswrapper[4840]: E0311 09:02:14.122142 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7343cb44-cc30-46b3-b1a9-102dcff7c1bd" containerName="route-controller-manager" Mar 11 09:02:14 crc kubenswrapper[4840]: I0311 09:02:14.122151 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="7343cb44-cc30-46b3-b1a9-102dcff7c1bd" containerName="route-controller-manager" Mar 11 09:02:14 crc kubenswrapper[4840]: E0311 09:02:14.122169 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b8a15b6-e285-481e-a580-17a4b6f77fb6" containerName="controller-manager" Mar 11 09:02:14 crc kubenswrapper[4840]: I0311 09:02:14.122179 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b8a15b6-e285-481e-a580-17a4b6f77fb6" containerName="controller-manager" Mar 11 09:02:14 crc kubenswrapper[4840]: E0311 09:02:14.122192 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Mar 11 09:02:14 crc kubenswrapper[4840]: I0311 09:02:14.122204 4840 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Mar 11 09:02:14 crc kubenswrapper[4840]: I0311 09:02:14.122366 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="7343cb44-cc30-46b3-b1a9-102dcff7c1bd" containerName="route-controller-manager" Mar 11 09:02:14 crc kubenswrapper[4840]: I0311 09:02:14.123300 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Mar 11 09:02:14 crc kubenswrapper[4840]: I0311 09:02:14.123322 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b8a15b6-e285-481e-a580-17a4b6f77fb6" containerName="controller-manager" Mar 11 09:02:14 crc kubenswrapper[4840]: I0311 09:02:14.123337 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe10c40d-babc-4f67-831a-efc94c086e64" containerName="oc" Mar 11 09:02:14 crc kubenswrapper[4840]: I0311 09:02:14.124322 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7f7cf44877-9npp2" Mar 11 09:02:14 crc kubenswrapper[4840]: I0311 09:02:14.128549 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-569c97bdd5-276pr"] Mar 11 09:02:14 crc kubenswrapper[4840]: I0311 09:02:14.128793 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 11 09:02:14 crc kubenswrapper[4840]: I0311 09:02:14.129485 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-569c97bdd5-276pr" Mar 11 09:02:14 crc kubenswrapper[4840]: I0311 09:02:14.130090 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 11 09:02:14 crc kubenswrapper[4840]: I0311 09:02:14.131539 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-569c97bdd5-276pr"] Mar 11 09:02:14 crc kubenswrapper[4840]: I0311 09:02:14.132304 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 11 09:02:14 crc kubenswrapper[4840]: I0311 09:02:14.132378 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 11 09:02:14 crc kubenswrapper[4840]: I0311 09:02:14.132760 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 11 09:02:14 crc kubenswrapper[4840]: I0311 09:02:14.132846 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 11 09:02:14 crc kubenswrapper[4840]: I0311 09:02:14.132957 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Mar 11 09:02:14 crc kubenswrapper[4840]: I0311 09:02:14.133286 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 11 09:02:14 crc kubenswrapper[4840]: I0311 09:02:14.133338 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 11 09:02:14 crc kubenswrapper[4840]: I0311 09:02:14.133293 4840 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-route-controller-manager"/"config" Mar 11 09:02:14 crc kubenswrapper[4840]: I0311 09:02:14.133737 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 11 09:02:14 crc kubenswrapper[4840]: I0311 09:02:14.133912 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 11 09:02:14 crc kubenswrapper[4840]: I0311 09:02:14.135072 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f7cf44877-9npp2"] Mar 11 09:02:14 crc kubenswrapper[4840]: I0311 09:02:14.140819 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 11 09:02:14 crc kubenswrapper[4840]: I0311 09:02:14.242614 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9c0da909-9069-4e7b-8d2c-6853addf2cf8-proxy-ca-bundles\") pod \"controller-manager-569c97bdd5-276pr\" (UID: \"9c0da909-9069-4e7b-8d2c-6853addf2cf8\") " pod="openshift-controller-manager/controller-manager-569c97bdd5-276pr" Mar 11 09:02:14 crc kubenswrapper[4840]: I0311 09:02:14.242709 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljdwt\" (UniqueName: \"kubernetes.io/projected/7f13af28-409e-41c3-91f7-b8108aa8a032-kube-api-access-ljdwt\") pod \"route-controller-manager-7f7cf44877-9npp2\" (UID: \"7f13af28-409e-41c3-91f7-b8108aa8a032\") " pod="openshift-route-controller-manager/route-controller-manager-7f7cf44877-9npp2" Mar 11 09:02:14 crc kubenswrapper[4840]: I0311 09:02:14.242738 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7f13af28-409e-41c3-91f7-b8108aa8a032-serving-cert\") pod 
\"route-controller-manager-7f7cf44877-9npp2\" (UID: \"7f13af28-409e-41c3-91f7-b8108aa8a032\") " pod="openshift-route-controller-manager/route-controller-manager-7f7cf44877-9npp2" Mar 11 09:02:14 crc kubenswrapper[4840]: I0311 09:02:14.244016 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9c0da909-9069-4e7b-8d2c-6853addf2cf8-client-ca\") pod \"controller-manager-569c97bdd5-276pr\" (UID: \"9c0da909-9069-4e7b-8d2c-6853addf2cf8\") " pod="openshift-controller-manager/controller-manager-569c97bdd5-276pr" Mar 11 09:02:14 crc kubenswrapper[4840]: I0311 09:02:14.244060 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvcsh\" (UniqueName: \"kubernetes.io/projected/9c0da909-9069-4e7b-8d2c-6853addf2cf8-kube-api-access-vvcsh\") pod \"controller-manager-569c97bdd5-276pr\" (UID: \"9c0da909-9069-4e7b-8d2c-6853addf2cf8\") " pod="openshift-controller-manager/controller-manager-569c97bdd5-276pr" Mar 11 09:02:14 crc kubenswrapper[4840]: I0311 09:02:14.244138 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9c0da909-9069-4e7b-8d2c-6853addf2cf8-serving-cert\") pod \"controller-manager-569c97bdd5-276pr\" (UID: \"9c0da909-9069-4e7b-8d2c-6853addf2cf8\") " pod="openshift-controller-manager/controller-manager-569c97bdd5-276pr" Mar 11 09:02:14 crc kubenswrapper[4840]: I0311 09:02:14.244191 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f13af28-409e-41c3-91f7-b8108aa8a032-config\") pod \"route-controller-manager-7f7cf44877-9npp2\" (UID: \"7f13af28-409e-41c3-91f7-b8108aa8a032\") " pod="openshift-route-controller-manager/route-controller-manager-7f7cf44877-9npp2" Mar 11 09:02:14 crc kubenswrapper[4840]: I0311 
09:02:14.244209 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7f13af28-409e-41c3-91f7-b8108aa8a032-client-ca\") pod \"route-controller-manager-7f7cf44877-9npp2\" (UID: \"7f13af28-409e-41c3-91f7-b8108aa8a032\") " pod="openshift-route-controller-manager/route-controller-manager-7f7cf44877-9npp2" Mar 11 09:02:14 crc kubenswrapper[4840]: I0311 09:02:14.244228 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c0da909-9069-4e7b-8d2c-6853addf2cf8-config\") pod \"controller-manager-569c97bdd5-276pr\" (UID: \"9c0da909-9069-4e7b-8d2c-6853addf2cf8\") " pod="openshift-controller-manager/controller-manager-569c97bdd5-276pr" Mar 11 09:02:14 crc kubenswrapper[4840]: I0311 09:02:14.345453 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9c0da909-9069-4e7b-8d2c-6853addf2cf8-client-ca\") pod \"controller-manager-569c97bdd5-276pr\" (UID: \"9c0da909-9069-4e7b-8d2c-6853addf2cf8\") " pod="openshift-controller-manager/controller-manager-569c97bdd5-276pr" Mar 11 09:02:14 crc kubenswrapper[4840]: I0311 09:02:14.345578 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvcsh\" (UniqueName: \"kubernetes.io/projected/9c0da909-9069-4e7b-8d2c-6853addf2cf8-kube-api-access-vvcsh\") pod \"controller-manager-569c97bdd5-276pr\" (UID: \"9c0da909-9069-4e7b-8d2c-6853addf2cf8\") " pod="openshift-controller-manager/controller-manager-569c97bdd5-276pr" Mar 11 09:02:14 crc kubenswrapper[4840]: I0311 09:02:14.345622 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9c0da909-9069-4e7b-8d2c-6853addf2cf8-serving-cert\") pod \"controller-manager-569c97bdd5-276pr\" (UID: 
\"9c0da909-9069-4e7b-8d2c-6853addf2cf8\") " pod="openshift-controller-manager/controller-manager-569c97bdd5-276pr" Mar 11 09:02:14 crc kubenswrapper[4840]: I0311 09:02:14.345665 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f13af28-409e-41c3-91f7-b8108aa8a032-config\") pod \"route-controller-manager-7f7cf44877-9npp2\" (UID: \"7f13af28-409e-41c3-91f7-b8108aa8a032\") " pod="openshift-route-controller-manager/route-controller-manager-7f7cf44877-9npp2" Mar 11 09:02:14 crc kubenswrapper[4840]: I0311 09:02:14.345700 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7f13af28-409e-41c3-91f7-b8108aa8a032-client-ca\") pod \"route-controller-manager-7f7cf44877-9npp2\" (UID: \"7f13af28-409e-41c3-91f7-b8108aa8a032\") " pod="openshift-route-controller-manager/route-controller-manager-7f7cf44877-9npp2" Mar 11 09:02:14 crc kubenswrapper[4840]: I0311 09:02:14.345724 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c0da909-9069-4e7b-8d2c-6853addf2cf8-config\") pod \"controller-manager-569c97bdd5-276pr\" (UID: \"9c0da909-9069-4e7b-8d2c-6853addf2cf8\") " pod="openshift-controller-manager/controller-manager-569c97bdd5-276pr" Mar 11 09:02:14 crc kubenswrapper[4840]: I0311 09:02:14.345756 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9c0da909-9069-4e7b-8d2c-6853addf2cf8-proxy-ca-bundles\") pod \"controller-manager-569c97bdd5-276pr\" (UID: \"9c0da909-9069-4e7b-8d2c-6853addf2cf8\") " pod="openshift-controller-manager/controller-manager-569c97bdd5-276pr" Mar 11 09:02:14 crc kubenswrapper[4840]: I0311 09:02:14.345815 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljdwt\" (UniqueName: 
\"kubernetes.io/projected/7f13af28-409e-41c3-91f7-b8108aa8a032-kube-api-access-ljdwt\") pod \"route-controller-manager-7f7cf44877-9npp2\" (UID: \"7f13af28-409e-41c3-91f7-b8108aa8a032\") " pod="openshift-route-controller-manager/route-controller-manager-7f7cf44877-9npp2" Mar 11 09:02:14 crc kubenswrapper[4840]: I0311 09:02:14.345842 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7f13af28-409e-41c3-91f7-b8108aa8a032-serving-cert\") pod \"route-controller-manager-7f7cf44877-9npp2\" (UID: \"7f13af28-409e-41c3-91f7-b8108aa8a032\") " pod="openshift-route-controller-manager/route-controller-manager-7f7cf44877-9npp2" Mar 11 09:02:14 crc kubenswrapper[4840]: I0311 09:02:14.346638 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9c0da909-9069-4e7b-8d2c-6853addf2cf8-client-ca\") pod \"controller-manager-569c97bdd5-276pr\" (UID: \"9c0da909-9069-4e7b-8d2c-6853addf2cf8\") " pod="openshift-controller-manager/controller-manager-569c97bdd5-276pr" Mar 11 09:02:14 crc kubenswrapper[4840]: I0311 09:02:14.347363 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c0da909-9069-4e7b-8d2c-6853addf2cf8-config\") pod \"controller-manager-569c97bdd5-276pr\" (UID: \"9c0da909-9069-4e7b-8d2c-6853addf2cf8\") " pod="openshift-controller-manager/controller-manager-569c97bdd5-276pr" Mar 11 09:02:14 crc kubenswrapper[4840]: I0311 09:02:14.347426 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9c0da909-9069-4e7b-8d2c-6853addf2cf8-proxy-ca-bundles\") pod \"controller-manager-569c97bdd5-276pr\" (UID: \"9c0da909-9069-4e7b-8d2c-6853addf2cf8\") " pod="openshift-controller-manager/controller-manager-569c97bdd5-276pr" Mar 11 09:02:14 crc kubenswrapper[4840]: I0311 09:02:14.348158 4840 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7f13af28-409e-41c3-91f7-b8108aa8a032-client-ca\") pod \"route-controller-manager-7f7cf44877-9npp2\" (UID: \"7f13af28-409e-41c3-91f7-b8108aa8a032\") " pod="openshift-route-controller-manager/route-controller-manager-7f7cf44877-9npp2" Mar 11 09:02:14 crc kubenswrapper[4840]: I0311 09:02:14.348432 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f13af28-409e-41c3-91f7-b8108aa8a032-config\") pod \"route-controller-manager-7f7cf44877-9npp2\" (UID: \"7f13af28-409e-41c3-91f7-b8108aa8a032\") " pod="openshift-route-controller-manager/route-controller-manager-7f7cf44877-9npp2" Mar 11 09:02:14 crc kubenswrapper[4840]: I0311 09:02:14.350540 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9c0da909-9069-4e7b-8d2c-6853addf2cf8-serving-cert\") pod \"controller-manager-569c97bdd5-276pr\" (UID: \"9c0da909-9069-4e7b-8d2c-6853addf2cf8\") " pod="openshift-controller-manager/controller-manager-569c97bdd5-276pr" Mar 11 09:02:14 crc kubenswrapper[4840]: I0311 09:02:14.358213 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7f13af28-409e-41c3-91f7-b8108aa8a032-serving-cert\") pod \"route-controller-manager-7f7cf44877-9npp2\" (UID: \"7f13af28-409e-41c3-91f7-b8108aa8a032\") " pod="openshift-route-controller-manager/route-controller-manager-7f7cf44877-9npp2" Mar 11 09:02:14 crc kubenswrapper[4840]: I0311 09:02:14.365409 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvcsh\" (UniqueName: \"kubernetes.io/projected/9c0da909-9069-4e7b-8d2c-6853addf2cf8-kube-api-access-vvcsh\") pod \"controller-manager-569c97bdd5-276pr\" (UID: \"9c0da909-9069-4e7b-8d2c-6853addf2cf8\") " 
pod="openshift-controller-manager/controller-manager-569c97bdd5-276pr" Mar 11 09:02:14 crc kubenswrapper[4840]: I0311 09:02:14.368704 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljdwt\" (UniqueName: \"kubernetes.io/projected/7f13af28-409e-41c3-91f7-b8108aa8a032-kube-api-access-ljdwt\") pod \"route-controller-manager-7f7cf44877-9npp2\" (UID: \"7f13af28-409e-41c3-91f7-b8108aa8a032\") " pod="openshift-route-controller-manager/route-controller-manager-7f7cf44877-9npp2" Mar 11 09:02:14 crc kubenswrapper[4840]: I0311 09:02:14.467351 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7f7cf44877-9npp2" Mar 11 09:02:14 crc kubenswrapper[4840]: I0311 09:02:14.483629 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-569c97bdd5-276pr" Mar 11 09:02:14 crc kubenswrapper[4840]: I0311 09:02:14.799660 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f7cf44877-9npp2"] Mar 11 09:02:14 crc kubenswrapper[4840]: I0311 09:02:14.880367 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7f7cf44877-9npp2" event={"ID":"7f13af28-409e-41c3-91f7-b8108aa8a032","Type":"ContainerStarted","Data":"6774315ef70ea0c1cca586e82088a2b35d5791bc9f2b2926fdf4651987fcc37c"} Mar 11 09:02:14 crc kubenswrapper[4840]: I0311 09:02:14.952645 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-569c97bdd5-276pr"] Mar 11 09:02:14 crc kubenswrapper[4840]: W0311 09:02:14.955396 4840 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9c0da909_9069_4e7b_8d2c_6853addf2cf8.slice/crio-221834685a8e0eb5ee1bad022f1a9de7c9665c5fd6c3c59e8c97f5368fa7e801 WatchSource:0}: Error finding container 221834685a8e0eb5ee1bad022f1a9de7c9665c5fd6c3c59e8c97f5368fa7e801: Status 404 returned error can't find the container with id 221834685a8e0eb5ee1bad022f1a9de7c9665c5fd6c3c59e8c97f5368fa7e801 Mar 11 09:02:15 crc kubenswrapper[4840]: I0311 09:02:15.889767 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7f7cf44877-9npp2" event={"ID":"7f13af28-409e-41c3-91f7-b8108aa8a032","Type":"ContainerStarted","Data":"b5ba59ec24ed5f02813fa65e26cafbd0be6238810010d631246125357b8b0dc3"} Mar 11 09:02:15 crc kubenswrapper[4840]: I0311 09:02:15.890299 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7f7cf44877-9npp2" Mar 11 09:02:15 crc kubenswrapper[4840]: I0311 09:02:15.893800 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-569c97bdd5-276pr" event={"ID":"9c0da909-9069-4e7b-8d2c-6853addf2cf8","Type":"ContainerStarted","Data":"cd323413e783373adc7c322ddc4d41ff1d85855a8400a39f4e73f2193189d15f"} Mar 11 09:02:15 crc kubenswrapper[4840]: I0311 09:02:15.893873 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-569c97bdd5-276pr" event={"ID":"9c0da909-9069-4e7b-8d2c-6853addf2cf8","Type":"ContainerStarted","Data":"221834685a8e0eb5ee1bad022f1a9de7c9665c5fd6c3c59e8c97f5368fa7e801"} Mar 11 09:02:15 crc kubenswrapper[4840]: I0311 09:02:15.894070 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-569c97bdd5-276pr" Mar 11 09:02:15 crc kubenswrapper[4840]: I0311 09:02:15.895218 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-route-controller-manager/route-controller-manager-7f7cf44877-9npp2" Mar 11 09:02:15 crc kubenswrapper[4840]: I0311 09:02:15.897864 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-569c97bdd5-276pr" Mar 11 09:02:15 crc kubenswrapper[4840]: I0311 09:02:15.914323 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7f7cf44877-9npp2" podStartSLOduration=3.914298991 podStartE2EDuration="3.914298991s" podCreationTimestamp="2026-03-11 09:02:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:02:15.911734348 +0000 UTC m=+334.577404183" watchObservedRunningTime="2026-03-11 09:02:15.914298991 +0000 UTC m=+334.579968806" Mar 11 09:02:15 crc kubenswrapper[4840]: I0311 09:02:15.962425 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-569c97bdd5-276pr" podStartSLOduration=3.962394989 podStartE2EDuration="3.962394989s" podCreationTimestamp="2026-03-11 09:02:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:02:15.958952484 +0000 UTC m=+334.624622299" watchObservedRunningTime="2026-03-11 09:02:15.962394989 +0000 UTC m=+334.628064804" Mar 11 09:02:19 crc kubenswrapper[4840]: I0311 09:02:19.922401 4840 generic.go:334] "Generic (PLEG): container finished" podID="33326f34-f442-42be-9bd2-39cf5627b953" containerID="f5b71f97eb6b7af7a3e8c7066dd678dfc244df8ee1e9045fd01005014d052509" exitCode=0 Mar 11 09:02:19 crc kubenswrapper[4840]: I0311 09:02:19.922504 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-d6cv4" 
event={"ID":"33326f34-f442-42be-9bd2-39cf5627b953","Type":"ContainerDied","Data":"f5b71f97eb6b7af7a3e8c7066dd678dfc244df8ee1e9045fd01005014d052509"} Mar 11 09:02:19 crc kubenswrapper[4840]: I0311 09:02:19.924715 4840 scope.go:117] "RemoveContainer" containerID="f5b71f97eb6b7af7a3e8c7066dd678dfc244df8ee1e9045fd01005014d052509" Mar 11 09:02:20 crc kubenswrapper[4840]: I0311 09:02:20.930765 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-d6cv4" event={"ID":"33326f34-f442-42be-9bd2-39cf5627b953","Type":"ContainerStarted","Data":"9fffa79656eba5d6afe49decfded5d4c1998218eac5dbfc9a9bb4f5e71df022e"} Mar 11 09:02:20 crc kubenswrapper[4840]: I0311 09:02:20.931594 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-d6cv4" Mar 11 09:02:20 crc kubenswrapper[4840]: I0311 09:02:20.934242 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-d6cv4" Mar 11 09:02:32 crc kubenswrapper[4840]: I0311 09:02:32.580884 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-569c97bdd5-276pr"] Mar 11 09:02:32 crc kubenswrapper[4840]: I0311 09:02:32.583121 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-569c97bdd5-276pr" podUID="9c0da909-9069-4e7b-8d2c-6853addf2cf8" containerName="controller-manager" containerID="cri-o://cd323413e783373adc7c322ddc4d41ff1d85855a8400a39f4e73f2193189d15f" gracePeriod=30 Mar 11 09:02:32 crc kubenswrapper[4840]: I0311 09:02:32.683388 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f7cf44877-9npp2"] Mar 11 09:02:32 crc kubenswrapper[4840]: I0311 09:02:32.684112 4840 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-route-controller-manager/route-controller-manager-7f7cf44877-9npp2" podUID="7f13af28-409e-41c3-91f7-b8108aa8a032" containerName="route-controller-manager" containerID="cri-o://b5ba59ec24ed5f02813fa65e26cafbd0be6238810010d631246125357b8b0dc3" gracePeriod=30 Mar 11 09:02:33 crc kubenswrapper[4840]: I0311 09:02:33.030647 4840 generic.go:334] "Generic (PLEG): container finished" podID="7f13af28-409e-41c3-91f7-b8108aa8a032" containerID="b5ba59ec24ed5f02813fa65e26cafbd0be6238810010d631246125357b8b0dc3" exitCode=0 Mar 11 09:02:33 crc kubenswrapper[4840]: I0311 09:02:33.030738 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7f7cf44877-9npp2" event={"ID":"7f13af28-409e-41c3-91f7-b8108aa8a032","Type":"ContainerDied","Data":"b5ba59ec24ed5f02813fa65e26cafbd0be6238810010d631246125357b8b0dc3"} Mar 11 09:02:33 crc kubenswrapper[4840]: I0311 09:02:33.036205 4840 generic.go:334] "Generic (PLEG): container finished" podID="9c0da909-9069-4e7b-8d2c-6853addf2cf8" containerID="cd323413e783373adc7c322ddc4d41ff1d85855a8400a39f4e73f2193189d15f" exitCode=0 Mar 11 09:02:33 crc kubenswrapper[4840]: I0311 09:02:33.036267 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-569c97bdd5-276pr" event={"ID":"9c0da909-9069-4e7b-8d2c-6853addf2cf8","Type":"ContainerDied","Data":"cd323413e783373adc7c322ddc4d41ff1d85855a8400a39f4e73f2193189d15f"} Mar 11 09:02:33 crc kubenswrapper[4840]: I0311 09:02:33.267851 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7f7cf44877-9npp2" Mar 11 09:02:33 crc kubenswrapper[4840]: I0311 09:02:33.273856 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-569c97bdd5-276pr" Mar 11 09:02:33 crc kubenswrapper[4840]: I0311 09:02:33.439348 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vvcsh\" (UniqueName: \"kubernetes.io/projected/9c0da909-9069-4e7b-8d2c-6853addf2cf8-kube-api-access-vvcsh\") pod \"9c0da909-9069-4e7b-8d2c-6853addf2cf8\" (UID: \"9c0da909-9069-4e7b-8d2c-6853addf2cf8\") " Mar 11 09:02:33 crc kubenswrapper[4840]: I0311 09:02:33.439459 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7f13af28-409e-41c3-91f7-b8108aa8a032-client-ca\") pod \"7f13af28-409e-41c3-91f7-b8108aa8a032\" (UID: \"7f13af28-409e-41c3-91f7-b8108aa8a032\") " Mar 11 09:02:33 crc kubenswrapper[4840]: I0311 09:02:33.439561 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7f13af28-409e-41c3-91f7-b8108aa8a032-serving-cert\") pod \"7f13af28-409e-41c3-91f7-b8108aa8a032\" (UID: \"7f13af28-409e-41c3-91f7-b8108aa8a032\") " Mar 11 09:02:33 crc kubenswrapper[4840]: I0311 09:02:33.439579 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9c0da909-9069-4e7b-8d2c-6853addf2cf8-client-ca\") pod \"9c0da909-9069-4e7b-8d2c-6853addf2cf8\" (UID: \"9c0da909-9069-4e7b-8d2c-6853addf2cf8\") " Mar 11 09:02:33 crc kubenswrapper[4840]: I0311 09:02:33.439615 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f13af28-409e-41c3-91f7-b8108aa8a032-config\") pod \"7f13af28-409e-41c3-91f7-b8108aa8a032\" (UID: \"7f13af28-409e-41c3-91f7-b8108aa8a032\") " Mar 11 09:02:33 crc kubenswrapper[4840]: I0311 09:02:33.439645 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-ljdwt\" (UniqueName: \"kubernetes.io/projected/7f13af28-409e-41c3-91f7-b8108aa8a032-kube-api-access-ljdwt\") pod \"7f13af28-409e-41c3-91f7-b8108aa8a032\" (UID: \"7f13af28-409e-41c3-91f7-b8108aa8a032\") " Mar 11 09:02:33 crc kubenswrapper[4840]: I0311 09:02:33.440511 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f13af28-409e-41c3-91f7-b8108aa8a032-client-ca" (OuterVolumeSpecName: "client-ca") pod "7f13af28-409e-41c3-91f7-b8108aa8a032" (UID: "7f13af28-409e-41c3-91f7-b8108aa8a032"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:02:33 crc kubenswrapper[4840]: I0311 09:02:33.440671 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c0da909-9069-4e7b-8d2c-6853addf2cf8-client-ca" (OuterVolumeSpecName: "client-ca") pod "9c0da909-9069-4e7b-8d2c-6853addf2cf8" (UID: "9c0da909-9069-4e7b-8d2c-6853addf2cf8"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:02:33 crc kubenswrapper[4840]: I0311 09:02:33.440743 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f13af28-409e-41c3-91f7-b8108aa8a032-config" (OuterVolumeSpecName: "config") pod "7f13af28-409e-41c3-91f7-b8108aa8a032" (UID: "7f13af28-409e-41c3-91f7-b8108aa8a032"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:02:33 crc kubenswrapper[4840]: I0311 09:02:33.440989 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9c0da909-9069-4e7b-8d2c-6853addf2cf8-serving-cert\") pod \"9c0da909-9069-4e7b-8d2c-6853addf2cf8\" (UID: \"9c0da909-9069-4e7b-8d2c-6853addf2cf8\") " Mar 11 09:02:33 crc kubenswrapper[4840]: I0311 09:02:33.441064 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9c0da909-9069-4e7b-8d2c-6853addf2cf8-proxy-ca-bundles\") pod \"9c0da909-9069-4e7b-8d2c-6853addf2cf8\" (UID: \"9c0da909-9069-4e7b-8d2c-6853addf2cf8\") " Mar 11 09:02:33 crc kubenswrapper[4840]: I0311 09:02:33.441130 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c0da909-9069-4e7b-8d2c-6853addf2cf8-config\") pod \"9c0da909-9069-4e7b-8d2c-6853addf2cf8\" (UID: \"9c0da909-9069-4e7b-8d2c-6853addf2cf8\") " Mar 11 09:02:33 crc kubenswrapper[4840]: I0311 09:02:33.441570 4840 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7f13af28-409e-41c3-91f7-b8108aa8a032-client-ca\") on node \"crc\" DevicePath \"\"" Mar 11 09:02:33 crc kubenswrapper[4840]: I0311 09:02:33.441595 4840 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9c0da909-9069-4e7b-8d2c-6853addf2cf8-client-ca\") on node \"crc\" DevicePath \"\"" Mar 11 09:02:33 crc kubenswrapper[4840]: I0311 09:02:33.441607 4840 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f13af28-409e-41c3-91f7-b8108aa8a032-config\") on node \"crc\" DevicePath \"\"" Mar 11 09:02:33 crc kubenswrapper[4840]: I0311 09:02:33.441951 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/configmap/9c0da909-9069-4e7b-8d2c-6853addf2cf8-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "9c0da909-9069-4e7b-8d2c-6853addf2cf8" (UID: "9c0da909-9069-4e7b-8d2c-6853addf2cf8"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:02:33 crc kubenswrapper[4840]: I0311 09:02:33.442350 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c0da909-9069-4e7b-8d2c-6853addf2cf8-config" (OuterVolumeSpecName: "config") pod "9c0da909-9069-4e7b-8d2c-6853addf2cf8" (UID: "9c0da909-9069-4e7b-8d2c-6853addf2cf8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:02:33 crc kubenswrapper[4840]: I0311 09:02:33.447550 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f13af28-409e-41c3-91f7-b8108aa8a032-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7f13af28-409e-41c3-91f7-b8108aa8a032" (UID: "7f13af28-409e-41c3-91f7-b8108aa8a032"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:02:33 crc kubenswrapper[4840]: I0311 09:02:33.447629 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f13af28-409e-41c3-91f7-b8108aa8a032-kube-api-access-ljdwt" (OuterVolumeSpecName: "kube-api-access-ljdwt") pod "7f13af28-409e-41c3-91f7-b8108aa8a032" (UID: "7f13af28-409e-41c3-91f7-b8108aa8a032"). InnerVolumeSpecName "kube-api-access-ljdwt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:02:33 crc kubenswrapper[4840]: I0311 09:02:33.447794 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c0da909-9069-4e7b-8d2c-6853addf2cf8-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9c0da909-9069-4e7b-8d2c-6853addf2cf8" (UID: "9c0da909-9069-4e7b-8d2c-6853addf2cf8"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:02:33 crc kubenswrapper[4840]: I0311 09:02:33.448970 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c0da909-9069-4e7b-8d2c-6853addf2cf8-kube-api-access-vvcsh" (OuterVolumeSpecName: "kube-api-access-vvcsh") pod "9c0da909-9069-4e7b-8d2c-6853addf2cf8" (UID: "9c0da909-9069-4e7b-8d2c-6853addf2cf8"). InnerVolumeSpecName "kube-api-access-vvcsh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:02:33 crc kubenswrapper[4840]: I0311 09:02:33.542933 4840 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7f13af28-409e-41c3-91f7-b8108aa8a032-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 11 09:02:33 crc kubenswrapper[4840]: I0311 09:02:33.543026 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ljdwt\" (UniqueName: \"kubernetes.io/projected/7f13af28-409e-41c3-91f7-b8108aa8a032-kube-api-access-ljdwt\") on node \"crc\" DevicePath \"\"" Mar 11 09:02:33 crc kubenswrapper[4840]: I0311 09:02:33.543038 4840 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9c0da909-9069-4e7b-8d2c-6853addf2cf8-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 11 09:02:33 crc kubenswrapper[4840]: I0311 09:02:33.543046 4840 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9c0da909-9069-4e7b-8d2c-6853addf2cf8-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 11 09:02:33 crc kubenswrapper[4840]: I0311 09:02:33.543056 4840 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c0da909-9069-4e7b-8d2c-6853addf2cf8-config\") on node \"crc\" DevicePath \"\"" Mar 11 09:02:33 crc kubenswrapper[4840]: I0311 09:02:33.543067 4840 reconciler_common.go:293] "Volume detached for 
volume \"kube-api-access-vvcsh\" (UniqueName: \"kubernetes.io/projected/9c0da909-9069-4e7b-8d2c-6853addf2cf8-kube-api-access-vvcsh\") on node \"crc\" DevicePath \"\"" Mar 11 09:02:34 crc kubenswrapper[4840]: I0311 09:02:34.044406 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7f7cf44877-9npp2" event={"ID":"7f13af28-409e-41c3-91f7-b8108aa8a032","Type":"ContainerDied","Data":"6774315ef70ea0c1cca586e82088a2b35d5791bc9f2b2926fdf4651987fcc37c"} Mar 11 09:02:34 crc kubenswrapper[4840]: I0311 09:02:34.044487 4840 scope.go:117] "RemoveContainer" containerID="b5ba59ec24ed5f02813fa65e26cafbd0be6238810010d631246125357b8b0dc3" Mar 11 09:02:34 crc kubenswrapper[4840]: I0311 09:02:34.044499 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7f7cf44877-9npp2" Mar 11 09:02:34 crc kubenswrapper[4840]: I0311 09:02:34.046803 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-569c97bdd5-276pr" event={"ID":"9c0da909-9069-4e7b-8d2c-6853addf2cf8","Type":"ContainerDied","Data":"221834685a8e0eb5ee1bad022f1a9de7c9665c5fd6c3c59e8c97f5368fa7e801"} Mar 11 09:02:34 crc kubenswrapper[4840]: I0311 09:02:34.046834 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-569c97bdd5-276pr" Mar 11 09:02:34 crc kubenswrapper[4840]: I0311 09:02:34.067215 4840 scope.go:117] "RemoveContainer" containerID="cd323413e783373adc7c322ddc4d41ff1d85855a8400a39f4e73f2193189d15f" Mar 11 09:02:34 crc kubenswrapper[4840]: I0311 09:02:34.089609 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f7cf44877-9npp2"] Mar 11 09:02:34 crc kubenswrapper[4840]: I0311 09:02:34.094874 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f7cf44877-9npp2"] Mar 11 09:02:34 crc kubenswrapper[4840]: I0311 09:02:34.114082 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-569c97bdd5-276pr"] Mar 11 09:02:34 crc kubenswrapper[4840]: I0311 09:02:34.125337 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-569c97bdd5-276pr"] Mar 11 09:02:34 crc kubenswrapper[4840]: I0311 09:02:34.135568 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6685f4fd5b-v8hx7"] Mar 11 09:02:34 crc kubenswrapper[4840]: E0311 09:02:34.136112 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c0da909-9069-4e7b-8d2c-6853addf2cf8" containerName="controller-manager" Mar 11 09:02:34 crc kubenswrapper[4840]: I0311 09:02:34.136142 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c0da909-9069-4e7b-8d2c-6853addf2cf8" containerName="controller-manager" Mar 11 09:02:34 crc kubenswrapper[4840]: E0311 09:02:34.136167 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f13af28-409e-41c3-91f7-b8108aa8a032" containerName="route-controller-manager" Mar 11 09:02:34 crc kubenswrapper[4840]: I0311 09:02:34.136177 4840 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="7f13af28-409e-41c3-91f7-b8108aa8a032" containerName="route-controller-manager" Mar 11 09:02:34 crc kubenswrapper[4840]: I0311 09:02:34.136320 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f13af28-409e-41c3-91f7-b8108aa8a032" containerName="route-controller-manager" Mar 11 09:02:34 crc kubenswrapper[4840]: I0311 09:02:34.136342 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c0da909-9069-4e7b-8d2c-6853addf2cf8" containerName="controller-manager" Mar 11 09:02:34 crc kubenswrapper[4840]: I0311 09:02:34.138879 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6685f4fd5b-v8hx7" Mar 11 09:02:34 crc kubenswrapper[4840]: I0311 09:02:34.141915 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 11 09:02:34 crc kubenswrapper[4840]: I0311 09:02:34.141957 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 11 09:02:34 crc kubenswrapper[4840]: I0311 09:02:34.141986 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 11 09:02:34 crc kubenswrapper[4840]: I0311 09:02:34.141986 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 11 09:02:34 crc kubenswrapper[4840]: I0311 09:02:34.142087 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 11 09:02:34 crc kubenswrapper[4840]: I0311 09:02:34.143591 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7f6bd8fd79-wcpq6"] Mar 11 09:02:34 crc kubenswrapper[4840]: I0311 09:02:34.144429 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7f6bd8fd79-wcpq6" Mar 11 09:02:34 crc kubenswrapper[4840]: I0311 09:02:34.144631 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 11 09:02:34 crc kubenswrapper[4840]: I0311 09:02:34.149338 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7f6bd8fd79-wcpq6"] Mar 11 09:02:34 crc kubenswrapper[4840]: I0311 09:02:34.154256 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 11 09:02:34 crc kubenswrapper[4840]: I0311 09:02:34.154797 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 11 09:02:34 crc kubenswrapper[4840]: I0311 09:02:34.154924 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 11 09:02:34 crc kubenswrapper[4840]: I0311 09:02:34.154952 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 11 09:02:34 crc kubenswrapper[4840]: I0311 09:02:34.154996 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Mar 11 09:02:34 crc kubenswrapper[4840]: I0311 09:02:34.155997 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6685f4fd5b-v8hx7"] Mar 11 09:02:34 crc kubenswrapper[4840]: I0311 09:02:34.159685 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 11 09:02:34 crc kubenswrapper[4840]: I0311 09:02:34.162835 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" 
Mar 11 09:02:34 crc kubenswrapper[4840]: I0311 09:02:34.256336 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/08cfb0e8-6d25-470f-a462-f4321182ec8d-client-ca\") pod \"route-controller-manager-6685f4fd5b-v8hx7\" (UID: \"08cfb0e8-6d25-470f-a462-f4321182ec8d\") " pod="openshift-route-controller-manager/route-controller-manager-6685f4fd5b-v8hx7" Mar 11 09:02:34 crc kubenswrapper[4840]: I0311 09:02:34.256411 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d7941ab5-6df0-46a7-ac1d-f37cb81f87ba-proxy-ca-bundles\") pod \"controller-manager-7f6bd8fd79-wcpq6\" (UID: \"d7941ab5-6df0-46a7-ac1d-f37cb81f87ba\") " pod="openshift-controller-manager/controller-manager-7f6bd8fd79-wcpq6" Mar 11 09:02:34 crc kubenswrapper[4840]: I0311 09:02:34.256657 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krx58\" (UniqueName: \"kubernetes.io/projected/08cfb0e8-6d25-470f-a462-f4321182ec8d-kube-api-access-krx58\") pod \"route-controller-manager-6685f4fd5b-v8hx7\" (UID: \"08cfb0e8-6d25-470f-a462-f4321182ec8d\") " pod="openshift-route-controller-manager/route-controller-manager-6685f4fd5b-v8hx7" Mar 11 09:02:34 crc kubenswrapper[4840]: I0311 09:02:34.256730 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d7941ab5-6df0-46a7-ac1d-f37cb81f87ba-client-ca\") pod \"controller-manager-7f6bd8fd79-wcpq6\" (UID: \"d7941ab5-6df0-46a7-ac1d-f37cb81f87ba\") " pod="openshift-controller-manager/controller-manager-7f6bd8fd79-wcpq6" Mar 11 09:02:34 crc kubenswrapper[4840]: I0311 09:02:34.256793 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/08cfb0e8-6d25-470f-a462-f4321182ec8d-serving-cert\") pod \"route-controller-manager-6685f4fd5b-v8hx7\" (UID: \"08cfb0e8-6d25-470f-a462-f4321182ec8d\") " pod="openshift-route-controller-manager/route-controller-manager-6685f4fd5b-v8hx7" Mar 11 09:02:34 crc kubenswrapper[4840]: I0311 09:02:34.256994 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7941ab5-6df0-46a7-ac1d-f37cb81f87ba-config\") pod \"controller-manager-7f6bd8fd79-wcpq6\" (UID: \"d7941ab5-6df0-46a7-ac1d-f37cb81f87ba\") " pod="openshift-controller-manager/controller-manager-7f6bd8fd79-wcpq6" Mar 11 09:02:34 crc kubenswrapper[4840]: I0311 09:02:34.257162 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7941ab5-6df0-46a7-ac1d-f37cb81f87ba-serving-cert\") pod \"controller-manager-7f6bd8fd79-wcpq6\" (UID: \"d7941ab5-6df0-46a7-ac1d-f37cb81f87ba\") " pod="openshift-controller-manager/controller-manager-7f6bd8fd79-wcpq6" Mar 11 09:02:34 crc kubenswrapper[4840]: I0311 09:02:34.257202 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08cfb0e8-6d25-470f-a462-f4321182ec8d-config\") pod \"route-controller-manager-6685f4fd5b-v8hx7\" (UID: \"08cfb0e8-6d25-470f-a462-f4321182ec8d\") " pod="openshift-route-controller-manager/route-controller-manager-6685f4fd5b-v8hx7" Mar 11 09:02:34 crc kubenswrapper[4840]: I0311 09:02:34.257555 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x54xv\" (UniqueName: \"kubernetes.io/projected/d7941ab5-6df0-46a7-ac1d-f37cb81f87ba-kube-api-access-x54xv\") pod \"controller-manager-7f6bd8fd79-wcpq6\" (UID: \"d7941ab5-6df0-46a7-ac1d-f37cb81f87ba\") " 
pod="openshift-controller-manager/controller-manager-7f6bd8fd79-wcpq6" Mar 11 09:02:34 crc kubenswrapper[4840]: I0311 09:02:34.358896 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7941ab5-6df0-46a7-ac1d-f37cb81f87ba-serving-cert\") pod \"controller-manager-7f6bd8fd79-wcpq6\" (UID: \"d7941ab5-6df0-46a7-ac1d-f37cb81f87ba\") " pod="openshift-controller-manager/controller-manager-7f6bd8fd79-wcpq6" Mar 11 09:02:34 crc kubenswrapper[4840]: I0311 09:02:34.358965 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08cfb0e8-6d25-470f-a462-f4321182ec8d-config\") pod \"route-controller-manager-6685f4fd5b-v8hx7\" (UID: \"08cfb0e8-6d25-470f-a462-f4321182ec8d\") " pod="openshift-route-controller-manager/route-controller-manager-6685f4fd5b-v8hx7" Mar 11 09:02:34 crc kubenswrapper[4840]: I0311 09:02:34.359009 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x54xv\" (UniqueName: \"kubernetes.io/projected/d7941ab5-6df0-46a7-ac1d-f37cb81f87ba-kube-api-access-x54xv\") pod \"controller-manager-7f6bd8fd79-wcpq6\" (UID: \"d7941ab5-6df0-46a7-ac1d-f37cb81f87ba\") " pod="openshift-controller-manager/controller-manager-7f6bd8fd79-wcpq6" Mar 11 09:02:34 crc kubenswrapper[4840]: I0311 09:02:34.359065 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/08cfb0e8-6d25-470f-a462-f4321182ec8d-client-ca\") pod \"route-controller-manager-6685f4fd5b-v8hx7\" (UID: \"08cfb0e8-6d25-470f-a462-f4321182ec8d\") " pod="openshift-route-controller-manager/route-controller-manager-6685f4fd5b-v8hx7" Mar 11 09:02:34 crc kubenswrapper[4840]: I0311 09:02:34.359098 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/d7941ab5-6df0-46a7-ac1d-f37cb81f87ba-proxy-ca-bundles\") pod \"controller-manager-7f6bd8fd79-wcpq6\" (UID: \"d7941ab5-6df0-46a7-ac1d-f37cb81f87ba\") " pod="openshift-controller-manager/controller-manager-7f6bd8fd79-wcpq6" Mar 11 09:02:34 crc kubenswrapper[4840]: I0311 09:02:34.359134 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krx58\" (UniqueName: \"kubernetes.io/projected/08cfb0e8-6d25-470f-a462-f4321182ec8d-kube-api-access-krx58\") pod \"route-controller-manager-6685f4fd5b-v8hx7\" (UID: \"08cfb0e8-6d25-470f-a462-f4321182ec8d\") " pod="openshift-route-controller-manager/route-controller-manager-6685f4fd5b-v8hx7" Mar 11 09:02:34 crc kubenswrapper[4840]: I0311 09:02:34.359158 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d7941ab5-6df0-46a7-ac1d-f37cb81f87ba-client-ca\") pod \"controller-manager-7f6bd8fd79-wcpq6\" (UID: \"d7941ab5-6df0-46a7-ac1d-f37cb81f87ba\") " pod="openshift-controller-manager/controller-manager-7f6bd8fd79-wcpq6" Mar 11 09:02:34 crc kubenswrapper[4840]: I0311 09:02:34.359185 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/08cfb0e8-6d25-470f-a462-f4321182ec8d-serving-cert\") pod \"route-controller-manager-6685f4fd5b-v8hx7\" (UID: \"08cfb0e8-6d25-470f-a462-f4321182ec8d\") " pod="openshift-route-controller-manager/route-controller-manager-6685f4fd5b-v8hx7" Mar 11 09:02:34 crc kubenswrapper[4840]: I0311 09:02:34.359226 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7941ab5-6df0-46a7-ac1d-f37cb81f87ba-config\") pod \"controller-manager-7f6bd8fd79-wcpq6\" (UID: \"d7941ab5-6df0-46a7-ac1d-f37cb81f87ba\") " pod="openshift-controller-manager/controller-manager-7f6bd8fd79-wcpq6" Mar 11 09:02:34 crc kubenswrapper[4840]: I0311 
09:02:34.360541 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/08cfb0e8-6d25-470f-a462-f4321182ec8d-client-ca\") pod \"route-controller-manager-6685f4fd5b-v8hx7\" (UID: \"08cfb0e8-6d25-470f-a462-f4321182ec8d\") " pod="openshift-route-controller-manager/route-controller-manager-6685f4fd5b-v8hx7" Mar 11 09:02:34 crc kubenswrapper[4840]: I0311 09:02:34.360643 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08cfb0e8-6d25-470f-a462-f4321182ec8d-config\") pod \"route-controller-manager-6685f4fd5b-v8hx7\" (UID: \"08cfb0e8-6d25-470f-a462-f4321182ec8d\") " pod="openshift-route-controller-manager/route-controller-manager-6685f4fd5b-v8hx7" Mar 11 09:02:34 crc kubenswrapper[4840]: I0311 09:02:34.360966 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d7941ab5-6df0-46a7-ac1d-f37cb81f87ba-proxy-ca-bundles\") pod \"controller-manager-7f6bd8fd79-wcpq6\" (UID: \"d7941ab5-6df0-46a7-ac1d-f37cb81f87ba\") " pod="openshift-controller-manager/controller-manager-7f6bd8fd79-wcpq6" Mar 11 09:02:34 crc kubenswrapper[4840]: I0311 09:02:34.361214 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d7941ab5-6df0-46a7-ac1d-f37cb81f87ba-client-ca\") pod \"controller-manager-7f6bd8fd79-wcpq6\" (UID: \"d7941ab5-6df0-46a7-ac1d-f37cb81f87ba\") " pod="openshift-controller-manager/controller-manager-7f6bd8fd79-wcpq6" Mar 11 09:02:34 crc kubenswrapper[4840]: I0311 09:02:34.362277 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7941ab5-6df0-46a7-ac1d-f37cb81f87ba-config\") pod \"controller-manager-7f6bd8fd79-wcpq6\" (UID: \"d7941ab5-6df0-46a7-ac1d-f37cb81f87ba\") " 
pod="openshift-controller-manager/controller-manager-7f6bd8fd79-wcpq6" Mar 11 09:02:34 crc kubenswrapper[4840]: I0311 09:02:34.363497 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/08cfb0e8-6d25-470f-a462-f4321182ec8d-serving-cert\") pod \"route-controller-manager-6685f4fd5b-v8hx7\" (UID: \"08cfb0e8-6d25-470f-a462-f4321182ec8d\") " pod="openshift-route-controller-manager/route-controller-manager-6685f4fd5b-v8hx7" Mar 11 09:02:34 crc kubenswrapper[4840]: I0311 09:02:34.363817 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7941ab5-6df0-46a7-ac1d-f37cb81f87ba-serving-cert\") pod \"controller-manager-7f6bd8fd79-wcpq6\" (UID: \"d7941ab5-6df0-46a7-ac1d-f37cb81f87ba\") " pod="openshift-controller-manager/controller-manager-7f6bd8fd79-wcpq6" Mar 11 09:02:34 crc kubenswrapper[4840]: I0311 09:02:34.377620 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x54xv\" (UniqueName: \"kubernetes.io/projected/d7941ab5-6df0-46a7-ac1d-f37cb81f87ba-kube-api-access-x54xv\") pod \"controller-manager-7f6bd8fd79-wcpq6\" (UID: \"d7941ab5-6df0-46a7-ac1d-f37cb81f87ba\") " pod="openshift-controller-manager/controller-manager-7f6bd8fd79-wcpq6" Mar 11 09:02:34 crc kubenswrapper[4840]: I0311 09:02:34.378337 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krx58\" (UniqueName: \"kubernetes.io/projected/08cfb0e8-6d25-470f-a462-f4321182ec8d-kube-api-access-krx58\") pod \"route-controller-manager-6685f4fd5b-v8hx7\" (UID: \"08cfb0e8-6d25-470f-a462-f4321182ec8d\") " pod="openshift-route-controller-manager/route-controller-manager-6685f4fd5b-v8hx7" Mar 11 09:02:34 crc kubenswrapper[4840]: I0311 09:02:34.482438 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6685f4fd5b-v8hx7" Mar 11 09:02:34 crc kubenswrapper[4840]: I0311 09:02:34.495397 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7f6bd8fd79-wcpq6" Mar 11 09:02:34 crc kubenswrapper[4840]: I0311 09:02:34.817418 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6685f4fd5b-v8hx7"] Mar 11 09:02:35 crc kubenswrapper[4840]: I0311 09:02:35.059387 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6685f4fd5b-v8hx7" event={"ID":"08cfb0e8-6d25-470f-a462-f4321182ec8d","Type":"ContainerStarted","Data":"d85bef3449cf61aa5705b1e1ab7b7012ad914d7b4ca1ded7593477c235ecf94d"} Mar 11 09:02:35 crc kubenswrapper[4840]: I0311 09:02:35.059525 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6685f4fd5b-v8hx7" event={"ID":"08cfb0e8-6d25-470f-a462-f4321182ec8d","Type":"ContainerStarted","Data":"47a690dca09ba8d1b96f6a3f42849d9cfa173d57612c98e922a8ff997b93c387"} Mar 11 09:02:35 crc kubenswrapper[4840]: I0311 09:02:35.060141 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6685f4fd5b-v8hx7" Mar 11 09:02:35 crc kubenswrapper[4840]: I0311 09:02:35.064675 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7f6bd8fd79-wcpq6"] Mar 11 09:02:35 crc kubenswrapper[4840]: W0311 09:02:35.068686 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd7941ab5_6df0_46a7_ac1d_f37cb81f87ba.slice/crio-f24f915955b356adcd82623411874df2b25c2ced7a3e58ae695022e058a6efb9 WatchSource:0}: Error finding container 
f24f915955b356adcd82623411874df2b25c2ced7a3e58ae695022e058a6efb9: Status 404 returned error can't find the container with id f24f915955b356adcd82623411874df2b25c2ced7a3e58ae695022e058a6efb9 Mar 11 09:02:35 crc kubenswrapper[4840]: I0311 09:02:35.085330 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6685f4fd5b-v8hx7" podStartSLOduration=3.085304429 podStartE2EDuration="3.085304429s" podCreationTimestamp="2026-03-11 09:02:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:02:35.082371676 +0000 UTC m=+353.748041511" watchObservedRunningTime="2026-03-11 09:02:35.085304429 +0000 UTC m=+353.750974244" Mar 11 09:02:35 crc kubenswrapper[4840]: I0311 09:02:35.324292 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6685f4fd5b-v8hx7" Mar 11 09:02:36 crc kubenswrapper[4840]: I0311 09:02:36.067930 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f13af28-409e-41c3-91f7-b8108aa8a032" path="/var/lib/kubelet/pods/7f13af28-409e-41c3-91f7-b8108aa8a032/volumes" Mar 11 09:02:36 crc kubenswrapper[4840]: I0311 09:02:36.069314 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c0da909-9069-4e7b-8d2c-6853addf2cf8" path="/var/lib/kubelet/pods/9c0da909-9069-4e7b-8d2c-6853addf2cf8/volumes" Mar 11 09:02:36 crc kubenswrapper[4840]: I0311 09:02:36.072513 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7f6bd8fd79-wcpq6" event={"ID":"d7941ab5-6df0-46a7-ac1d-f37cb81f87ba","Type":"ContainerStarted","Data":"0a47003e3a0d5f63fedc58fe54ad497f1d1ff69dafc5be47efd8878d8dc83a33"} Mar 11 09:02:36 crc kubenswrapper[4840]: I0311 09:02:36.072570 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-7f6bd8fd79-wcpq6" event={"ID":"d7941ab5-6df0-46a7-ac1d-f37cb81f87ba","Type":"ContainerStarted","Data":"f24f915955b356adcd82623411874df2b25c2ced7a3e58ae695022e058a6efb9"} Mar 11 09:02:36 crc kubenswrapper[4840]: I0311 09:02:36.094515 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7f6bd8fd79-wcpq6" podStartSLOduration=4.094488133 podStartE2EDuration="4.094488133s" podCreationTimestamp="2026-03-11 09:02:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:02:36.091241573 +0000 UTC m=+354.756911398" watchObservedRunningTime="2026-03-11 09:02:36.094488133 +0000 UTC m=+354.760157948" Mar 11 09:02:37 crc kubenswrapper[4840]: I0311 09:02:37.079537 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7f6bd8fd79-wcpq6" Mar 11 09:02:37 crc kubenswrapper[4840]: I0311 09:02:37.086095 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7f6bd8fd79-wcpq6" Mar 11 09:03:12 crc kubenswrapper[4840]: I0311 09:03:12.590666 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7f6bd8fd79-wcpq6"] Mar 11 09:03:12 crc kubenswrapper[4840]: I0311 09:03:12.592363 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7f6bd8fd79-wcpq6" podUID="d7941ab5-6df0-46a7-ac1d-f37cb81f87ba" containerName="controller-manager" containerID="cri-o://0a47003e3a0d5f63fedc58fe54ad497f1d1ff69dafc5be47efd8878d8dc83a33" gracePeriod=30 Mar 11 09:03:13 crc kubenswrapper[4840]: I0311 09:03:13.036883 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7f6bd8fd79-wcpq6" Mar 11 09:03:13 crc kubenswrapper[4840]: I0311 09:03:13.184216 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7941ab5-6df0-46a7-ac1d-f37cb81f87ba-config\") pod \"d7941ab5-6df0-46a7-ac1d-f37cb81f87ba\" (UID: \"d7941ab5-6df0-46a7-ac1d-f37cb81f87ba\") " Mar 11 09:03:13 crc kubenswrapper[4840]: I0311 09:03:13.184329 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x54xv\" (UniqueName: \"kubernetes.io/projected/d7941ab5-6df0-46a7-ac1d-f37cb81f87ba-kube-api-access-x54xv\") pod \"d7941ab5-6df0-46a7-ac1d-f37cb81f87ba\" (UID: \"d7941ab5-6df0-46a7-ac1d-f37cb81f87ba\") " Mar 11 09:03:13 crc kubenswrapper[4840]: I0311 09:03:13.184385 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7941ab5-6df0-46a7-ac1d-f37cb81f87ba-serving-cert\") pod \"d7941ab5-6df0-46a7-ac1d-f37cb81f87ba\" (UID: \"d7941ab5-6df0-46a7-ac1d-f37cb81f87ba\") " Mar 11 09:03:13 crc kubenswrapper[4840]: I0311 09:03:13.184452 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d7941ab5-6df0-46a7-ac1d-f37cb81f87ba-client-ca\") pod \"d7941ab5-6df0-46a7-ac1d-f37cb81f87ba\" (UID: \"d7941ab5-6df0-46a7-ac1d-f37cb81f87ba\") " Mar 11 09:03:13 crc kubenswrapper[4840]: I0311 09:03:13.184607 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d7941ab5-6df0-46a7-ac1d-f37cb81f87ba-proxy-ca-bundles\") pod \"d7941ab5-6df0-46a7-ac1d-f37cb81f87ba\" (UID: \"d7941ab5-6df0-46a7-ac1d-f37cb81f87ba\") " Mar 11 09:03:13 crc kubenswrapper[4840]: I0311 09:03:13.185527 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/d7941ab5-6df0-46a7-ac1d-f37cb81f87ba-client-ca" (OuterVolumeSpecName: "client-ca") pod "d7941ab5-6df0-46a7-ac1d-f37cb81f87ba" (UID: "d7941ab5-6df0-46a7-ac1d-f37cb81f87ba"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:03:13 crc kubenswrapper[4840]: I0311 09:03:13.185556 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7941ab5-6df0-46a7-ac1d-f37cb81f87ba-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "d7941ab5-6df0-46a7-ac1d-f37cb81f87ba" (UID: "d7941ab5-6df0-46a7-ac1d-f37cb81f87ba"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:03:13 crc kubenswrapper[4840]: I0311 09:03:13.186584 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7941ab5-6df0-46a7-ac1d-f37cb81f87ba-config" (OuterVolumeSpecName: "config") pod "d7941ab5-6df0-46a7-ac1d-f37cb81f87ba" (UID: "d7941ab5-6df0-46a7-ac1d-f37cb81f87ba"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:03:13 crc kubenswrapper[4840]: I0311 09:03:13.191759 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7941ab5-6df0-46a7-ac1d-f37cb81f87ba-kube-api-access-x54xv" (OuterVolumeSpecName: "kube-api-access-x54xv") pod "d7941ab5-6df0-46a7-ac1d-f37cb81f87ba" (UID: "d7941ab5-6df0-46a7-ac1d-f37cb81f87ba"). InnerVolumeSpecName "kube-api-access-x54xv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:03:13 crc kubenswrapper[4840]: I0311 09:03:13.198771 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7941ab5-6df0-46a7-ac1d-f37cb81f87ba-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d7941ab5-6df0-46a7-ac1d-f37cb81f87ba" (UID: "d7941ab5-6df0-46a7-ac1d-f37cb81f87ba"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:03:13 crc kubenswrapper[4840]: I0311 09:03:13.286550 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x54xv\" (UniqueName: \"kubernetes.io/projected/d7941ab5-6df0-46a7-ac1d-f37cb81f87ba-kube-api-access-x54xv\") on node \"crc\" DevicePath \"\"" Mar 11 09:03:13 crc kubenswrapper[4840]: I0311 09:03:13.286605 4840 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7941ab5-6df0-46a7-ac1d-f37cb81f87ba-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 11 09:03:13 crc kubenswrapper[4840]: I0311 09:03:13.286619 4840 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d7941ab5-6df0-46a7-ac1d-f37cb81f87ba-client-ca\") on node \"crc\" DevicePath \"\"" Mar 11 09:03:13 crc kubenswrapper[4840]: I0311 09:03:13.286631 4840 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d7941ab5-6df0-46a7-ac1d-f37cb81f87ba-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 11 09:03:13 crc kubenswrapper[4840]: I0311 09:03:13.286643 4840 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7941ab5-6df0-46a7-ac1d-f37cb81f87ba-config\") on node \"crc\" DevicePath \"\"" Mar 11 09:03:13 crc kubenswrapper[4840]: I0311 09:03:13.322762 4840 generic.go:334] "Generic (PLEG): container finished" podID="d7941ab5-6df0-46a7-ac1d-f37cb81f87ba" containerID="0a47003e3a0d5f63fedc58fe54ad497f1d1ff69dafc5be47efd8878d8dc83a33" exitCode=0 Mar 11 09:03:13 crc kubenswrapper[4840]: I0311 09:03:13.322814 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7f6bd8fd79-wcpq6" event={"ID":"d7941ab5-6df0-46a7-ac1d-f37cb81f87ba","Type":"ContainerDied","Data":"0a47003e3a0d5f63fedc58fe54ad497f1d1ff69dafc5be47efd8878d8dc83a33"} Mar 11 09:03:13 crc 
kubenswrapper[4840]: I0311 09:03:13.322845 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7f6bd8fd79-wcpq6" event={"ID":"d7941ab5-6df0-46a7-ac1d-f37cb81f87ba","Type":"ContainerDied","Data":"f24f915955b356adcd82623411874df2b25c2ced7a3e58ae695022e058a6efb9"} Mar 11 09:03:13 crc kubenswrapper[4840]: I0311 09:03:13.322865 4840 scope.go:117] "RemoveContainer" containerID="0a47003e3a0d5f63fedc58fe54ad497f1d1ff69dafc5be47efd8878d8dc83a33" Mar 11 09:03:13 crc kubenswrapper[4840]: I0311 09:03:13.323010 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7f6bd8fd79-wcpq6" Mar 11 09:03:13 crc kubenswrapper[4840]: I0311 09:03:13.352427 4840 scope.go:117] "RemoveContainer" containerID="0a47003e3a0d5f63fedc58fe54ad497f1d1ff69dafc5be47efd8878d8dc83a33" Mar 11 09:03:13 crc kubenswrapper[4840]: E0311 09:03:13.353341 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a47003e3a0d5f63fedc58fe54ad497f1d1ff69dafc5be47efd8878d8dc83a33\": container with ID starting with 0a47003e3a0d5f63fedc58fe54ad497f1d1ff69dafc5be47efd8878d8dc83a33 not found: ID does not exist" containerID="0a47003e3a0d5f63fedc58fe54ad497f1d1ff69dafc5be47efd8878d8dc83a33" Mar 11 09:03:13 crc kubenswrapper[4840]: I0311 09:03:13.353423 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a47003e3a0d5f63fedc58fe54ad497f1d1ff69dafc5be47efd8878d8dc83a33"} err="failed to get container status \"0a47003e3a0d5f63fedc58fe54ad497f1d1ff69dafc5be47efd8878d8dc83a33\": rpc error: code = NotFound desc = could not find container \"0a47003e3a0d5f63fedc58fe54ad497f1d1ff69dafc5be47efd8878d8dc83a33\": container with ID starting with 0a47003e3a0d5f63fedc58fe54ad497f1d1ff69dafc5be47efd8878d8dc83a33 not found: ID does not exist" Mar 11 09:03:13 crc kubenswrapper[4840]: I0311 
09:03:13.369206 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7f6bd8fd79-wcpq6"] Mar 11 09:03:13 crc kubenswrapper[4840]: I0311 09:03:13.372895 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7f6bd8fd79-wcpq6"] Mar 11 09:03:14 crc kubenswrapper[4840]: I0311 09:03:14.073749 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7941ab5-6df0-46a7-ac1d-f37cb81f87ba" path="/var/lib/kubelet/pods/d7941ab5-6df0-46a7-ac1d-f37cb81f87ba/volumes" Mar 11 09:03:14 crc kubenswrapper[4840]: I0311 09:03:14.165585 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-569c97bdd5-v2sqh"] Mar 11 09:03:14 crc kubenswrapper[4840]: E0311 09:03:14.166304 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7941ab5-6df0-46a7-ac1d-f37cb81f87ba" containerName="controller-manager" Mar 11 09:03:14 crc kubenswrapper[4840]: I0311 09:03:14.166352 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7941ab5-6df0-46a7-ac1d-f37cb81f87ba" containerName="controller-manager" Mar 11 09:03:14 crc kubenswrapper[4840]: I0311 09:03:14.166677 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7941ab5-6df0-46a7-ac1d-f37cb81f87ba" containerName="controller-manager" Mar 11 09:03:14 crc kubenswrapper[4840]: I0311 09:03:14.167353 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-569c97bdd5-v2sqh" Mar 11 09:03:14 crc kubenswrapper[4840]: I0311 09:03:14.172594 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 11 09:03:14 crc kubenswrapper[4840]: I0311 09:03:14.172991 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 11 09:03:14 crc kubenswrapper[4840]: I0311 09:03:14.174515 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 11 09:03:14 crc kubenswrapper[4840]: I0311 09:03:14.174554 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 11 09:03:14 crc kubenswrapper[4840]: I0311 09:03:14.174777 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 11 09:03:14 crc kubenswrapper[4840]: I0311 09:03:14.174954 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Mar 11 09:03:14 crc kubenswrapper[4840]: I0311 09:03:14.184347 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-569c97bdd5-v2sqh"] Mar 11 09:03:14 crc kubenswrapper[4840]: I0311 09:03:14.185108 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 11 09:03:14 crc kubenswrapper[4840]: I0311 09:03:14.302003 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d68fb048-0b04-4312-a3cc-72f87de692e2-proxy-ca-bundles\") pod \"controller-manager-569c97bdd5-v2sqh\" (UID: \"d68fb048-0b04-4312-a3cc-72f87de692e2\") " 
pod="openshift-controller-manager/controller-manager-569c97bdd5-v2sqh" Mar 11 09:03:14 crc kubenswrapper[4840]: I0311 09:03:14.302062 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4pfs\" (UniqueName: \"kubernetes.io/projected/d68fb048-0b04-4312-a3cc-72f87de692e2-kube-api-access-z4pfs\") pod \"controller-manager-569c97bdd5-v2sqh\" (UID: \"d68fb048-0b04-4312-a3cc-72f87de692e2\") " pod="openshift-controller-manager/controller-manager-569c97bdd5-v2sqh" Mar 11 09:03:14 crc kubenswrapper[4840]: I0311 09:03:14.302107 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d68fb048-0b04-4312-a3cc-72f87de692e2-config\") pod \"controller-manager-569c97bdd5-v2sqh\" (UID: \"d68fb048-0b04-4312-a3cc-72f87de692e2\") " pod="openshift-controller-manager/controller-manager-569c97bdd5-v2sqh" Mar 11 09:03:14 crc kubenswrapper[4840]: I0311 09:03:14.302136 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d68fb048-0b04-4312-a3cc-72f87de692e2-serving-cert\") pod \"controller-manager-569c97bdd5-v2sqh\" (UID: \"d68fb048-0b04-4312-a3cc-72f87de692e2\") " pod="openshift-controller-manager/controller-manager-569c97bdd5-v2sqh" Mar 11 09:03:14 crc kubenswrapper[4840]: I0311 09:03:14.302209 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d68fb048-0b04-4312-a3cc-72f87de692e2-client-ca\") pod \"controller-manager-569c97bdd5-v2sqh\" (UID: \"d68fb048-0b04-4312-a3cc-72f87de692e2\") " pod="openshift-controller-manager/controller-manager-569c97bdd5-v2sqh" Mar 11 09:03:14 crc kubenswrapper[4840]: I0311 09:03:14.403690 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/d68fb048-0b04-4312-a3cc-72f87de692e2-proxy-ca-bundles\") pod \"controller-manager-569c97bdd5-v2sqh\" (UID: \"d68fb048-0b04-4312-a3cc-72f87de692e2\") " pod="openshift-controller-manager/controller-manager-569c97bdd5-v2sqh" Mar 11 09:03:14 crc kubenswrapper[4840]: I0311 09:03:14.403757 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4pfs\" (UniqueName: \"kubernetes.io/projected/d68fb048-0b04-4312-a3cc-72f87de692e2-kube-api-access-z4pfs\") pod \"controller-manager-569c97bdd5-v2sqh\" (UID: \"d68fb048-0b04-4312-a3cc-72f87de692e2\") " pod="openshift-controller-manager/controller-manager-569c97bdd5-v2sqh" Mar 11 09:03:14 crc kubenswrapper[4840]: I0311 09:03:14.403791 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d68fb048-0b04-4312-a3cc-72f87de692e2-config\") pod \"controller-manager-569c97bdd5-v2sqh\" (UID: \"d68fb048-0b04-4312-a3cc-72f87de692e2\") " pod="openshift-controller-manager/controller-manager-569c97bdd5-v2sqh" Mar 11 09:03:14 crc kubenswrapper[4840]: I0311 09:03:14.403821 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d68fb048-0b04-4312-a3cc-72f87de692e2-serving-cert\") pod \"controller-manager-569c97bdd5-v2sqh\" (UID: \"d68fb048-0b04-4312-a3cc-72f87de692e2\") " pod="openshift-controller-manager/controller-manager-569c97bdd5-v2sqh" Mar 11 09:03:14 crc kubenswrapper[4840]: I0311 09:03:14.403889 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d68fb048-0b04-4312-a3cc-72f87de692e2-client-ca\") pod \"controller-manager-569c97bdd5-v2sqh\" (UID: \"d68fb048-0b04-4312-a3cc-72f87de692e2\") " pod="openshift-controller-manager/controller-manager-569c97bdd5-v2sqh" Mar 11 09:03:14 crc kubenswrapper[4840]: I0311 09:03:14.405689 4840 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d68fb048-0b04-4312-a3cc-72f87de692e2-client-ca\") pod \"controller-manager-569c97bdd5-v2sqh\" (UID: \"d68fb048-0b04-4312-a3cc-72f87de692e2\") " pod="openshift-controller-manager/controller-manager-569c97bdd5-v2sqh" Mar 11 09:03:14 crc kubenswrapper[4840]: I0311 09:03:14.407806 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d68fb048-0b04-4312-a3cc-72f87de692e2-config\") pod \"controller-manager-569c97bdd5-v2sqh\" (UID: \"d68fb048-0b04-4312-a3cc-72f87de692e2\") " pod="openshift-controller-manager/controller-manager-569c97bdd5-v2sqh" Mar 11 09:03:14 crc kubenswrapper[4840]: I0311 09:03:14.407813 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d68fb048-0b04-4312-a3cc-72f87de692e2-proxy-ca-bundles\") pod \"controller-manager-569c97bdd5-v2sqh\" (UID: \"d68fb048-0b04-4312-a3cc-72f87de692e2\") " pod="openshift-controller-manager/controller-manager-569c97bdd5-v2sqh" Mar 11 09:03:14 crc kubenswrapper[4840]: I0311 09:03:14.413088 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d68fb048-0b04-4312-a3cc-72f87de692e2-serving-cert\") pod \"controller-manager-569c97bdd5-v2sqh\" (UID: \"d68fb048-0b04-4312-a3cc-72f87de692e2\") " pod="openshift-controller-manager/controller-manager-569c97bdd5-v2sqh" Mar 11 09:03:14 crc kubenswrapper[4840]: I0311 09:03:14.429297 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4pfs\" (UniqueName: \"kubernetes.io/projected/d68fb048-0b04-4312-a3cc-72f87de692e2-kube-api-access-z4pfs\") pod \"controller-manager-569c97bdd5-v2sqh\" (UID: \"d68fb048-0b04-4312-a3cc-72f87de692e2\") " pod="openshift-controller-manager/controller-manager-569c97bdd5-v2sqh" Mar 11 
09:03:14 crc kubenswrapper[4840]: I0311 09:03:14.493370 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-569c97bdd5-v2sqh" Mar 11 09:03:14 crc kubenswrapper[4840]: I0311 09:03:14.741252 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-569c97bdd5-v2sqh"] Mar 11 09:03:15 crc kubenswrapper[4840]: I0311 09:03:15.349884 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-569c97bdd5-v2sqh" event={"ID":"d68fb048-0b04-4312-a3cc-72f87de692e2","Type":"ContainerStarted","Data":"0e9a71683b68ac47d802dfa83056e00e05d7e11b966e76a0308953e21d62df2b"} Mar 11 09:03:15 crc kubenswrapper[4840]: I0311 09:03:15.350276 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-569c97bdd5-v2sqh" event={"ID":"d68fb048-0b04-4312-a3cc-72f87de692e2","Type":"ContainerStarted","Data":"fe36293ddadeccb4906b45c1d584c5dce40145b1f2dbfee54accbc55e9351e8a"} Mar 11 09:03:15 crc kubenswrapper[4840]: I0311 09:03:15.350308 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-569c97bdd5-v2sqh" Mar 11 09:03:15 crc kubenswrapper[4840]: I0311 09:03:15.358381 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-569c97bdd5-v2sqh" Mar 11 09:03:15 crc kubenswrapper[4840]: I0311 09:03:15.386349 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-569c97bdd5-v2sqh" podStartSLOduration=3.3863290360000002 podStartE2EDuration="3.386329036s" podCreationTimestamp="2026-03-11 09:03:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:03:15.368832647 +0000 UTC 
m=+394.034502472" watchObservedRunningTime="2026-03-11 09:03:15.386329036 +0000 UTC m=+394.051998851" Mar 11 09:03:21 crc kubenswrapper[4840]: I0311 09:03:21.675381 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6wxnb"] Mar 11 09:03:21 crc kubenswrapper[4840]: I0311 09:03:21.676401 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6wxnb" podUID="edba0e9a-0f5d-4aea-bc9c-7eff83b36a8f" containerName="registry-server" containerID="cri-o://1c3d69ee82022c3839bad445e555126a4e28c053d7b2e869ce008169c4fb2a7b" gracePeriod=30 Mar 11 09:03:21 crc kubenswrapper[4840]: I0311 09:03:21.695690 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-h578g"] Mar 11 09:03:21 crc kubenswrapper[4840]: I0311 09:03:21.696087 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-h578g" podUID="b17f6fce-66c6-45f7-8e1b-be9ffe4f16a6" containerName="registry-server" containerID="cri-o://27ad3910a22bc8888e452aedc5441f68a7394e29f9ebebc7ce53b0fc45a30fba" gracePeriod=30 Mar 11 09:03:21 crc kubenswrapper[4840]: I0311 09:03:21.704570 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-d6cv4"] Mar 11 09:03:21 crc kubenswrapper[4840]: I0311 09:03:21.709650 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-d6cv4" podUID="33326f34-f442-42be-9bd2-39cf5627b953" containerName="marketplace-operator" containerID="cri-o://9fffa79656eba5d6afe49decfded5d4c1998218eac5dbfc9a9bb4f5e71df022e" gracePeriod=30 Mar 11 09:03:21 crc kubenswrapper[4840]: I0311 09:03:21.716004 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-swqkn"] Mar 11 09:03:21 crc kubenswrapper[4840]: I0311 09:03:21.716341 
4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-swqkn" podUID="3ea6e709-84f7-4603-bcda-6d336d3a96fc" containerName="registry-server" containerID="cri-o://b3810fa86559b7398625eb5bff30db6085d1ea88a8cd21fc4239ae7f999b7347" gracePeriod=30 Mar 11 09:03:21 crc kubenswrapper[4840]: I0311 09:03:21.733776 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-2xfdw"] Mar 11 09:03:21 crc kubenswrapper[4840]: I0311 09:03:21.734630 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-2xfdw" Mar 11 09:03:21 crc kubenswrapper[4840]: I0311 09:03:21.745714 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fp456"] Mar 11 09:03:21 crc kubenswrapper[4840]: I0311 09:03:21.746066 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-fp456" podUID="5be5a417-86c5-43f6-b238-9c0c498028ab" containerName="registry-server" containerID="cri-o://3690c91110a373b4c2061a5a2ab79f4ad4d340fb72b05374ad1956e1d66d4e5e" gracePeriod=30 Mar 11 09:03:21 crc kubenswrapper[4840]: I0311 09:03:21.757001 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-2xfdw"] Mar 11 09:03:21 crc kubenswrapper[4840]: I0311 09:03:21.848821 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/73479d9a-07ac-4487-b779-a59d095c8704-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-2xfdw\" (UID: \"73479d9a-07ac-4487-b779-a59d095c8704\") " pod="openshift-marketplace/marketplace-operator-79b997595-2xfdw" Mar 11 09:03:21 crc kubenswrapper[4840]: I0311 09:03:21.848885 4840 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5z4j\" (UniqueName: \"kubernetes.io/projected/73479d9a-07ac-4487-b779-a59d095c8704-kube-api-access-c5z4j\") pod \"marketplace-operator-79b997595-2xfdw\" (UID: \"73479d9a-07ac-4487-b779-a59d095c8704\") " pod="openshift-marketplace/marketplace-operator-79b997595-2xfdw" Mar 11 09:03:21 crc kubenswrapper[4840]: I0311 09:03:21.848970 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/73479d9a-07ac-4487-b779-a59d095c8704-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-2xfdw\" (UID: \"73479d9a-07ac-4487-b779-a59d095c8704\") " pod="openshift-marketplace/marketplace-operator-79b997595-2xfdw" Mar 11 09:03:21 crc kubenswrapper[4840]: I0311 09:03:21.951416 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/73479d9a-07ac-4487-b779-a59d095c8704-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-2xfdw\" (UID: \"73479d9a-07ac-4487-b779-a59d095c8704\") " pod="openshift-marketplace/marketplace-operator-79b997595-2xfdw" Mar 11 09:03:21 crc kubenswrapper[4840]: I0311 09:03:21.952016 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5z4j\" (UniqueName: \"kubernetes.io/projected/73479d9a-07ac-4487-b779-a59d095c8704-kube-api-access-c5z4j\") pod \"marketplace-operator-79b997595-2xfdw\" (UID: \"73479d9a-07ac-4487-b779-a59d095c8704\") " pod="openshift-marketplace/marketplace-operator-79b997595-2xfdw" Mar 11 09:03:21 crc kubenswrapper[4840]: I0311 09:03:21.952118 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/73479d9a-07ac-4487-b779-a59d095c8704-marketplace-operator-metrics\") pod 
\"marketplace-operator-79b997595-2xfdw\" (UID: \"73479d9a-07ac-4487-b779-a59d095c8704\") " pod="openshift-marketplace/marketplace-operator-79b997595-2xfdw" Mar 11 09:03:21 crc kubenswrapper[4840]: I0311 09:03:21.953598 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/73479d9a-07ac-4487-b779-a59d095c8704-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-2xfdw\" (UID: \"73479d9a-07ac-4487-b779-a59d095c8704\") " pod="openshift-marketplace/marketplace-operator-79b997595-2xfdw" Mar 11 09:03:21 crc kubenswrapper[4840]: I0311 09:03:21.961892 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/73479d9a-07ac-4487-b779-a59d095c8704-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-2xfdw\" (UID: \"73479d9a-07ac-4487-b779-a59d095c8704\") " pod="openshift-marketplace/marketplace-operator-79b997595-2xfdw" Mar 11 09:03:21 crc kubenswrapper[4840]: I0311 09:03:21.970881 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5z4j\" (UniqueName: \"kubernetes.io/projected/73479d9a-07ac-4487-b779-a59d095c8704-kube-api-access-c5z4j\") pod \"marketplace-operator-79b997595-2xfdw\" (UID: \"73479d9a-07ac-4487-b779-a59d095c8704\") " pod="openshift-marketplace/marketplace-operator-79b997595-2xfdw" Mar 11 09:03:22 crc kubenswrapper[4840]: I0311 09:03:22.065706 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-2xfdw" Mar 11 09:03:22 crc kubenswrapper[4840]: I0311 09:03:22.411575 4840 generic.go:334] "Generic (PLEG): container finished" podID="b17f6fce-66c6-45f7-8e1b-be9ffe4f16a6" containerID="27ad3910a22bc8888e452aedc5441f68a7394e29f9ebebc7ce53b0fc45a30fba" exitCode=0 Mar 11 09:03:22 crc kubenswrapper[4840]: I0311 09:03:22.411608 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h578g" event={"ID":"b17f6fce-66c6-45f7-8e1b-be9ffe4f16a6","Type":"ContainerDied","Data":"27ad3910a22bc8888e452aedc5441f68a7394e29f9ebebc7ce53b0fc45a30fba"} Mar 11 09:03:22 crc kubenswrapper[4840]: I0311 09:03:22.412975 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6wxnb" Mar 11 09:03:22 crc kubenswrapper[4840]: I0311 09:03:22.414326 4840 generic.go:334] "Generic (PLEG): container finished" podID="33326f34-f442-42be-9bd2-39cf5627b953" containerID="9fffa79656eba5d6afe49decfded5d4c1998218eac5dbfc9a9bb4f5e71df022e" exitCode=0 Mar 11 09:03:22 crc kubenswrapper[4840]: I0311 09:03:22.414518 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-d6cv4" event={"ID":"33326f34-f442-42be-9bd2-39cf5627b953","Type":"ContainerDied","Data":"9fffa79656eba5d6afe49decfded5d4c1998218eac5dbfc9a9bb4f5e71df022e"} Mar 11 09:03:22 crc kubenswrapper[4840]: I0311 09:03:22.414802 4840 scope.go:117] "RemoveContainer" containerID="f5b71f97eb6b7af7a3e8c7066dd678dfc244df8ee1e9045fd01005014d052509" Mar 11 09:03:22 crc kubenswrapper[4840]: I0311 09:03:22.430054 4840 generic.go:334] "Generic (PLEG): container finished" podID="3ea6e709-84f7-4603-bcda-6d336d3a96fc" containerID="b3810fa86559b7398625eb5bff30db6085d1ea88a8cd21fc4239ae7f999b7347" exitCode=0 Mar 11 09:03:22 crc kubenswrapper[4840]: I0311 09:03:22.430157 4840 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/redhat-marketplace-swqkn" event={"ID":"3ea6e709-84f7-4603-bcda-6d336d3a96fc","Type":"ContainerDied","Data":"b3810fa86559b7398625eb5bff30db6085d1ea88a8cd21fc4239ae7f999b7347"} Mar 11 09:03:22 crc kubenswrapper[4840]: I0311 09:03:22.433156 4840 generic.go:334] "Generic (PLEG): container finished" podID="edba0e9a-0f5d-4aea-bc9c-7eff83b36a8f" containerID="1c3d69ee82022c3839bad445e555126a4e28c053d7b2e869ce008169c4fb2a7b" exitCode=0 Mar 11 09:03:22 crc kubenswrapper[4840]: I0311 09:03:22.433224 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6wxnb" event={"ID":"edba0e9a-0f5d-4aea-bc9c-7eff83b36a8f","Type":"ContainerDied","Data":"1c3d69ee82022c3839bad445e555126a4e28c053d7b2e869ce008169c4fb2a7b"} Mar 11 09:03:22 crc kubenswrapper[4840]: I0311 09:03:22.433258 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6wxnb" event={"ID":"edba0e9a-0f5d-4aea-bc9c-7eff83b36a8f","Type":"ContainerDied","Data":"a20fbd43217c6b1aacfa1bc6805a43077b1833ce908d8f4cd0cd2409121bf442"} Mar 11 09:03:22 crc kubenswrapper[4840]: I0311 09:03:22.433421 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6wxnb" Mar 11 09:03:22 crc kubenswrapper[4840]: I0311 09:03:22.457523 4840 generic.go:334] "Generic (PLEG): container finished" podID="5be5a417-86c5-43f6-b238-9c0c498028ab" containerID="3690c91110a373b4c2061a5a2ab79f4ad4d340fb72b05374ad1956e1d66d4e5e" exitCode=0 Mar 11 09:03:22 crc kubenswrapper[4840]: I0311 09:03:22.457595 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fp456" event={"ID":"5be5a417-86c5-43f6-b238-9c0c498028ab","Type":"ContainerDied","Data":"3690c91110a373b4c2061a5a2ab79f4ad4d340fb72b05374ad1956e1d66d4e5e"} Mar 11 09:03:22 crc kubenswrapper[4840]: I0311 09:03:22.464340 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cv94k\" (UniqueName: \"kubernetes.io/projected/edba0e9a-0f5d-4aea-bc9c-7eff83b36a8f-kube-api-access-cv94k\") pod \"edba0e9a-0f5d-4aea-bc9c-7eff83b36a8f\" (UID: \"edba0e9a-0f5d-4aea-bc9c-7eff83b36a8f\") " Mar 11 09:03:22 crc kubenswrapper[4840]: I0311 09:03:22.467071 4840 scope.go:117] "RemoveContainer" containerID="1c3d69ee82022c3839bad445e555126a4e28c053d7b2e869ce008169c4fb2a7b" Mar 11 09:03:22 crc kubenswrapper[4840]: I0311 09:03:22.478689 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/edba0e9a-0f5d-4aea-bc9c-7eff83b36a8f-kube-api-access-cv94k" (OuterVolumeSpecName: "kube-api-access-cv94k") pod "edba0e9a-0f5d-4aea-bc9c-7eff83b36a8f" (UID: "edba0e9a-0f5d-4aea-bc9c-7eff83b36a8f"). InnerVolumeSpecName "kube-api-access-cv94k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:03:22 crc kubenswrapper[4840]: I0311 09:03:22.481700 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/edba0e9a-0f5d-4aea-bc9c-7eff83b36a8f-utilities\") pod \"edba0e9a-0f5d-4aea-bc9c-7eff83b36a8f\" (UID: \"edba0e9a-0f5d-4aea-bc9c-7eff83b36a8f\") " Mar 11 09:03:22 crc kubenswrapper[4840]: I0311 09:03:22.481802 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/edba0e9a-0f5d-4aea-bc9c-7eff83b36a8f-catalog-content\") pod \"edba0e9a-0f5d-4aea-bc9c-7eff83b36a8f\" (UID: \"edba0e9a-0f5d-4aea-bc9c-7eff83b36a8f\") " Mar 11 09:03:22 crc kubenswrapper[4840]: I0311 09:03:22.483176 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/edba0e9a-0f5d-4aea-bc9c-7eff83b36a8f-utilities" (OuterVolumeSpecName: "utilities") pod "edba0e9a-0f5d-4aea-bc9c-7eff83b36a8f" (UID: "edba0e9a-0f5d-4aea-bc9c-7eff83b36a8f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:03:22 crc kubenswrapper[4840]: I0311 09:03:22.484695 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cv94k\" (UniqueName: \"kubernetes.io/projected/edba0e9a-0f5d-4aea-bc9c-7eff83b36a8f-kube-api-access-cv94k\") on node \"crc\" DevicePath \"\"" Mar 11 09:03:22 crc kubenswrapper[4840]: I0311 09:03:22.484724 4840 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/edba0e9a-0f5d-4aea-bc9c-7eff83b36a8f-utilities\") on node \"crc\" DevicePath \"\"" Mar 11 09:03:22 crc kubenswrapper[4840]: I0311 09:03:22.485192 4840 scope.go:117] "RemoveContainer" containerID="0d2f27125a8e6137d993fad1a035889c8d1a06a7394e97d1ed53eebc8057ef7b" Mar 11 09:03:22 crc kubenswrapper[4840]: I0311 09:03:22.523699 4840 scope.go:117] "RemoveContainer" containerID="38d28ebda72bb5a8472b48825bb52b45d0a5f3768bb0fd02b11dde128850e8d4" Mar 11 09:03:22 crc kubenswrapper[4840]: I0311 09:03:22.531233 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-swqkn" Mar 11 09:03:22 crc kubenswrapper[4840]: I0311 09:03:22.543053 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-d6cv4" Mar 11 09:03:22 crc kubenswrapper[4840]: I0311 09:03:22.547323 4840 scope.go:117] "RemoveContainer" containerID="1c3d69ee82022c3839bad445e555126a4e28c053d7b2e869ce008169c4fb2a7b" Mar 11 09:03:22 crc kubenswrapper[4840]: E0311 09:03:22.547812 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c3d69ee82022c3839bad445e555126a4e28c053d7b2e869ce008169c4fb2a7b\": container with ID starting with 1c3d69ee82022c3839bad445e555126a4e28c053d7b2e869ce008169c4fb2a7b not found: ID does not exist" containerID="1c3d69ee82022c3839bad445e555126a4e28c053d7b2e869ce008169c4fb2a7b" Mar 11 09:03:22 crc kubenswrapper[4840]: I0311 09:03:22.547877 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c3d69ee82022c3839bad445e555126a4e28c053d7b2e869ce008169c4fb2a7b"} err="failed to get container status \"1c3d69ee82022c3839bad445e555126a4e28c053d7b2e869ce008169c4fb2a7b\": rpc error: code = NotFound desc = could not find container \"1c3d69ee82022c3839bad445e555126a4e28c053d7b2e869ce008169c4fb2a7b\": container with ID starting with 1c3d69ee82022c3839bad445e555126a4e28c053d7b2e869ce008169c4fb2a7b not found: ID does not exist" Mar 11 09:03:22 crc kubenswrapper[4840]: I0311 09:03:22.547925 4840 scope.go:117] "RemoveContainer" containerID="0d2f27125a8e6137d993fad1a035889c8d1a06a7394e97d1ed53eebc8057ef7b" Mar 11 09:03:22 crc kubenswrapper[4840]: E0311 09:03:22.560170 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d2f27125a8e6137d993fad1a035889c8d1a06a7394e97d1ed53eebc8057ef7b\": container with ID starting with 0d2f27125a8e6137d993fad1a035889c8d1a06a7394e97d1ed53eebc8057ef7b not found: ID does not exist" containerID="0d2f27125a8e6137d993fad1a035889c8d1a06a7394e97d1ed53eebc8057ef7b" Mar 11 09:03:22 crc 
kubenswrapper[4840]: I0311 09:03:22.560235 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d2f27125a8e6137d993fad1a035889c8d1a06a7394e97d1ed53eebc8057ef7b"} err="failed to get container status \"0d2f27125a8e6137d993fad1a035889c8d1a06a7394e97d1ed53eebc8057ef7b\": rpc error: code = NotFound desc = could not find container \"0d2f27125a8e6137d993fad1a035889c8d1a06a7394e97d1ed53eebc8057ef7b\": container with ID starting with 0d2f27125a8e6137d993fad1a035889c8d1a06a7394e97d1ed53eebc8057ef7b not found: ID does not exist" Mar 11 09:03:22 crc kubenswrapper[4840]: I0311 09:03:22.560280 4840 scope.go:117] "RemoveContainer" containerID="38d28ebda72bb5a8472b48825bb52b45d0a5f3768bb0fd02b11dde128850e8d4" Mar 11 09:03:22 crc kubenswrapper[4840]: E0311 09:03:22.560688 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38d28ebda72bb5a8472b48825bb52b45d0a5f3768bb0fd02b11dde128850e8d4\": container with ID starting with 38d28ebda72bb5a8472b48825bb52b45d0a5f3768bb0fd02b11dde128850e8d4 not found: ID does not exist" containerID="38d28ebda72bb5a8472b48825bb52b45d0a5f3768bb0fd02b11dde128850e8d4" Mar 11 09:03:22 crc kubenswrapper[4840]: I0311 09:03:22.560737 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38d28ebda72bb5a8472b48825bb52b45d0a5f3768bb0fd02b11dde128850e8d4"} err="failed to get container status \"38d28ebda72bb5a8472b48825bb52b45d0a5f3768bb0fd02b11dde128850e8d4\": rpc error: code = NotFound desc = could not find container \"38d28ebda72bb5a8472b48825bb52b45d0a5f3768bb0fd02b11dde128850e8d4\": container with ID starting with 38d28ebda72bb5a8472b48825bb52b45d0a5f3768bb0fd02b11dde128850e8d4 not found: ID does not exist" Mar 11 09:03:22 crc kubenswrapper[4840]: I0311 09:03:22.564252 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fp456" Mar 11 09:03:22 crc kubenswrapper[4840]: I0311 09:03:22.574672 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-h578g" Mar 11 09:03:22 crc kubenswrapper[4840]: I0311 09:03:22.586263 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/33326f34-f442-42be-9bd2-39cf5627b953-marketplace-trusted-ca\") pod \"33326f34-f442-42be-9bd2-39cf5627b953\" (UID: \"33326f34-f442-42be-9bd2-39cf5627b953\") " Mar 11 09:03:22 crc kubenswrapper[4840]: I0311 09:03:22.586456 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ea6e709-84f7-4603-bcda-6d336d3a96fc-catalog-content\") pod \"3ea6e709-84f7-4603-bcda-6d336d3a96fc\" (UID: \"3ea6e709-84f7-4603-bcda-6d336d3a96fc\") " Mar 11 09:03:22 crc kubenswrapper[4840]: I0311 09:03:22.586496 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mvlbz\" (UniqueName: \"kubernetes.io/projected/3ea6e709-84f7-4603-bcda-6d336d3a96fc-kube-api-access-mvlbz\") pod \"3ea6e709-84f7-4603-bcda-6d336d3a96fc\" (UID: \"3ea6e709-84f7-4603-bcda-6d336d3a96fc\") " Mar 11 09:03:22 crc kubenswrapper[4840]: I0311 09:03:22.586527 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qggmx\" (UniqueName: \"kubernetes.io/projected/33326f34-f442-42be-9bd2-39cf5627b953-kube-api-access-qggmx\") pod \"33326f34-f442-42be-9bd2-39cf5627b953\" (UID: \"33326f34-f442-42be-9bd2-39cf5627b953\") " Mar 11 09:03:22 crc kubenswrapper[4840]: I0311 09:03:22.586562 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ea6e709-84f7-4603-bcda-6d336d3a96fc-utilities\") pod 
\"3ea6e709-84f7-4603-bcda-6d336d3a96fc\" (UID: \"3ea6e709-84f7-4603-bcda-6d336d3a96fc\") " Mar 11 09:03:22 crc kubenswrapper[4840]: I0311 09:03:22.586619 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/33326f34-f442-42be-9bd2-39cf5627b953-marketplace-operator-metrics\") pod \"33326f34-f442-42be-9bd2-39cf5627b953\" (UID: \"33326f34-f442-42be-9bd2-39cf5627b953\") " Mar 11 09:03:22 crc kubenswrapper[4840]: I0311 09:03:22.588401 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33326f34-f442-42be-9bd2-39cf5627b953-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "33326f34-f442-42be-9bd2-39cf5627b953" (UID: "33326f34-f442-42be-9bd2-39cf5627b953"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:03:22 crc kubenswrapper[4840]: I0311 09:03:22.589147 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ea6e709-84f7-4603-bcda-6d336d3a96fc-utilities" (OuterVolumeSpecName: "utilities") pod "3ea6e709-84f7-4603-bcda-6d336d3a96fc" (UID: "3ea6e709-84f7-4603-bcda-6d336d3a96fc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:03:22 crc kubenswrapper[4840]: I0311 09:03:22.590739 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33326f34-f442-42be-9bd2-39cf5627b953-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "33326f34-f442-42be-9bd2-39cf5627b953" (UID: "33326f34-f442-42be-9bd2-39cf5627b953"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:03:22 crc kubenswrapper[4840]: I0311 09:03:22.592227 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ea6e709-84f7-4603-bcda-6d336d3a96fc-kube-api-access-mvlbz" (OuterVolumeSpecName: "kube-api-access-mvlbz") pod "3ea6e709-84f7-4603-bcda-6d336d3a96fc" (UID: "3ea6e709-84f7-4603-bcda-6d336d3a96fc"). InnerVolumeSpecName "kube-api-access-mvlbz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:03:22 crc kubenswrapper[4840]: I0311 09:03:22.598080 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33326f34-f442-42be-9bd2-39cf5627b953-kube-api-access-qggmx" (OuterVolumeSpecName: "kube-api-access-qggmx") pod "33326f34-f442-42be-9bd2-39cf5627b953" (UID: "33326f34-f442-42be-9bd2-39cf5627b953"). InnerVolumeSpecName "kube-api-access-qggmx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:03:22 crc kubenswrapper[4840]: I0311 09:03:22.598247 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/edba0e9a-0f5d-4aea-bc9c-7eff83b36a8f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "edba0e9a-0f5d-4aea-bc9c-7eff83b36a8f" (UID: "edba0e9a-0f5d-4aea-bc9c-7eff83b36a8f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:03:22 crc kubenswrapper[4840]: I0311 09:03:22.635988 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ea6e709-84f7-4603-bcda-6d336d3a96fc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3ea6e709-84f7-4603-bcda-6d336d3a96fc" (UID: "3ea6e709-84f7-4603-bcda-6d336d3a96fc"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:03:22 crc kubenswrapper[4840]: I0311 09:03:22.688359 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5be5a417-86c5-43f6-b238-9c0c498028ab-utilities\") pod \"5be5a417-86c5-43f6-b238-9c0c498028ab\" (UID: \"5be5a417-86c5-43f6-b238-9c0c498028ab\") " Mar 11 09:03:22 crc kubenswrapper[4840]: I0311 09:03:22.688459 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5be5a417-86c5-43f6-b238-9c0c498028ab-catalog-content\") pod \"5be5a417-86c5-43f6-b238-9c0c498028ab\" (UID: \"5be5a417-86c5-43f6-b238-9c0c498028ab\") " Mar 11 09:03:22 crc kubenswrapper[4840]: I0311 09:03:22.688525 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v5f94\" (UniqueName: \"kubernetes.io/projected/b17f6fce-66c6-45f7-8e1b-be9ffe4f16a6-kube-api-access-v5f94\") pod \"b17f6fce-66c6-45f7-8e1b-be9ffe4f16a6\" (UID: \"b17f6fce-66c6-45f7-8e1b-be9ffe4f16a6\") " Mar 11 09:03:22 crc kubenswrapper[4840]: I0311 09:03:22.688549 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b17f6fce-66c6-45f7-8e1b-be9ffe4f16a6-catalog-content\") pod \"b17f6fce-66c6-45f7-8e1b-be9ffe4f16a6\" (UID: \"b17f6fce-66c6-45f7-8e1b-be9ffe4f16a6\") " Mar 11 09:03:22 crc kubenswrapper[4840]: I0311 09:03:22.688584 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g25d6\" (UniqueName: \"kubernetes.io/projected/5be5a417-86c5-43f6-b238-9c0c498028ab-kube-api-access-g25d6\") pod \"5be5a417-86c5-43f6-b238-9c0c498028ab\" (UID: \"5be5a417-86c5-43f6-b238-9c0c498028ab\") " Mar 11 09:03:22 crc kubenswrapper[4840]: I0311 09:03:22.688633 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b17f6fce-66c6-45f7-8e1b-be9ffe4f16a6-utilities\") pod \"b17f6fce-66c6-45f7-8e1b-be9ffe4f16a6\" (UID: \"b17f6fce-66c6-45f7-8e1b-be9ffe4f16a6\") " Mar 11 09:03:22 crc kubenswrapper[4840]: I0311 09:03:22.688864 4840 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/edba0e9a-0f5d-4aea-bc9c-7eff83b36a8f-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 11 09:03:22 crc kubenswrapper[4840]: I0311 09:03:22.688876 4840 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ea6e709-84f7-4603-bcda-6d336d3a96fc-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 11 09:03:22 crc kubenswrapper[4840]: I0311 09:03:22.688887 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mvlbz\" (UniqueName: \"kubernetes.io/projected/3ea6e709-84f7-4603-bcda-6d336d3a96fc-kube-api-access-mvlbz\") on node \"crc\" DevicePath \"\"" Mar 11 09:03:22 crc kubenswrapper[4840]: I0311 09:03:22.688898 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qggmx\" (UniqueName: \"kubernetes.io/projected/33326f34-f442-42be-9bd2-39cf5627b953-kube-api-access-qggmx\") on node \"crc\" DevicePath \"\"" Mar 11 09:03:22 crc kubenswrapper[4840]: I0311 09:03:22.688914 4840 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ea6e709-84f7-4603-bcda-6d336d3a96fc-utilities\") on node \"crc\" DevicePath \"\"" Mar 11 09:03:22 crc kubenswrapper[4840]: I0311 09:03:22.688924 4840 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/33326f34-f442-42be-9bd2-39cf5627b953-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Mar 11 09:03:22 crc kubenswrapper[4840]: I0311 09:03:22.688936 4840 reconciler_common.go:293] "Volume detached for volume 
\"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/33326f34-f442-42be-9bd2-39cf5627b953-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 11 09:03:22 crc kubenswrapper[4840]: I0311 09:03:22.689651 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b17f6fce-66c6-45f7-8e1b-be9ffe4f16a6-utilities" (OuterVolumeSpecName: "utilities") pod "b17f6fce-66c6-45f7-8e1b-be9ffe4f16a6" (UID: "b17f6fce-66c6-45f7-8e1b-be9ffe4f16a6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:03:22 crc kubenswrapper[4840]: I0311 09:03:22.690279 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5be5a417-86c5-43f6-b238-9c0c498028ab-utilities" (OuterVolumeSpecName: "utilities") pod "5be5a417-86c5-43f6-b238-9c0c498028ab" (UID: "5be5a417-86c5-43f6-b238-9c0c498028ab"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:03:22 crc kubenswrapper[4840]: I0311 09:03:22.694396 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b17f6fce-66c6-45f7-8e1b-be9ffe4f16a6-kube-api-access-v5f94" (OuterVolumeSpecName: "kube-api-access-v5f94") pod "b17f6fce-66c6-45f7-8e1b-be9ffe4f16a6" (UID: "b17f6fce-66c6-45f7-8e1b-be9ffe4f16a6"). InnerVolumeSpecName "kube-api-access-v5f94". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:03:22 crc kubenswrapper[4840]: I0311 09:03:22.694540 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5be5a417-86c5-43f6-b238-9c0c498028ab-kube-api-access-g25d6" (OuterVolumeSpecName: "kube-api-access-g25d6") pod "5be5a417-86c5-43f6-b238-9c0c498028ab" (UID: "5be5a417-86c5-43f6-b238-9c0c498028ab"). InnerVolumeSpecName "kube-api-access-g25d6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:03:22 crc kubenswrapper[4840]: I0311 09:03:22.764316 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-2xfdw"] Mar 11 09:03:22 crc kubenswrapper[4840]: I0311 09:03:22.777093 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b17f6fce-66c6-45f7-8e1b-be9ffe4f16a6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b17f6fce-66c6-45f7-8e1b-be9ffe4f16a6" (UID: "b17f6fce-66c6-45f7-8e1b-be9ffe4f16a6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:03:22 crc kubenswrapper[4840]: I0311 09:03:22.786600 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6wxnb"] Mar 11 09:03:22 crc kubenswrapper[4840]: I0311 09:03:22.790514 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v5f94\" (UniqueName: \"kubernetes.io/projected/b17f6fce-66c6-45f7-8e1b-be9ffe4f16a6-kube-api-access-v5f94\") on node \"crc\" DevicePath \"\"" Mar 11 09:03:22 crc kubenswrapper[4840]: I0311 09:03:22.790543 4840 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b17f6fce-66c6-45f7-8e1b-be9ffe4f16a6-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 11 09:03:22 crc kubenswrapper[4840]: I0311 09:03:22.790555 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g25d6\" (UniqueName: \"kubernetes.io/projected/5be5a417-86c5-43f6-b238-9c0c498028ab-kube-api-access-g25d6\") on node \"crc\" DevicePath \"\"" Mar 11 09:03:22 crc kubenswrapper[4840]: I0311 09:03:22.790567 4840 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b17f6fce-66c6-45f7-8e1b-be9ffe4f16a6-utilities\") on node \"crc\" DevicePath \"\"" Mar 11 09:03:22 crc kubenswrapper[4840]: I0311 
09:03:22.790579 4840 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5be5a417-86c5-43f6-b238-9c0c498028ab-utilities\") on node \"crc\" DevicePath \"\"" Mar 11 09:03:22 crc kubenswrapper[4840]: I0311 09:03:22.793303 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6wxnb"] Mar 11 09:03:22 crc kubenswrapper[4840]: I0311 09:03:22.872815 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5be5a417-86c5-43f6-b238-9c0c498028ab-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5be5a417-86c5-43f6-b238-9c0c498028ab" (UID: "5be5a417-86c5-43f6-b238-9c0c498028ab"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:03:22 crc kubenswrapper[4840]: I0311 09:03:22.892191 4840 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5be5a417-86c5-43f6-b238-9c0c498028ab-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 11 09:03:23 crc kubenswrapper[4840]: I0311 09:03:23.466254 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-swqkn" event={"ID":"3ea6e709-84f7-4603-bcda-6d336d3a96fc","Type":"ContainerDied","Data":"2e4d1a9ae85704eee3ed25d49fb086d2544ecf6c3399ceb0af6164e349ba453d"} Mar 11 09:03:23 crc kubenswrapper[4840]: I0311 09:03:23.466366 4840 scope.go:117] "RemoveContainer" containerID="b3810fa86559b7398625eb5bff30db6085d1ea88a8cd21fc4239ae7f999b7347" Mar 11 09:03:23 crc kubenswrapper[4840]: I0311 09:03:23.466300 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-swqkn" Mar 11 09:03:23 crc kubenswrapper[4840]: I0311 09:03:23.469647 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-2xfdw" event={"ID":"73479d9a-07ac-4487-b779-a59d095c8704","Type":"ContainerStarted","Data":"4ea21daa5019f9134f9b528eaacfa6d04d5ee025bcd95b2da17de9860955032f"} Mar 11 09:03:23 crc kubenswrapper[4840]: I0311 09:03:23.469717 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-2xfdw" event={"ID":"73479d9a-07ac-4487-b779-a59d095c8704","Type":"ContainerStarted","Data":"17919aadb07e0925e4655d2c0b65c9ab5a19dbc1462bc95497f5e5127693170a"} Mar 11 09:03:23 crc kubenswrapper[4840]: I0311 09:03:23.471195 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-2xfdw" Mar 11 09:03:23 crc kubenswrapper[4840]: I0311 09:03:23.473824 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fp456" event={"ID":"5be5a417-86c5-43f6-b238-9c0c498028ab","Type":"ContainerDied","Data":"028b505f1510d8a6f4a82817145efd22de810d59525cfec258a7d75f7917315c"} Mar 11 09:03:23 crc kubenswrapper[4840]: I0311 09:03:23.473918 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fp456" Mar 11 09:03:23 crc kubenswrapper[4840]: I0311 09:03:23.476628 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-2xfdw" Mar 11 09:03:23 crc kubenswrapper[4840]: I0311 09:03:23.485869 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-h578g" Mar 11 09:03:23 crc kubenswrapper[4840]: I0311 09:03:23.485908 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h578g" event={"ID":"b17f6fce-66c6-45f7-8e1b-be9ffe4f16a6","Type":"ContainerDied","Data":"22fe673c28b4500ddf3524d1832af01b112ce56d18e6ff7c1c3f49669895e39f"} Mar 11 09:03:23 crc kubenswrapper[4840]: I0311 09:03:23.488300 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-d6cv4" event={"ID":"33326f34-f442-42be-9bd2-39cf5627b953","Type":"ContainerDied","Data":"e11a1cbeed6b55553941d85a844deb666a33ccf8b1b1770c2252af8ed4a25ae1"} Mar 11 09:03:23 crc kubenswrapper[4840]: I0311 09:03:23.488391 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-d6cv4" Mar 11 09:03:23 crc kubenswrapper[4840]: I0311 09:03:23.490582 4840 scope.go:117] "RemoveContainer" containerID="395edf4404c61a731826407dd96439dd27106b688a92b6a49b07bc2070071bcc" Mar 11 09:03:23 crc kubenswrapper[4840]: I0311 09:03:23.498222 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-dnnpl"] Mar 11 09:03:23 crc kubenswrapper[4840]: E0311 09:03:23.498615 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33326f34-f442-42be-9bd2-39cf5627b953" containerName="marketplace-operator" Mar 11 09:03:23 crc kubenswrapper[4840]: I0311 09:03:23.498650 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="33326f34-f442-42be-9bd2-39cf5627b953" containerName="marketplace-operator" Mar 11 09:03:23 crc kubenswrapper[4840]: E0311 09:03:23.498665 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edba0e9a-0f5d-4aea-bc9c-7eff83b36a8f" containerName="registry-server" Mar 11 09:03:23 crc kubenswrapper[4840]: I0311 09:03:23.498676 4840 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="edba0e9a-0f5d-4aea-bc9c-7eff83b36a8f" containerName="registry-server" Mar 11 09:03:23 crc kubenswrapper[4840]: E0311 09:03:23.498691 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ea6e709-84f7-4603-bcda-6d336d3a96fc" containerName="extract-utilities" Mar 11 09:03:23 crc kubenswrapper[4840]: I0311 09:03:23.498699 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ea6e709-84f7-4603-bcda-6d336d3a96fc" containerName="extract-utilities" Mar 11 09:03:23 crc kubenswrapper[4840]: E0311 09:03:23.498728 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ea6e709-84f7-4603-bcda-6d336d3a96fc" containerName="extract-content" Mar 11 09:03:23 crc kubenswrapper[4840]: I0311 09:03:23.498736 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ea6e709-84f7-4603-bcda-6d336d3a96fc" containerName="extract-content" Mar 11 09:03:23 crc kubenswrapper[4840]: E0311 09:03:23.498750 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ea6e709-84f7-4603-bcda-6d336d3a96fc" containerName="registry-server" Mar 11 09:03:23 crc kubenswrapper[4840]: I0311 09:03:23.498756 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ea6e709-84f7-4603-bcda-6d336d3a96fc" containerName="registry-server" Mar 11 09:03:23 crc kubenswrapper[4840]: E0311 09:03:23.498766 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edba0e9a-0f5d-4aea-bc9c-7eff83b36a8f" containerName="extract-content" Mar 11 09:03:23 crc kubenswrapper[4840]: I0311 09:03:23.498772 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="edba0e9a-0f5d-4aea-bc9c-7eff83b36a8f" containerName="extract-content" Mar 11 09:03:23 crc kubenswrapper[4840]: E0311 09:03:23.498780 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5be5a417-86c5-43f6-b238-9c0c498028ab" containerName="registry-server" Mar 11 09:03:23 crc kubenswrapper[4840]: I0311 09:03:23.498800 4840 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="5be5a417-86c5-43f6-b238-9c0c498028ab" containerName="registry-server" Mar 11 09:03:23 crc kubenswrapper[4840]: E0311 09:03:23.498811 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b17f6fce-66c6-45f7-8e1b-be9ffe4f16a6" containerName="extract-content" Mar 11 09:03:23 crc kubenswrapper[4840]: I0311 09:03:23.498816 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="b17f6fce-66c6-45f7-8e1b-be9ffe4f16a6" containerName="extract-content" Mar 11 09:03:23 crc kubenswrapper[4840]: E0311 09:03:23.498825 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b17f6fce-66c6-45f7-8e1b-be9ffe4f16a6" containerName="registry-server" Mar 11 09:03:23 crc kubenswrapper[4840]: I0311 09:03:23.498832 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="b17f6fce-66c6-45f7-8e1b-be9ffe4f16a6" containerName="registry-server" Mar 11 09:03:23 crc kubenswrapper[4840]: E0311 09:03:23.498841 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b17f6fce-66c6-45f7-8e1b-be9ffe4f16a6" containerName="extract-utilities" Mar 11 09:03:23 crc kubenswrapper[4840]: I0311 09:03:23.498847 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="b17f6fce-66c6-45f7-8e1b-be9ffe4f16a6" containerName="extract-utilities" Mar 11 09:03:23 crc kubenswrapper[4840]: E0311 09:03:23.498855 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5be5a417-86c5-43f6-b238-9c0c498028ab" containerName="extract-content" Mar 11 09:03:23 crc kubenswrapper[4840]: I0311 09:03:23.498876 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="5be5a417-86c5-43f6-b238-9c0c498028ab" containerName="extract-content" Mar 11 09:03:23 crc kubenswrapper[4840]: E0311 09:03:23.498888 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5be5a417-86c5-43f6-b238-9c0c498028ab" containerName="extract-utilities" Mar 11 09:03:23 crc kubenswrapper[4840]: I0311 09:03:23.498894 4840 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="5be5a417-86c5-43f6-b238-9c0c498028ab" containerName="extract-utilities" Mar 11 09:03:23 crc kubenswrapper[4840]: E0311 09:03:23.498903 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edba0e9a-0f5d-4aea-bc9c-7eff83b36a8f" containerName="extract-utilities" Mar 11 09:03:23 crc kubenswrapper[4840]: I0311 09:03:23.498911 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="edba0e9a-0f5d-4aea-bc9c-7eff83b36a8f" containerName="extract-utilities" Mar 11 09:03:23 crc kubenswrapper[4840]: I0311 09:03:23.499045 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="5be5a417-86c5-43f6-b238-9c0c498028ab" containerName="registry-server" Mar 11 09:03:23 crc kubenswrapper[4840]: I0311 09:03:23.499057 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="edba0e9a-0f5d-4aea-bc9c-7eff83b36a8f" containerName="registry-server" Mar 11 09:03:23 crc kubenswrapper[4840]: I0311 09:03:23.499065 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="b17f6fce-66c6-45f7-8e1b-be9ffe4f16a6" containerName="registry-server" Mar 11 09:03:23 crc kubenswrapper[4840]: I0311 09:03:23.499073 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="33326f34-f442-42be-9bd2-39cf5627b953" containerName="marketplace-operator" Mar 11 09:03:23 crc kubenswrapper[4840]: I0311 09:03:23.499082 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ea6e709-84f7-4603-bcda-6d336d3a96fc" containerName="registry-server" Mar 11 09:03:23 crc kubenswrapper[4840]: E0311 09:03:23.499214 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33326f34-f442-42be-9bd2-39cf5627b953" containerName="marketplace-operator" Mar 11 09:03:23 crc kubenswrapper[4840]: I0311 09:03:23.499222 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="33326f34-f442-42be-9bd2-39cf5627b953" containerName="marketplace-operator" Mar 11 09:03:23 crc kubenswrapper[4840]: I0311 09:03:23.499358 4840 
memory_manager.go:354] "RemoveStaleState removing state" podUID="33326f34-f442-42be-9bd2-39cf5627b953" containerName="marketplace-operator" Mar 11 09:03:23 crc kubenswrapper[4840]: I0311 09:03:23.500302 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dnnpl" Mar 11 09:03:23 crc kubenswrapper[4840]: I0311 09:03:23.506994 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Mar 11 09:03:23 crc kubenswrapper[4840]: I0311 09:03:23.510845 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-2xfdw" podStartSLOduration=2.510809491 podStartE2EDuration="2.510809491s" podCreationTimestamp="2026-03-11 09:03:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:03:23.500531519 +0000 UTC m=+402.166201344" watchObservedRunningTime="2026-03-11 09:03:23.510809491 +0000 UTC m=+402.176479306" Mar 11 09:03:23 crc kubenswrapper[4840]: I0311 09:03:23.520177 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dnnpl"] Mar 11 09:03:23 crc kubenswrapper[4840]: I0311 09:03:23.534902 4840 scope.go:117] "RemoveContainer" containerID="379399e647d567d2ed8e4696ccef7ecd2c7302d130df8a8eb89b6aa012e12595" Mar 11 09:03:23 crc kubenswrapper[4840]: I0311 09:03:23.573073 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-swqkn"] Mar 11 09:03:23 crc kubenswrapper[4840]: I0311 09:03:23.581293 4840 scope.go:117] "RemoveContainer" containerID="3690c91110a373b4c2061a5a2ab79f4ad4d340fb72b05374ad1956e1d66d4e5e" Mar 11 09:03:23 crc kubenswrapper[4840]: I0311 09:03:23.582775 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-swqkn"] Mar 11 
09:03:23 crc kubenswrapper[4840]: I0311 09:03:23.593386 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-h578g"] Mar 11 09:03:23 crc kubenswrapper[4840]: I0311 09:03:23.600004 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-h578g"] Mar 11 09:03:23 crc kubenswrapper[4840]: I0311 09:03:23.602503 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c93238da-07d7-42ab-8b86-59e30ebfe3e5-catalog-content\") pod \"certified-operators-dnnpl\" (UID: \"c93238da-07d7-42ab-8b86-59e30ebfe3e5\") " pod="openshift-marketplace/certified-operators-dnnpl" Mar 11 09:03:23 crc kubenswrapper[4840]: I0311 09:03:23.602683 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4qch\" (UniqueName: \"kubernetes.io/projected/c93238da-07d7-42ab-8b86-59e30ebfe3e5-kube-api-access-q4qch\") pod \"certified-operators-dnnpl\" (UID: \"c93238da-07d7-42ab-8b86-59e30ebfe3e5\") " pod="openshift-marketplace/certified-operators-dnnpl" Mar 11 09:03:23 crc kubenswrapper[4840]: I0311 09:03:23.602817 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c93238da-07d7-42ab-8b86-59e30ebfe3e5-utilities\") pod \"certified-operators-dnnpl\" (UID: \"c93238da-07d7-42ab-8b86-59e30ebfe3e5\") " pod="openshift-marketplace/certified-operators-dnnpl" Mar 11 09:03:23 crc kubenswrapper[4840]: I0311 09:03:23.605666 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fp456"] Mar 11 09:03:23 crc kubenswrapper[4840]: I0311 09:03:23.608621 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-fp456"] Mar 11 09:03:23 crc kubenswrapper[4840]: I0311 09:03:23.613491 4840 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-d6cv4"] Mar 11 09:03:23 crc kubenswrapper[4840]: I0311 09:03:23.613891 4840 scope.go:117] "RemoveContainer" containerID="d51dae7559949d4d8934f6238e8a989ed047203a5bdc3e1941018e1201b3c740" Mar 11 09:03:23 crc kubenswrapper[4840]: I0311 09:03:23.617029 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-d6cv4"] Mar 11 09:03:23 crc kubenswrapper[4840]: I0311 09:03:23.637532 4840 scope.go:117] "RemoveContainer" containerID="f93c210b1ad4467215d496c7791e287e76bf65713b6265a6966346dffb07b48c" Mar 11 09:03:23 crc kubenswrapper[4840]: I0311 09:03:23.656018 4840 scope.go:117] "RemoveContainer" containerID="27ad3910a22bc8888e452aedc5441f68a7394e29f9ebebc7ce53b0fc45a30fba" Mar 11 09:03:23 crc kubenswrapper[4840]: I0311 09:03:23.673180 4840 scope.go:117] "RemoveContainer" containerID="5f2e47e72f6aeae7930cc5143f650e751f0be708fd06c361b440211992a47026" Mar 11 09:03:23 crc kubenswrapper[4840]: I0311 09:03:23.694916 4840 scope.go:117] "RemoveContainer" containerID="fa6e02aef95fb10e00731edf653a60f202f9475e549f77c7efbb9387c6643dc2" Mar 11 09:03:23 crc kubenswrapper[4840]: I0311 09:03:23.705016 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c93238da-07d7-42ab-8b86-59e30ebfe3e5-utilities\") pod \"certified-operators-dnnpl\" (UID: \"c93238da-07d7-42ab-8b86-59e30ebfe3e5\") " pod="openshift-marketplace/certified-operators-dnnpl" Mar 11 09:03:23 crc kubenswrapper[4840]: I0311 09:03:23.705187 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c93238da-07d7-42ab-8b86-59e30ebfe3e5-catalog-content\") pod \"certified-operators-dnnpl\" (UID: \"c93238da-07d7-42ab-8b86-59e30ebfe3e5\") " pod="openshift-marketplace/certified-operators-dnnpl" Mar 11 09:03:23 crc 
kubenswrapper[4840]: I0311 09:03:23.705258 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4qch\" (UniqueName: \"kubernetes.io/projected/c93238da-07d7-42ab-8b86-59e30ebfe3e5-kube-api-access-q4qch\") pod \"certified-operators-dnnpl\" (UID: \"c93238da-07d7-42ab-8b86-59e30ebfe3e5\") " pod="openshift-marketplace/certified-operators-dnnpl" Mar 11 09:03:23 crc kubenswrapper[4840]: I0311 09:03:23.705780 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c93238da-07d7-42ab-8b86-59e30ebfe3e5-catalog-content\") pod \"certified-operators-dnnpl\" (UID: \"c93238da-07d7-42ab-8b86-59e30ebfe3e5\") " pod="openshift-marketplace/certified-operators-dnnpl" Mar 11 09:03:23 crc kubenswrapper[4840]: I0311 09:03:23.705780 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c93238da-07d7-42ab-8b86-59e30ebfe3e5-utilities\") pod \"certified-operators-dnnpl\" (UID: \"c93238da-07d7-42ab-8b86-59e30ebfe3e5\") " pod="openshift-marketplace/certified-operators-dnnpl" Mar 11 09:03:23 crc kubenswrapper[4840]: I0311 09:03:23.715526 4840 scope.go:117] "RemoveContainer" containerID="9fffa79656eba5d6afe49decfded5d4c1998218eac5dbfc9a9bb4f5e71df022e" Mar 11 09:03:23 crc kubenswrapper[4840]: I0311 09:03:23.723636 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4qch\" (UniqueName: \"kubernetes.io/projected/c93238da-07d7-42ab-8b86-59e30ebfe3e5-kube-api-access-q4qch\") pod \"certified-operators-dnnpl\" (UID: \"c93238da-07d7-42ab-8b86-59e30ebfe3e5\") " pod="openshift-marketplace/certified-operators-dnnpl" Mar 11 09:03:23 crc kubenswrapper[4840]: I0311 09:03:23.865460 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dnnpl" Mar 11 09:03:24 crc kubenswrapper[4840]: I0311 09:03:24.073850 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33326f34-f442-42be-9bd2-39cf5627b953" path="/var/lib/kubelet/pods/33326f34-f442-42be-9bd2-39cf5627b953/volumes" Mar 11 09:03:24 crc kubenswrapper[4840]: I0311 09:03:24.075215 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ea6e709-84f7-4603-bcda-6d336d3a96fc" path="/var/lib/kubelet/pods/3ea6e709-84f7-4603-bcda-6d336d3a96fc/volumes" Mar 11 09:03:24 crc kubenswrapper[4840]: I0311 09:03:24.075974 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5be5a417-86c5-43f6-b238-9c0c498028ab" path="/var/lib/kubelet/pods/5be5a417-86c5-43f6-b238-9c0c498028ab/volumes" Mar 11 09:03:24 crc kubenswrapper[4840]: I0311 09:03:24.077057 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b17f6fce-66c6-45f7-8e1b-be9ffe4f16a6" path="/var/lib/kubelet/pods/b17f6fce-66c6-45f7-8e1b-be9ffe4f16a6/volumes" Mar 11 09:03:24 crc kubenswrapper[4840]: I0311 09:03:24.077698 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="edba0e9a-0f5d-4aea-bc9c-7eff83b36a8f" path="/var/lib/kubelet/pods/edba0e9a-0f5d-4aea-bc9c-7eff83b36a8f/volumes" Mar 11 09:03:24 crc kubenswrapper[4840]: I0311 09:03:24.290894 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dnnpl"] Mar 11 09:03:24 crc kubenswrapper[4840]: I0311 09:03:24.503117 4840 generic.go:334] "Generic (PLEG): container finished" podID="c93238da-07d7-42ab-8b86-59e30ebfe3e5" containerID="32ca9123015f8b75edfbb48d3b3ae242b6cabee2010cdd1930f43a0242ee8283" exitCode=0 Mar 11 09:03:24 crc kubenswrapper[4840]: I0311 09:03:24.503176 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dnnpl" 
event={"ID":"c93238da-07d7-42ab-8b86-59e30ebfe3e5","Type":"ContainerDied","Data":"32ca9123015f8b75edfbb48d3b3ae242b6cabee2010cdd1930f43a0242ee8283"} Mar 11 09:03:24 crc kubenswrapper[4840]: I0311 09:03:24.503248 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dnnpl" event={"ID":"c93238da-07d7-42ab-8b86-59e30ebfe3e5","Type":"ContainerStarted","Data":"e716aca91b9f6452e4c3de12e06c8e8df528a56c03ed03cd1da0080d202f8991"} Mar 11 09:03:24 crc kubenswrapper[4840]: I0311 09:03:24.510249 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-45qqq"] Mar 11 09:03:24 crc kubenswrapper[4840]: I0311 09:03:24.511442 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-45qqq" Mar 11 09:03:24 crc kubenswrapper[4840]: I0311 09:03:24.513696 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Mar 11 09:03:24 crc kubenswrapper[4840]: I0311 09:03:24.515630 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-45qqq"] Mar 11 09:03:24 crc kubenswrapper[4840]: I0311 09:03:24.618126 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1eafe068-3690-459c-aa70-f9f494a2ca5c-utilities\") pod \"redhat-marketplace-45qqq\" (UID: \"1eafe068-3690-459c-aa70-f9f494a2ca5c\") " pod="openshift-marketplace/redhat-marketplace-45qqq" Mar 11 09:03:24 crc kubenswrapper[4840]: I0311 09:03:24.618301 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1eafe068-3690-459c-aa70-f9f494a2ca5c-catalog-content\") pod \"redhat-marketplace-45qqq\" (UID: \"1eafe068-3690-459c-aa70-f9f494a2ca5c\") " 
pod="openshift-marketplace/redhat-marketplace-45qqq" Mar 11 09:03:24 crc kubenswrapper[4840]: I0311 09:03:24.618326 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-622s7\" (UniqueName: \"kubernetes.io/projected/1eafe068-3690-459c-aa70-f9f494a2ca5c-kube-api-access-622s7\") pod \"redhat-marketplace-45qqq\" (UID: \"1eafe068-3690-459c-aa70-f9f494a2ca5c\") " pod="openshift-marketplace/redhat-marketplace-45qqq" Mar 11 09:03:24 crc kubenswrapper[4840]: I0311 09:03:24.719885 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1eafe068-3690-459c-aa70-f9f494a2ca5c-utilities\") pod \"redhat-marketplace-45qqq\" (UID: \"1eafe068-3690-459c-aa70-f9f494a2ca5c\") " pod="openshift-marketplace/redhat-marketplace-45qqq" Mar 11 09:03:24 crc kubenswrapper[4840]: I0311 09:03:24.720003 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1eafe068-3690-459c-aa70-f9f494a2ca5c-catalog-content\") pod \"redhat-marketplace-45qqq\" (UID: \"1eafe068-3690-459c-aa70-f9f494a2ca5c\") " pod="openshift-marketplace/redhat-marketplace-45qqq" Mar 11 09:03:24 crc kubenswrapper[4840]: I0311 09:03:24.720027 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-622s7\" (UniqueName: \"kubernetes.io/projected/1eafe068-3690-459c-aa70-f9f494a2ca5c-kube-api-access-622s7\") pod \"redhat-marketplace-45qqq\" (UID: \"1eafe068-3690-459c-aa70-f9f494a2ca5c\") " pod="openshift-marketplace/redhat-marketplace-45qqq" Mar 11 09:03:24 crc kubenswrapper[4840]: I0311 09:03:24.720617 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1eafe068-3690-459c-aa70-f9f494a2ca5c-utilities\") pod \"redhat-marketplace-45qqq\" (UID: \"1eafe068-3690-459c-aa70-f9f494a2ca5c\") " 
pod="openshift-marketplace/redhat-marketplace-45qqq" Mar 11 09:03:24 crc kubenswrapper[4840]: I0311 09:03:24.720827 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1eafe068-3690-459c-aa70-f9f494a2ca5c-catalog-content\") pod \"redhat-marketplace-45qqq\" (UID: \"1eafe068-3690-459c-aa70-f9f494a2ca5c\") " pod="openshift-marketplace/redhat-marketplace-45qqq" Mar 11 09:03:24 crc kubenswrapper[4840]: I0311 09:03:24.748947 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-622s7\" (UniqueName: \"kubernetes.io/projected/1eafe068-3690-459c-aa70-f9f494a2ca5c-kube-api-access-622s7\") pod \"redhat-marketplace-45qqq\" (UID: \"1eafe068-3690-459c-aa70-f9f494a2ca5c\") " pod="openshift-marketplace/redhat-marketplace-45qqq" Mar 11 09:03:24 crc kubenswrapper[4840]: I0311 09:03:24.826504 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-45qqq" Mar 11 09:03:25 crc kubenswrapper[4840]: I0311 09:03:25.267275 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-45qqq"] Mar 11 09:03:25 crc kubenswrapper[4840]: W0311 09:03:25.287243 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1eafe068_3690_459c_aa70_f9f494a2ca5c.slice/crio-1d73e16ca1f1eca447c50d36e082629dadc20484851f75fe78099e6bcbe0e9dc WatchSource:0}: Error finding container 1d73e16ca1f1eca447c50d36e082629dadc20484851f75fe78099e6bcbe0e9dc: Status 404 returned error can't find the container with id 1d73e16ca1f1eca447c50d36e082629dadc20484851f75fe78099e6bcbe0e9dc Mar 11 09:03:25 crc kubenswrapper[4840]: I0311 09:03:25.529293 4840 generic.go:334] "Generic (PLEG): container finished" podID="1eafe068-3690-459c-aa70-f9f494a2ca5c" containerID="1b70735f80cc754c499ac1ab8127ac3971edd78dddc5bb4f464175ab6c2c3d70" exitCode=0 
Mar 11 09:03:25 crc kubenswrapper[4840]: I0311 09:03:25.529366 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-45qqq" event={"ID":"1eafe068-3690-459c-aa70-f9f494a2ca5c","Type":"ContainerDied","Data":"1b70735f80cc754c499ac1ab8127ac3971edd78dddc5bb4f464175ab6c2c3d70"} Mar 11 09:03:25 crc kubenswrapper[4840]: I0311 09:03:25.529827 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-45qqq" event={"ID":"1eafe068-3690-459c-aa70-f9f494a2ca5c","Type":"ContainerStarted","Data":"1d73e16ca1f1eca447c50d36e082629dadc20484851f75fe78099e6bcbe0e9dc"} Mar 11 09:03:25 crc kubenswrapper[4840]: I0311 09:03:25.532281 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dnnpl" event={"ID":"c93238da-07d7-42ab-8b86-59e30ebfe3e5","Type":"ContainerStarted","Data":"f69da28932e3a19f51664d91ad48e74abb671ae9a83c8fc1044d25c3c4ae2919"} Mar 11 09:03:25 crc kubenswrapper[4840]: I0311 09:03:25.894441 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-s4mg2"] Mar 11 09:03:25 crc kubenswrapper[4840]: I0311 09:03:25.895949 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-s4mg2" Mar 11 09:03:25 crc kubenswrapper[4840]: I0311 09:03:25.901077 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Mar 11 09:03:25 crc kubenswrapper[4840]: I0311 09:03:25.915826 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-s4mg2"] Mar 11 09:03:25 crc kubenswrapper[4840]: I0311 09:03:25.937838 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xpd9\" (UniqueName: \"kubernetes.io/projected/9b673938-72a6-421a-9e73-1b2c5226e039-kube-api-access-5xpd9\") pod \"redhat-operators-s4mg2\" (UID: \"9b673938-72a6-421a-9e73-1b2c5226e039\") " pod="openshift-marketplace/redhat-operators-s4mg2" Mar 11 09:03:25 crc kubenswrapper[4840]: I0311 09:03:25.937946 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b673938-72a6-421a-9e73-1b2c5226e039-catalog-content\") pod \"redhat-operators-s4mg2\" (UID: \"9b673938-72a6-421a-9e73-1b2c5226e039\") " pod="openshift-marketplace/redhat-operators-s4mg2" Mar 11 09:03:25 crc kubenswrapper[4840]: I0311 09:03:25.937983 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b673938-72a6-421a-9e73-1b2c5226e039-utilities\") pod \"redhat-operators-s4mg2\" (UID: \"9b673938-72a6-421a-9e73-1b2c5226e039\") " pod="openshift-marketplace/redhat-operators-s4mg2" Mar 11 09:03:26 crc kubenswrapper[4840]: I0311 09:03:26.039206 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xpd9\" (UniqueName: \"kubernetes.io/projected/9b673938-72a6-421a-9e73-1b2c5226e039-kube-api-access-5xpd9\") pod \"redhat-operators-s4mg2\" (UID: 
\"9b673938-72a6-421a-9e73-1b2c5226e039\") " pod="openshift-marketplace/redhat-operators-s4mg2" Mar 11 09:03:26 crc kubenswrapper[4840]: I0311 09:03:26.039343 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b673938-72a6-421a-9e73-1b2c5226e039-catalog-content\") pod \"redhat-operators-s4mg2\" (UID: \"9b673938-72a6-421a-9e73-1b2c5226e039\") " pod="openshift-marketplace/redhat-operators-s4mg2" Mar 11 09:03:26 crc kubenswrapper[4840]: I0311 09:03:26.039374 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b673938-72a6-421a-9e73-1b2c5226e039-utilities\") pod \"redhat-operators-s4mg2\" (UID: \"9b673938-72a6-421a-9e73-1b2c5226e039\") " pod="openshift-marketplace/redhat-operators-s4mg2" Mar 11 09:03:26 crc kubenswrapper[4840]: I0311 09:03:26.039889 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b673938-72a6-421a-9e73-1b2c5226e039-catalog-content\") pod \"redhat-operators-s4mg2\" (UID: \"9b673938-72a6-421a-9e73-1b2c5226e039\") " pod="openshift-marketplace/redhat-operators-s4mg2" Mar 11 09:03:26 crc kubenswrapper[4840]: I0311 09:03:26.040269 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b673938-72a6-421a-9e73-1b2c5226e039-utilities\") pod \"redhat-operators-s4mg2\" (UID: \"9b673938-72a6-421a-9e73-1b2c5226e039\") " pod="openshift-marketplace/redhat-operators-s4mg2" Mar 11 09:03:26 crc kubenswrapper[4840]: I0311 09:03:26.061487 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xpd9\" (UniqueName: \"kubernetes.io/projected/9b673938-72a6-421a-9e73-1b2c5226e039-kube-api-access-5xpd9\") pod \"redhat-operators-s4mg2\" (UID: \"9b673938-72a6-421a-9e73-1b2c5226e039\") " 
pod="openshift-marketplace/redhat-operators-s4mg2" Mar 11 09:03:26 crc kubenswrapper[4840]: I0311 09:03:26.232827 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s4mg2" Mar 11 09:03:26 crc kubenswrapper[4840]: I0311 09:03:26.557372 4840 generic.go:334] "Generic (PLEG): container finished" podID="c93238da-07d7-42ab-8b86-59e30ebfe3e5" containerID="f69da28932e3a19f51664d91ad48e74abb671ae9a83c8fc1044d25c3c4ae2919" exitCode=0 Mar 11 09:03:26 crc kubenswrapper[4840]: I0311 09:03:26.558708 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dnnpl" event={"ID":"c93238da-07d7-42ab-8b86-59e30ebfe3e5","Type":"ContainerDied","Data":"f69da28932e3a19f51664d91ad48e74abb671ae9a83c8fc1044d25c3c4ae2919"} Mar 11 09:03:26 crc kubenswrapper[4840]: I0311 09:03:26.561438 4840 generic.go:334] "Generic (PLEG): container finished" podID="1eafe068-3690-459c-aa70-f9f494a2ca5c" containerID="ed68efc6188cfbc9ee756157d164cb89249d27462eee75f11bbd0d93f57491bc" exitCode=0 Mar 11 09:03:26 crc kubenswrapper[4840]: I0311 09:03:26.561530 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-45qqq" event={"ID":"1eafe068-3690-459c-aa70-f9f494a2ca5c","Type":"ContainerDied","Data":"ed68efc6188cfbc9ee756157d164cb89249d27462eee75f11bbd0d93f57491bc"} Mar 11 09:03:26 crc kubenswrapper[4840]: I0311 09:03:26.648926 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-s4mg2"] Mar 11 09:03:26 crc kubenswrapper[4840]: I0311 09:03:26.894487 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ftglb"] Mar 11 09:03:26 crc kubenswrapper[4840]: I0311 09:03:26.897757 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ftglb" Mar 11 09:03:26 crc kubenswrapper[4840]: I0311 09:03:26.900925 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Mar 11 09:03:26 crc kubenswrapper[4840]: I0311 09:03:26.903136 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ftglb"] Mar 11 09:03:26 crc kubenswrapper[4840]: I0311 09:03:26.959548 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0011bbf4-2baf-40fd-a220-4ee6f6b7fea0-utilities\") pod \"community-operators-ftglb\" (UID: \"0011bbf4-2baf-40fd-a220-4ee6f6b7fea0\") " pod="openshift-marketplace/community-operators-ftglb" Mar 11 09:03:26 crc kubenswrapper[4840]: I0311 09:03:26.959757 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0011bbf4-2baf-40fd-a220-4ee6f6b7fea0-catalog-content\") pod \"community-operators-ftglb\" (UID: \"0011bbf4-2baf-40fd-a220-4ee6f6b7fea0\") " pod="openshift-marketplace/community-operators-ftglb" Mar 11 09:03:26 crc kubenswrapper[4840]: I0311 09:03:26.959797 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjs54\" (UniqueName: \"kubernetes.io/projected/0011bbf4-2baf-40fd-a220-4ee6f6b7fea0-kube-api-access-zjs54\") pod \"community-operators-ftglb\" (UID: \"0011bbf4-2baf-40fd-a220-4ee6f6b7fea0\") " pod="openshift-marketplace/community-operators-ftglb" Mar 11 09:03:27 crc kubenswrapper[4840]: I0311 09:03:27.061245 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0011bbf4-2baf-40fd-a220-4ee6f6b7fea0-utilities\") pod \"community-operators-ftglb\" (UID: 
\"0011bbf4-2baf-40fd-a220-4ee6f6b7fea0\") " pod="openshift-marketplace/community-operators-ftglb" Mar 11 09:03:27 crc kubenswrapper[4840]: I0311 09:03:27.061523 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0011bbf4-2baf-40fd-a220-4ee6f6b7fea0-catalog-content\") pod \"community-operators-ftglb\" (UID: \"0011bbf4-2baf-40fd-a220-4ee6f6b7fea0\") " pod="openshift-marketplace/community-operators-ftglb" Mar 11 09:03:27 crc kubenswrapper[4840]: I0311 09:03:27.061603 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjs54\" (UniqueName: \"kubernetes.io/projected/0011bbf4-2baf-40fd-a220-4ee6f6b7fea0-kube-api-access-zjs54\") pod \"community-operators-ftglb\" (UID: \"0011bbf4-2baf-40fd-a220-4ee6f6b7fea0\") " pod="openshift-marketplace/community-operators-ftglb" Mar 11 09:03:27 crc kubenswrapper[4840]: I0311 09:03:27.062055 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0011bbf4-2baf-40fd-a220-4ee6f6b7fea0-utilities\") pod \"community-operators-ftglb\" (UID: \"0011bbf4-2baf-40fd-a220-4ee6f6b7fea0\") " pod="openshift-marketplace/community-operators-ftglb" Mar 11 09:03:27 crc kubenswrapper[4840]: I0311 09:03:27.062115 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0011bbf4-2baf-40fd-a220-4ee6f6b7fea0-catalog-content\") pod \"community-operators-ftglb\" (UID: \"0011bbf4-2baf-40fd-a220-4ee6f6b7fea0\") " pod="openshift-marketplace/community-operators-ftglb" Mar 11 09:03:27 crc kubenswrapper[4840]: I0311 09:03:27.092309 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjs54\" (UniqueName: \"kubernetes.io/projected/0011bbf4-2baf-40fd-a220-4ee6f6b7fea0-kube-api-access-zjs54\") pod \"community-operators-ftglb\" (UID: 
\"0011bbf4-2baf-40fd-a220-4ee6f6b7fea0\") " pod="openshift-marketplace/community-operators-ftglb" Mar 11 09:03:27 crc kubenswrapper[4840]: I0311 09:03:27.232890 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ftglb" Mar 11 09:03:27 crc kubenswrapper[4840]: I0311 09:03:27.446487 4840 patch_prober.go:28] interesting pod/machine-config-daemon-brtht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 11 09:03:27 crc kubenswrapper[4840]: I0311 09:03:27.447171 4840 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 11 09:03:27 crc kubenswrapper[4840]: I0311 09:03:27.581086 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-45qqq" event={"ID":"1eafe068-3690-459c-aa70-f9f494a2ca5c","Type":"ContainerStarted","Data":"786cc3068fe20492e24d94c7d8e6907cf51d50b424958b122ec8a20bf0c6a6b4"} Mar 11 09:03:27 crc kubenswrapper[4840]: I0311 09:03:27.584090 4840 generic.go:334] "Generic (PLEG): container finished" podID="9b673938-72a6-421a-9e73-1b2c5226e039" containerID="84fb23694c0178b7becd9dfda59d38252037cebc83c792ecd19eb6c5e9bd8c3e" exitCode=0 Mar 11 09:03:27 crc kubenswrapper[4840]: I0311 09:03:27.584177 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s4mg2" event={"ID":"9b673938-72a6-421a-9e73-1b2c5226e039","Type":"ContainerDied","Data":"84fb23694c0178b7becd9dfda59d38252037cebc83c792ecd19eb6c5e9bd8c3e"} Mar 11 09:03:27 crc kubenswrapper[4840]: I0311 09:03:27.584216 
4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s4mg2" event={"ID":"9b673938-72a6-421a-9e73-1b2c5226e039","Type":"ContainerStarted","Data":"8675bf4a72bb365e68d22a80037215a18585ccd67521a37c7a23ef85a176b715"} Mar 11 09:03:27 crc kubenswrapper[4840]: I0311 09:03:27.588636 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dnnpl" event={"ID":"c93238da-07d7-42ab-8b86-59e30ebfe3e5","Type":"ContainerStarted","Data":"c27710c2b6b14777ff15bfdc637b0f5f1177a87cac6ef578df93405779ce8860"} Mar 11 09:03:27 crc kubenswrapper[4840]: I0311 09:03:27.612526 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-45qqq" podStartSLOduration=2.026783701 podStartE2EDuration="3.61249586s" podCreationTimestamp="2026-03-11 09:03:24 +0000 UTC" firstStartedPulling="2026-03-11 09:03:25.531808158 +0000 UTC m=+404.197477993" lastFinishedPulling="2026-03-11 09:03:27.117520337 +0000 UTC m=+405.783190152" observedRunningTime="2026-03-11 09:03:27.603453298 +0000 UTC m=+406.269123113" watchObservedRunningTime="2026-03-11 09:03:27.61249586 +0000 UTC m=+406.278165675" Mar 11 09:03:27 crc kubenswrapper[4840]: I0311 09:03:27.645372 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-dnnpl" podStartSLOduration=2.002143296 podStartE2EDuration="4.645337225s" podCreationTimestamp="2026-03-11 09:03:23 +0000 UTC" firstStartedPulling="2026-03-11 09:03:24.505187035 +0000 UTC m=+403.170856850" lastFinishedPulling="2026-03-11 09:03:27.148380964 +0000 UTC m=+405.814050779" observedRunningTime="2026-03-11 09:03:27.64229769 +0000 UTC m=+406.307967505" watchObservedRunningTime="2026-03-11 09:03:27.645337225 +0000 UTC m=+406.311007040" Mar 11 09:03:27 crc kubenswrapper[4840]: I0311 09:03:27.704739 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/community-operators-ftglb"] Mar 11 09:03:27 crc kubenswrapper[4840]: W0311 09:03:27.709277 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0011bbf4_2baf_40fd_a220_4ee6f6b7fea0.slice/crio-604dd4ae13cf4fbd9e015904573428d525edd9566e8a20dc185c2cabce439af6 WatchSource:0}: Error finding container 604dd4ae13cf4fbd9e015904573428d525edd9566e8a20dc185c2cabce439af6: Status 404 returned error can't find the container with id 604dd4ae13cf4fbd9e015904573428d525edd9566e8a20dc185c2cabce439af6 Mar 11 09:03:28 crc kubenswrapper[4840]: I0311 09:03:28.596331 4840 generic.go:334] "Generic (PLEG): container finished" podID="0011bbf4-2baf-40fd-a220-4ee6f6b7fea0" containerID="2a01de526aa23165a6df2f0e8e93f6e37274adf613b36242d16b12bf1b3773b9" exitCode=0 Mar 11 09:03:28 crc kubenswrapper[4840]: I0311 09:03:28.596424 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ftglb" event={"ID":"0011bbf4-2baf-40fd-a220-4ee6f6b7fea0","Type":"ContainerDied","Data":"2a01de526aa23165a6df2f0e8e93f6e37274adf613b36242d16b12bf1b3773b9"} Mar 11 09:03:28 crc kubenswrapper[4840]: I0311 09:03:28.596524 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ftglb" event={"ID":"0011bbf4-2baf-40fd-a220-4ee6f6b7fea0","Type":"ContainerStarted","Data":"604dd4ae13cf4fbd9e015904573428d525edd9566e8a20dc185c2cabce439af6"} Mar 11 09:03:29 crc kubenswrapper[4840]: I0311 09:03:29.604450 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ftglb" event={"ID":"0011bbf4-2baf-40fd-a220-4ee6f6b7fea0","Type":"ContainerStarted","Data":"02e797804ad13a006d65c05a9714f15529537499897cf924bce527eddb4d17b7"} Mar 11 09:03:29 crc kubenswrapper[4840]: I0311 09:03:29.609726 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s4mg2" 
event={"ID":"9b673938-72a6-421a-9e73-1b2c5226e039","Type":"ContainerStarted","Data":"1cf744cb5728082b3dde17ff7abae47cfaef9f27b480942c38a8cf20042dc85a"} Mar 11 09:03:30 crc kubenswrapper[4840]: I0311 09:03:30.560174 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-7s6gv"] Mar 11 09:03:30 crc kubenswrapper[4840]: I0311 09:03:30.561526 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-7s6gv" Mar 11 09:03:30 crc kubenswrapper[4840]: I0311 09:03:30.572588 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-7s6gv"] Mar 11 09:03:30 crc kubenswrapper[4840]: I0311 09:03:30.611625 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/55d4933b-06d7-47ac-b25d-3272704e0101-registry-certificates\") pod \"image-registry-66df7c8f76-7s6gv\" (UID: \"55d4933b-06d7-47ac-b25d-3272704e0101\") " pod="openshift-image-registry/image-registry-66df7c8f76-7s6gv" Mar 11 09:03:30 crc kubenswrapper[4840]: I0311 09:03:30.611702 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/55d4933b-06d7-47ac-b25d-3272704e0101-installation-pull-secrets\") pod \"image-registry-66df7c8f76-7s6gv\" (UID: \"55d4933b-06d7-47ac-b25d-3272704e0101\") " pod="openshift-image-registry/image-registry-66df7c8f76-7s6gv" Mar 11 09:03:30 crc kubenswrapper[4840]: I0311 09:03:30.611768 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/55d4933b-06d7-47ac-b25d-3272704e0101-registry-tls\") pod \"image-registry-66df7c8f76-7s6gv\" (UID: \"55d4933b-06d7-47ac-b25d-3272704e0101\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-7s6gv" Mar 11 09:03:30 crc kubenswrapper[4840]: I0311 09:03:30.611829 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcrzf\" (UniqueName: \"kubernetes.io/projected/55d4933b-06d7-47ac-b25d-3272704e0101-kube-api-access-tcrzf\") pod \"image-registry-66df7c8f76-7s6gv\" (UID: \"55d4933b-06d7-47ac-b25d-3272704e0101\") " pod="openshift-image-registry/image-registry-66df7c8f76-7s6gv" Mar 11 09:03:30 crc kubenswrapper[4840]: I0311 09:03:30.611876 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/55d4933b-06d7-47ac-b25d-3272704e0101-trusted-ca\") pod \"image-registry-66df7c8f76-7s6gv\" (UID: \"55d4933b-06d7-47ac-b25d-3272704e0101\") " pod="openshift-image-registry/image-registry-66df7c8f76-7s6gv" Mar 11 09:03:30 crc kubenswrapper[4840]: I0311 09:03:30.611916 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/55d4933b-06d7-47ac-b25d-3272704e0101-bound-sa-token\") pod \"image-registry-66df7c8f76-7s6gv\" (UID: \"55d4933b-06d7-47ac-b25d-3272704e0101\") " pod="openshift-image-registry/image-registry-66df7c8f76-7s6gv" Mar 11 09:03:30 crc kubenswrapper[4840]: I0311 09:03:30.611959 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/55d4933b-06d7-47ac-b25d-3272704e0101-ca-trust-extracted\") pod \"image-registry-66df7c8f76-7s6gv\" (UID: \"55d4933b-06d7-47ac-b25d-3272704e0101\") " pod="openshift-image-registry/image-registry-66df7c8f76-7s6gv" Mar 11 09:03:30 crc kubenswrapper[4840]: I0311 09:03:30.613033 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-7s6gv\" (UID: \"55d4933b-06d7-47ac-b25d-3272704e0101\") " pod="openshift-image-registry/image-registry-66df7c8f76-7s6gv" Mar 11 09:03:30 crc kubenswrapper[4840]: I0311 09:03:30.619232 4840 generic.go:334] "Generic (PLEG): container finished" podID="9b673938-72a6-421a-9e73-1b2c5226e039" containerID="1cf744cb5728082b3dde17ff7abae47cfaef9f27b480942c38a8cf20042dc85a" exitCode=0 Mar 11 09:03:30 crc kubenswrapper[4840]: I0311 09:03:30.619333 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s4mg2" event={"ID":"9b673938-72a6-421a-9e73-1b2c5226e039","Type":"ContainerDied","Data":"1cf744cb5728082b3dde17ff7abae47cfaef9f27b480942c38a8cf20042dc85a"} Mar 11 09:03:30 crc kubenswrapper[4840]: I0311 09:03:30.625352 4840 generic.go:334] "Generic (PLEG): container finished" podID="0011bbf4-2baf-40fd-a220-4ee6f6b7fea0" containerID="02e797804ad13a006d65c05a9714f15529537499897cf924bce527eddb4d17b7" exitCode=0 Mar 11 09:03:30 crc kubenswrapper[4840]: I0311 09:03:30.625394 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ftglb" event={"ID":"0011bbf4-2baf-40fd-a220-4ee6f6b7fea0","Type":"ContainerDied","Data":"02e797804ad13a006d65c05a9714f15529537499897cf924bce527eddb4d17b7"} Mar 11 09:03:30 crc kubenswrapper[4840]: I0311 09:03:30.649764 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-7s6gv\" (UID: \"55d4933b-06d7-47ac-b25d-3272704e0101\") " pod="openshift-image-registry/image-registry-66df7c8f76-7s6gv" Mar 11 09:03:30 crc kubenswrapper[4840]: I0311 09:03:30.717329 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"registry-tls\" (UniqueName: \"kubernetes.io/projected/55d4933b-06d7-47ac-b25d-3272704e0101-registry-tls\") pod \"image-registry-66df7c8f76-7s6gv\" (UID: \"55d4933b-06d7-47ac-b25d-3272704e0101\") " pod="openshift-image-registry/image-registry-66df7c8f76-7s6gv" Mar 11 09:03:30 crc kubenswrapper[4840]: I0311 09:03:30.717415 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tcrzf\" (UniqueName: \"kubernetes.io/projected/55d4933b-06d7-47ac-b25d-3272704e0101-kube-api-access-tcrzf\") pod \"image-registry-66df7c8f76-7s6gv\" (UID: \"55d4933b-06d7-47ac-b25d-3272704e0101\") " pod="openshift-image-registry/image-registry-66df7c8f76-7s6gv" Mar 11 09:03:30 crc kubenswrapper[4840]: I0311 09:03:30.717444 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/55d4933b-06d7-47ac-b25d-3272704e0101-trusted-ca\") pod \"image-registry-66df7c8f76-7s6gv\" (UID: \"55d4933b-06d7-47ac-b25d-3272704e0101\") " pod="openshift-image-registry/image-registry-66df7c8f76-7s6gv" Mar 11 09:03:30 crc kubenswrapper[4840]: I0311 09:03:30.717484 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/55d4933b-06d7-47ac-b25d-3272704e0101-bound-sa-token\") pod \"image-registry-66df7c8f76-7s6gv\" (UID: \"55d4933b-06d7-47ac-b25d-3272704e0101\") " pod="openshift-image-registry/image-registry-66df7c8f76-7s6gv" Mar 11 09:03:30 crc kubenswrapper[4840]: I0311 09:03:30.717525 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/55d4933b-06d7-47ac-b25d-3272704e0101-ca-trust-extracted\") pod \"image-registry-66df7c8f76-7s6gv\" (UID: \"55d4933b-06d7-47ac-b25d-3272704e0101\") " pod="openshift-image-registry/image-registry-66df7c8f76-7s6gv" Mar 11 09:03:30 crc kubenswrapper[4840]: I0311 09:03:30.717550 4840 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/55d4933b-06d7-47ac-b25d-3272704e0101-registry-certificates\") pod \"image-registry-66df7c8f76-7s6gv\" (UID: \"55d4933b-06d7-47ac-b25d-3272704e0101\") " pod="openshift-image-registry/image-registry-66df7c8f76-7s6gv" Mar 11 09:03:30 crc kubenswrapper[4840]: I0311 09:03:30.717600 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/55d4933b-06d7-47ac-b25d-3272704e0101-installation-pull-secrets\") pod \"image-registry-66df7c8f76-7s6gv\" (UID: \"55d4933b-06d7-47ac-b25d-3272704e0101\") " pod="openshift-image-registry/image-registry-66df7c8f76-7s6gv" Mar 11 09:03:30 crc kubenswrapper[4840]: I0311 09:03:30.719156 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/55d4933b-06d7-47ac-b25d-3272704e0101-ca-trust-extracted\") pod \"image-registry-66df7c8f76-7s6gv\" (UID: \"55d4933b-06d7-47ac-b25d-3272704e0101\") " pod="openshift-image-registry/image-registry-66df7c8f76-7s6gv" Mar 11 09:03:30 crc kubenswrapper[4840]: I0311 09:03:30.719994 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/55d4933b-06d7-47ac-b25d-3272704e0101-trusted-ca\") pod \"image-registry-66df7c8f76-7s6gv\" (UID: \"55d4933b-06d7-47ac-b25d-3272704e0101\") " pod="openshift-image-registry/image-registry-66df7c8f76-7s6gv" Mar 11 09:03:30 crc kubenswrapper[4840]: I0311 09:03:30.720664 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/55d4933b-06d7-47ac-b25d-3272704e0101-registry-certificates\") pod \"image-registry-66df7c8f76-7s6gv\" (UID: \"55d4933b-06d7-47ac-b25d-3272704e0101\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-7s6gv" Mar 11 09:03:30 crc kubenswrapper[4840]: I0311 09:03:30.725343 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/55d4933b-06d7-47ac-b25d-3272704e0101-installation-pull-secrets\") pod \"image-registry-66df7c8f76-7s6gv\" (UID: \"55d4933b-06d7-47ac-b25d-3272704e0101\") " pod="openshift-image-registry/image-registry-66df7c8f76-7s6gv" Mar 11 09:03:30 crc kubenswrapper[4840]: I0311 09:03:30.734129 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/55d4933b-06d7-47ac-b25d-3272704e0101-registry-tls\") pod \"image-registry-66df7c8f76-7s6gv\" (UID: \"55d4933b-06d7-47ac-b25d-3272704e0101\") " pod="openshift-image-registry/image-registry-66df7c8f76-7s6gv" Mar 11 09:03:30 crc kubenswrapper[4840]: I0311 09:03:30.737876 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/55d4933b-06d7-47ac-b25d-3272704e0101-bound-sa-token\") pod \"image-registry-66df7c8f76-7s6gv\" (UID: \"55d4933b-06d7-47ac-b25d-3272704e0101\") " pod="openshift-image-registry/image-registry-66df7c8f76-7s6gv" Mar 11 09:03:30 crc kubenswrapper[4840]: I0311 09:03:30.748324 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tcrzf\" (UniqueName: \"kubernetes.io/projected/55d4933b-06d7-47ac-b25d-3272704e0101-kube-api-access-tcrzf\") pod \"image-registry-66df7c8f76-7s6gv\" (UID: \"55d4933b-06d7-47ac-b25d-3272704e0101\") " pod="openshift-image-registry/image-registry-66df7c8f76-7s6gv" Mar 11 09:03:30 crc kubenswrapper[4840]: I0311 09:03:30.880968 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-7s6gv" Mar 11 09:03:31 crc kubenswrapper[4840]: I0311 09:03:31.366103 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-7s6gv"] Mar 11 09:03:31 crc kubenswrapper[4840]: W0311 09:03:31.377712 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod55d4933b_06d7_47ac_b25d_3272704e0101.slice/crio-dde89cdec228dcb8ae914ee09d63db3ded558a919849984d1ef88bc29d3f3b9a WatchSource:0}: Error finding container dde89cdec228dcb8ae914ee09d63db3ded558a919849984d1ef88bc29d3f3b9a: Status 404 returned error can't find the container with id dde89cdec228dcb8ae914ee09d63db3ded558a919849984d1ef88bc29d3f3b9a Mar 11 09:03:31 crc kubenswrapper[4840]: I0311 09:03:31.639625 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ftglb" event={"ID":"0011bbf4-2baf-40fd-a220-4ee6f6b7fea0","Type":"ContainerStarted","Data":"b3ef13eb25d87fc910fcef75941727a756fc94253d2dc1a635a8dd5dcabd75a8"} Mar 11 09:03:31 crc kubenswrapper[4840]: I0311 09:03:31.640932 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-7s6gv" event={"ID":"55d4933b-06d7-47ac-b25d-3272704e0101","Type":"ContainerStarted","Data":"dde89cdec228dcb8ae914ee09d63db3ded558a919849984d1ef88bc29d3f3b9a"} Mar 11 09:03:31 crc kubenswrapper[4840]: I0311 09:03:31.642930 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s4mg2" event={"ID":"9b673938-72a6-421a-9e73-1b2c5226e039","Type":"ContainerStarted","Data":"0a29649f67dcc0577e422cc5d667be0bebcab1e1449a75ddf1795c37999fa935"} Mar 11 09:03:31 crc kubenswrapper[4840]: I0311 09:03:31.660202 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ftglb" 
podStartSLOduration=3.117099131 podStartE2EDuration="5.660180836s" podCreationTimestamp="2026-03-11 09:03:26 +0000 UTC" firstStartedPulling="2026-03-11 09:03:28.598005177 +0000 UTC m=+407.263674992" lastFinishedPulling="2026-03-11 09:03:31.141086872 +0000 UTC m=+409.806756697" observedRunningTime="2026-03-11 09:03:31.655683586 +0000 UTC m=+410.321353401" watchObservedRunningTime="2026-03-11 09:03:31.660180836 +0000 UTC m=+410.325850651" Mar 11 09:03:31 crc kubenswrapper[4840]: I0311 09:03:31.680110 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-s4mg2" podStartSLOduration=3.074794992 podStartE2EDuration="6.680088074s" podCreationTimestamp="2026-03-11 09:03:25 +0000 UTC" firstStartedPulling="2026-03-11 09:03:27.586944774 +0000 UTC m=+406.252614589" lastFinishedPulling="2026-03-11 09:03:31.192237846 +0000 UTC m=+409.857907671" observedRunningTime="2026-03-11 09:03:31.679614142 +0000 UTC m=+410.345283957" watchObservedRunningTime="2026-03-11 09:03:31.680088074 +0000 UTC m=+410.345757879" Mar 11 09:03:32 crc kubenswrapper[4840]: I0311 09:03:32.619734 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6685f4fd5b-v8hx7"] Mar 11 09:03:32 crc kubenswrapper[4840]: I0311 09:03:32.620951 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6685f4fd5b-v8hx7" podUID="08cfb0e8-6d25-470f-a462-f4321182ec8d" containerName="route-controller-manager" containerID="cri-o://d85bef3449cf61aa5705b1e1ab7b7012ad914d7b4ca1ded7593477c235ecf94d" gracePeriod=30 Mar 11 09:03:32 crc kubenswrapper[4840]: I0311 09:03:32.649963 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-7s6gv" 
event={"ID":"55d4933b-06d7-47ac-b25d-3272704e0101","Type":"ContainerStarted","Data":"7d748d6c243150b4c34262ecf9a011f2d49fc737d784c07c2e5c2d71db521c15"} Mar 11 09:03:32 crc kubenswrapper[4840]: I0311 09:03:32.677925 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-7s6gv" podStartSLOduration=2.677904932 podStartE2EDuration="2.677904932s" podCreationTimestamp="2026-03-11 09:03:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:03:32.67374436 +0000 UTC m=+411.339414175" watchObservedRunningTime="2026-03-11 09:03:32.677904932 +0000 UTC m=+411.343574747" Mar 11 09:03:33 crc kubenswrapper[4840]: I0311 09:03:33.112357 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6685f4fd5b-v8hx7" Mar 11 09:03:33 crc kubenswrapper[4840]: I0311 09:03:33.159825 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/08cfb0e8-6d25-470f-a462-f4321182ec8d-client-ca\") pod \"08cfb0e8-6d25-470f-a462-f4321182ec8d\" (UID: \"08cfb0e8-6d25-470f-a462-f4321182ec8d\") " Mar 11 09:03:33 crc kubenswrapper[4840]: I0311 09:03:33.159893 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08cfb0e8-6d25-470f-a462-f4321182ec8d-config\") pod \"08cfb0e8-6d25-470f-a462-f4321182ec8d\" (UID: \"08cfb0e8-6d25-470f-a462-f4321182ec8d\") " Mar 11 09:03:33 crc kubenswrapper[4840]: I0311 09:03:33.159945 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/08cfb0e8-6d25-470f-a462-f4321182ec8d-serving-cert\") pod \"08cfb0e8-6d25-470f-a462-f4321182ec8d\" (UID: \"08cfb0e8-6d25-470f-a462-f4321182ec8d\") " Mar 
11 09:03:33 crc kubenswrapper[4840]: I0311 09:03:33.160819 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08cfb0e8-6d25-470f-a462-f4321182ec8d-client-ca" (OuterVolumeSpecName: "client-ca") pod "08cfb0e8-6d25-470f-a462-f4321182ec8d" (UID: "08cfb0e8-6d25-470f-a462-f4321182ec8d"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:03:33 crc kubenswrapper[4840]: I0311 09:03:33.160840 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08cfb0e8-6d25-470f-a462-f4321182ec8d-config" (OuterVolumeSpecName: "config") pod "08cfb0e8-6d25-470f-a462-f4321182ec8d" (UID: "08cfb0e8-6d25-470f-a462-f4321182ec8d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:03:33 crc kubenswrapper[4840]: I0311 09:03:33.161033 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-krx58\" (UniqueName: \"kubernetes.io/projected/08cfb0e8-6d25-470f-a462-f4321182ec8d-kube-api-access-krx58\") pod \"08cfb0e8-6d25-470f-a462-f4321182ec8d\" (UID: \"08cfb0e8-6d25-470f-a462-f4321182ec8d\") " Mar 11 09:03:33 crc kubenswrapper[4840]: I0311 09:03:33.161368 4840 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/08cfb0e8-6d25-470f-a462-f4321182ec8d-client-ca\") on node \"crc\" DevicePath \"\"" Mar 11 09:03:33 crc kubenswrapper[4840]: I0311 09:03:33.161395 4840 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08cfb0e8-6d25-470f-a462-f4321182ec8d-config\") on node \"crc\" DevicePath \"\"" Mar 11 09:03:33 crc kubenswrapper[4840]: I0311 09:03:33.166480 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08cfb0e8-6d25-470f-a462-f4321182ec8d-kube-api-access-krx58" (OuterVolumeSpecName: "kube-api-access-krx58") pod 
"08cfb0e8-6d25-470f-a462-f4321182ec8d" (UID: "08cfb0e8-6d25-470f-a462-f4321182ec8d"). InnerVolumeSpecName "kube-api-access-krx58". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:03:33 crc kubenswrapper[4840]: I0311 09:03:33.170617 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08cfb0e8-6d25-470f-a462-f4321182ec8d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "08cfb0e8-6d25-470f-a462-f4321182ec8d" (UID: "08cfb0e8-6d25-470f-a462-f4321182ec8d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:03:33 crc kubenswrapper[4840]: I0311 09:03:33.262507 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-krx58\" (UniqueName: \"kubernetes.io/projected/08cfb0e8-6d25-470f-a462-f4321182ec8d-kube-api-access-krx58\") on node \"crc\" DevicePath \"\"" Mar 11 09:03:33 crc kubenswrapper[4840]: I0311 09:03:33.262540 4840 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/08cfb0e8-6d25-470f-a462-f4321182ec8d-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 11 09:03:33 crc kubenswrapper[4840]: I0311 09:03:33.658187 4840 generic.go:334] "Generic (PLEG): container finished" podID="08cfb0e8-6d25-470f-a462-f4321182ec8d" containerID="d85bef3449cf61aa5705b1e1ab7b7012ad914d7b4ca1ded7593477c235ecf94d" exitCode=0 Mar 11 09:03:33 crc kubenswrapper[4840]: I0311 09:03:33.659208 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6685f4fd5b-v8hx7" Mar 11 09:03:33 crc kubenswrapper[4840]: I0311 09:03:33.663350 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6685f4fd5b-v8hx7" event={"ID":"08cfb0e8-6d25-470f-a462-f4321182ec8d","Type":"ContainerDied","Data":"d85bef3449cf61aa5705b1e1ab7b7012ad914d7b4ca1ded7593477c235ecf94d"} Mar 11 09:03:33 crc kubenswrapper[4840]: I0311 09:03:33.663436 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-7s6gv" Mar 11 09:03:33 crc kubenswrapper[4840]: I0311 09:03:33.663488 4840 scope.go:117] "RemoveContainer" containerID="d85bef3449cf61aa5705b1e1ab7b7012ad914d7b4ca1ded7593477c235ecf94d" Mar 11 09:03:33 crc kubenswrapper[4840]: I0311 09:03:33.663462 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6685f4fd5b-v8hx7" event={"ID":"08cfb0e8-6d25-470f-a462-f4321182ec8d","Type":"ContainerDied","Data":"47a690dca09ba8d1b96f6a3f42849d9cfa173d57612c98e922a8ff997b93c387"} Mar 11 09:03:33 crc kubenswrapper[4840]: I0311 09:03:33.685978 4840 scope.go:117] "RemoveContainer" containerID="d85bef3449cf61aa5705b1e1ab7b7012ad914d7b4ca1ded7593477c235ecf94d" Mar 11 09:03:33 crc kubenswrapper[4840]: E0311 09:03:33.686545 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d85bef3449cf61aa5705b1e1ab7b7012ad914d7b4ca1ded7593477c235ecf94d\": container with ID starting with d85bef3449cf61aa5705b1e1ab7b7012ad914d7b4ca1ded7593477c235ecf94d not found: ID does not exist" containerID="d85bef3449cf61aa5705b1e1ab7b7012ad914d7b4ca1ded7593477c235ecf94d" Mar 11 09:03:33 crc kubenswrapper[4840]: I0311 09:03:33.686586 4840 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"d85bef3449cf61aa5705b1e1ab7b7012ad914d7b4ca1ded7593477c235ecf94d"} err="failed to get container status \"d85bef3449cf61aa5705b1e1ab7b7012ad914d7b4ca1ded7593477c235ecf94d\": rpc error: code = NotFound desc = could not find container \"d85bef3449cf61aa5705b1e1ab7b7012ad914d7b4ca1ded7593477c235ecf94d\": container with ID starting with d85bef3449cf61aa5705b1e1ab7b7012ad914d7b4ca1ded7593477c235ecf94d not found: ID does not exist" Mar 11 09:03:33 crc kubenswrapper[4840]: I0311 09:03:33.711070 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6685f4fd5b-v8hx7"] Mar 11 09:03:33 crc kubenswrapper[4840]: I0311 09:03:33.715482 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6685f4fd5b-v8hx7"] Mar 11 09:03:33 crc kubenswrapper[4840]: I0311 09:03:33.866558 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-dnnpl" Mar 11 09:03:33 crc kubenswrapper[4840]: I0311 09:03:33.866641 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-dnnpl" Mar 11 09:03:33 crc kubenswrapper[4840]: I0311 09:03:33.913835 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-dnnpl" Mar 11 09:03:34 crc kubenswrapper[4840]: I0311 09:03:34.069126 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08cfb0e8-6d25-470f-a462-f4321182ec8d" path="/var/lib/kubelet/pods/08cfb0e8-6d25-470f-a462-f4321182ec8d/volumes" Mar 11 09:03:34 crc kubenswrapper[4840]: I0311 09:03:34.184095 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f7cf44877-9nq2w"] Mar 11 09:03:34 crc kubenswrapper[4840]: E0311 09:03:34.184536 4840 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="08cfb0e8-6d25-470f-a462-f4321182ec8d" containerName="route-controller-manager" Mar 11 09:03:34 crc kubenswrapper[4840]: I0311 09:03:34.184558 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="08cfb0e8-6d25-470f-a462-f4321182ec8d" containerName="route-controller-manager" Mar 11 09:03:34 crc kubenswrapper[4840]: I0311 09:03:34.184711 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="08cfb0e8-6d25-470f-a462-f4321182ec8d" containerName="route-controller-manager" Mar 11 09:03:34 crc kubenswrapper[4840]: I0311 09:03:34.185398 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7f7cf44877-9nq2w" Mar 11 09:03:34 crc kubenswrapper[4840]: I0311 09:03:34.187670 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 11 09:03:34 crc kubenswrapper[4840]: I0311 09:03:34.190089 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 11 09:03:34 crc kubenswrapper[4840]: I0311 09:03:34.190179 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 11 09:03:34 crc kubenswrapper[4840]: I0311 09:03:34.190300 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 11 09:03:34 crc kubenswrapper[4840]: I0311 09:03:34.190301 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 11 09:03:34 crc kubenswrapper[4840]: I0311 09:03:34.190428 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 11 09:03:34 crc kubenswrapper[4840]: I0311 09:03:34.194018 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-7f7cf44877-9nq2w"] Mar 11 09:03:34 crc kubenswrapper[4840]: I0311 09:03:34.277814 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/996bf86e-de03-42de-a656-75e41c3996ed-serving-cert\") pod \"route-controller-manager-7f7cf44877-9nq2w\" (UID: \"996bf86e-de03-42de-a656-75e41c3996ed\") " pod="openshift-route-controller-manager/route-controller-manager-7f7cf44877-9nq2w" Mar 11 09:03:34 crc kubenswrapper[4840]: I0311 09:03:34.277931 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/996bf86e-de03-42de-a656-75e41c3996ed-client-ca\") pod \"route-controller-manager-7f7cf44877-9nq2w\" (UID: \"996bf86e-de03-42de-a656-75e41c3996ed\") " pod="openshift-route-controller-manager/route-controller-manager-7f7cf44877-9nq2w" Mar 11 09:03:34 crc kubenswrapper[4840]: I0311 09:03:34.278085 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84ptg\" (UniqueName: \"kubernetes.io/projected/996bf86e-de03-42de-a656-75e41c3996ed-kube-api-access-84ptg\") pod \"route-controller-manager-7f7cf44877-9nq2w\" (UID: \"996bf86e-de03-42de-a656-75e41c3996ed\") " pod="openshift-route-controller-manager/route-controller-manager-7f7cf44877-9nq2w" Mar 11 09:03:34 crc kubenswrapper[4840]: I0311 09:03:34.278145 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/996bf86e-de03-42de-a656-75e41c3996ed-config\") pod \"route-controller-manager-7f7cf44877-9nq2w\" (UID: \"996bf86e-de03-42de-a656-75e41c3996ed\") " pod="openshift-route-controller-manager/route-controller-manager-7f7cf44877-9nq2w" Mar 11 09:03:34 crc kubenswrapper[4840]: I0311 09:03:34.379804 4840 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/996bf86e-de03-42de-a656-75e41c3996ed-client-ca\") pod \"route-controller-manager-7f7cf44877-9nq2w\" (UID: \"996bf86e-de03-42de-a656-75e41c3996ed\") " pod="openshift-route-controller-manager/route-controller-manager-7f7cf44877-9nq2w" Mar 11 09:03:34 crc kubenswrapper[4840]: I0311 09:03:34.379918 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84ptg\" (UniqueName: \"kubernetes.io/projected/996bf86e-de03-42de-a656-75e41c3996ed-kube-api-access-84ptg\") pod \"route-controller-manager-7f7cf44877-9nq2w\" (UID: \"996bf86e-de03-42de-a656-75e41c3996ed\") " pod="openshift-route-controller-manager/route-controller-manager-7f7cf44877-9nq2w" Mar 11 09:03:34 crc kubenswrapper[4840]: I0311 09:03:34.379944 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/996bf86e-de03-42de-a656-75e41c3996ed-config\") pod \"route-controller-manager-7f7cf44877-9nq2w\" (UID: \"996bf86e-de03-42de-a656-75e41c3996ed\") " pod="openshift-route-controller-manager/route-controller-manager-7f7cf44877-9nq2w" Mar 11 09:03:34 crc kubenswrapper[4840]: I0311 09:03:34.379981 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/996bf86e-de03-42de-a656-75e41c3996ed-serving-cert\") pod \"route-controller-manager-7f7cf44877-9nq2w\" (UID: \"996bf86e-de03-42de-a656-75e41c3996ed\") " pod="openshift-route-controller-manager/route-controller-manager-7f7cf44877-9nq2w" Mar 11 09:03:34 crc kubenswrapper[4840]: I0311 09:03:34.381131 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/996bf86e-de03-42de-a656-75e41c3996ed-client-ca\") pod \"route-controller-manager-7f7cf44877-9nq2w\" (UID: \"996bf86e-de03-42de-a656-75e41c3996ed\") " 
pod="openshift-route-controller-manager/route-controller-manager-7f7cf44877-9nq2w"
Mar 11 09:03:34 crc kubenswrapper[4840]: I0311 09:03:34.381462 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/996bf86e-de03-42de-a656-75e41c3996ed-config\") pod \"route-controller-manager-7f7cf44877-9nq2w\" (UID: \"996bf86e-de03-42de-a656-75e41c3996ed\") " pod="openshift-route-controller-manager/route-controller-manager-7f7cf44877-9nq2w"
Mar 11 09:03:34 crc kubenswrapper[4840]: I0311 09:03:34.385988 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/996bf86e-de03-42de-a656-75e41c3996ed-serving-cert\") pod \"route-controller-manager-7f7cf44877-9nq2w\" (UID: \"996bf86e-de03-42de-a656-75e41c3996ed\") " pod="openshift-route-controller-manager/route-controller-manager-7f7cf44877-9nq2w"
Mar 11 09:03:34 crc kubenswrapper[4840]: I0311 09:03:34.407106 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84ptg\" (UniqueName: \"kubernetes.io/projected/996bf86e-de03-42de-a656-75e41c3996ed-kube-api-access-84ptg\") pod \"route-controller-manager-7f7cf44877-9nq2w\" (UID: \"996bf86e-de03-42de-a656-75e41c3996ed\") " pod="openshift-route-controller-manager/route-controller-manager-7f7cf44877-9nq2w"
Mar 11 09:03:34 crc kubenswrapper[4840]: I0311 09:03:34.501120 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7f7cf44877-9nq2w"
Mar 11 09:03:34 crc kubenswrapper[4840]: I0311 09:03:34.766292 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-dnnpl"
Mar 11 09:03:34 crc kubenswrapper[4840]: I0311 09:03:34.797637 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f7cf44877-9nq2w"]
Mar 11 09:03:34 crc kubenswrapper[4840]: W0311 09:03:34.823761 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod996bf86e_de03_42de_a656_75e41c3996ed.slice/crio-3de9f01829cf2f4c3646fededb4ea98edb032758e92f6297aa8ec2c9330e7a5a WatchSource:0}: Error finding container 3de9f01829cf2f4c3646fededb4ea98edb032758e92f6297aa8ec2c9330e7a5a: Status 404 returned error can't find the container with id 3de9f01829cf2f4c3646fededb4ea98edb032758e92f6297aa8ec2c9330e7a5a
Mar 11 09:03:34 crc kubenswrapper[4840]: I0311 09:03:34.828865 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-45qqq"
Mar 11 09:03:34 crc kubenswrapper[4840]: I0311 09:03:34.829254 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-45qqq"
Mar 11 09:03:34 crc kubenswrapper[4840]: I0311 09:03:34.904190 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-45qqq"
Mar 11 09:03:35 crc kubenswrapper[4840]: I0311 09:03:35.685928 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7f7cf44877-9nq2w" event={"ID":"996bf86e-de03-42de-a656-75e41c3996ed","Type":"ContainerStarted","Data":"5d9c641ad8daaef5ec37972522572319f7fb7f647b1601c1d621961c58c5cb1e"}
Mar 11 09:03:35 crc kubenswrapper[4840]: I0311 09:03:35.685993 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7f7cf44877-9nq2w" event={"ID":"996bf86e-de03-42de-a656-75e41c3996ed","Type":"ContainerStarted","Data":"3de9f01829cf2f4c3646fededb4ea98edb032758e92f6297aa8ec2c9330e7a5a"}
Mar 11 09:03:35 crc kubenswrapper[4840]: I0311 09:03:35.713450 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7f7cf44877-9nq2w" podStartSLOduration=3.7134232579999997 podStartE2EDuration="3.713423258s" podCreationTimestamp="2026-03-11 09:03:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:03:35.710823164 +0000 UTC m=+414.376492979" watchObservedRunningTime="2026-03-11 09:03:35.713423258 +0000 UTC m=+414.379093073"
Mar 11 09:03:35 crc kubenswrapper[4840]: I0311 09:03:35.751230 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-45qqq"
Mar 11 09:03:36 crc kubenswrapper[4840]: I0311 09:03:36.233063 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-s4mg2"
Mar 11 09:03:36 crc kubenswrapper[4840]: I0311 09:03:36.233458 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-s4mg2"
Mar 11 09:03:36 crc kubenswrapper[4840]: I0311 09:03:36.692669 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7f7cf44877-9nq2w"
Mar 11 09:03:36 crc kubenswrapper[4840]: I0311 09:03:36.698575 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7f7cf44877-9nq2w"
Mar 11 09:03:37 crc kubenswrapper[4840]: I0311 09:03:37.233243 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ftglb"
Mar 11 09:03:37 crc kubenswrapper[4840]: I0311 09:03:37.234846 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ftglb"
Mar 11 09:03:37 crc kubenswrapper[4840]: I0311 09:03:37.270334 4840 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-s4mg2" podUID="9b673938-72a6-421a-9e73-1b2c5226e039" containerName="registry-server" probeResult="failure" output=<
Mar 11 09:03:37 crc kubenswrapper[4840]: timeout: failed to connect service ":50051" within 1s
Mar 11 09:03:37 crc kubenswrapper[4840]: >
Mar 11 09:03:37 crc kubenswrapper[4840]: I0311 09:03:37.282983 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ftglb"
Mar 11 09:03:37 crc kubenswrapper[4840]: I0311 09:03:37.752192 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ftglb"
Mar 11 09:03:46 crc kubenswrapper[4840]: I0311 09:03:46.296613 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-s4mg2"
Mar 11 09:03:46 crc kubenswrapper[4840]: I0311 09:03:46.360000 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-s4mg2"
Mar 11 09:03:50 crc kubenswrapper[4840]: I0311 09:03:50.887899 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-7s6gv"
Mar 11 09:03:50 crc kubenswrapper[4840]: I0311 09:03:50.950412 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-pdrj8"]
Mar 11 09:03:57 crc kubenswrapper[4840]: I0311 09:03:57.446692 4840 patch_prober.go:28] interesting pod/machine-config-daemon-brtht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 11 09:03:57 crc kubenswrapper[4840]: I0311 09:03:57.448848 4840 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 11 09:04:00 crc kubenswrapper[4840]: I0311 09:04:00.140301 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29553664-65njn"]
Mar 11 09:04:00 crc kubenswrapper[4840]: I0311 09:04:00.141820 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553664-65njn"
Mar 11 09:04:00 crc kubenswrapper[4840]: I0311 09:04:00.146282 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-q6lwc"
Mar 11 09:04:00 crc kubenswrapper[4840]: I0311 09:04:00.146777 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 11 09:04:00 crc kubenswrapper[4840]: I0311 09:04:00.151580 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 11 09:04:00 crc kubenswrapper[4840]: I0311 09:04:00.158661 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553664-65njn"]
Mar 11 09:04:00 crc kubenswrapper[4840]: I0311 09:04:00.202150 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmcgk\" (UniqueName: \"kubernetes.io/projected/d9d8efbb-381e-42dd-9f41-b002e811a046-kube-api-access-nmcgk\") pod \"auto-csr-approver-29553664-65njn\" (UID: \"d9d8efbb-381e-42dd-9f41-b002e811a046\") " pod="openshift-infra/auto-csr-approver-29553664-65njn"
Mar 11 09:04:00 crc kubenswrapper[4840]: I0311 09:04:00.304216 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmcgk\" (UniqueName: \"kubernetes.io/projected/d9d8efbb-381e-42dd-9f41-b002e811a046-kube-api-access-nmcgk\") pod \"auto-csr-approver-29553664-65njn\" (UID: \"d9d8efbb-381e-42dd-9f41-b002e811a046\") " pod="openshift-infra/auto-csr-approver-29553664-65njn"
Mar 11 09:04:00 crc kubenswrapper[4840]: I0311 09:04:00.329727 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmcgk\" (UniqueName: \"kubernetes.io/projected/d9d8efbb-381e-42dd-9f41-b002e811a046-kube-api-access-nmcgk\") pod \"auto-csr-approver-29553664-65njn\" (UID: \"d9d8efbb-381e-42dd-9f41-b002e811a046\") " pod="openshift-infra/auto-csr-approver-29553664-65njn"
Mar 11 09:04:00 crc kubenswrapper[4840]: I0311 09:04:00.472511 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553664-65njn"
Mar 11 09:04:00 crc kubenswrapper[4840]: I0311 09:04:00.905588 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553664-65njn"]
Mar 11 09:04:01 crc kubenswrapper[4840]: I0311 09:04:01.866532 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553664-65njn" event={"ID":"d9d8efbb-381e-42dd-9f41-b002e811a046","Type":"ContainerStarted","Data":"14a6fec304c4d1187e38c33dcc81a1af81577e0e5ef4cac69961551a595af94d"}
Mar 11 09:04:02 crc kubenswrapper[4840]: I0311 09:04:02.878309 4840 generic.go:334] "Generic (PLEG): container finished" podID="d9d8efbb-381e-42dd-9f41-b002e811a046" containerID="674c771977115f112b5ec8199caeeb0514a701f72a3d596a1b732a6efbf3a84c" exitCode=0
Mar 11 09:04:02 crc kubenswrapper[4840]: I0311 09:04:02.878401 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553664-65njn" event={"ID":"d9d8efbb-381e-42dd-9f41-b002e811a046","Type":"ContainerDied","Data":"674c771977115f112b5ec8199caeeb0514a701f72a3d596a1b732a6efbf3a84c"}
Mar 11 09:04:04 crc kubenswrapper[4840]: I0311 09:04:04.179260 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553664-65njn"
Mar 11 09:04:04 crc kubenswrapper[4840]: I0311 09:04:04.365743 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmcgk\" (UniqueName: \"kubernetes.io/projected/d9d8efbb-381e-42dd-9f41-b002e811a046-kube-api-access-nmcgk\") pod \"d9d8efbb-381e-42dd-9f41-b002e811a046\" (UID: \"d9d8efbb-381e-42dd-9f41-b002e811a046\") "
Mar 11 09:04:04 crc kubenswrapper[4840]: I0311 09:04:04.380589 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9d8efbb-381e-42dd-9f41-b002e811a046-kube-api-access-nmcgk" (OuterVolumeSpecName: "kube-api-access-nmcgk") pod "d9d8efbb-381e-42dd-9f41-b002e811a046" (UID: "d9d8efbb-381e-42dd-9f41-b002e811a046"). InnerVolumeSpecName "kube-api-access-nmcgk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 11 09:04:04 crc kubenswrapper[4840]: I0311 09:04:04.466925 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nmcgk\" (UniqueName: \"kubernetes.io/projected/d9d8efbb-381e-42dd-9f41-b002e811a046-kube-api-access-nmcgk\") on node \"crc\" DevicePath \"\""
Mar 11 09:04:04 crc kubenswrapper[4840]: I0311 09:04:04.895918 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553664-65njn" event={"ID":"d9d8efbb-381e-42dd-9f41-b002e811a046","Type":"ContainerDied","Data":"14a6fec304c4d1187e38c33dcc81a1af81577e0e5ef4cac69961551a595af94d"}
Mar 11 09:04:04 crc kubenswrapper[4840]: I0311 09:04:04.895961 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="14a6fec304c4d1187e38c33dcc81a1af81577e0e5ef4cac69961551a595af94d"
Mar 11 09:04:04 crc kubenswrapper[4840]: I0311 09:04:04.896194 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553664-65njn"
Mar 11 09:04:16 crc kubenswrapper[4840]: I0311 09:04:16.016152 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-pdrj8" podUID="40a6df27-50b3-452a-940a-aab6b087cdb2" containerName="registry" containerID="cri-o://08e6ad061b588a2322ac9ccb3f32b85a9c7b4219c2bd0b146ba49c64e0a75a65" gracePeriod=30
Mar 11 09:04:16 crc kubenswrapper[4840]: I0311 09:04:16.450225 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-pdrj8"
Mar 11 09:04:16 crc kubenswrapper[4840]: I0311 09:04:16.567817 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/40a6df27-50b3-452a-940a-aab6b087cdb2-installation-pull-secrets\") pod \"40a6df27-50b3-452a-940a-aab6b087cdb2\" (UID: \"40a6df27-50b3-452a-940a-aab6b087cdb2\") "
Mar 11 09:04:16 crc kubenswrapper[4840]: I0311 09:04:16.567906 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/40a6df27-50b3-452a-940a-aab6b087cdb2-registry-tls\") pod \"40a6df27-50b3-452a-940a-aab6b087cdb2\" (UID: \"40a6df27-50b3-452a-940a-aab6b087cdb2\") "
Mar 11 09:04:16 crc kubenswrapper[4840]: I0311 09:04:16.567937 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/40a6df27-50b3-452a-940a-aab6b087cdb2-bound-sa-token\") pod \"40a6df27-50b3-452a-940a-aab6b087cdb2\" (UID: \"40a6df27-50b3-452a-940a-aab6b087cdb2\") "
Mar 11 09:04:16 crc kubenswrapper[4840]: I0311 09:04:16.567960 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/40a6df27-50b3-452a-940a-aab6b087cdb2-trusted-ca\") pod \"40a6df27-50b3-452a-940a-aab6b087cdb2\" (UID: \"40a6df27-50b3-452a-940a-aab6b087cdb2\") "
Mar 11 09:04:16 crc kubenswrapper[4840]: I0311 09:04:16.568071 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/40a6df27-50b3-452a-940a-aab6b087cdb2-ca-trust-extracted\") pod \"40a6df27-50b3-452a-940a-aab6b087cdb2\" (UID: \"40a6df27-50b3-452a-940a-aab6b087cdb2\") "
Mar 11 09:04:16 crc kubenswrapper[4840]: I0311 09:04:16.568106 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/40a6df27-50b3-452a-940a-aab6b087cdb2-registry-certificates\") pod \"40a6df27-50b3-452a-940a-aab6b087cdb2\" (UID: \"40a6df27-50b3-452a-940a-aab6b087cdb2\") "
Mar 11 09:04:16 crc kubenswrapper[4840]: I0311 09:04:16.568141 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9lrwl\" (UniqueName: \"kubernetes.io/projected/40a6df27-50b3-452a-940a-aab6b087cdb2-kube-api-access-9lrwl\") pod \"40a6df27-50b3-452a-940a-aab6b087cdb2\" (UID: \"40a6df27-50b3-452a-940a-aab6b087cdb2\") "
Mar 11 09:04:16 crc kubenswrapper[4840]: I0311 09:04:16.568307 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"40a6df27-50b3-452a-940a-aab6b087cdb2\" (UID: \"40a6df27-50b3-452a-940a-aab6b087cdb2\") "
Mar 11 09:04:16 crc kubenswrapper[4840]: I0311 09:04:16.569233 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40a6df27-50b3-452a-940a-aab6b087cdb2-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "40a6df27-50b3-452a-940a-aab6b087cdb2" (UID: "40a6df27-50b3-452a-940a-aab6b087cdb2"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 11 09:04:16 crc kubenswrapper[4840]: I0311 09:04:16.569296 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40a6df27-50b3-452a-940a-aab6b087cdb2-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "40a6df27-50b3-452a-940a-aab6b087cdb2" (UID: "40a6df27-50b3-452a-940a-aab6b087cdb2"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 11 09:04:16 crc kubenswrapper[4840]: I0311 09:04:16.575763 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40a6df27-50b3-452a-940a-aab6b087cdb2-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "40a6df27-50b3-452a-940a-aab6b087cdb2" (UID: "40a6df27-50b3-452a-940a-aab6b087cdb2"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 11 09:04:16 crc kubenswrapper[4840]: I0311 09:04:16.578584 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40a6df27-50b3-452a-940a-aab6b087cdb2-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "40a6df27-50b3-452a-940a-aab6b087cdb2" (UID: "40a6df27-50b3-452a-940a-aab6b087cdb2"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 11 09:04:16 crc kubenswrapper[4840]: I0311 09:04:16.579189 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "40a6df27-50b3-452a-940a-aab6b087cdb2" (UID: "40a6df27-50b3-452a-940a-aab6b087cdb2"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Mar 11 09:04:16 crc kubenswrapper[4840]: I0311 09:04:16.582796 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40a6df27-50b3-452a-940a-aab6b087cdb2-kube-api-access-9lrwl" (OuterVolumeSpecName: "kube-api-access-9lrwl") pod "40a6df27-50b3-452a-940a-aab6b087cdb2" (UID: "40a6df27-50b3-452a-940a-aab6b087cdb2"). InnerVolumeSpecName "kube-api-access-9lrwl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 11 09:04:16 crc kubenswrapper[4840]: I0311 09:04:16.584093 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40a6df27-50b3-452a-940a-aab6b087cdb2-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "40a6df27-50b3-452a-940a-aab6b087cdb2" (UID: "40a6df27-50b3-452a-940a-aab6b087cdb2"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 11 09:04:16 crc kubenswrapper[4840]: I0311 09:04:16.588820 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40a6df27-50b3-452a-940a-aab6b087cdb2-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "40a6df27-50b3-452a-940a-aab6b087cdb2" (UID: "40a6df27-50b3-452a-940a-aab6b087cdb2"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 11 09:04:16 crc kubenswrapper[4840]: I0311 09:04:16.670144 4840 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/40a6df27-50b3-452a-940a-aab6b087cdb2-ca-trust-extracted\") on node \"crc\" DevicePath \"\""
Mar 11 09:04:16 crc kubenswrapper[4840]: I0311 09:04:16.670201 4840 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/40a6df27-50b3-452a-940a-aab6b087cdb2-registry-certificates\") on node \"crc\" DevicePath \"\""
Mar 11 09:04:16 crc kubenswrapper[4840]: I0311 09:04:16.670216 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9lrwl\" (UniqueName: \"kubernetes.io/projected/40a6df27-50b3-452a-940a-aab6b087cdb2-kube-api-access-9lrwl\") on node \"crc\" DevicePath \"\""
Mar 11 09:04:16 crc kubenswrapper[4840]: I0311 09:04:16.670227 4840 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/40a6df27-50b3-452a-940a-aab6b087cdb2-installation-pull-secrets\") on node \"crc\" DevicePath \"\""
Mar 11 09:04:16 crc kubenswrapper[4840]: I0311 09:04:16.670240 4840 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/40a6df27-50b3-452a-940a-aab6b087cdb2-registry-tls\") on node \"crc\" DevicePath \"\""
Mar 11 09:04:16 crc kubenswrapper[4840]: I0311 09:04:16.670249 4840 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/40a6df27-50b3-452a-940a-aab6b087cdb2-bound-sa-token\") on node \"crc\" DevicePath \"\""
Mar 11 09:04:16 crc kubenswrapper[4840]: I0311 09:04:16.670260 4840 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/40a6df27-50b3-452a-940a-aab6b087cdb2-trusted-ca\") on node \"crc\" DevicePath \"\""
Mar 11 09:04:16 crc kubenswrapper[4840]: I0311 09:04:16.979391 4840 generic.go:334] "Generic (PLEG): container finished" podID="40a6df27-50b3-452a-940a-aab6b087cdb2" containerID="08e6ad061b588a2322ac9ccb3f32b85a9c7b4219c2bd0b146ba49c64e0a75a65" exitCode=0
Mar 11 09:04:16 crc kubenswrapper[4840]: I0311 09:04:16.979489 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-pdrj8" event={"ID":"40a6df27-50b3-452a-940a-aab6b087cdb2","Type":"ContainerDied","Data":"08e6ad061b588a2322ac9ccb3f32b85a9c7b4219c2bd0b146ba49c64e0a75a65"}
Mar 11 09:04:16 crc kubenswrapper[4840]: I0311 09:04:16.979501 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-pdrj8"
Mar 11 09:04:16 crc kubenswrapper[4840]: I0311 09:04:16.979545 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-pdrj8" event={"ID":"40a6df27-50b3-452a-940a-aab6b087cdb2","Type":"ContainerDied","Data":"5eb488ed9a6e7b08a3cadc609fdd28d7556b05b193a59010f63eb8b1437abd3c"}
Mar 11 09:04:16 crc kubenswrapper[4840]: I0311 09:04:16.979650 4840 scope.go:117] "RemoveContainer" containerID="08e6ad061b588a2322ac9ccb3f32b85a9c7b4219c2bd0b146ba49c64e0a75a65"
Mar 11 09:04:17 crc kubenswrapper[4840]: I0311 09:04:17.008770 4840 scope.go:117] "RemoveContainer" containerID="08e6ad061b588a2322ac9ccb3f32b85a9c7b4219c2bd0b146ba49c64e0a75a65"
Mar 11 09:04:17 crc kubenswrapper[4840]: E0311 09:04:17.009953 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08e6ad061b588a2322ac9ccb3f32b85a9c7b4219c2bd0b146ba49c64e0a75a65\": container with ID starting with 08e6ad061b588a2322ac9ccb3f32b85a9c7b4219c2bd0b146ba49c64e0a75a65 not found: ID does not exist" containerID="08e6ad061b588a2322ac9ccb3f32b85a9c7b4219c2bd0b146ba49c64e0a75a65"
Mar 11 09:04:17 crc kubenswrapper[4840]: I0311 09:04:17.010264 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08e6ad061b588a2322ac9ccb3f32b85a9c7b4219c2bd0b146ba49c64e0a75a65"} err="failed to get container status \"08e6ad061b588a2322ac9ccb3f32b85a9c7b4219c2bd0b146ba49c64e0a75a65\": rpc error: code = NotFound desc = could not find container \"08e6ad061b588a2322ac9ccb3f32b85a9c7b4219c2bd0b146ba49c64e0a75a65\": container with ID starting with 08e6ad061b588a2322ac9ccb3f32b85a9c7b4219c2bd0b146ba49c64e0a75a65 not found: ID does not exist"
Mar 11 09:04:17 crc kubenswrapper[4840]: I0311 09:04:17.019691 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-pdrj8"]
Mar 11 09:04:17 crc kubenswrapper[4840]: I0311 09:04:17.022975 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-pdrj8"]
Mar 11 09:04:18 crc kubenswrapper[4840]: I0311 09:04:18.069516 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40a6df27-50b3-452a-940a-aab6b087cdb2" path="/var/lib/kubelet/pods/40a6df27-50b3-452a-940a-aab6b087cdb2/volumes"
Mar 11 09:04:27 crc kubenswrapper[4840]: I0311 09:04:27.445985 4840 patch_prober.go:28] interesting pod/machine-config-daemon-brtht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 11 09:04:27 crc kubenswrapper[4840]: I0311 09:04:27.446820 4840 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 11 09:04:27 crc kubenswrapper[4840]: I0311 09:04:27.446895 4840 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-brtht"
Mar 11 09:04:27 crc kubenswrapper[4840]: I0311 09:04:27.447793 4840 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"39bfd2736dc9b7f94a3520f1fec1596fc21bf709c489bf4e66a4802a52f0ecba"} pod="openshift-machine-config-operator/machine-config-daemon-brtht" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Mar 11 09:04:27 crc kubenswrapper[4840]: I0311 09:04:27.447858 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" containerName="machine-config-daemon" containerID="cri-o://39bfd2736dc9b7f94a3520f1fec1596fc21bf709c489bf4e66a4802a52f0ecba" gracePeriod=600
Mar 11 09:04:28 crc kubenswrapper[4840]: I0311 09:04:28.055032 4840 generic.go:334] "Generic (PLEG): container finished" podID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" containerID="39bfd2736dc9b7f94a3520f1fec1596fc21bf709c489bf4e66a4802a52f0ecba" exitCode=0
Mar 11 09:04:28 crc kubenswrapper[4840]: I0311 09:04:28.055110 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-brtht" event={"ID":"8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d","Type":"ContainerDied","Data":"39bfd2736dc9b7f94a3520f1fec1596fc21bf709c489bf4e66a4802a52f0ecba"}
Mar 11 09:04:28 crc kubenswrapper[4840]: I0311 09:04:28.055569 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-brtht" event={"ID":"8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d","Type":"ContainerStarted","Data":"29a4ee15afa23fdb14b92f2a19d6918007b0fac2dd41b03b6d3af26018e2aa34"}
Mar 11 09:04:28 crc kubenswrapper[4840]: I0311 09:04:28.055591 4840 scope.go:117] "RemoveContainer" containerID="b0602373fe91019f9b20e701e042782a4eb5878ae2df86375738bc605412a803"
Mar 11 09:06:00 crc kubenswrapper[4840]: I0311 09:06:00.145820 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29553666-tmkcj"]
Mar 11 09:06:00 crc kubenswrapper[4840]: E0311 09:06:00.146863 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9d8efbb-381e-42dd-9f41-b002e811a046" containerName="oc"
Mar 11 09:06:00 crc kubenswrapper[4840]: I0311 09:06:00.146880 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9d8efbb-381e-42dd-9f41-b002e811a046" containerName="oc"
Mar 11 09:06:00 crc kubenswrapper[4840]: E0311 09:06:00.146908 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40a6df27-50b3-452a-940a-aab6b087cdb2" containerName="registry"
Mar 11 09:06:00 crc kubenswrapper[4840]: I0311 09:06:00.146917 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="40a6df27-50b3-452a-940a-aab6b087cdb2" containerName="registry"
Mar 11 09:06:00 crc kubenswrapper[4840]: I0311 09:06:00.149004 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="40a6df27-50b3-452a-940a-aab6b087cdb2" containerName="registry"
Mar 11 09:06:00 crc kubenswrapper[4840]: I0311 09:06:00.149107 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9d8efbb-381e-42dd-9f41-b002e811a046" containerName="oc"
Mar 11 09:06:00 crc kubenswrapper[4840]: I0311 09:06:00.150102 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553666-tmkcj"
Mar 11 09:06:00 crc kubenswrapper[4840]: I0311 09:06:00.153744 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 11 09:06:00 crc kubenswrapper[4840]: I0311 09:06:00.153941 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-q6lwc"
Mar 11 09:06:00 crc kubenswrapper[4840]: I0311 09:06:00.154094 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 11 09:06:00 crc kubenswrapper[4840]: I0311 09:06:00.159296 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553666-tmkcj"]
Mar 11 09:06:00 crc kubenswrapper[4840]: I0311 09:06:00.188741 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2rls\" (UniqueName: \"kubernetes.io/projected/a5adfca7-ee6d-4948-a79a-15d42015ba8b-kube-api-access-z2rls\") pod \"auto-csr-approver-29553666-tmkcj\" (UID: \"a5adfca7-ee6d-4948-a79a-15d42015ba8b\") " pod="openshift-infra/auto-csr-approver-29553666-tmkcj"
Mar 11 09:06:00 crc kubenswrapper[4840]: I0311 09:06:00.290262 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2rls\" (UniqueName: \"kubernetes.io/projected/a5adfca7-ee6d-4948-a79a-15d42015ba8b-kube-api-access-z2rls\") pod \"auto-csr-approver-29553666-tmkcj\" (UID: \"a5adfca7-ee6d-4948-a79a-15d42015ba8b\") " pod="openshift-infra/auto-csr-approver-29553666-tmkcj"
Mar 11 09:06:00 crc kubenswrapper[4840]: I0311 09:06:00.316640 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2rls\" (UniqueName: \"kubernetes.io/projected/a5adfca7-ee6d-4948-a79a-15d42015ba8b-kube-api-access-z2rls\") pod \"auto-csr-approver-29553666-tmkcj\" (UID: \"a5adfca7-ee6d-4948-a79a-15d42015ba8b\") " pod="openshift-infra/auto-csr-approver-29553666-tmkcj"
Mar 11 09:06:00 crc kubenswrapper[4840]: I0311 09:06:00.475058 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553666-tmkcj"
Mar 11 09:06:00 crc kubenswrapper[4840]: I0311 09:06:00.696818 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553666-tmkcj"]
Mar 11 09:06:00 crc kubenswrapper[4840]: I0311 09:06:00.706964 4840 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Mar 11 09:06:00 crc kubenswrapper[4840]: I0311 09:06:00.998951 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553666-tmkcj" event={"ID":"a5adfca7-ee6d-4948-a79a-15d42015ba8b","Type":"ContainerStarted","Data":"99f116fe2801625ac84dbdac3fef79a3962d41b415258c75bf191f963e776969"}
Mar 11 09:06:02 crc kubenswrapper[4840]: I0311 09:06:02.007286 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553666-tmkcj" event={"ID":"a5adfca7-ee6d-4948-a79a-15d42015ba8b","Type":"ContainerStarted","Data":"0b7a1a6e54452669c82f91cbe0b81febfe1a6fdf13e22497f30a83e7972ff16c"}
Mar 11 09:06:02 crc kubenswrapper[4840]: I0311 09:06:02.024274 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29553666-tmkcj" podStartSLOduration=1.085111137 podStartE2EDuration="2.024243293s" podCreationTimestamp="2026-03-11 09:06:00 +0000 UTC" firstStartedPulling="2026-03-11 09:06:00.706727702 +0000 UTC m=+559.372397507" lastFinishedPulling="2026-03-11 09:06:01.645859848 +0000 UTC m=+560.311529663" observedRunningTime="2026-03-11 09:06:02.021358351 +0000 UTC m=+560.687028176" watchObservedRunningTime="2026-03-11 09:06:02.024243293 +0000 UTC m=+560.689913108"
Mar 11 09:06:03 crc kubenswrapper[4840]: I0311 09:06:03.017315 4840 generic.go:334] "Generic (PLEG): container finished" podID="a5adfca7-ee6d-4948-a79a-15d42015ba8b" containerID="0b7a1a6e54452669c82f91cbe0b81febfe1a6fdf13e22497f30a83e7972ff16c" exitCode=0
Mar 11 09:06:03 crc kubenswrapper[4840]: I0311 09:06:03.017362 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553666-tmkcj" event={"ID":"a5adfca7-ee6d-4948-a79a-15d42015ba8b","Type":"ContainerDied","Data":"0b7a1a6e54452669c82f91cbe0b81febfe1a6fdf13e22497f30a83e7972ff16c"}
Mar 11 09:06:04 crc kubenswrapper[4840]: I0311 09:06:04.226520 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553666-tmkcj"
Mar 11 09:06:04 crc kubenswrapper[4840]: I0311 09:06:04.343234 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z2rls\" (UniqueName: \"kubernetes.io/projected/a5adfca7-ee6d-4948-a79a-15d42015ba8b-kube-api-access-z2rls\") pod \"a5adfca7-ee6d-4948-a79a-15d42015ba8b\" (UID: \"a5adfca7-ee6d-4948-a79a-15d42015ba8b\") "
Mar 11 09:06:04 crc kubenswrapper[4840]: I0311 09:06:04.350933 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5adfca7-ee6d-4948-a79a-15d42015ba8b-kube-api-access-z2rls" (OuterVolumeSpecName: "kube-api-access-z2rls") pod "a5adfca7-ee6d-4948-a79a-15d42015ba8b" (UID: "a5adfca7-ee6d-4948-a79a-15d42015ba8b"). InnerVolumeSpecName "kube-api-access-z2rls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 11 09:06:04 crc kubenswrapper[4840]: I0311 09:06:04.445446 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z2rls\" (UniqueName: \"kubernetes.io/projected/a5adfca7-ee6d-4948-a79a-15d42015ba8b-kube-api-access-z2rls\") on node \"crc\" DevicePath \"\""
Mar 11 09:06:05 crc kubenswrapper[4840]: I0311 09:06:05.031944 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553666-tmkcj"
Mar 11 09:06:05 crc kubenswrapper[4840]: I0311 09:06:05.031925 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553666-tmkcj" event={"ID":"a5adfca7-ee6d-4948-a79a-15d42015ba8b","Type":"ContainerDied","Data":"99f116fe2801625ac84dbdac3fef79a3962d41b415258c75bf191f963e776969"}
Mar 11 09:06:05 crc kubenswrapper[4840]: I0311 09:06:05.032010 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="99f116fe2801625ac84dbdac3fef79a3962d41b415258c75bf191f963e776969"
Mar 11 09:06:05 crc kubenswrapper[4840]: I0311 09:06:05.078019 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29553660-t75r2"]
Mar 11 09:06:05 crc kubenswrapper[4840]: I0311 09:06:05.081915 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29553660-t75r2"]
Mar 11 09:06:06 crc kubenswrapper[4840]: I0311 09:06:06.076557 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4172593-e97b-48a1-b064-8051cd6aef46" path="/var/lib/kubelet/pods/c4172593-e97b-48a1-b064-8051cd6aef46/volumes"
Mar 11 09:06:27 crc kubenswrapper[4840]: I0311 09:06:27.445780 4840 patch_prober.go:28] interesting pod/machine-config-daemon-brtht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 11 09:06:27 crc kubenswrapper[4840]: I0311 09:06:27.446481 4840 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 11 09:06:42 crc kubenswrapper[4840]: I0311 09:06:42.408424 4840 scope.go:117] "RemoveContainer" containerID="8a7b64590ad588a1fd9b8fe50a9debb1e9ad38bfade3c36d0d9b8217fd2a169e"
Mar 11 09:06:42 crc kubenswrapper[4840]: I0311 09:06:42.434962 4840 scope.go:117] "RemoveContainer" containerID="cc56fc852306a16d14ebb28a025ff3e7385553056d053b394158b4fc4fc52a44"
Mar 11 09:06:57 crc kubenswrapper[4840]: I0311 09:06:57.446806 4840 patch_prober.go:28] interesting pod/machine-config-daemon-brtht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 11 09:06:57 crc kubenswrapper[4840]: I0311 09:06:57.447983 4840 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 11 09:07:27 crc kubenswrapper[4840]: I0311 09:07:27.446535 4840 patch_prober.go:28] interesting pod/machine-config-daemon-brtht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 11 09:07:27 crc kubenswrapper[4840]: I0311 09:07:27.447647 4840 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 11 09:07:27 crc kubenswrapper[4840]: I0311 09:07:27.447720 4840 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy"
pod="openshift-machine-config-operator/machine-config-daemon-brtht" Mar 11 09:07:27 crc kubenswrapper[4840]: I0311 09:07:27.448751 4840 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"29a4ee15afa23fdb14b92f2a19d6918007b0fac2dd41b03b6d3af26018e2aa34"} pod="openshift-machine-config-operator/machine-config-daemon-brtht" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 11 09:07:27 crc kubenswrapper[4840]: I0311 09:07:27.448829 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" containerName="machine-config-daemon" containerID="cri-o://29a4ee15afa23fdb14b92f2a19d6918007b0fac2dd41b03b6d3af26018e2aa34" gracePeriod=600 Mar 11 09:07:28 crc kubenswrapper[4840]: I0311 09:07:28.561525 4840 generic.go:334] "Generic (PLEG): container finished" podID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" containerID="29a4ee15afa23fdb14b92f2a19d6918007b0fac2dd41b03b6d3af26018e2aa34" exitCode=0 Mar 11 09:07:28 crc kubenswrapper[4840]: I0311 09:07:28.561607 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-brtht" event={"ID":"8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d","Type":"ContainerDied","Data":"29a4ee15afa23fdb14b92f2a19d6918007b0fac2dd41b03b6d3af26018e2aa34"} Mar 11 09:07:28 crc kubenswrapper[4840]: I0311 09:07:28.562321 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-brtht" event={"ID":"8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d","Type":"ContainerStarted","Data":"5d930828edaf48a40ab8d839e51f7f6d23db61f827df30c4134bd6083d7cbb22"} Mar 11 09:07:28 crc kubenswrapper[4840]: I0311 09:07:28.562353 4840 scope.go:117] "RemoveContainer" 
containerID="39bfd2736dc9b7f94a3520f1fec1596fc21bf709c489bf4e66a4802a52f0ecba" Mar 11 09:08:00 crc kubenswrapper[4840]: I0311 09:08:00.141753 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29553668-rcg6x"] Mar 11 09:08:00 crc kubenswrapper[4840]: E0311 09:08:00.142868 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5adfca7-ee6d-4948-a79a-15d42015ba8b" containerName="oc" Mar 11 09:08:00 crc kubenswrapper[4840]: I0311 09:08:00.142890 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5adfca7-ee6d-4948-a79a-15d42015ba8b" containerName="oc" Mar 11 09:08:00 crc kubenswrapper[4840]: I0311 09:08:00.142991 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5adfca7-ee6d-4948-a79a-15d42015ba8b" containerName="oc" Mar 11 09:08:00 crc kubenswrapper[4840]: I0311 09:08:00.143373 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553668-rcg6x" Mar 11 09:08:00 crc kubenswrapper[4840]: I0311 09:08:00.149273 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 11 09:08:00 crc kubenswrapper[4840]: I0311 09:08:00.149451 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-q6lwc" Mar 11 09:08:00 crc kubenswrapper[4840]: I0311 09:08:00.149595 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 11 09:08:00 crc kubenswrapper[4840]: I0311 09:08:00.156196 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553668-rcg6x"] Mar 11 09:08:00 crc kubenswrapper[4840]: I0311 09:08:00.255406 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rq7h\" (UniqueName: \"kubernetes.io/projected/f37e6cb1-3d5e-47d7-8829-263a9aab83f8-kube-api-access-6rq7h\") 
pod \"auto-csr-approver-29553668-rcg6x\" (UID: \"f37e6cb1-3d5e-47d7-8829-263a9aab83f8\") " pod="openshift-infra/auto-csr-approver-29553668-rcg6x" Mar 11 09:08:00 crc kubenswrapper[4840]: I0311 09:08:00.356794 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rq7h\" (UniqueName: \"kubernetes.io/projected/f37e6cb1-3d5e-47d7-8829-263a9aab83f8-kube-api-access-6rq7h\") pod \"auto-csr-approver-29553668-rcg6x\" (UID: \"f37e6cb1-3d5e-47d7-8829-263a9aab83f8\") " pod="openshift-infra/auto-csr-approver-29553668-rcg6x" Mar 11 09:08:00 crc kubenswrapper[4840]: I0311 09:08:00.381317 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rq7h\" (UniqueName: \"kubernetes.io/projected/f37e6cb1-3d5e-47d7-8829-263a9aab83f8-kube-api-access-6rq7h\") pod \"auto-csr-approver-29553668-rcg6x\" (UID: \"f37e6cb1-3d5e-47d7-8829-263a9aab83f8\") " pod="openshift-infra/auto-csr-approver-29553668-rcg6x" Mar 11 09:08:00 crc kubenswrapper[4840]: I0311 09:08:00.476189 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553668-rcg6x" Mar 11 09:08:00 crc kubenswrapper[4840]: I0311 09:08:00.881973 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553668-rcg6x"] Mar 11 09:08:01 crc kubenswrapper[4840]: I0311 09:08:01.775047 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553668-rcg6x" event={"ID":"f37e6cb1-3d5e-47d7-8829-263a9aab83f8","Type":"ContainerStarted","Data":"93d1852939fc9c72f32b2031b54eb6d4db899bb286697dbaf0afdac5396be2ae"} Mar 11 09:08:02 crc kubenswrapper[4840]: I0311 09:08:02.783397 4840 generic.go:334] "Generic (PLEG): container finished" podID="f37e6cb1-3d5e-47d7-8829-263a9aab83f8" containerID="90737869b7cab0b463ad285f85abfb1ee45610be90aa4b9485a74e61608a569b" exitCode=0 Mar 11 09:08:02 crc kubenswrapper[4840]: I0311 09:08:02.783554 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553668-rcg6x" event={"ID":"f37e6cb1-3d5e-47d7-8829-263a9aab83f8","Type":"ContainerDied","Data":"90737869b7cab0b463ad285f85abfb1ee45610be90aa4b9485a74e61608a569b"} Mar 11 09:08:04 crc kubenswrapper[4840]: I0311 09:08:04.063494 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553668-rcg6x" Mar 11 09:08:04 crc kubenswrapper[4840]: I0311 09:08:04.214141 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rq7h\" (UniqueName: \"kubernetes.io/projected/f37e6cb1-3d5e-47d7-8829-263a9aab83f8-kube-api-access-6rq7h\") pod \"f37e6cb1-3d5e-47d7-8829-263a9aab83f8\" (UID: \"f37e6cb1-3d5e-47d7-8829-263a9aab83f8\") " Mar 11 09:08:04 crc kubenswrapper[4840]: I0311 09:08:04.221752 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f37e6cb1-3d5e-47d7-8829-263a9aab83f8-kube-api-access-6rq7h" (OuterVolumeSpecName: "kube-api-access-6rq7h") pod "f37e6cb1-3d5e-47d7-8829-263a9aab83f8" (UID: "f37e6cb1-3d5e-47d7-8829-263a9aab83f8"). InnerVolumeSpecName "kube-api-access-6rq7h". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:08:04 crc kubenswrapper[4840]: I0311 09:08:04.316533 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6rq7h\" (UniqueName: \"kubernetes.io/projected/f37e6cb1-3d5e-47d7-8829-263a9aab83f8-kube-api-access-6rq7h\") on node \"crc\" DevicePath \"\"" Mar 11 09:08:04 crc kubenswrapper[4840]: I0311 09:08:04.798330 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553668-rcg6x" event={"ID":"f37e6cb1-3d5e-47d7-8829-263a9aab83f8","Type":"ContainerDied","Data":"93d1852939fc9c72f32b2031b54eb6d4db899bb286697dbaf0afdac5396be2ae"} Mar 11 09:08:04 crc kubenswrapper[4840]: I0311 09:08:04.798391 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="93d1852939fc9c72f32b2031b54eb6d4db899bb286697dbaf0afdac5396be2ae" Mar 11 09:08:04 crc kubenswrapper[4840]: I0311 09:08:04.798425 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553668-rcg6x" Mar 11 09:08:05 crc kubenswrapper[4840]: I0311 09:08:05.131769 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29553662-vnks9"] Mar 11 09:08:05 crc kubenswrapper[4840]: I0311 09:08:05.134892 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29553662-vnks9"] Mar 11 09:08:06 crc kubenswrapper[4840]: I0311 09:08:06.070070 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe10c40d-babc-4f67-831a-efc94c086e64" path="/var/lib/kubelet/pods/fe10c40d-babc-4f67-831a-efc94c086e64/volumes" Mar 11 09:08:42 crc kubenswrapper[4840]: I0311 09:08:42.531549 4840 scope.go:117] "RemoveContainer" containerID="e017eab064c5fac7fdf9d503edd02db88f5af22a912f0030c82136a9cdfe0713" Mar 11 09:09:56 crc kubenswrapper[4840]: I0311 09:09:56.115225 4840 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Mar 11 09:09:57 crc kubenswrapper[4840]: I0311 09:09:57.445885 4840 patch_prober.go:28] interesting pod/machine-config-daemon-brtht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 11 09:09:57 crc kubenswrapper[4840]: I0311 09:09:57.446449 4840 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 11 09:10:00 crc kubenswrapper[4840]: I0311 09:10:00.141869 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29553670-cd8gv"] Mar 11 09:10:00 crc 
kubenswrapper[4840]: E0311 09:10:00.142742 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f37e6cb1-3d5e-47d7-8829-263a9aab83f8" containerName="oc" Mar 11 09:10:00 crc kubenswrapper[4840]: I0311 09:10:00.142761 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="f37e6cb1-3d5e-47d7-8829-263a9aab83f8" containerName="oc" Mar 11 09:10:00 crc kubenswrapper[4840]: I0311 09:10:00.142896 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="f37e6cb1-3d5e-47d7-8829-263a9aab83f8" containerName="oc" Mar 11 09:10:00 crc kubenswrapper[4840]: I0311 09:10:00.143452 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553670-cd8gv" Mar 11 09:10:00 crc kubenswrapper[4840]: I0311 09:10:00.148990 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 11 09:10:00 crc kubenswrapper[4840]: I0311 09:10:00.148996 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 11 09:10:00 crc kubenswrapper[4840]: I0311 09:10:00.149014 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-q6lwc" Mar 11 09:10:00 crc kubenswrapper[4840]: I0311 09:10:00.156997 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553670-cd8gv"] Mar 11 09:10:00 crc kubenswrapper[4840]: I0311 09:10:00.215614 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b467h\" (UniqueName: \"kubernetes.io/projected/037b8006-981a-4018-a446-b5c00007db43-kube-api-access-b467h\") pod \"auto-csr-approver-29553670-cd8gv\" (UID: \"037b8006-981a-4018-a446-b5c00007db43\") " pod="openshift-infra/auto-csr-approver-29553670-cd8gv" Mar 11 09:10:00 crc kubenswrapper[4840]: I0311 09:10:00.316636 4840 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-b467h\" (UniqueName: \"kubernetes.io/projected/037b8006-981a-4018-a446-b5c00007db43-kube-api-access-b467h\") pod \"auto-csr-approver-29553670-cd8gv\" (UID: \"037b8006-981a-4018-a446-b5c00007db43\") " pod="openshift-infra/auto-csr-approver-29553670-cd8gv" Mar 11 09:10:00 crc kubenswrapper[4840]: I0311 09:10:00.347677 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b467h\" (UniqueName: \"kubernetes.io/projected/037b8006-981a-4018-a446-b5c00007db43-kube-api-access-b467h\") pod \"auto-csr-approver-29553670-cd8gv\" (UID: \"037b8006-981a-4018-a446-b5c00007db43\") " pod="openshift-infra/auto-csr-approver-29553670-cd8gv" Mar 11 09:10:00 crc kubenswrapper[4840]: I0311 09:10:00.462872 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553670-cd8gv" Mar 11 09:10:00 crc kubenswrapper[4840]: I0311 09:10:00.899040 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553670-cd8gv"] Mar 11 09:10:00 crc kubenswrapper[4840]: W0311 09:10:00.908740 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod037b8006_981a_4018_a446_b5c00007db43.slice/crio-7e91af2b1008cec273255048015faaa2778aec5fb8c8a88518dd3ea2ff484e64 WatchSource:0}: Error finding container 7e91af2b1008cec273255048015faaa2778aec5fb8c8a88518dd3ea2ff484e64: Status 404 returned error can't find the container with id 7e91af2b1008cec273255048015faaa2778aec5fb8c8a88518dd3ea2ff484e64 Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.358347 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-7c2zl"] Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.360367 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" 
podUID="935336e2-294b-4982-83f9-718806d14e5c" containerName="ovn-controller" containerID="cri-o://eaf42bd09d527f5ce4ce8b0619c9b61f56ad6a2a5edf92cac68897a2ada84b24" gracePeriod=30 Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.360492 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" podUID="935336e2-294b-4982-83f9-718806d14e5c" containerName="sbdb" containerID="cri-o://1338b0d6cdb9527554429be2e6fdec5c0b98075978344d168fd6e363eb12c879" gracePeriod=30 Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.360486 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" podUID="935336e2-294b-4982-83f9-718806d14e5c" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://910e2e8b981e2ab6212cd615b1c5134fde5d4cf4f85220c2613e3e301f99293c" gracePeriod=30 Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.360617 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" podUID="935336e2-294b-4982-83f9-718806d14e5c" containerName="nbdb" containerID="cri-o://aa38a6bbbd74dad77c64f6f59df35d12881619da7319501839cf0f1eb44c65cd" gracePeriod=30 Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.360653 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" podUID="935336e2-294b-4982-83f9-718806d14e5c" containerName="kube-rbac-proxy-node" containerID="cri-o://047c371ede152b2bfc450a373d2c3668e92dfed75022ebf12644c116db589373" gracePeriod=30 Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.360692 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" podUID="935336e2-294b-4982-83f9-718806d14e5c" containerName="ovn-acl-logging" containerID="cri-o://430dc74cf3f3dcf9a87782869390e477899521c6ff0e704ba6272a017b90081d" 
gracePeriod=30 Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.360645 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" podUID="935336e2-294b-4982-83f9-718806d14e5c" containerName="northd" containerID="cri-o://4a22746f21587fd8fd3d4a7350442c72a3131d5a9f5e661ff05d78377c5a00cb" gracePeriod=30 Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.404959 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" podUID="935336e2-294b-4982-83f9-718806d14e5c" containerName="ovnkube-controller" containerID="cri-o://7060c9a7b65d0c2aaa8611500978abfa8668a64c500761121997e0d627a1c924" gracePeriod=30 Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.704287 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7c2zl_935336e2-294b-4982-83f9-718806d14e5c/ovnkube-controller/2.log" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.706750 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7c2zl_935336e2-294b-4982-83f9-718806d14e5c/ovn-acl-logging/0.log" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.707352 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7c2zl_935336e2-294b-4982-83f9-718806d14e5c/ovn-controller/0.log" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.707991 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.723529 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-vcb9n_0c1678fd-7741-474b-9c8e-3008d3570921/kube-multus/1.log" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.724043 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-vcb9n_0c1678fd-7741-474b-9c8e-3008d3570921/kube-multus/0.log" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.724091 4840 generic.go:334] "Generic (PLEG): container finished" podID="0c1678fd-7741-474b-9c8e-3008d3570921" containerID="ae57d680327e6f0eb22304dbeba1a9a8e001326b065bfd0ae0266dcb5e561d87" exitCode=2 Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.724159 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-vcb9n" event={"ID":"0c1678fd-7741-474b-9c8e-3008d3570921","Type":"ContainerDied","Data":"ae57d680327e6f0eb22304dbeba1a9a8e001326b065bfd0ae0266dcb5e561d87"} Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.724204 4840 scope.go:117] "RemoveContainer" containerID="affb76bb8b08157d415301fc93f48fe8c675137a1f6087bcade89dc117748638" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.724938 4840 scope.go:117] "RemoveContainer" containerID="ae57d680327e6f0eb22304dbeba1a9a8e001326b065bfd0ae0266dcb5e561d87" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.728447 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7c2zl_935336e2-294b-4982-83f9-718806d14e5c/ovnkube-controller/2.log" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.735408 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7c2zl_935336e2-294b-4982-83f9-718806d14e5c/ovn-acl-logging/0.log" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.736268 4840 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7c2zl_935336e2-294b-4982-83f9-718806d14e5c/ovn-controller/0.log" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.737929 4840 generic.go:334] "Generic (PLEG): container finished" podID="935336e2-294b-4982-83f9-718806d14e5c" containerID="7060c9a7b65d0c2aaa8611500978abfa8668a64c500761121997e0d627a1c924" exitCode=0 Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.737977 4840 generic.go:334] "Generic (PLEG): container finished" podID="935336e2-294b-4982-83f9-718806d14e5c" containerID="1338b0d6cdb9527554429be2e6fdec5c0b98075978344d168fd6e363eb12c879" exitCode=0 Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.737988 4840 generic.go:334] "Generic (PLEG): container finished" podID="935336e2-294b-4982-83f9-718806d14e5c" containerID="aa38a6bbbd74dad77c64f6f59df35d12881619da7319501839cf0f1eb44c65cd" exitCode=0 Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.738000 4840 generic.go:334] "Generic (PLEG): container finished" podID="935336e2-294b-4982-83f9-718806d14e5c" containerID="4a22746f21587fd8fd3d4a7350442c72a3131d5a9f5e661ff05d78377c5a00cb" exitCode=0 Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.738010 4840 generic.go:334] "Generic (PLEG): container finished" podID="935336e2-294b-4982-83f9-718806d14e5c" containerID="910e2e8b981e2ab6212cd615b1c5134fde5d4cf4f85220c2613e3e301f99293c" exitCode=0 Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.738021 4840 generic.go:334] "Generic (PLEG): container finished" podID="935336e2-294b-4982-83f9-718806d14e5c" containerID="047c371ede152b2bfc450a373d2c3668e92dfed75022ebf12644c116db589373" exitCode=0 Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.738031 4840 generic.go:334] "Generic (PLEG): container finished" podID="935336e2-294b-4982-83f9-718806d14e5c" containerID="430dc74cf3f3dcf9a87782869390e477899521c6ff0e704ba6272a017b90081d" exitCode=143 Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.738042 4840 
generic.go:334] "Generic (PLEG): container finished" podID="935336e2-294b-4982-83f9-718806d14e5c" containerID="eaf42bd09d527f5ce4ce8b0619c9b61f56ad6a2a5edf92cac68897a2ada84b24" exitCode=143 Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.738150 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" event={"ID":"935336e2-294b-4982-83f9-718806d14e5c","Type":"ContainerDied","Data":"7060c9a7b65d0c2aaa8611500978abfa8668a64c500761121997e0d627a1c924"} Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.738194 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" event={"ID":"935336e2-294b-4982-83f9-718806d14e5c","Type":"ContainerDied","Data":"1338b0d6cdb9527554429be2e6fdec5c0b98075978344d168fd6e363eb12c879"} Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.738213 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" event={"ID":"935336e2-294b-4982-83f9-718806d14e5c","Type":"ContainerDied","Data":"aa38a6bbbd74dad77c64f6f59df35d12881619da7319501839cf0f1eb44c65cd"} Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.738227 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" event={"ID":"935336e2-294b-4982-83f9-718806d14e5c","Type":"ContainerDied","Data":"4a22746f21587fd8fd3d4a7350442c72a3131d5a9f5e661ff05d78377c5a00cb"} Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.738242 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" event={"ID":"935336e2-294b-4982-83f9-718806d14e5c","Type":"ContainerDied","Data":"910e2e8b981e2ab6212cd615b1c5134fde5d4cf4f85220c2613e3e301f99293c"} Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.738259 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" 
event={"ID":"935336e2-294b-4982-83f9-718806d14e5c","Type":"ContainerDied","Data":"047c371ede152b2bfc450a373d2c3668e92dfed75022ebf12644c116db589373"} Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.738264 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.738273 4840 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7060c9a7b65d0c2aaa8611500978abfa8668a64c500761121997e0d627a1c924"} Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.738287 4840 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0413db34ab0410c5d6e2822520410d9db275b1c4bd9ba1f7343a0c31befc9b0b"} Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.738293 4840 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1338b0d6cdb9527554429be2e6fdec5c0b98075978344d168fd6e363eb12c879"} Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.738300 4840 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"aa38a6bbbd74dad77c64f6f59df35d12881619da7319501839cf0f1eb44c65cd"} Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.738307 4840 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4a22746f21587fd8fd3d4a7350442c72a3131d5a9f5e661ff05d78377c5a00cb"} Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.738313 4840 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"910e2e8b981e2ab6212cd615b1c5134fde5d4cf4f85220c2613e3e301f99293c"} Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.738320 4840 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"047c371ede152b2bfc450a373d2c3668e92dfed75022ebf12644c116db589373"} Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.738325 4840 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"430dc74cf3f3dcf9a87782869390e477899521c6ff0e704ba6272a017b90081d"} Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.738331 4840 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"eaf42bd09d527f5ce4ce8b0619c9b61f56ad6a2a5edf92cac68897a2ada84b24"} Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.738337 4840 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c63877eb44dae815dbd71053c89313ba836c2fdd90cc3d6d299526c027887e19"} Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.738345 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" event={"ID":"935336e2-294b-4982-83f9-718806d14e5c","Type":"ContainerDied","Data":"430dc74cf3f3dcf9a87782869390e477899521c6ff0e704ba6272a017b90081d"} Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.738354 4840 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7060c9a7b65d0c2aaa8611500978abfa8668a64c500761121997e0d627a1c924"} Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.738361 4840 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0413db34ab0410c5d6e2822520410d9db275b1c4bd9ba1f7343a0c31befc9b0b"} Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.738368 4840 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1338b0d6cdb9527554429be2e6fdec5c0b98075978344d168fd6e363eb12c879"} Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.738374 4840 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"aa38a6bbbd74dad77c64f6f59df35d12881619da7319501839cf0f1eb44c65cd"} Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.738380 4840 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4a22746f21587fd8fd3d4a7350442c72a3131d5a9f5e661ff05d78377c5a00cb"} Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.738386 4840 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"910e2e8b981e2ab6212cd615b1c5134fde5d4cf4f85220c2613e3e301f99293c"} Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.738394 4840 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"047c371ede152b2bfc450a373d2c3668e92dfed75022ebf12644c116db589373"} Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.738400 4840 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"430dc74cf3f3dcf9a87782869390e477899521c6ff0e704ba6272a017b90081d"} Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.738406 4840 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"eaf42bd09d527f5ce4ce8b0619c9b61f56ad6a2a5edf92cac68897a2ada84b24"} Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.738412 4840 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c63877eb44dae815dbd71053c89313ba836c2fdd90cc3d6d299526c027887e19"} Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.738420 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" event={"ID":"935336e2-294b-4982-83f9-718806d14e5c","Type":"ContainerDied","Data":"eaf42bd09d527f5ce4ce8b0619c9b61f56ad6a2a5edf92cac68897a2ada84b24"} Mar 11 
09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.738432 4840 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7060c9a7b65d0c2aaa8611500978abfa8668a64c500761121997e0d627a1c924"} Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.738443 4840 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0413db34ab0410c5d6e2822520410d9db275b1c4bd9ba1f7343a0c31befc9b0b"} Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.738450 4840 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1338b0d6cdb9527554429be2e6fdec5c0b98075978344d168fd6e363eb12c879"} Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.738457 4840 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"aa38a6bbbd74dad77c64f6f59df35d12881619da7319501839cf0f1eb44c65cd"} Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.738490 4840 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4a22746f21587fd8fd3d4a7350442c72a3131d5a9f5e661ff05d78377c5a00cb"} Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.738497 4840 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"910e2e8b981e2ab6212cd615b1c5134fde5d4cf4f85220c2613e3e301f99293c"} Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.738505 4840 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"047c371ede152b2bfc450a373d2c3668e92dfed75022ebf12644c116db589373"} Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.738512 4840 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"430dc74cf3f3dcf9a87782869390e477899521c6ff0e704ba6272a017b90081d"} Mar 11 
09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.738519 4840 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"eaf42bd09d527f5ce4ce8b0619c9b61f56ad6a2a5edf92cac68897a2ada84b24"} Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.738526 4840 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c63877eb44dae815dbd71053c89313ba836c2fdd90cc3d6d299526c027887e19"} Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.738535 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7c2zl" event={"ID":"935336e2-294b-4982-83f9-718806d14e5c","Type":"ContainerDied","Data":"5183fda9fba4bfb5b137d3d27accc77f5c252a73c89e1c6560c5ff2ea00e7c18"} Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.738546 4840 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7060c9a7b65d0c2aaa8611500978abfa8668a64c500761121997e0d627a1c924"} Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.738554 4840 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0413db34ab0410c5d6e2822520410d9db275b1c4bd9ba1f7343a0c31befc9b0b"} Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.738560 4840 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1338b0d6cdb9527554429be2e6fdec5c0b98075978344d168fd6e363eb12c879"} Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.738575 4840 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"aa38a6bbbd74dad77c64f6f59df35d12881619da7319501839cf0f1eb44c65cd"} Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.738581 4840 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"4a22746f21587fd8fd3d4a7350442c72a3131d5a9f5e661ff05d78377c5a00cb"} Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.738586 4840 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"910e2e8b981e2ab6212cd615b1c5134fde5d4cf4f85220c2613e3e301f99293c"} Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.738592 4840 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"047c371ede152b2bfc450a373d2c3668e92dfed75022ebf12644c116db589373"} Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.738598 4840 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"430dc74cf3f3dcf9a87782869390e477899521c6ff0e704ba6272a017b90081d"} Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.738603 4840 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"eaf42bd09d527f5ce4ce8b0619c9b61f56ad6a2a5edf92cac68897a2ada84b24"} Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.738609 4840 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c63877eb44dae815dbd71053c89313ba836c2fdd90cc3d6d299526c027887e19"} Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.744546 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553670-cd8gv" event={"ID":"037b8006-981a-4018-a446-b5c00007db43","Type":"ContainerStarted","Data":"7e91af2b1008cec273255048015faaa2778aec5fb8c8a88518dd3ea2ff484e64"} Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.762381 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-kh9gf"] Mar 11 09:10:01 crc kubenswrapper[4840]: E0311 09:10:01.764676 4840 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="935336e2-294b-4982-83f9-718806d14e5c" containerName="nbdb" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.764701 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="935336e2-294b-4982-83f9-718806d14e5c" containerName="nbdb" Mar 11 09:10:01 crc kubenswrapper[4840]: E0311 09:10:01.764710 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="935336e2-294b-4982-83f9-718806d14e5c" containerName="kube-rbac-proxy-ovn-metrics" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.764716 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="935336e2-294b-4982-83f9-718806d14e5c" containerName="kube-rbac-proxy-ovn-metrics" Mar 11 09:10:01 crc kubenswrapper[4840]: E0311 09:10:01.764722 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="935336e2-294b-4982-83f9-718806d14e5c" containerName="ovn-controller" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.764730 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="935336e2-294b-4982-83f9-718806d14e5c" containerName="ovn-controller" Mar 11 09:10:01 crc kubenswrapper[4840]: E0311 09:10:01.764743 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="935336e2-294b-4982-83f9-718806d14e5c" containerName="ovnkube-controller" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.764749 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="935336e2-294b-4982-83f9-718806d14e5c" containerName="ovnkube-controller" Mar 11 09:10:01 crc kubenswrapper[4840]: E0311 09:10:01.764757 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="935336e2-294b-4982-83f9-718806d14e5c" containerName="ovnkube-controller" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.764763 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="935336e2-294b-4982-83f9-718806d14e5c" containerName="ovnkube-controller" Mar 11 09:10:01 crc kubenswrapper[4840]: E0311 09:10:01.764770 4840 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="935336e2-294b-4982-83f9-718806d14e5c" containerName="ovnkube-controller" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.764776 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="935336e2-294b-4982-83f9-718806d14e5c" containerName="ovnkube-controller" Mar 11 09:10:01 crc kubenswrapper[4840]: E0311 09:10:01.764784 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="935336e2-294b-4982-83f9-718806d14e5c" containerName="ovnkube-controller" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.764790 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="935336e2-294b-4982-83f9-718806d14e5c" containerName="ovnkube-controller" Mar 11 09:10:01 crc kubenswrapper[4840]: E0311 09:10:01.764802 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="935336e2-294b-4982-83f9-718806d14e5c" containerName="sbdb" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.764807 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="935336e2-294b-4982-83f9-718806d14e5c" containerName="sbdb" Mar 11 09:10:01 crc kubenswrapper[4840]: E0311 09:10:01.764815 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="935336e2-294b-4982-83f9-718806d14e5c" containerName="kube-rbac-proxy-node" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.764821 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="935336e2-294b-4982-83f9-718806d14e5c" containerName="kube-rbac-proxy-node" Mar 11 09:10:01 crc kubenswrapper[4840]: E0311 09:10:01.764828 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="935336e2-294b-4982-83f9-718806d14e5c" containerName="northd" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.764833 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="935336e2-294b-4982-83f9-718806d14e5c" containerName="northd" Mar 11 09:10:01 crc kubenswrapper[4840]: E0311 09:10:01.764841 4840 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="935336e2-294b-4982-83f9-718806d14e5c" containerName="kubecfg-setup" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.764847 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="935336e2-294b-4982-83f9-718806d14e5c" containerName="kubecfg-setup" Mar 11 09:10:01 crc kubenswrapper[4840]: E0311 09:10:01.764856 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="935336e2-294b-4982-83f9-718806d14e5c" containerName="ovn-acl-logging" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.764862 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="935336e2-294b-4982-83f9-718806d14e5c" containerName="ovn-acl-logging" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.765029 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="935336e2-294b-4982-83f9-718806d14e5c" containerName="ovnkube-controller" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.765040 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="935336e2-294b-4982-83f9-718806d14e5c" containerName="northd" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.765050 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="935336e2-294b-4982-83f9-718806d14e5c" containerName="nbdb" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.765059 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="935336e2-294b-4982-83f9-718806d14e5c" containerName="kube-rbac-proxy-node" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.765065 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="935336e2-294b-4982-83f9-718806d14e5c" containerName="sbdb" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.765073 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="935336e2-294b-4982-83f9-718806d14e5c" containerName="ovn-controller" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.765080 4840 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="935336e2-294b-4982-83f9-718806d14e5c" containerName="ovnkube-controller" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.765088 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="935336e2-294b-4982-83f9-718806d14e5c" containerName="kube-rbac-proxy-ovn-metrics" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.765096 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="935336e2-294b-4982-83f9-718806d14e5c" containerName="ovn-acl-logging" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.765104 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="935336e2-294b-4982-83f9-718806d14e5c" containerName="ovnkube-controller" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.765270 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="935336e2-294b-4982-83f9-718806d14e5c" containerName="ovnkube-controller" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.766827 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-kh9gf" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.769351 4840 scope.go:117] "RemoveContainer" containerID="7060c9a7b65d0c2aaa8611500978abfa8668a64c500761121997e0d627a1c924" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.808041 4840 scope.go:117] "RemoveContainer" containerID="0413db34ab0410c5d6e2822520410d9db275b1c4bd9ba1f7343a0c31befc9b0b" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.828068 4840 scope.go:117] "RemoveContainer" containerID="1338b0d6cdb9527554429be2e6fdec5c0b98075978344d168fd6e363eb12c879" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.843309 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lrc2z\" (UniqueName: \"kubernetes.io/projected/935336e2-294b-4982-83f9-718806d14e5c-kube-api-access-lrc2z\") pod \"935336e2-294b-4982-83f9-718806d14e5c\" (UID: \"935336e2-294b-4982-83f9-718806d14e5c\") " Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.843375 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/935336e2-294b-4982-83f9-718806d14e5c-etc-openvswitch\") pod \"935336e2-294b-4982-83f9-718806d14e5c\" (UID: \"935336e2-294b-4982-83f9-718806d14e5c\") " Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.843421 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/935336e2-294b-4982-83f9-718806d14e5c-host-kubelet\") pod \"935336e2-294b-4982-83f9-718806d14e5c\" (UID: \"935336e2-294b-4982-83f9-718806d14e5c\") " Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.843443 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/935336e2-294b-4982-83f9-718806d14e5c-host-run-ovn-kubernetes\") pod 
\"935336e2-294b-4982-83f9-718806d14e5c\" (UID: \"935336e2-294b-4982-83f9-718806d14e5c\") " Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.843507 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/935336e2-294b-4982-83f9-718806d14e5c-host-cni-netd\") pod \"935336e2-294b-4982-83f9-718806d14e5c\" (UID: \"935336e2-294b-4982-83f9-718806d14e5c\") " Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.843531 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/935336e2-294b-4982-83f9-718806d14e5c-host-slash\") pod \"935336e2-294b-4982-83f9-718806d14e5c\" (UID: \"935336e2-294b-4982-83f9-718806d14e5c\") " Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.843583 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/935336e2-294b-4982-83f9-718806d14e5c-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "935336e2-294b-4982-83f9-718806d14e5c" (UID: "935336e2-294b-4982-83f9-718806d14e5c"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.843596 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/935336e2-294b-4982-83f9-718806d14e5c-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "935336e2-294b-4982-83f9-718806d14e5c" (UID: "935336e2-294b-4982-83f9-718806d14e5c"). InnerVolumeSpecName "etc-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.843600 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/935336e2-294b-4982-83f9-718806d14e5c-ovnkube-script-lib\") pod \"935336e2-294b-4982-83f9-718806d14e5c\" (UID: \"935336e2-294b-4982-83f9-718806d14e5c\") " Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.843854 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/935336e2-294b-4982-83f9-718806d14e5c-ovn-node-metrics-cert\") pod \"935336e2-294b-4982-83f9-718806d14e5c\" (UID: \"935336e2-294b-4982-83f9-718806d14e5c\") " Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.843890 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/935336e2-294b-4982-83f9-718806d14e5c-systemd-units\") pod \"935336e2-294b-4982-83f9-718806d14e5c\" (UID: \"935336e2-294b-4982-83f9-718806d14e5c\") " Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.843910 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/935336e2-294b-4982-83f9-718806d14e5c-run-ovn\") pod \"935336e2-294b-4982-83f9-718806d14e5c\" (UID: \"935336e2-294b-4982-83f9-718806d14e5c\") " Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.843927 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/935336e2-294b-4982-83f9-718806d14e5c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"935336e2-294b-4982-83f9-718806d14e5c\" (UID: \"935336e2-294b-4982-83f9-718806d14e5c\") " Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.843958 4840 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/935336e2-294b-4982-83f9-718806d14e5c-run-openvswitch\") pod \"935336e2-294b-4982-83f9-718806d14e5c\" (UID: \"935336e2-294b-4982-83f9-718806d14e5c\") " Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.843975 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/935336e2-294b-4982-83f9-718806d14e5c-node-log\") pod \"935336e2-294b-4982-83f9-718806d14e5c\" (UID: \"935336e2-294b-4982-83f9-718806d14e5c\") " Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.844066 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/935336e2-294b-4982-83f9-718806d14e5c-log-socket\") pod \"935336e2-294b-4982-83f9-718806d14e5c\" (UID: \"935336e2-294b-4982-83f9-718806d14e5c\") " Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.844087 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/935336e2-294b-4982-83f9-718806d14e5c-host-cni-bin\") pod \"935336e2-294b-4982-83f9-718806d14e5c\" (UID: \"935336e2-294b-4982-83f9-718806d14e5c\") " Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.844114 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/935336e2-294b-4982-83f9-718806d14e5c-env-overrides\") pod \"935336e2-294b-4982-83f9-718806d14e5c\" (UID: \"935336e2-294b-4982-83f9-718806d14e5c\") " Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.844137 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/935336e2-294b-4982-83f9-718806d14e5c-run-systemd\") pod \"935336e2-294b-4982-83f9-718806d14e5c\" (UID: \"935336e2-294b-4982-83f9-718806d14e5c\") " 
Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.844160 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/935336e2-294b-4982-83f9-718806d14e5c-host-run-netns\") pod \"935336e2-294b-4982-83f9-718806d14e5c\" (UID: \"935336e2-294b-4982-83f9-718806d14e5c\") " Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.844201 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/935336e2-294b-4982-83f9-718806d14e5c-var-lib-openvswitch\") pod \"935336e2-294b-4982-83f9-718806d14e5c\" (UID: \"935336e2-294b-4982-83f9-718806d14e5c\") " Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.844227 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/935336e2-294b-4982-83f9-718806d14e5c-ovnkube-config\") pod \"935336e2-294b-4982-83f9-718806d14e5c\" (UID: \"935336e2-294b-4982-83f9-718806d14e5c\") " Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.844424 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a51aa75b-e63e-40ea-9759-fc823877a8d2-run-systemd\") pod \"ovnkube-node-kh9gf\" (UID: \"a51aa75b-e63e-40ea-9759-fc823877a8d2\") " pod="openshift-ovn-kubernetes/ovnkube-node-kh9gf" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.844524 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a51aa75b-e63e-40ea-9759-fc823877a8d2-host-run-ovn-kubernetes\") pod \"ovnkube-node-kh9gf\" (UID: \"a51aa75b-e63e-40ea-9759-fc823877a8d2\") " pod="openshift-ovn-kubernetes/ovnkube-node-kh9gf" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.844564 4840 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a51aa75b-e63e-40ea-9759-fc823877a8d2-etc-openvswitch\") pod \"ovnkube-node-kh9gf\" (UID: \"a51aa75b-e63e-40ea-9759-fc823877a8d2\") " pod="openshift-ovn-kubernetes/ovnkube-node-kh9gf" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.844593 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a51aa75b-e63e-40ea-9759-fc823877a8d2-host-slash\") pod \"ovnkube-node-kh9gf\" (UID: \"a51aa75b-e63e-40ea-9759-fc823877a8d2\") " pod="openshift-ovn-kubernetes/ovnkube-node-kh9gf" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.844635 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a51aa75b-e63e-40ea-9759-fc823877a8d2-systemd-units\") pod \"ovnkube-node-kh9gf\" (UID: \"a51aa75b-e63e-40ea-9759-fc823877a8d2\") " pod="openshift-ovn-kubernetes/ovnkube-node-kh9gf" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.844688 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a51aa75b-e63e-40ea-9759-fc823877a8d2-env-overrides\") pod \"ovnkube-node-kh9gf\" (UID: \"a51aa75b-e63e-40ea-9759-fc823877a8d2\") " pod="openshift-ovn-kubernetes/ovnkube-node-kh9gf" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.844741 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a51aa75b-e63e-40ea-9759-fc823877a8d2-ovnkube-script-lib\") pod \"ovnkube-node-kh9gf\" (UID: \"a51aa75b-e63e-40ea-9759-fc823877a8d2\") " pod="openshift-ovn-kubernetes/ovnkube-node-kh9gf" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.844773 4840 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a51aa75b-e63e-40ea-9759-fc823877a8d2-ovn-node-metrics-cert\") pod \"ovnkube-node-kh9gf\" (UID: \"a51aa75b-e63e-40ea-9759-fc823877a8d2\") " pod="openshift-ovn-kubernetes/ovnkube-node-kh9gf" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.845361 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a51aa75b-e63e-40ea-9759-fc823877a8d2-host-cni-bin\") pod \"ovnkube-node-kh9gf\" (UID: \"a51aa75b-e63e-40ea-9759-fc823877a8d2\") " pod="openshift-ovn-kubernetes/ovnkube-node-kh9gf" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.843583 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/935336e2-294b-4982-83f9-718806d14e5c-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "935336e2-294b-4982-83f9-718806d14e5c" (UID: "935336e2-294b-4982-83f9-718806d14e5c"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.843626 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/935336e2-294b-4982-83f9-718806d14e5c-host-slash" (OuterVolumeSpecName: "host-slash") pod "935336e2-294b-4982-83f9-718806d14e5c" (UID: "935336e2-294b-4982-83f9-718806d14e5c"). InnerVolumeSpecName "host-slash". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.845556 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4xsf\" (UniqueName: \"kubernetes.io/projected/a51aa75b-e63e-40ea-9759-fc823877a8d2-kube-api-access-z4xsf\") pod \"ovnkube-node-kh9gf\" (UID: \"a51aa75b-e63e-40ea-9759-fc823877a8d2\") " pod="openshift-ovn-kubernetes/ovnkube-node-kh9gf" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.843660 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/935336e2-294b-4982-83f9-718806d14e5c-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "935336e2-294b-4982-83f9-718806d14e5c" (UID: "935336e2-294b-4982-83f9-718806d14e5c"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.844218 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/935336e2-294b-4982-83f9-718806d14e5c-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "935336e2-294b-4982-83f9-718806d14e5c" (UID: "935336e2-294b-4982-83f9-718806d14e5c"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.844251 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/935336e2-294b-4982-83f9-718806d14e5c-node-log" (OuterVolumeSpecName: "node-log") pod "935336e2-294b-4982-83f9-718806d14e5c" (UID: "935336e2-294b-4982-83f9-718806d14e5c"). InnerVolumeSpecName "node-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.844670 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/935336e2-294b-4982-83f9-718806d14e5c-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "935336e2-294b-4982-83f9-718806d14e5c" (UID: "935336e2-294b-4982-83f9-718806d14e5c"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.844699 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/935336e2-294b-4982-83f9-718806d14e5c-log-socket" (OuterVolumeSpecName: "log-socket") pod "935336e2-294b-4982-83f9-718806d14e5c" (UID: "935336e2-294b-4982-83f9-718806d14e5c"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.844721 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/935336e2-294b-4982-83f9-718806d14e5c-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "935336e2-294b-4982-83f9-718806d14e5c" (UID: "935336e2-294b-4982-83f9-718806d14e5c"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.844880 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/935336e2-294b-4982-83f9-718806d14e5c-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "935336e2-294b-4982-83f9-718806d14e5c" (UID: "935336e2-294b-4982-83f9-718806d14e5c"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.844964 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/935336e2-294b-4982-83f9-718806d14e5c-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "935336e2-294b-4982-83f9-718806d14e5c" (UID: "935336e2-294b-4982-83f9-718806d14e5c"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.845078 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/935336e2-294b-4982-83f9-718806d14e5c-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "935336e2-294b-4982-83f9-718806d14e5c" (UID: "935336e2-294b-4982-83f9-718806d14e5c"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.845116 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/935336e2-294b-4982-83f9-718806d14e5c-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "935336e2-294b-4982-83f9-718806d14e5c" (UID: "935336e2-294b-4982-83f9-718806d14e5c"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.845140 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/935336e2-294b-4982-83f9-718806d14e5c-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "935336e2-294b-4982-83f9-718806d14e5c" (UID: "935336e2-294b-4982-83f9-718806d14e5c"). InnerVolumeSpecName "run-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.845326 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/935336e2-294b-4982-83f9-718806d14e5c-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "935336e2-294b-4982-83f9-718806d14e5c" (UID: "935336e2-294b-4982-83f9-718806d14e5c"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.845714 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/935336e2-294b-4982-83f9-718806d14e5c-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "935336e2-294b-4982-83f9-718806d14e5c" (UID: "935336e2-294b-4982-83f9-718806d14e5c"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.845825 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a51aa75b-e63e-40ea-9759-fc823877a8d2-var-lib-openvswitch\") pod \"ovnkube-node-kh9gf\" (UID: \"a51aa75b-e63e-40ea-9759-fc823877a8d2\") " pod="openshift-ovn-kubernetes/ovnkube-node-kh9gf" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.845866 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a51aa75b-e63e-40ea-9759-fc823877a8d2-run-ovn\") pod \"ovnkube-node-kh9gf\" (UID: \"a51aa75b-e63e-40ea-9759-fc823877a8d2\") " pod="openshift-ovn-kubernetes/ovnkube-node-kh9gf" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.845906 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/a51aa75b-e63e-40ea-9759-fc823877a8d2-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-kh9gf\" (UID: \"a51aa75b-e63e-40ea-9759-fc823877a8d2\") " pod="openshift-ovn-kubernetes/ovnkube-node-kh9gf" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.845932 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a51aa75b-e63e-40ea-9759-fc823877a8d2-ovnkube-config\") pod \"ovnkube-node-kh9gf\" (UID: \"a51aa75b-e63e-40ea-9759-fc823877a8d2\") " pod="openshift-ovn-kubernetes/ovnkube-node-kh9gf" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.845958 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a51aa75b-e63e-40ea-9759-fc823877a8d2-node-log\") pod \"ovnkube-node-kh9gf\" (UID: \"a51aa75b-e63e-40ea-9759-fc823877a8d2\") " pod="openshift-ovn-kubernetes/ovnkube-node-kh9gf" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.845985 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a51aa75b-e63e-40ea-9759-fc823877a8d2-run-openvswitch\") pod \"ovnkube-node-kh9gf\" (UID: \"a51aa75b-e63e-40ea-9759-fc823877a8d2\") " pod="openshift-ovn-kubernetes/ovnkube-node-kh9gf" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.846014 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a51aa75b-e63e-40ea-9759-fc823877a8d2-log-socket\") pod \"ovnkube-node-kh9gf\" (UID: \"a51aa75b-e63e-40ea-9759-fc823877a8d2\") " pod="openshift-ovn-kubernetes/ovnkube-node-kh9gf" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.846064 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a51aa75b-e63e-40ea-9759-fc823877a8d2-host-run-netns\") pod \"ovnkube-node-kh9gf\" (UID: \"a51aa75b-e63e-40ea-9759-fc823877a8d2\") " pod="openshift-ovn-kubernetes/ovnkube-node-kh9gf" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.846100 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a51aa75b-e63e-40ea-9759-fc823877a8d2-host-kubelet\") pod \"ovnkube-node-kh9gf\" (UID: \"a51aa75b-e63e-40ea-9759-fc823877a8d2\") " pod="openshift-ovn-kubernetes/ovnkube-node-kh9gf" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.846122 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a51aa75b-e63e-40ea-9759-fc823877a8d2-host-cni-netd\") pod \"ovnkube-node-kh9gf\" (UID: \"a51aa75b-e63e-40ea-9759-fc823877a8d2\") " pod="openshift-ovn-kubernetes/ovnkube-node-kh9gf" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.846182 4840 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/935336e2-294b-4982-83f9-718806d14e5c-log-socket\") on node \"crc\" DevicePath \"\"" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.846198 4840 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/935336e2-294b-4982-83f9-718806d14e5c-host-cni-bin\") on node \"crc\" DevicePath \"\"" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.846213 4840 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/935336e2-294b-4982-83f9-718806d14e5c-env-overrides\") on node \"crc\" DevicePath \"\"" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.846225 4840 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/935336e2-294b-4982-83f9-718806d14e5c-host-run-netns\") on node \"crc\" DevicePath \"\"" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.846235 4840 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/935336e2-294b-4982-83f9-718806d14e5c-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.846248 4840 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/935336e2-294b-4982-83f9-718806d14e5c-ovnkube-config\") on node \"crc\" DevicePath \"\"" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.846259 4840 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/935336e2-294b-4982-83f9-718806d14e5c-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.846270 4840 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/935336e2-294b-4982-83f9-718806d14e5c-host-kubelet\") on node \"crc\" DevicePath \"\"" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.846281 4840 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/935336e2-294b-4982-83f9-718806d14e5c-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.846292 4840 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/935336e2-294b-4982-83f9-718806d14e5c-host-cni-netd\") on node \"crc\" DevicePath \"\"" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.846303 4840 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/935336e2-294b-4982-83f9-718806d14e5c-host-slash\") on node \"crc\" DevicePath 
\"\"" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.846314 4840 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/935336e2-294b-4982-83f9-718806d14e5c-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.846324 4840 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/935336e2-294b-4982-83f9-718806d14e5c-systemd-units\") on node \"crc\" DevicePath \"\"" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.846334 4840 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/935336e2-294b-4982-83f9-718806d14e5c-run-ovn\") on node \"crc\" DevicePath \"\"" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.846346 4840 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/935336e2-294b-4982-83f9-718806d14e5c-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.846360 4840 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/935336e2-294b-4982-83f9-718806d14e5c-run-openvswitch\") on node \"crc\" DevicePath \"\"" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.846373 4840 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/935336e2-294b-4982-83f9-718806d14e5c-node-log\") on node \"crc\" DevicePath \"\"" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.850430 4840 scope.go:117] "RemoveContainer" containerID="aa38a6bbbd74dad77c64f6f59df35d12881619da7319501839cf0f1eb44c65cd" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.853432 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/935336e2-294b-4982-83f9-718806d14e5c-kube-api-access-lrc2z" (OuterVolumeSpecName: "kube-api-access-lrc2z") pod "935336e2-294b-4982-83f9-718806d14e5c" (UID: "935336e2-294b-4982-83f9-718806d14e5c"). InnerVolumeSpecName "kube-api-access-lrc2z". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.853665 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/935336e2-294b-4982-83f9-718806d14e5c-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "935336e2-294b-4982-83f9-718806d14e5c" (UID: "935336e2-294b-4982-83f9-718806d14e5c"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.868891 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/935336e2-294b-4982-83f9-718806d14e5c-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "935336e2-294b-4982-83f9-718806d14e5c" (UID: "935336e2-294b-4982-83f9-718806d14e5c"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.870747 4840 scope.go:117] "RemoveContainer" containerID="4a22746f21587fd8fd3d4a7350442c72a3131d5a9f5e661ff05d78377c5a00cb" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.890827 4840 scope.go:117] "RemoveContainer" containerID="910e2e8b981e2ab6212cd615b1c5134fde5d4cf4f85220c2613e3e301f99293c" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.907951 4840 scope.go:117] "RemoveContainer" containerID="047c371ede152b2bfc450a373d2c3668e92dfed75022ebf12644c116db589373" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.930624 4840 scope.go:117] "RemoveContainer" containerID="430dc74cf3f3dcf9a87782869390e477899521c6ff0e704ba6272a017b90081d" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.945972 4840 scope.go:117] "RemoveContainer" containerID="eaf42bd09d527f5ce4ce8b0619c9b61f56ad6a2a5edf92cac68897a2ada84b24" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.947325 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a51aa75b-e63e-40ea-9759-fc823877a8d2-ovnkube-script-lib\") pod \"ovnkube-node-kh9gf\" (UID: \"a51aa75b-e63e-40ea-9759-fc823877a8d2\") " pod="openshift-ovn-kubernetes/ovnkube-node-kh9gf" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.947381 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a51aa75b-e63e-40ea-9759-fc823877a8d2-ovn-node-metrics-cert\") pod \"ovnkube-node-kh9gf\" (UID: \"a51aa75b-e63e-40ea-9759-fc823877a8d2\") " pod="openshift-ovn-kubernetes/ovnkube-node-kh9gf" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.947406 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4xsf\" (UniqueName: 
\"kubernetes.io/projected/a51aa75b-e63e-40ea-9759-fc823877a8d2-kube-api-access-z4xsf\") pod \"ovnkube-node-kh9gf\" (UID: \"a51aa75b-e63e-40ea-9759-fc823877a8d2\") " pod="openshift-ovn-kubernetes/ovnkube-node-kh9gf" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.947450 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a51aa75b-e63e-40ea-9759-fc823877a8d2-host-cni-bin\") pod \"ovnkube-node-kh9gf\" (UID: \"a51aa75b-e63e-40ea-9759-fc823877a8d2\") " pod="openshift-ovn-kubernetes/ovnkube-node-kh9gf" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.947502 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a51aa75b-e63e-40ea-9759-fc823877a8d2-var-lib-openvswitch\") pod \"ovnkube-node-kh9gf\" (UID: \"a51aa75b-e63e-40ea-9759-fc823877a8d2\") " pod="openshift-ovn-kubernetes/ovnkube-node-kh9gf" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.947529 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a51aa75b-e63e-40ea-9759-fc823877a8d2-run-ovn\") pod \"ovnkube-node-kh9gf\" (UID: \"a51aa75b-e63e-40ea-9759-fc823877a8d2\") " pod="openshift-ovn-kubernetes/ovnkube-node-kh9gf" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.947582 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a51aa75b-e63e-40ea-9759-fc823877a8d2-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-kh9gf\" (UID: \"a51aa75b-e63e-40ea-9759-fc823877a8d2\") " pod="openshift-ovn-kubernetes/ovnkube-node-kh9gf" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.947606 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/a51aa75b-e63e-40ea-9759-fc823877a8d2-ovnkube-config\") pod \"ovnkube-node-kh9gf\" (UID: \"a51aa75b-e63e-40ea-9759-fc823877a8d2\") " pod="openshift-ovn-kubernetes/ovnkube-node-kh9gf" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.947644 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a51aa75b-e63e-40ea-9759-fc823877a8d2-run-openvswitch\") pod \"ovnkube-node-kh9gf\" (UID: \"a51aa75b-e63e-40ea-9759-fc823877a8d2\") " pod="openshift-ovn-kubernetes/ovnkube-node-kh9gf" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.947663 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a51aa75b-e63e-40ea-9759-fc823877a8d2-node-log\") pod \"ovnkube-node-kh9gf\" (UID: \"a51aa75b-e63e-40ea-9759-fc823877a8d2\") " pod="openshift-ovn-kubernetes/ovnkube-node-kh9gf" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.947685 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a51aa75b-e63e-40ea-9759-fc823877a8d2-log-socket\") pod \"ovnkube-node-kh9gf\" (UID: \"a51aa75b-e63e-40ea-9759-fc823877a8d2\") " pod="openshift-ovn-kubernetes/ovnkube-node-kh9gf" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.947727 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a51aa75b-e63e-40ea-9759-fc823877a8d2-host-run-netns\") pod \"ovnkube-node-kh9gf\" (UID: \"a51aa75b-e63e-40ea-9759-fc823877a8d2\") " pod="openshift-ovn-kubernetes/ovnkube-node-kh9gf" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.947754 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a51aa75b-e63e-40ea-9759-fc823877a8d2-host-cni-netd\") pod 
\"ovnkube-node-kh9gf\" (UID: \"a51aa75b-e63e-40ea-9759-fc823877a8d2\") " pod="openshift-ovn-kubernetes/ovnkube-node-kh9gf" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.947776 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a51aa75b-e63e-40ea-9759-fc823877a8d2-host-kubelet\") pod \"ovnkube-node-kh9gf\" (UID: \"a51aa75b-e63e-40ea-9759-fc823877a8d2\") " pod="openshift-ovn-kubernetes/ovnkube-node-kh9gf" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.947819 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a51aa75b-e63e-40ea-9759-fc823877a8d2-run-systemd\") pod \"ovnkube-node-kh9gf\" (UID: \"a51aa75b-e63e-40ea-9759-fc823877a8d2\") " pod="openshift-ovn-kubernetes/ovnkube-node-kh9gf" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.947858 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a51aa75b-e63e-40ea-9759-fc823877a8d2-host-run-ovn-kubernetes\") pod \"ovnkube-node-kh9gf\" (UID: \"a51aa75b-e63e-40ea-9759-fc823877a8d2\") " pod="openshift-ovn-kubernetes/ovnkube-node-kh9gf" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.947905 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a51aa75b-e63e-40ea-9759-fc823877a8d2-host-slash\") pod \"ovnkube-node-kh9gf\" (UID: \"a51aa75b-e63e-40ea-9759-fc823877a8d2\") " pod="openshift-ovn-kubernetes/ovnkube-node-kh9gf" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.947927 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a51aa75b-e63e-40ea-9759-fc823877a8d2-etc-openvswitch\") pod \"ovnkube-node-kh9gf\" (UID: \"a51aa75b-e63e-40ea-9759-fc823877a8d2\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-kh9gf" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.947978 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a51aa75b-e63e-40ea-9759-fc823877a8d2-systemd-units\") pod \"ovnkube-node-kh9gf\" (UID: \"a51aa75b-e63e-40ea-9759-fc823877a8d2\") " pod="openshift-ovn-kubernetes/ovnkube-node-kh9gf" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.948004 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a51aa75b-e63e-40ea-9759-fc823877a8d2-env-overrides\") pod \"ovnkube-node-kh9gf\" (UID: \"a51aa75b-e63e-40ea-9759-fc823877a8d2\") " pod="openshift-ovn-kubernetes/ovnkube-node-kh9gf" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.948075 4840 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/935336e2-294b-4982-83f9-718806d14e5c-run-systemd\") on node \"crc\" DevicePath \"\"" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.948092 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lrc2z\" (UniqueName: \"kubernetes.io/projected/935336e2-294b-4982-83f9-718806d14e5c-kube-api-access-lrc2z\") on node \"crc\" DevicePath \"\"" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.948119 4840 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/935336e2-294b-4982-83f9-718806d14e5c-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.948356 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a51aa75b-e63e-40ea-9759-fc823877a8d2-ovnkube-script-lib\") pod \"ovnkube-node-kh9gf\" (UID: \"a51aa75b-e63e-40ea-9759-fc823877a8d2\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-kh9gf" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.948444 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a51aa75b-e63e-40ea-9759-fc823877a8d2-host-kubelet\") pod \"ovnkube-node-kh9gf\" (UID: \"a51aa75b-e63e-40ea-9759-fc823877a8d2\") " pod="openshift-ovn-kubernetes/ovnkube-node-kh9gf" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.948498 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a51aa75b-e63e-40ea-9759-fc823877a8d2-run-openvswitch\") pod \"ovnkube-node-kh9gf\" (UID: \"a51aa75b-e63e-40ea-9759-fc823877a8d2\") " pod="openshift-ovn-kubernetes/ovnkube-node-kh9gf" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.948531 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a51aa75b-e63e-40ea-9759-fc823877a8d2-node-log\") pod \"ovnkube-node-kh9gf\" (UID: \"a51aa75b-e63e-40ea-9759-fc823877a8d2\") " pod="openshift-ovn-kubernetes/ovnkube-node-kh9gf" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.948553 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a51aa75b-e63e-40ea-9759-fc823877a8d2-log-socket\") pod \"ovnkube-node-kh9gf\" (UID: \"a51aa75b-e63e-40ea-9759-fc823877a8d2\") " pod="openshift-ovn-kubernetes/ovnkube-node-kh9gf" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.948576 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a51aa75b-e63e-40ea-9759-fc823877a8d2-host-run-netns\") pod \"ovnkube-node-kh9gf\" (UID: \"a51aa75b-e63e-40ea-9759-fc823877a8d2\") " pod="openshift-ovn-kubernetes/ovnkube-node-kh9gf" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.948592 4840 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a51aa75b-e63e-40ea-9759-fc823877a8d2-ovnkube-config\") pod \"ovnkube-node-kh9gf\" (UID: \"a51aa75b-e63e-40ea-9759-fc823877a8d2\") " pod="openshift-ovn-kubernetes/ovnkube-node-kh9gf" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.948619 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a51aa75b-e63e-40ea-9759-fc823877a8d2-host-slash\") pod \"ovnkube-node-kh9gf\" (UID: \"a51aa75b-e63e-40ea-9759-fc823877a8d2\") " pod="openshift-ovn-kubernetes/ovnkube-node-kh9gf" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.948601 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a51aa75b-e63e-40ea-9759-fc823877a8d2-host-cni-netd\") pod \"ovnkube-node-kh9gf\" (UID: \"a51aa75b-e63e-40ea-9759-fc823877a8d2\") " pod="openshift-ovn-kubernetes/ovnkube-node-kh9gf" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.948647 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a51aa75b-e63e-40ea-9759-fc823877a8d2-run-systemd\") pod \"ovnkube-node-kh9gf\" (UID: \"a51aa75b-e63e-40ea-9759-fc823877a8d2\") " pod="openshift-ovn-kubernetes/ovnkube-node-kh9gf" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.948667 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a51aa75b-e63e-40ea-9759-fc823877a8d2-host-cni-bin\") pod \"ovnkube-node-kh9gf\" (UID: \"a51aa75b-e63e-40ea-9759-fc823877a8d2\") " pod="openshift-ovn-kubernetes/ovnkube-node-kh9gf" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.948672 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/a51aa75b-e63e-40ea-9759-fc823877a8d2-host-run-ovn-kubernetes\") pod \"ovnkube-node-kh9gf\" (UID: \"a51aa75b-e63e-40ea-9759-fc823877a8d2\") " pod="openshift-ovn-kubernetes/ovnkube-node-kh9gf" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.948693 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a51aa75b-e63e-40ea-9759-fc823877a8d2-etc-openvswitch\") pod \"ovnkube-node-kh9gf\" (UID: \"a51aa75b-e63e-40ea-9759-fc823877a8d2\") " pod="openshift-ovn-kubernetes/ovnkube-node-kh9gf" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.948716 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a51aa75b-e63e-40ea-9759-fc823877a8d2-var-lib-openvswitch\") pod \"ovnkube-node-kh9gf\" (UID: \"a51aa75b-e63e-40ea-9759-fc823877a8d2\") " pod="openshift-ovn-kubernetes/ovnkube-node-kh9gf" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.948721 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a51aa75b-e63e-40ea-9759-fc823877a8d2-run-ovn\") pod \"ovnkube-node-kh9gf\" (UID: \"a51aa75b-e63e-40ea-9759-fc823877a8d2\") " pod="openshift-ovn-kubernetes/ovnkube-node-kh9gf" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.948739 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a51aa75b-e63e-40ea-9759-fc823877a8d2-systemd-units\") pod \"ovnkube-node-kh9gf\" (UID: \"a51aa75b-e63e-40ea-9759-fc823877a8d2\") " pod="openshift-ovn-kubernetes/ovnkube-node-kh9gf" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.948763 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/a51aa75b-e63e-40ea-9759-fc823877a8d2-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-kh9gf\" (UID: \"a51aa75b-e63e-40ea-9759-fc823877a8d2\") " pod="openshift-ovn-kubernetes/ovnkube-node-kh9gf" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.948831 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a51aa75b-e63e-40ea-9759-fc823877a8d2-env-overrides\") pod \"ovnkube-node-kh9gf\" (UID: \"a51aa75b-e63e-40ea-9759-fc823877a8d2\") " pod="openshift-ovn-kubernetes/ovnkube-node-kh9gf" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.955384 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a51aa75b-e63e-40ea-9759-fc823877a8d2-ovn-node-metrics-cert\") pod \"ovnkube-node-kh9gf\" (UID: \"a51aa75b-e63e-40ea-9759-fc823877a8d2\") " pod="openshift-ovn-kubernetes/ovnkube-node-kh9gf" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.964965 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4xsf\" (UniqueName: \"kubernetes.io/projected/a51aa75b-e63e-40ea-9759-fc823877a8d2-kube-api-access-z4xsf\") pod \"ovnkube-node-kh9gf\" (UID: \"a51aa75b-e63e-40ea-9759-fc823877a8d2\") " pod="openshift-ovn-kubernetes/ovnkube-node-kh9gf" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.969012 4840 scope.go:117] "RemoveContainer" containerID="c63877eb44dae815dbd71053c89313ba836c2fdd90cc3d6d299526c027887e19" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.983092 4840 scope.go:117] "RemoveContainer" containerID="7060c9a7b65d0c2aaa8611500978abfa8668a64c500761121997e0d627a1c924" Mar 11 09:10:01 crc kubenswrapper[4840]: E0311 09:10:01.983546 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"7060c9a7b65d0c2aaa8611500978abfa8668a64c500761121997e0d627a1c924\": container with ID starting with 7060c9a7b65d0c2aaa8611500978abfa8668a64c500761121997e0d627a1c924 not found: ID does not exist" containerID="7060c9a7b65d0c2aaa8611500978abfa8668a64c500761121997e0d627a1c924" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.983581 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7060c9a7b65d0c2aaa8611500978abfa8668a64c500761121997e0d627a1c924"} err="failed to get container status \"7060c9a7b65d0c2aaa8611500978abfa8668a64c500761121997e0d627a1c924\": rpc error: code = NotFound desc = could not find container \"7060c9a7b65d0c2aaa8611500978abfa8668a64c500761121997e0d627a1c924\": container with ID starting with 7060c9a7b65d0c2aaa8611500978abfa8668a64c500761121997e0d627a1c924 not found: ID does not exist" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.983604 4840 scope.go:117] "RemoveContainer" containerID="0413db34ab0410c5d6e2822520410d9db275b1c4bd9ba1f7343a0c31befc9b0b" Mar 11 09:10:01 crc kubenswrapper[4840]: E0311 09:10:01.983961 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0413db34ab0410c5d6e2822520410d9db275b1c4bd9ba1f7343a0c31befc9b0b\": container with ID starting with 0413db34ab0410c5d6e2822520410d9db275b1c4bd9ba1f7343a0c31befc9b0b not found: ID does not exist" containerID="0413db34ab0410c5d6e2822520410d9db275b1c4bd9ba1f7343a0c31befc9b0b" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.984014 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0413db34ab0410c5d6e2822520410d9db275b1c4bd9ba1f7343a0c31befc9b0b"} err="failed to get container status \"0413db34ab0410c5d6e2822520410d9db275b1c4bd9ba1f7343a0c31befc9b0b\": rpc error: code = NotFound desc = could not find container \"0413db34ab0410c5d6e2822520410d9db275b1c4bd9ba1f7343a0c31befc9b0b\": container with ID 
starting with 0413db34ab0410c5d6e2822520410d9db275b1c4bd9ba1f7343a0c31befc9b0b not found: ID does not exist" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.984061 4840 scope.go:117] "RemoveContainer" containerID="1338b0d6cdb9527554429be2e6fdec5c0b98075978344d168fd6e363eb12c879" Mar 11 09:10:01 crc kubenswrapper[4840]: E0311 09:10:01.985277 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1338b0d6cdb9527554429be2e6fdec5c0b98075978344d168fd6e363eb12c879\": container with ID starting with 1338b0d6cdb9527554429be2e6fdec5c0b98075978344d168fd6e363eb12c879 not found: ID does not exist" containerID="1338b0d6cdb9527554429be2e6fdec5c0b98075978344d168fd6e363eb12c879" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.985302 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1338b0d6cdb9527554429be2e6fdec5c0b98075978344d168fd6e363eb12c879"} err="failed to get container status \"1338b0d6cdb9527554429be2e6fdec5c0b98075978344d168fd6e363eb12c879\": rpc error: code = NotFound desc = could not find container \"1338b0d6cdb9527554429be2e6fdec5c0b98075978344d168fd6e363eb12c879\": container with ID starting with 1338b0d6cdb9527554429be2e6fdec5c0b98075978344d168fd6e363eb12c879 not found: ID does not exist" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.985317 4840 scope.go:117] "RemoveContainer" containerID="aa38a6bbbd74dad77c64f6f59df35d12881619da7319501839cf0f1eb44c65cd" Mar 11 09:10:01 crc kubenswrapper[4840]: E0311 09:10:01.985731 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa38a6bbbd74dad77c64f6f59df35d12881619da7319501839cf0f1eb44c65cd\": container with ID starting with aa38a6bbbd74dad77c64f6f59df35d12881619da7319501839cf0f1eb44c65cd not found: ID does not exist" containerID="aa38a6bbbd74dad77c64f6f59df35d12881619da7319501839cf0f1eb44c65cd" Mar 11 
09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.985757 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa38a6bbbd74dad77c64f6f59df35d12881619da7319501839cf0f1eb44c65cd"} err="failed to get container status \"aa38a6bbbd74dad77c64f6f59df35d12881619da7319501839cf0f1eb44c65cd\": rpc error: code = NotFound desc = could not find container \"aa38a6bbbd74dad77c64f6f59df35d12881619da7319501839cf0f1eb44c65cd\": container with ID starting with aa38a6bbbd74dad77c64f6f59df35d12881619da7319501839cf0f1eb44c65cd not found: ID does not exist" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.985777 4840 scope.go:117] "RemoveContainer" containerID="4a22746f21587fd8fd3d4a7350442c72a3131d5a9f5e661ff05d78377c5a00cb" Mar 11 09:10:01 crc kubenswrapper[4840]: E0311 09:10:01.986160 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a22746f21587fd8fd3d4a7350442c72a3131d5a9f5e661ff05d78377c5a00cb\": container with ID starting with 4a22746f21587fd8fd3d4a7350442c72a3131d5a9f5e661ff05d78377c5a00cb not found: ID does not exist" containerID="4a22746f21587fd8fd3d4a7350442c72a3131d5a9f5e661ff05d78377c5a00cb" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.986216 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a22746f21587fd8fd3d4a7350442c72a3131d5a9f5e661ff05d78377c5a00cb"} err="failed to get container status \"4a22746f21587fd8fd3d4a7350442c72a3131d5a9f5e661ff05d78377c5a00cb\": rpc error: code = NotFound desc = could not find container \"4a22746f21587fd8fd3d4a7350442c72a3131d5a9f5e661ff05d78377c5a00cb\": container with ID starting with 4a22746f21587fd8fd3d4a7350442c72a3131d5a9f5e661ff05d78377c5a00cb not found: ID does not exist" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.986241 4840 scope.go:117] "RemoveContainer" 
containerID="910e2e8b981e2ab6212cd615b1c5134fde5d4cf4f85220c2613e3e301f99293c" Mar 11 09:10:01 crc kubenswrapper[4840]: E0311 09:10:01.986594 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"910e2e8b981e2ab6212cd615b1c5134fde5d4cf4f85220c2613e3e301f99293c\": container with ID starting with 910e2e8b981e2ab6212cd615b1c5134fde5d4cf4f85220c2613e3e301f99293c not found: ID does not exist" containerID="910e2e8b981e2ab6212cd615b1c5134fde5d4cf4f85220c2613e3e301f99293c" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.986619 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"910e2e8b981e2ab6212cd615b1c5134fde5d4cf4f85220c2613e3e301f99293c"} err="failed to get container status \"910e2e8b981e2ab6212cd615b1c5134fde5d4cf4f85220c2613e3e301f99293c\": rpc error: code = NotFound desc = could not find container \"910e2e8b981e2ab6212cd615b1c5134fde5d4cf4f85220c2613e3e301f99293c\": container with ID starting with 910e2e8b981e2ab6212cd615b1c5134fde5d4cf4f85220c2613e3e301f99293c not found: ID does not exist" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.986709 4840 scope.go:117] "RemoveContainer" containerID="047c371ede152b2bfc450a373d2c3668e92dfed75022ebf12644c116db589373" Mar 11 09:10:01 crc kubenswrapper[4840]: E0311 09:10:01.987076 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"047c371ede152b2bfc450a373d2c3668e92dfed75022ebf12644c116db589373\": container with ID starting with 047c371ede152b2bfc450a373d2c3668e92dfed75022ebf12644c116db589373 not found: ID does not exist" containerID="047c371ede152b2bfc450a373d2c3668e92dfed75022ebf12644c116db589373" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.987102 4840 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"047c371ede152b2bfc450a373d2c3668e92dfed75022ebf12644c116db589373"} err="failed to get container status \"047c371ede152b2bfc450a373d2c3668e92dfed75022ebf12644c116db589373\": rpc error: code = NotFound desc = could not find container \"047c371ede152b2bfc450a373d2c3668e92dfed75022ebf12644c116db589373\": container with ID starting with 047c371ede152b2bfc450a373d2c3668e92dfed75022ebf12644c116db589373 not found: ID does not exist" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.987118 4840 scope.go:117] "RemoveContainer" containerID="430dc74cf3f3dcf9a87782869390e477899521c6ff0e704ba6272a017b90081d" Mar 11 09:10:01 crc kubenswrapper[4840]: E0311 09:10:01.987515 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"430dc74cf3f3dcf9a87782869390e477899521c6ff0e704ba6272a017b90081d\": container with ID starting with 430dc74cf3f3dcf9a87782869390e477899521c6ff0e704ba6272a017b90081d not found: ID does not exist" containerID="430dc74cf3f3dcf9a87782869390e477899521c6ff0e704ba6272a017b90081d" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.987574 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"430dc74cf3f3dcf9a87782869390e477899521c6ff0e704ba6272a017b90081d"} err="failed to get container status \"430dc74cf3f3dcf9a87782869390e477899521c6ff0e704ba6272a017b90081d\": rpc error: code = NotFound desc = could not find container \"430dc74cf3f3dcf9a87782869390e477899521c6ff0e704ba6272a017b90081d\": container with ID starting with 430dc74cf3f3dcf9a87782869390e477899521c6ff0e704ba6272a017b90081d not found: ID does not exist" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.987593 4840 scope.go:117] "RemoveContainer" containerID="eaf42bd09d527f5ce4ce8b0619c9b61f56ad6a2a5edf92cac68897a2ada84b24" Mar 11 09:10:01 crc kubenswrapper[4840]: E0311 09:10:01.987876 4840 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"eaf42bd09d527f5ce4ce8b0619c9b61f56ad6a2a5edf92cac68897a2ada84b24\": container with ID starting with eaf42bd09d527f5ce4ce8b0619c9b61f56ad6a2a5edf92cac68897a2ada84b24 not found: ID does not exist" containerID="eaf42bd09d527f5ce4ce8b0619c9b61f56ad6a2a5edf92cac68897a2ada84b24" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.987902 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eaf42bd09d527f5ce4ce8b0619c9b61f56ad6a2a5edf92cac68897a2ada84b24"} err="failed to get container status \"eaf42bd09d527f5ce4ce8b0619c9b61f56ad6a2a5edf92cac68897a2ada84b24\": rpc error: code = NotFound desc = could not find container \"eaf42bd09d527f5ce4ce8b0619c9b61f56ad6a2a5edf92cac68897a2ada84b24\": container with ID starting with eaf42bd09d527f5ce4ce8b0619c9b61f56ad6a2a5edf92cac68897a2ada84b24 not found: ID does not exist" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.987918 4840 scope.go:117] "RemoveContainer" containerID="c63877eb44dae815dbd71053c89313ba836c2fdd90cc3d6d299526c027887e19" Mar 11 09:10:01 crc kubenswrapper[4840]: E0311 09:10:01.988110 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c63877eb44dae815dbd71053c89313ba836c2fdd90cc3d6d299526c027887e19\": container with ID starting with c63877eb44dae815dbd71053c89313ba836c2fdd90cc3d6d299526c027887e19 not found: ID does not exist" containerID="c63877eb44dae815dbd71053c89313ba836c2fdd90cc3d6d299526c027887e19" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.988131 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c63877eb44dae815dbd71053c89313ba836c2fdd90cc3d6d299526c027887e19"} err="failed to get container status \"c63877eb44dae815dbd71053c89313ba836c2fdd90cc3d6d299526c027887e19\": rpc error: code = NotFound desc = could not find container 
\"c63877eb44dae815dbd71053c89313ba836c2fdd90cc3d6d299526c027887e19\": container with ID starting with c63877eb44dae815dbd71053c89313ba836c2fdd90cc3d6d299526c027887e19 not found: ID does not exist" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.988145 4840 scope.go:117] "RemoveContainer" containerID="7060c9a7b65d0c2aaa8611500978abfa8668a64c500761121997e0d627a1c924" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.988416 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7060c9a7b65d0c2aaa8611500978abfa8668a64c500761121997e0d627a1c924"} err="failed to get container status \"7060c9a7b65d0c2aaa8611500978abfa8668a64c500761121997e0d627a1c924\": rpc error: code = NotFound desc = could not find container \"7060c9a7b65d0c2aaa8611500978abfa8668a64c500761121997e0d627a1c924\": container with ID starting with 7060c9a7b65d0c2aaa8611500978abfa8668a64c500761121997e0d627a1c924 not found: ID does not exist" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.988451 4840 scope.go:117] "RemoveContainer" containerID="0413db34ab0410c5d6e2822520410d9db275b1c4bd9ba1f7343a0c31befc9b0b" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.988830 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0413db34ab0410c5d6e2822520410d9db275b1c4bd9ba1f7343a0c31befc9b0b"} err="failed to get container status \"0413db34ab0410c5d6e2822520410d9db275b1c4bd9ba1f7343a0c31befc9b0b\": rpc error: code = NotFound desc = could not find container \"0413db34ab0410c5d6e2822520410d9db275b1c4bd9ba1f7343a0c31befc9b0b\": container with ID starting with 0413db34ab0410c5d6e2822520410d9db275b1c4bd9ba1f7343a0c31befc9b0b not found: ID does not exist" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.988849 4840 scope.go:117] "RemoveContainer" containerID="1338b0d6cdb9527554429be2e6fdec5c0b98075978344d168fd6e363eb12c879" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.989562 4840 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1338b0d6cdb9527554429be2e6fdec5c0b98075978344d168fd6e363eb12c879"} err="failed to get container status \"1338b0d6cdb9527554429be2e6fdec5c0b98075978344d168fd6e363eb12c879\": rpc error: code = NotFound desc = could not find container \"1338b0d6cdb9527554429be2e6fdec5c0b98075978344d168fd6e363eb12c879\": container with ID starting with 1338b0d6cdb9527554429be2e6fdec5c0b98075978344d168fd6e363eb12c879 not found: ID does not exist" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.989584 4840 scope.go:117] "RemoveContainer" containerID="aa38a6bbbd74dad77c64f6f59df35d12881619da7319501839cf0f1eb44c65cd" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.989903 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa38a6bbbd74dad77c64f6f59df35d12881619da7319501839cf0f1eb44c65cd"} err="failed to get container status \"aa38a6bbbd74dad77c64f6f59df35d12881619da7319501839cf0f1eb44c65cd\": rpc error: code = NotFound desc = could not find container \"aa38a6bbbd74dad77c64f6f59df35d12881619da7319501839cf0f1eb44c65cd\": container with ID starting with aa38a6bbbd74dad77c64f6f59df35d12881619da7319501839cf0f1eb44c65cd not found: ID does not exist" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.989921 4840 scope.go:117] "RemoveContainer" containerID="4a22746f21587fd8fd3d4a7350442c72a3131d5a9f5e661ff05d78377c5a00cb" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.990304 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a22746f21587fd8fd3d4a7350442c72a3131d5a9f5e661ff05d78377c5a00cb"} err="failed to get container status \"4a22746f21587fd8fd3d4a7350442c72a3131d5a9f5e661ff05d78377c5a00cb\": rpc error: code = NotFound desc = could not find container \"4a22746f21587fd8fd3d4a7350442c72a3131d5a9f5e661ff05d78377c5a00cb\": container with ID starting with 
4a22746f21587fd8fd3d4a7350442c72a3131d5a9f5e661ff05d78377c5a00cb not found: ID does not exist" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.990357 4840 scope.go:117] "RemoveContainer" containerID="910e2e8b981e2ab6212cd615b1c5134fde5d4cf4f85220c2613e3e301f99293c" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.990869 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"910e2e8b981e2ab6212cd615b1c5134fde5d4cf4f85220c2613e3e301f99293c"} err="failed to get container status \"910e2e8b981e2ab6212cd615b1c5134fde5d4cf4f85220c2613e3e301f99293c\": rpc error: code = NotFound desc = could not find container \"910e2e8b981e2ab6212cd615b1c5134fde5d4cf4f85220c2613e3e301f99293c\": container with ID starting with 910e2e8b981e2ab6212cd615b1c5134fde5d4cf4f85220c2613e3e301f99293c not found: ID does not exist" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.990907 4840 scope.go:117] "RemoveContainer" containerID="047c371ede152b2bfc450a373d2c3668e92dfed75022ebf12644c116db589373" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.991287 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"047c371ede152b2bfc450a373d2c3668e92dfed75022ebf12644c116db589373"} err="failed to get container status \"047c371ede152b2bfc450a373d2c3668e92dfed75022ebf12644c116db589373\": rpc error: code = NotFound desc = could not find container \"047c371ede152b2bfc450a373d2c3668e92dfed75022ebf12644c116db589373\": container with ID starting with 047c371ede152b2bfc450a373d2c3668e92dfed75022ebf12644c116db589373 not found: ID does not exist" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.991309 4840 scope.go:117] "RemoveContainer" containerID="430dc74cf3f3dcf9a87782869390e477899521c6ff0e704ba6272a017b90081d" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.991788 4840 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"430dc74cf3f3dcf9a87782869390e477899521c6ff0e704ba6272a017b90081d"} err="failed to get container status \"430dc74cf3f3dcf9a87782869390e477899521c6ff0e704ba6272a017b90081d\": rpc error: code = NotFound desc = could not find container \"430dc74cf3f3dcf9a87782869390e477899521c6ff0e704ba6272a017b90081d\": container with ID starting with 430dc74cf3f3dcf9a87782869390e477899521c6ff0e704ba6272a017b90081d not found: ID does not exist" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.991843 4840 scope.go:117] "RemoveContainer" containerID="eaf42bd09d527f5ce4ce8b0619c9b61f56ad6a2a5edf92cac68897a2ada84b24" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.992344 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eaf42bd09d527f5ce4ce8b0619c9b61f56ad6a2a5edf92cac68897a2ada84b24"} err="failed to get container status \"eaf42bd09d527f5ce4ce8b0619c9b61f56ad6a2a5edf92cac68897a2ada84b24\": rpc error: code = NotFound desc = could not find container \"eaf42bd09d527f5ce4ce8b0619c9b61f56ad6a2a5edf92cac68897a2ada84b24\": container with ID starting with eaf42bd09d527f5ce4ce8b0619c9b61f56ad6a2a5edf92cac68897a2ada84b24 not found: ID does not exist" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.992388 4840 scope.go:117] "RemoveContainer" containerID="c63877eb44dae815dbd71053c89313ba836c2fdd90cc3d6d299526c027887e19" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.992812 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c63877eb44dae815dbd71053c89313ba836c2fdd90cc3d6d299526c027887e19"} err="failed to get container status \"c63877eb44dae815dbd71053c89313ba836c2fdd90cc3d6d299526c027887e19\": rpc error: code = NotFound desc = could not find container \"c63877eb44dae815dbd71053c89313ba836c2fdd90cc3d6d299526c027887e19\": container with ID starting with c63877eb44dae815dbd71053c89313ba836c2fdd90cc3d6d299526c027887e19 not found: ID does not 
exist" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.992841 4840 scope.go:117] "RemoveContainer" containerID="7060c9a7b65d0c2aaa8611500978abfa8668a64c500761121997e0d627a1c924" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.993180 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7060c9a7b65d0c2aaa8611500978abfa8668a64c500761121997e0d627a1c924"} err="failed to get container status \"7060c9a7b65d0c2aaa8611500978abfa8668a64c500761121997e0d627a1c924\": rpc error: code = NotFound desc = could not find container \"7060c9a7b65d0c2aaa8611500978abfa8668a64c500761121997e0d627a1c924\": container with ID starting with 7060c9a7b65d0c2aaa8611500978abfa8668a64c500761121997e0d627a1c924 not found: ID does not exist" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.993209 4840 scope.go:117] "RemoveContainer" containerID="0413db34ab0410c5d6e2822520410d9db275b1c4bd9ba1f7343a0c31befc9b0b" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.993685 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0413db34ab0410c5d6e2822520410d9db275b1c4bd9ba1f7343a0c31befc9b0b"} err="failed to get container status \"0413db34ab0410c5d6e2822520410d9db275b1c4bd9ba1f7343a0c31befc9b0b\": rpc error: code = NotFound desc = could not find container \"0413db34ab0410c5d6e2822520410d9db275b1c4bd9ba1f7343a0c31befc9b0b\": container with ID starting with 0413db34ab0410c5d6e2822520410d9db275b1c4bd9ba1f7343a0c31befc9b0b not found: ID does not exist" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.993703 4840 scope.go:117] "RemoveContainer" containerID="1338b0d6cdb9527554429be2e6fdec5c0b98075978344d168fd6e363eb12c879" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.994064 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1338b0d6cdb9527554429be2e6fdec5c0b98075978344d168fd6e363eb12c879"} err="failed to get container status 
\"1338b0d6cdb9527554429be2e6fdec5c0b98075978344d168fd6e363eb12c879\": rpc error: code = NotFound desc = could not find container \"1338b0d6cdb9527554429be2e6fdec5c0b98075978344d168fd6e363eb12c879\": container with ID starting with 1338b0d6cdb9527554429be2e6fdec5c0b98075978344d168fd6e363eb12c879 not found: ID does not exist" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.994083 4840 scope.go:117] "RemoveContainer" containerID="aa38a6bbbd74dad77c64f6f59df35d12881619da7319501839cf0f1eb44c65cd" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.994548 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa38a6bbbd74dad77c64f6f59df35d12881619da7319501839cf0f1eb44c65cd"} err="failed to get container status \"aa38a6bbbd74dad77c64f6f59df35d12881619da7319501839cf0f1eb44c65cd\": rpc error: code = NotFound desc = could not find container \"aa38a6bbbd74dad77c64f6f59df35d12881619da7319501839cf0f1eb44c65cd\": container with ID starting with aa38a6bbbd74dad77c64f6f59df35d12881619da7319501839cf0f1eb44c65cd not found: ID does not exist" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.994597 4840 scope.go:117] "RemoveContainer" containerID="4a22746f21587fd8fd3d4a7350442c72a3131d5a9f5e661ff05d78377c5a00cb" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.995019 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a22746f21587fd8fd3d4a7350442c72a3131d5a9f5e661ff05d78377c5a00cb"} err="failed to get container status \"4a22746f21587fd8fd3d4a7350442c72a3131d5a9f5e661ff05d78377c5a00cb\": rpc error: code = NotFound desc = could not find container \"4a22746f21587fd8fd3d4a7350442c72a3131d5a9f5e661ff05d78377c5a00cb\": container with ID starting with 4a22746f21587fd8fd3d4a7350442c72a3131d5a9f5e661ff05d78377c5a00cb not found: ID does not exist" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.995040 4840 scope.go:117] "RemoveContainer" 
containerID="910e2e8b981e2ab6212cd615b1c5134fde5d4cf4f85220c2613e3e301f99293c" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.995316 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"910e2e8b981e2ab6212cd615b1c5134fde5d4cf4f85220c2613e3e301f99293c"} err="failed to get container status \"910e2e8b981e2ab6212cd615b1c5134fde5d4cf4f85220c2613e3e301f99293c\": rpc error: code = NotFound desc = could not find container \"910e2e8b981e2ab6212cd615b1c5134fde5d4cf4f85220c2613e3e301f99293c\": container with ID starting with 910e2e8b981e2ab6212cd615b1c5134fde5d4cf4f85220c2613e3e301f99293c not found: ID does not exist" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.995336 4840 scope.go:117] "RemoveContainer" containerID="047c371ede152b2bfc450a373d2c3668e92dfed75022ebf12644c116db589373" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.995616 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"047c371ede152b2bfc450a373d2c3668e92dfed75022ebf12644c116db589373"} err="failed to get container status \"047c371ede152b2bfc450a373d2c3668e92dfed75022ebf12644c116db589373\": rpc error: code = NotFound desc = could not find container \"047c371ede152b2bfc450a373d2c3668e92dfed75022ebf12644c116db589373\": container with ID starting with 047c371ede152b2bfc450a373d2c3668e92dfed75022ebf12644c116db589373 not found: ID does not exist" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.995645 4840 scope.go:117] "RemoveContainer" containerID="430dc74cf3f3dcf9a87782869390e477899521c6ff0e704ba6272a017b90081d" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.995938 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"430dc74cf3f3dcf9a87782869390e477899521c6ff0e704ba6272a017b90081d"} err="failed to get container status \"430dc74cf3f3dcf9a87782869390e477899521c6ff0e704ba6272a017b90081d\": rpc error: code = NotFound desc = could 
not find container \"430dc74cf3f3dcf9a87782869390e477899521c6ff0e704ba6272a017b90081d\": container with ID starting with 430dc74cf3f3dcf9a87782869390e477899521c6ff0e704ba6272a017b90081d not found: ID does not exist" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.995963 4840 scope.go:117] "RemoveContainer" containerID="eaf42bd09d527f5ce4ce8b0619c9b61f56ad6a2a5edf92cac68897a2ada84b24" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.996383 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eaf42bd09d527f5ce4ce8b0619c9b61f56ad6a2a5edf92cac68897a2ada84b24"} err="failed to get container status \"eaf42bd09d527f5ce4ce8b0619c9b61f56ad6a2a5edf92cac68897a2ada84b24\": rpc error: code = NotFound desc = could not find container \"eaf42bd09d527f5ce4ce8b0619c9b61f56ad6a2a5edf92cac68897a2ada84b24\": container with ID starting with eaf42bd09d527f5ce4ce8b0619c9b61f56ad6a2a5edf92cac68897a2ada84b24 not found: ID does not exist" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.996410 4840 scope.go:117] "RemoveContainer" containerID="c63877eb44dae815dbd71053c89313ba836c2fdd90cc3d6d299526c027887e19" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.996725 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c63877eb44dae815dbd71053c89313ba836c2fdd90cc3d6d299526c027887e19"} err="failed to get container status \"c63877eb44dae815dbd71053c89313ba836c2fdd90cc3d6d299526c027887e19\": rpc error: code = NotFound desc = could not find container \"c63877eb44dae815dbd71053c89313ba836c2fdd90cc3d6d299526c027887e19\": container with ID starting with c63877eb44dae815dbd71053c89313ba836c2fdd90cc3d6d299526c027887e19 not found: ID does not exist" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.996749 4840 scope.go:117] "RemoveContainer" containerID="7060c9a7b65d0c2aaa8611500978abfa8668a64c500761121997e0d627a1c924" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 
09:10:01.997040 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7060c9a7b65d0c2aaa8611500978abfa8668a64c500761121997e0d627a1c924"} err="failed to get container status \"7060c9a7b65d0c2aaa8611500978abfa8668a64c500761121997e0d627a1c924\": rpc error: code = NotFound desc = could not find container \"7060c9a7b65d0c2aaa8611500978abfa8668a64c500761121997e0d627a1c924\": container with ID starting with 7060c9a7b65d0c2aaa8611500978abfa8668a64c500761121997e0d627a1c924 not found: ID does not exist" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.997063 4840 scope.go:117] "RemoveContainer" containerID="0413db34ab0410c5d6e2822520410d9db275b1c4bd9ba1f7343a0c31befc9b0b" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.997356 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0413db34ab0410c5d6e2822520410d9db275b1c4bd9ba1f7343a0c31befc9b0b"} err="failed to get container status \"0413db34ab0410c5d6e2822520410d9db275b1c4bd9ba1f7343a0c31befc9b0b\": rpc error: code = NotFound desc = could not find container \"0413db34ab0410c5d6e2822520410d9db275b1c4bd9ba1f7343a0c31befc9b0b\": container with ID starting with 0413db34ab0410c5d6e2822520410d9db275b1c4bd9ba1f7343a0c31befc9b0b not found: ID does not exist" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.997373 4840 scope.go:117] "RemoveContainer" containerID="1338b0d6cdb9527554429be2e6fdec5c0b98075978344d168fd6e363eb12c879" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.997687 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1338b0d6cdb9527554429be2e6fdec5c0b98075978344d168fd6e363eb12c879"} err="failed to get container status \"1338b0d6cdb9527554429be2e6fdec5c0b98075978344d168fd6e363eb12c879\": rpc error: code = NotFound desc = could not find container \"1338b0d6cdb9527554429be2e6fdec5c0b98075978344d168fd6e363eb12c879\": container with ID starting with 
1338b0d6cdb9527554429be2e6fdec5c0b98075978344d168fd6e363eb12c879 not found: ID does not exist" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.997711 4840 scope.go:117] "RemoveContainer" containerID="aa38a6bbbd74dad77c64f6f59df35d12881619da7319501839cf0f1eb44c65cd" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.997995 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa38a6bbbd74dad77c64f6f59df35d12881619da7319501839cf0f1eb44c65cd"} err="failed to get container status \"aa38a6bbbd74dad77c64f6f59df35d12881619da7319501839cf0f1eb44c65cd\": rpc error: code = NotFound desc = could not find container \"aa38a6bbbd74dad77c64f6f59df35d12881619da7319501839cf0f1eb44c65cd\": container with ID starting with aa38a6bbbd74dad77c64f6f59df35d12881619da7319501839cf0f1eb44c65cd not found: ID does not exist" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.998014 4840 scope.go:117] "RemoveContainer" containerID="4a22746f21587fd8fd3d4a7350442c72a3131d5a9f5e661ff05d78377c5a00cb" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.998256 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a22746f21587fd8fd3d4a7350442c72a3131d5a9f5e661ff05d78377c5a00cb"} err="failed to get container status \"4a22746f21587fd8fd3d4a7350442c72a3131d5a9f5e661ff05d78377c5a00cb\": rpc error: code = NotFound desc = could not find container \"4a22746f21587fd8fd3d4a7350442c72a3131d5a9f5e661ff05d78377c5a00cb\": container with ID starting with 4a22746f21587fd8fd3d4a7350442c72a3131d5a9f5e661ff05d78377c5a00cb not found: ID does not exist" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.998278 4840 scope.go:117] "RemoveContainer" containerID="910e2e8b981e2ab6212cd615b1c5134fde5d4cf4f85220c2613e3e301f99293c" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.998553 4840 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"910e2e8b981e2ab6212cd615b1c5134fde5d4cf4f85220c2613e3e301f99293c"} err="failed to get container status \"910e2e8b981e2ab6212cd615b1c5134fde5d4cf4f85220c2613e3e301f99293c\": rpc error: code = NotFound desc = could not find container \"910e2e8b981e2ab6212cd615b1c5134fde5d4cf4f85220c2613e3e301f99293c\": container with ID starting with 910e2e8b981e2ab6212cd615b1c5134fde5d4cf4f85220c2613e3e301f99293c not found: ID does not exist" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.998572 4840 scope.go:117] "RemoveContainer" containerID="047c371ede152b2bfc450a373d2c3668e92dfed75022ebf12644c116db589373" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.998821 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"047c371ede152b2bfc450a373d2c3668e92dfed75022ebf12644c116db589373"} err="failed to get container status \"047c371ede152b2bfc450a373d2c3668e92dfed75022ebf12644c116db589373\": rpc error: code = NotFound desc = could not find container \"047c371ede152b2bfc450a373d2c3668e92dfed75022ebf12644c116db589373\": container with ID starting with 047c371ede152b2bfc450a373d2c3668e92dfed75022ebf12644c116db589373 not found: ID does not exist" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.998857 4840 scope.go:117] "RemoveContainer" containerID="430dc74cf3f3dcf9a87782869390e477899521c6ff0e704ba6272a017b90081d" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.999095 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"430dc74cf3f3dcf9a87782869390e477899521c6ff0e704ba6272a017b90081d"} err="failed to get container status \"430dc74cf3f3dcf9a87782869390e477899521c6ff0e704ba6272a017b90081d\": rpc error: code = NotFound desc = could not find container \"430dc74cf3f3dcf9a87782869390e477899521c6ff0e704ba6272a017b90081d\": container with ID starting with 430dc74cf3f3dcf9a87782869390e477899521c6ff0e704ba6272a017b90081d not found: ID does not 
exist" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.999124 4840 scope.go:117] "RemoveContainer" containerID="eaf42bd09d527f5ce4ce8b0619c9b61f56ad6a2a5edf92cac68897a2ada84b24" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.999407 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eaf42bd09d527f5ce4ce8b0619c9b61f56ad6a2a5edf92cac68897a2ada84b24"} err="failed to get container status \"eaf42bd09d527f5ce4ce8b0619c9b61f56ad6a2a5edf92cac68897a2ada84b24\": rpc error: code = NotFound desc = could not find container \"eaf42bd09d527f5ce4ce8b0619c9b61f56ad6a2a5edf92cac68897a2ada84b24\": container with ID starting with eaf42bd09d527f5ce4ce8b0619c9b61f56ad6a2a5edf92cac68897a2ada84b24 not found: ID does not exist" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.999428 4840 scope.go:117] "RemoveContainer" containerID="c63877eb44dae815dbd71053c89313ba836c2fdd90cc3d6d299526c027887e19" Mar 11 09:10:01 crc kubenswrapper[4840]: I0311 09:10:01.999661 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c63877eb44dae815dbd71053c89313ba836c2fdd90cc3d6d299526c027887e19"} err="failed to get container status \"c63877eb44dae815dbd71053c89313ba836c2fdd90cc3d6d299526c027887e19\": rpc error: code = NotFound desc = could not find container \"c63877eb44dae815dbd71053c89313ba836c2fdd90cc3d6d299526c027887e19\": container with ID starting with c63877eb44dae815dbd71053c89313ba836c2fdd90cc3d6d299526c027887e19 not found: ID does not exist" Mar 11 09:10:02 crc kubenswrapper[4840]: I0311 09:10:02.083598 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-7c2zl"] Mar 11 09:10:02 crc kubenswrapper[4840]: I0311 09:10:02.091648 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-7c2zl"] Mar 11 09:10:02 crc kubenswrapper[4840]: I0311 09:10:02.102217 4840 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-kh9gf"
Mar 11 09:10:02 crc kubenswrapper[4840]: W0311 09:10:02.124233 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda51aa75b_e63e_40ea_9759_fc823877a8d2.slice/crio-388fee9c67d3a082b2ea881c2abacd03bce42e465048e9d1716b6c0f5ff38045 WatchSource:0}: Error finding container 388fee9c67d3a082b2ea881c2abacd03bce42e465048e9d1716b6c0f5ff38045: Status 404 returned error can't find the container with id 388fee9c67d3a082b2ea881c2abacd03bce42e465048e9d1716b6c0f5ff38045
Mar 11 09:10:02 crc kubenswrapper[4840]: I0311 09:10:02.754899 4840 generic.go:334] "Generic (PLEG): container finished" podID="a51aa75b-e63e-40ea-9759-fc823877a8d2" containerID="111ad419f7d0f83160d91208adf0f3b47bce578b386bfbf027446e690642a193" exitCode=0
Mar 11 09:10:02 crc kubenswrapper[4840]: I0311 09:10:02.755005 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kh9gf" event={"ID":"a51aa75b-e63e-40ea-9759-fc823877a8d2","Type":"ContainerDied","Data":"111ad419f7d0f83160d91208adf0f3b47bce578b386bfbf027446e690642a193"}
Mar 11 09:10:02 crc kubenswrapper[4840]: I0311 09:10:02.755541 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kh9gf" event={"ID":"a51aa75b-e63e-40ea-9759-fc823877a8d2","Type":"ContainerStarted","Data":"388fee9c67d3a082b2ea881c2abacd03bce42e465048e9d1716b6c0f5ff38045"}
Mar 11 09:10:02 crc kubenswrapper[4840]: I0311 09:10:02.761491 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-vcb9n_0c1678fd-7741-474b-9c8e-3008d3570921/kube-multus/1.log"
Mar 11 09:10:02 crc kubenswrapper[4840]: I0311 09:10:02.761652 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-vcb9n" event={"ID":"0c1678fd-7741-474b-9c8e-3008d3570921","Type":"ContainerStarted","Data":"2ec19e58101953037a2f3cdeba4543e14658acee48ef2280c93e6d504c4f1c8a"}
Mar 11 09:10:02 crc kubenswrapper[4840]: I0311 09:10:02.767688 4840 generic.go:334] "Generic (PLEG): container finished" podID="037b8006-981a-4018-a446-b5c00007db43" containerID="d168afa9708d7e9797f8abbfd70ff9cf43042da62500a40e02f6c32967bf2557" exitCode=0
Mar 11 09:10:02 crc kubenswrapper[4840]: I0311 09:10:02.767729 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553670-cd8gv" event={"ID":"037b8006-981a-4018-a446-b5c00007db43","Type":"ContainerDied","Data":"d168afa9708d7e9797f8abbfd70ff9cf43042da62500a40e02f6c32967bf2557"}
Mar 11 09:10:03 crc kubenswrapper[4840]: I0311 09:10:03.779695 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kh9gf" event={"ID":"a51aa75b-e63e-40ea-9759-fc823877a8d2","Type":"ContainerStarted","Data":"b0fa01e8d3cf82ca2090db87ee5017519ec03e60533a70eb76470e41bed1c318"}
Mar 11 09:10:03 crc kubenswrapper[4840]: I0311 09:10:03.780523 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kh9gf" event={"ID":"a51aa75b-e63e-40ea-9759-fc823877a8d2","Type":"ContainerStarted","Data":"ab6ed80814d750d0723fcbda24577d8eccbd7a7db9d21a5b56d5a8afd47b6c61"}
Mar 11 09:10:03 crc kubenswrapper[4840]: I0311 09:10:03.780535 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kh9gf" event={"ID":"a51aa75b-e63e-40ea-9759-fc823877a8d2","Type":"ContainerStarted","Data":"f752b9a574f433b12de8e4fcabf5c1f078ab407965761ba827c10dd34f420b66"}
Mar 11 09:10:03 crc kubenswrapper[4840]: I0311 09:10:03.780545 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kh9gf" event={"ID":"a51aa75b-e63e-40ea-9759-fc823877a8d2","Type":"ContainerStarted","Data":"73aead4f41c8dffd3ef8582cbaac9459dfd69aaac2c55d46e34da5258fc0e000"}
Mar 11 09:10:03 crc kubenswrapper[4840]: I0311 09:10:03.780554 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kh9gf" event={"ID":"a51aa75b-e63e-40ea-9759-fc823877a8d2","Type":"ContainerStarted","Data":"a7beb59c05a588cae826bfb0bc6868a7ee5f01199080b2fc0fa88232784a60d9"}
Mar 11 09:10:03 crc kubenswrapper[4840]: I0311 09:10:03.866442 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553670-cd8gv"
Mar 11 09:10:03 crc kubenswrapper[4840]: I0311 09:10:03.977035 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b467h\" (UniqueName: \"kubernetes.io/projected/037b8006-981a-4018-a446-b5c00007db43-kube-api-access-b467h\") pod \"037b8006-981a-4018-a446-b5c00007db43\" (UID: \"037b8006-981a-4018-a446-b5c00007db43\") "
Mar 11 09:10:03 crc kubenswrapper[4840]: I0311 09:10:03.987780 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/037b8006-981a-4018-a446-b5c00007db43-kube-api-access-b467h" (OuterVolumeSpecName: "kube-api-access-b467h") pod "037b8006-981a-4018-a446-b5c00007db43" (UID: "037b8006-981a-4018-a446-b5c00007db43"). InnerVolumeSpecName "kube-api-access-b467h". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 11 09:10:04 crc kubenswrapper[4840]: I0311 09:10:04.071729 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="935336e2-294b-4982-83f9-718806d14e5c" path="/var/lib/kubelet/pods/935336e2-294b-4982-83f9-718806d14e5c/volumes"
Mar 11 09:10:04 crc kubenswrapper[4840]: I0311 09:10:04.079142 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b467h\" (UniqueName: \"kubernetes.io/projected/037b8006-981a-4018-a446-b5c00007db43-kube-api-access-b467h\") on node \"crc\" DevicePath \"\""
Mar 11 09:10:04 crc kubenswrapper[4840]: I0311 09:10:04.790772 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kh9gf" event={"ID":"a51aa75b-e63e-40ea-9759-fc823877a8d2","Type":"ContainerStarted","Data":"099f96863728c1605bf999b32a3f6ceb981abc35c8dd9d3e62665d9c2e38cac9"}
Mar 11 09:10:04 crc kubenswrapper[4840]: I0311 09:10:04.792799 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553670-cd8gv" event={"ID":"037b8006-981a-4018-a446-b5c00007db43","Type":"ContainerDied","Data":"7e91af2b1008cec273255048015faaa2778aec5fb8c8a88518dd3ea2ff484e64"}
Mar 11 09:10:04 crc kubenswrapper[4840]: I0311 09:10:04.792824 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e91af2b1008cec273255048015faaa2778aec5fb8c8a88518dd3ea2ff484e64"
Mar 11 09:10:04 crc kubenswrapper[4840]: I0311 09:10:04.792869 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553670-cd8gv"
Mar 11 09:10:04 crc kubenswrapper[4840]: I0311 09:10:04.950407 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29553664-65njn"]
Mar 11 09:10:04 crc kubenswrapper[4840]: I0311 09:10:04.956387 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29553664-65njn"]
Mar 11 09:10:06 crc kubenswrapper[4840]: I0311 09:10:06.070574 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9d8efbb-381e-42dd-9f41-b002e811a046" path="/var/lib/kubelet/pods/d9d8efbb-381e-42dd-9f41-b002e811a046/volumes"
Mar 11 09:10:06 crc kubenswrapper[4840]: I0311 09:10:06.813310 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kh9gf" event={"ID":"a51aa75b-e63e-40ea-9759-fc823877a8d2","Type":"ContainerStarted","Data":"eac5fe3995a66bd5805b5179ff346cc816ae1c09b14091698493cf6465543fe0"}
Mar 11 09:10:07 crc kubenswrapper[4840]: I0311 09:10:07.673135 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-nxfhp"]
Mar 11 09:10:07 crc kubenswrapper[4840]: E0311 09:10:07.673544 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="037b8006-981a-4018-a446-b5c00007db43" containerName="oc"
Mar 11 09:10:07 crc kubenswrapper[4840]: I0311 09:10:07.673569 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="037b8006-981a-4018-a446-b5c00007db43" containerName="oc"
Mar 11 09:10:07 crc kubenswrapper[4840]: I0311 09:10:07.673738 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="037b8006-981a-4018-a446-b5c00007db43" containerName="oc"
Mar 11 09:10:07 crc kubenswrapper[4840]: I0311 09:10:07.674336 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-nxfhp"
Mar 11 09:10:07 crc kubenswrapper[4840]: I0311 09:10:07.679242 4840 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-lsg87"
Mar 11 09:10:07 crc kubenswrapper[4840]: I0311 09:10:07.679510 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt"
Mar 11 09:10:07 crc kubenswrapper[4840]: I0311 09:10:07.679521 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage"
Mar 11 09:10:07 crc kubenswrapper[4840]: I0311 09:10:07.679723 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt"
Mar 11 09:10:07 crc kubenswrapper[4840]: I0311 09:10:07.736353 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srn5x\" (UniqueName: \"kubernetes.io/projected/135fe828-cf04-41c5-9fa6-4e7cbc011252-kube-api-access-srn5x\") pod \"crc-storage-crc-nxfhp\" (UID: \"135fe828-cf04-41c5-9fa6-4e7cbc011252\") " pod="crc-storage/crc-storage-crc-nxfhp"
Mar 11 09:10:07 crc kubenswrapper[4840]: I0311 09:10:07.736432 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/135fe828-cf04-41c5-9fa6-4e7cbc011252-crc-storage\") pod \"crc-storage-crc-nxfhp\" (UID: \"135fe828-cf04-41c5-9fa6-4e7cbc011252\") " pod="crc-storage/crc-storage-crc-nxfhp"
Mar 11 09:10:07 crc kubenswrapper[4840]: I0311 09:10:07.736490 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/135fe828-cf04-41c5-9fa6-4e7cbc011252-node-mnt\") pod \"crc-storage-crc-nxfhp\" (UID: \"135fe828-cf04-41c5-9fa6-4e7cbc011252\") " pod="crc-storage/crc-storage-crc-nxfhp"
Mar 11 09:10:07 crc kubenswrapper[4840]: I0311 09:10:07.838431 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/135fe828-cf04-41c5-9fa6-4e7cbc011252-node-mnt\") pod \"crc-storage-crc-nxfhp\" (UID: \"135fe828-cf04-41c5-9fa6-4e7cbc011252\") " pod="crc-storage/crc-storage-crc-nxfhp"
Mar 11 09:10:07 crc kubenswrapper[4840]: I0311 09:10:07.839007 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srn5x\" (UniqueName: \"kubernetes.io/projected/135fe828-cf04-41c5-9fa6-4e7cbc011252-kube-api-access-srn5x\") pod \"crc-storage-crc-nxfhp\" (UID: \"135fe828-cf04-41c5-9fa6-4e7cbc011252\") " pod="crc-storage/crc-storage-crc-nxfhp"
Mar 11 09:10:07 crc kubenswrapper[4840]: I0311 09:10:07.839050 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/135fe828-cf04-41c5-9fa6-4e7cbc011252-crc-storage\") pod \"crc-storage-crc-nxfhp\" (UID: \"135fe828-cf04-41c5-9fa6-4e7cbc011252\") " pod="crc-storage/crc-storage-crc-nxfhp"
Mar 11 09:10:07 crc kubenswrapper[4840]: I0311 09:10:07.838912 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/135fe828-cf04-41c5-9fa6-4e7cbc011252-node-mnt\") pod \"crc-storage-crc-nxfhp\" (UID: \"135fe828-cf04-41c5-9fa6-4e7cbc011252\") " pod="crc-storage/crc-storage-crc-nxfhp"
Mar 11 09:10:07 crc kubenswrapper[4840]: I0311 09:10:07.840847 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/135fe828-cf04-41c5-9fa6-4e7cbc011252-crc-storage\") pod \"crc-storage-crc-nxfhp\" (UID: \"135fe828-cf04-41c5-9fa6-4e7cbc011252\") " pod="crc-storage/crc-storage-crc-nxfhp"
Mar 11 09:10:07 crc kubenswrapper[4840]: I0311 09:10:07.862329 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srn5x\" (UniqueName: \"kubernetes.io/projected/135fe828-cf04-41c5-9fa6-4e7cbc011252-kube-api-access-srn5x\") pod \"crc-storage-crc-nxfhp\" (UID: \"135fe828-cf04-41c5-9fa6-4e7cbc011252\") " pod="crc-storage/crc-storage-crc-nxfhp"
Mar 11 09:10:07 crc kubenswrapper[4840]: I0311 09:10:07.996787 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-nxfhp"
Mar 11 09:10:08 crc kubenswrapper[4840]: E0311 09:10:08.028201 4840 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-nxfhp_crc-storage_135fe828-cf04-41c5-9fa6-4e7cbc011252_0(601da49948b6da7793f207788877226c8bb17dd7ddd6612f0092e9a785562085): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Mar 11 09:10:08 crc kubenswrapper[4840]: E0311 09:10:08.028307 4840 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-nxfhp_crc-storage_135fe828-cf04-41c5-9fa6-4e7cbc011252_0(601da49948b6da7793f207788877226c8bb17dd7ddd6612f0092e9a785562085): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-nxfhp"
Mar 11 09:10:08 crc kubenswrapper[4840]: E0311 09:10:08.028349 4840 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-nxfhp_crc-storage_135fe828-cf04-41c5-9fa6-4e7cbc011252_0(601da49948b6da7793f207788877226c8bb17dd7ddd6612f0092e9a785562085): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-nxfhp"
Mar 11 09:10:08 crc kubenswrapper[4840]: E0311 09:10:08.028418 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-nxfhp_crc-storage(135fe828-cf04-41c5-9fa6-4e7cbc011252)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-nxfhp_crc-storage(135fe828-cf04-41c5-9fa6-4e7cbc011252)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-nxfhp_crc-storage_135fe828-cf04-41c5-9fa6-4e7cbc011252_0(601da49948b6da7793f207788877226c8bb17dd7ddd6612f0092e9a785562085): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="crc-storage/crc-storage-crc-nxfhp" podUID="135fe828-cf04-41c5-9fa6-4e7cbc011252"
Mar 11 09:10:09 crc kubenswrapper[4840]: I0311 09:10:09.803198 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-nxfhp"]
Mar 11 09:10:09 crc kubenswrapper[4840]: I0311 09:10:09.803879 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-nxfhp"
Mar 11 09:10:09 crc kubenswrapper[4840]: I0311 09:10:09.804509 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-nxfhp"
Mar 11 09:10:09 crc kubenswrapper[4840]: I0311 09:10:09.834340 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kh9gf" event={"ID":"a51aa75b-e63e-40ea-9759-fc823877a8d2","Type":"ContainerStarted","Data":"37f926a9fe397e635113c05b68d64ee79cd17facb08090be57da89a6d5dcd21e"}
Mar 11 09:10:09 crc kubenswrapper[4840]: I0311 09:10:09.834764 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-kh9gf"
Mar 11 09:10:09 crc kubenswrapper[4840]: I0311 09:10:09.834842 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-kh9gf"
Mar 11 09:10:09 crc kubenswrapper[4840]: E0311 09:10:09.835336 4840 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-nxfhp_crc-storage_135fe828-cf04-41c5-9fa6-4e7cbc011252_0(41393d5ee52d8d549659502fc7772a23888534af4b28e02307c74335bb8219de): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Mar 11 09:10:09 crc kubenswrapper[4840]: E0311 09:10:09.835405 4840 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-nxfhp_crc-storage_135fe828-cf04-41c5-9fa6-4e7cbc011252_0(41393d5ee52d8d549659502fc7772a23888534af4b28e02307c74335bb8219de): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-nxfhp"
Mar 11 09:10:09 crc kubenswrapper[4840]: E0311 09:10:09.835432 4840 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-nxfhp_crc-storage_135fe828-cf04-41c5-9fa6-4e7cbc011252_0(41393d5ee52d8d549659502fc7772a23888534af4b28e02307c74335bb8219de): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-nxfhp"
Mar 11 09:10:09 crc kubenswrapper[4840]: E0311 09:10:09.835505 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-nxfhp_crc-storage(135fe828-cf04-41c5-9fa6-4e7cbc011252)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-nxfhp_crc-storage(135fe828-cf04-41c5-9fa6-4e7cbc011252)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-nxfhp_crc-storage_135fe828-cf04-41c5-9fa6-4e7cbc011252_0(41393d5ee52d8d549659502fc7772a23888534af4b28e02307c74335bb8219de): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="crc-storage/crc-storage-crc-nxfhp" podUID="135fe828-cf04-41c5-9fa6-4e7cbc011252"
Mar 11 09:10:09 crc kubenswrapper[4840]: I0311 09:10:09.865056 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-kh9gf" podStartSLOduration=8.865034238 podStartE2EDuration="8.865034238s" podCreationTimestamp="2026-03-11 09:10:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:10:09.861740425 +0000 UTC m=+808.527410240" watchObservedRunningTime="2026-03-11 09:10:09.865034238 +0000 UTC m=+808.530704053"
Mar 11 09:10:09 crc kubenswrapper[4840]: I0311 09:10:09.885691 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-kh9gf"
Mar 11 09:10:10 crc kubenswrapper[4840]: I0311 09:10:10.842657 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-kh9gf"
Mar 11 09:10:10 crc kubenswrapper[4840]: I0311 09:10:10.940237 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-kh9gf"
Mar 11 09:10:21 crc kubenswrapper[4840]: I0311 09:10:21.060322 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-nxfhp"
Mar 11 09:10:21 crc kubenswrapper[4840]: I0311 09:10:21.061779 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-nxfhp"
Mar 11 09:10:21 crc kubenswrapper[4840]: I0311 09:10:21.285675 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-nxfhp"]
Mar 11 09:10:21 crc kubenswrapper[4840]: I0311 09:10:21.921832 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-nxfhp" event={"ID":"135fe828-cf04-41c5-9fa6-4e7cbc011252","Type":"ContainerStarted","Data":"70f393e3e0e26df8be1a71e973e3a56bc2943ad15e2ff141d51361b4d560352f"}
Mar 11 09:10:23 crc kubenswrapper[4840]: I0311 09:10:23.937355 4840 generic.go:334] "Generic (PLEG): container finished" podID="135fe828-cf04-41c5-9fa6-4e7cbc011252" containerID="c8fe2e5935d84e08a7e8d303c01bbcb02d7affcfd61968899a43fcaa601a0490" exitCode=0
Mar 11 09:10:23 crc kubenswrapper[4840]: I0311 09:10:23.937500 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-nxfhp" event={"ID":"135fe828-cf04-41c5-9fa6-4e7cbc011252","Type":"ContainerDied","Data":"c8fe2e5935d84e08a7e8d303c01bbcb02d7affcfd61968899a43fcaa601a0490"}
Mar 11 09:10:25 crc kubenswrapper[4840]: I0311 09:10:25.236539 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-nxfhp"
Mar 11 09:10:25 crc kubenswrapper[4840]: I0311 09:10:25.404730 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/135fe828-cf04-41c5-9fa6-4e7cbc011252-crc-storage\") pod \"135fe828-cf04-41c5-9fa6-4e7cbc011252\" (UID: \"135fe828-cf04-41c5-9fa6-4e7cbc011252\") "
Mar 11 09:10:25 crc kubenswrapper[4840]: I0311 09:10:25.404836 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/135fe828-cf04-41c5-9fa6-4e7cbc011252-node-mnt\") pod \"135fe828-cf04-41c5-9fa6-4e7cbc011252\" (UID: \"135fe828-cf04-41c5-9fa6-4e7cbc011252\") "
Mar 11 09:10:25 crc kubenswrapper[4840]: I0311 09:10:25.404963 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/135fe828-cf04-41c5-9fa6-4e7cbc011252-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "135fe828-cf04-41c5-9fa6-4e7cbc011252" (UID: "135fe828-cf04-41c5-9fa6-4e7cbc011252"). InnerVolumeSpecName "node-mnt". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 11 09:10:25 crc kubenswrapper[4840]: I0311 09:10:25.405014 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-srn5x\" (UniqueName: \"kubernetes.io/projected/135fe828-cf04-41c5-9fa6-4e7cbc011252-kube-api-access-srn5x\") pod \"135fe828-cf04-41c5-9fa6-4e7cbc011252\" (UID: \"135fe828-cf04-41c5-9fa6-4e7cbc011252\") "
Mar 11 09:10:25 crc kubenswrapper[4840]: I0311 09:10:25.406373 4840 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/135fe828-cf04-41c5-9fa6-4e7cbc011252-node-mnt\") on node \"crc\" DevicePath \"\""
Mar 11 09:10:25 crc kubenswrapper[4840]: I0311 09:10:25.412277 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/135fe828-cf04-41c5-9fa6-4e7cbc011252-kube-api-access-srn5x" (OuterVolumeSpecName: "kube-api-access-srn5x") pod "135fe828-cf04-41c5-9fa6-4e7cbc011252" (UID: "135fe828-cf04-41c5-9fa6-4e7cbc011252"). InnerVolumeSpecName "kube-api-access-srn5x". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 11 09:10:25 crc kubenswrapper[4840]: I0311 09:10:25.424453 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/135fe828-cf04-41c5-9fa6-4e7cbc011252-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "135fe828-cf04-41c5-9fa6-4e7cbc011252" (UID: "135fe828-cf04-41c5-9fa6-4e7cbc011252"). InnerVolumeSpecName "crc-storage". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 11 09:10:25 crc kubenswrapper[4840]: I0311 09:10:25.507451 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-srn5x\" (UniqueName: \"kubernetes.io/projected/135fe828-cf04-41c5-9fa6-4e7cbc011252-kube-api-access-srn5x\") on node \"crc\" DevicePath \"\""
Mar 11 09:10:25 crc kubenswrapper[4840]: I0311 09:10:25.507532 4840 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/135fe828-cf04-41c5-9fa6-4e7cbc011252-crc-storage\") on node \"crc\" DevicePath \"\""
Mar 11 09:10:25 crc kubenswrapper[4840]: I0311 09:10:25.955406 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-nxfhp" event={"ID":"135fe828-cf04-41c5-9fa6-4e7cbc011252","Type":"ContainerDied","Data":"70f393e3e0e26df8be1a71e973e3a56bc2943ad15e2ff141d51361b4d560352f"}
Mar 11 09:10:25 crc kubenswrapper[4840]: I0311 09:10:25.956051 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="70f393e3e0e26df8be1a71e973e3a56bc2943ad15e2ff141d51361b4d560352f"
Mar 11 09:10:25 crc kubenswrapper[4840]: I0311 09:10:25.955552 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-nxfhp"
Mar 11 09:10:27 crc kubenswrapper[4840]: I0311 09:10:27.445597 4840 patch_prober.go:28] interesting pod/machine-config-daemon-brtht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 11 09:10:27 crc kubenswrapper[4840]: I0311 09:10:27.445709 4840 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 11 09:10:32 crc kubenswrapper[4840]: I0311 09:10:32.135273 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-kh9gf"
Mar 11 09:10:33 crc kubenswrapper[4840]: I0311 09:10:33.467453 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8748n48z"]
Mar 11 09:10:33 crc kubenswrapper[4840]: E0311 09:10:33.468864 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="135fe828-cf04-41c5-9fa6-4e7cbc011252" containerName="storage"
Mar 11 09:10:33 crc kubenswrapper[4840]: I0311 09:10:33.468957 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="135fe828-cf04-41c5-9fa6-4e7cbc011252" containerName="storage"
Mar 11 09:10:33 crc kubenswrapper[4840]: I0311 09:10:33.469263 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="135fe828-cf04-41c5-9fa6-4e7cbc011252" containerName="storage"
Mar 11 09:10:33 crc kubenswrapper[4840]: I0311 09:10:33.470248 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8748n48z"
Mar 11 09:10:33 crc kubenswrapper[4840]: I0311 09:10:33.475877 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Mar 11 09:10:33 crc kubenswrapper[4840]: I0311 09:10:33.503640 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8748n48z"]
Mar 11 09:10:33 crc kubenswrapper[4840]: I0311 09:10:33.651311 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2fb836d4-fee3-4607-b48b-c2ea1d889ec5-util\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8748n48z\" (UID: \"2fb836d4-fee3-4607-b48b-c2ea1d889ec5\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8748n48z"
Mar 11 09:10:33 crc kubenswrapper[4840]: I0311 09:10:33.651437 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2fb836d4-fee3-4607-b48b-c2ea1d889ec5-bundle\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8748n48z\" (UID: \"2fb836d4-fee3-4607-b48b-c2ea1d889ec5\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8748n48z"
Mar 11 09:10:33 crc kubenswrapper[4840]: I0311 09:10:33.651651 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwsqd\" (UniqueName: \"kubernetes.io/projected/2fb836d4-fee3-4607-b48b-c2ea1d889ec5-kube-api-access-zwsqd\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8748n48z\" (UID: \"2fb836d4-fee3-4607-b48b-c2ea1d889ec5\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8748n48z"
Mar 11 09:10:33 crc kubenswrapper[4840]: I0311 09:10:33.753380 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwsqd\" (UniqueName: \"kubernetes.io/projected/2fb836d4-fee3-4607-b48b-c2ea1d889ec5-kube-api-access-zwsqd\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8748n48z\" (UID: \"2fb836d4-fee3-4607-b48b-c2ea1d889ec5\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8748n48z"
Mar 11 09:10:33 crc kubenswrapper[4840]: I0311 09:10:33.753581 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2fb836d4-fee3-4607-b48b-c2ea1d889ec5-util\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8748n48z\" (UID: \"2fb836d4-fee3-4607-b48b-c2ea1d889ec5\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8748n48z"
Mar 11 09:10:33 crc kubenswrapper[4840]: I0311 09:10:33.753931 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2fb836d4-fee3-4607-b48b-c2ea1d889ec5-bundle\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8748n48z\" (UID: \"2fb836d4-fee3-4607-b48b-c2ea1d889ec5\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8748n48z"
Mar 11 09:10:33 crc kubenswrapper[4840]: I0311 09:10:33.754892 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2fb836d4-fee3-4607-b48b-c2ea1d889ec5-util\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8748n48z\" (UID: \"2fb836d4-fee3-4607-b48b-c2ea1d889ec5\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8748n48z"
Mar 11 09:10:33 crc kubenswrapper[4840]: I0311 09:10:33.754903 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2fb836d4-fee3-4607-b48b-c2ea1d889ec5-bundle\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8748n48z\" (UID: \"2fb836d4-fee3-4607-b48b-c2ea1d889ec5\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8748n48z"
Mar 11 09:10:33 crc kubenswrapper[4840]: I0311 09:10:33.788416 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwsqd\" (UniqueName: \"kubernetes.io/projected/2fb836d4-fee3-4607-b48b-c2ea1d889ec5-kube-api-access-zwsqd\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8748n48z\" (UID: \"2fb836d4-fee3-4607-b48b-c2ea1d889ec5\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8748n48z"
Mar 11 09:10:33 crc kubenswrapper[4840]: I0311 09:10:33.805781 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8748n48z"
Mar 11 09:10:34 crc kubenswrapper[4840]: I0311 09:10:34.004992 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8748n48z"]
Mar 11 09:10:34 crc kubenswrapper[4840]: W0311 09:10:34.015931 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2fb836d4_fee3_4607_b48b_c2ea1d889ec5.slice/crio-42e2272da8799d747d2885202ae693b4a8ee693f7935aa4f2dc14e8e5cd410ff WatchSource:0}: Error finding container 42e2272da8799d747d2885202ae693b4a8ee693f7935aa4f2dc14e8e5cd410ff: Status 404 returned error can't find the container with id 42e2272da8799d747d2885202ae693b4a8ee693f7935aa4f2dc14e8e5cd410ff
Mar 11 09:10:34 crc kubenswrapper[4840]: I0311 09:10:34.837433 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hclbz"]
Mar 11 09:10:34 crc kubenswrapper[4840]: I0311 09:10:34.838880 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hclbz"
Mar 11 09:10:34 crc kubenswrapper[4840]: I0311 09:10:34.852807 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hclbz"]
Mar 11 09:10:34 crc kubenswrapper[4840]: I0311 09:10:34.974531 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxlxm\" (UniqueName: \"kubernetes.io/projected/bd1bb53a-5918-4117-a515-6d42e66bc7f1-kube-api-access-hxlxm\") pod \"redhat-operators-hclbz\" (UID: \"bd1bb53a-5918-4117-a515-6d42e66bc7f1\") " pod="openshift-marketplace/redhat-operators-hclbz"
Mar 11 09:10:34 crc kubenswrapper[4840]: I0311 09:10:34.974597 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd1bb53a-5918-4117-a515-6d42e66bc7f1-utilities\") pod \"redhat-operators-hclbz\" (UID: \"bd1bb53a-5918-4117-a515-6d42e66bc7f1\") " pod="openshift-marketplace/redhat-operators-hclbz"
Mar 11 09:10:34 crc kubenswrapper[4840]: I0311 09:10:34.974755 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd1bb53a-5918-4117-a515-6d42e66bc7f1-catalog-content\") pod \"redhat-operators-hclbz\" (UID: \"bd1bb53a-5918-4117-a515-6d42e66bc7f1\") " pod="openshift-marketplace/redhat-operators-hclbz"
Mar 11 09:10:35 crc kubenswrapper[4840]: I0311 09:10:35.020311 4840 generic.go:334] "Generic (PLEG): container finished" podID="2fb836d4-fee3-4607-b48b-c2ea1d889ec5" containerID="019161d94896663211178781a4a9199236f1d3a4a07ad497f9532830ff2e3a95" exitCode=0
Mar 11 09:10:35 crc kubenswrapper[4840]: I0311 09:10:35.020359 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8748n48z" event={"ID":"2fb836d4-fee3-4607-b48b-c2ea1d889ec5","Type":"ContainerDied","Data":"019161d94896663211178781a4a9199236f1d3a4a07ad497f9532830ff2e3a95"}
Mar 11 09:10:35 crc kubenswrapper[4840]: I0311 09:10:35.020385 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8748n48z" event={"ID":"2fb836d4-fee3-4607-b48b-c2ea1d889ec5","Type":"ContainerStarted","Data":"42e2272da8799d747d2885202ae693b4a8ee693f7935aa4f2dc14e8e5cd410ff"}
Mar 11 09:10:35 crc kubenswrapper[4840]: I0311 09:10:35.076810 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd1bb53a-5918-4117-a515-6d42e66bc7f1-catalog-content\") pod \"redhat-operators-hclbz\" (UID: \"bd1bb53a-5918-4117-a515-6d42e66bc7f1\") " pod="openshift-marketplace/redhat-operators-hclbz"
Mar 11 09:10:35 crc kubenswrapper[4840]: I0311 09:10:35.077906 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxlxm\" (UniqueName: \"kubernetes.io/projected/bd1bb53a-5918-4117-a515-6d42e66bc7f1-kube-api-access-hxlxm\") pod \"redhat-operators-hclbz\" (UID: \"bd1bb53a-5918-4117-a515-6d42e66bc7f1\") " pod="openshift-marketplace/redhat-operators-hclbz"
Mar 11 09:10:35 crc kubenswrapper[4840]: I0311 09:10:35.077828 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd1bb53a-5918-4117-a515-6d42e66bc7f1-catalog-content\") pod \"redhat-operators-hclbz\" (UID: \"bd1bb53a-5918-4117-a515-6d42e66bc7f1\") " pod="openshift-marketplace/redhat-operators-hclbz"
Mar 11 09:10:35 crc kubenswrapper[4840]: I0311 09:10:35.077999 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd1bb53a-5918-4117-a515-6d42e66bc7f1-utilities\") pod \"redhat-operators-hclbz\" (UID: \"bd1bb53a-5918-4117-a515-6d42e66bc7f1\") " pod="openshift-marketplace/redhat-operators-hclbz"
Mar 11 09:10:35 crc kubenswrapper[4840]: I0311 09:10:35.078287 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd1bb53a-5918-4117-a515-6d42e66bc7f1-utilities\") pod \"redhat-operators-hclbz\" (UID: \"bd1bb53a-5918-4117-a515-6d42e66bc7f1\") " pod="openshift-marketplace/redhat-operators-hclbz"
Mar 11 09:10:35 crc kubenswrapper[4840]: I0311 09:10:35.101128 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxlxm\" (UniqueName: \"kubernetes.io/projected/bd1bb53a-5918-4117-a515-6d42e66bc7f1-kube-api-access-hxlxm\") pod \"redhat-operators-hclbz\" (UID: \"bd1bb53a-5918-4117-a515-6d42e66bc7f1\") " pod="openshift-marketplace/redhat-operators-hclbz"
Mar 11 09:10:35 crc kubenswrapper[4840]: I0311 09:10:35.168614 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hclbz"
Mar 11 09:10:35 crc kubenswrapper[4840]: I0311 09:10:35.410750 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hclbz"]
Mar 11 09:10:35 crc kubenswrapper[4840]: W0311 09:10:35.425835 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbd1bb53a_5918_4117_a515_6d42e66bc7f1.slice/crio-3413b08acd7ee8e4742df46288c2df806525c4c3b59d6463d7998ddfae939b90 WatchSource:0}: Error finding container 3413b08acd7ee8e4742df46288c2df806525c4c3b59d6463d7998ddfae939b90: Status 404 returned error can't find the container with id 3413b08acd7ee8e4742df46288c2df806525c4c3b59d6463d7998ddfae939b90
Mar 11 09:10:36 crc kubenswrapper[4840]: I0311 09:10:36.033156 4840 generic.go:334] "Generic (PLEG): container finished" podID="bd1bb53a-5918-4117-a515-6d42e66bc7f1" containerID="86c4f2e417a9361cc7b8e70669883682dae4d4199b90f2944bbde71303ea39f7" exitCode=0
Mar 11 09:10:36 crc kubenswrapper[4840]: I0311 09:10:36.033224 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hclbz" event={"ID":"bd1bb53a-5918-4117-a515-6d42e66bc7f1","Type":"ContainerDied","Data":"86c4f2e417a9361cc7b8e70669883682dae4d4199b90f2944bbde71303ea39f7"}
Mar 11 09:10:36 crc kubenswrapper[4840]: I0311 09:10:36.033563 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hclbz" event={"ID":"bd1bb53a-5918-4117-a515-6d42e66bc7f1","Type":"ContainerStarted","Data":"3413b08acd7ee8e4742df46288c2df806525c4c3b59d6463d7998ddfae939b90"}
Mar 11 09:10:37 crc kubenswrapper[4840]: I0311 09:10:37.043182 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hclbz" event={"ID":"bd1bb53a-5918-4117-a515-6d42e66bc7f1","Type":"ContainerStarted","Data":"709b43a2082d709691b8bf214faba8454d497130664e7bc3d75b80871addaadf"}
Mar 11 09:10:37 crc kubenswrapper[4840]: I0311 09:10:37.044728 4840 generic.go:334] "Generic (PLEG): container finished" podID="2fb836d4-fee3-4607-b48b-c2ea1d889ec5" containerID="abccecb4c80a3635c9858b15febacaa7e192c1f58027635d3909a2549c29c312" exitCode=0
Mar 11 09:10:37 crc kubenswrapper[4840]: I0311 09:10:37.044795 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8748n48z" event={"ID":"2fb836d4-fee3-4607-b48b-c2ea1d889ec5","Type":"ContainerDied","Data":"abccecb4c80a3635c9858b15febacaa7e192c1f58027635d3909a2549c29c312"}
Mar 11 09:10:38 crc kubenswrapper[4840]: I0311 09:10:38.058406 4840 generic.go:334] "Generic (PLEG): container finished" podID="2fb836d4-fee3-4607-b48b-c2ea1d889ec5" containerID="0aa8996d0b861317b9f91c53fb59de6d33f94b68a1cf23ded212073d6ade4f06" exitCode=0
Mar 11 09:10:38 crc kubenswrapper[4840]: I0311 09:10:38.058500 4840
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8748n48z" event={"ID":"2fb836d4-fee3-4607-b48b-c2ea1d889ec5","Type":"ContainerDied","Data":"0aa8996d0b861317b9f91c53fb59de6d33f94b68a1cf23ded212073d6ade4f06"} Mar 11 09:10:38 crc kubenswrapper[4840]: I0311 09:10:38.063734 4840 generic.go:334] "Generic (PLEG): container finished" podID="bd1bb53a-5918-4117-a515-6d42e66bc7f1" containerID="709b43a2082d709691b8bf214faba8454d497130664e7bc3d75b80871addaadf" exitCode=0 Mar 11 09:10:38 crc kubenswrapper[4840]: I0311 09:10:38.071958 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hclbz" event={"ID":"bd1bb53a-5918-4117-a515-6d42e66bc7f1","Type":"ContainerDied","Data":"709b43a2082d709691b8bf214faba8454d497130664e7bc3d75b80871addaadf"} Mar 11 09:10:39 crc kubenswrapper[4840]: I0311 09:10:39.076070 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hclbz" event={"ID":"bd1bb53a-5918-4117-a515-6d42e66bc7f1","Type":"ContainerStarted","Data":"5aa065b07f5bdcb8961678078d3cc804f31fd937c74f42fc771357c03d15802f"} Mar 11 09:10:39 crc kubenswrapper[4840]: I0311 09:10:39.099378 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hclbz" podStartSLOduration=2.751481189 podStartE2EDuration="5.09934984s" podCreationTimestamp="2026-03-11 09:10:34 +0000 UTC" firstStartedPulling="2026-03-11 09:10:36.097702724 +0000 UTC m=+834.763372549" lastFinishedPulling="2026-03-11 09:10:38.445571385 +0000 UTC m=+837.111241200" observedRunningTime="2026-03-11 09:10:39.096313474 +0000 UTC m=+837.761983299" watchObservedRunningTime="2026-03-11 09:10:39.09934984 +0000 UTC m=+837.765019655" Mar 11 09:10:39 crc kubenswrapper[4840]: I0311 09:10:39.346590 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8748n48z" Mar 11 09:10:39 crc kubenswrapper[4840]: I0311 09:10:39.442965 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2fb836d4-fee3-4607-b48b-c2ea1d889ec5-bundle\") pod \"2fb836d4-fee3-4607-b48b-c2ea1d889ec5\" (UID: \"2fb836d4-fee3-4607-b48b-c2ea1d889ec5\") " Mar 11 09:10:39 crc kubenswrapper[4840]: I0311 09:10:39.443071 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2fb836d4-fee3-4607-b48b-c2ea1d889ec5-util\") pod \"2fb836d4-fee3-4607-b48b-c2ea1d889ec5\" (UID: \"2fb836d4-fee3-4607-b48b-c2ea1d889ec5\") " Mar 11 09:10:39 crc kubenswrapper[4840]: I0311 09:10:39.443105 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zwsqd\" (UniqueName: \"kubernetes.io/projected/2fb836d4-fee3-4607-b48b-c2ea1d889ec5-kube-api-access-zwsqd\") pod \"2fb836d4-fee3-4607-b48b-c2ea1d889ec5\" (UID: \"2fb836d4-fee3-4607-b48b-c2ea1d889ec5\") " Mar 11 09:10:39 crc kubenswrapper[4840]: I0311 09:10:39.443678 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2fb836d4-fee3-4607-b48b-c2ea1d889ec5-bundle" (OuterVolumeSpecName: "bundle") pod "2fb836d4-fee3-4607-b48b-c2ea1d889ec5" (UID: "2fb836d4-fee3-4607-b48b-c2ea1d889ec5"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:10:39 crc kubenswrapper[4840]: I0311 09:10:39.451672 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2fb836d4-fee3-4607-b48b-c2ea1d889ec5-kube-api-access-zwsqd" (OuterVolumeSpecName: "kube-api-access-zwsqd") pod "2fb836d4-fee3-4607-b48b-c2ea1d889ec5" (UID: "2fb836d4-fee3-4607-b48b-c2ea1d889ec5"). InnerVolumeSpecName "kube-api-access-zwsqd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:10:39 crc kubenswrapper[4840]: I0311 09:10:39.457486 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2fb836d4-fee3-4607-b48b-c2ea1d889ec5-util" (OuterVolumeSpecName: "util") pod "2fb836d4-fee3-4607-b48b-c2ea1d889ec5" (UID: "2fb836d4-fee3-4607-b48b-c2ea1d889ec5"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:10:39 crc kubenswrapper[4840]: I0311 09:10:39.544584 4840 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2fb836d4-fee3-4607-b48b-c2ea1d889ec5-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 09:10:39 crc kubenswrapper[4840]: I0311 09:10:39.544635 4840 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2fb836d4-fee3-4607-b48b-c2ea1d889ec5-util\") on node \"crc\" DevicePath \"\"" Mar 11 09:10:39 crc kubenswrapper[4840]: I0311 09:10:39.544645 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zwsqd\" (UniqueName: \"kubernetes.io/projected/2fb836d4-fee3-4607-b48b-c2ea1d889ec5-kube-api-access-zwsqd\") on node \"crc\" DevicePath \"\"" Mar 11 09:10:40 crc kubenswrapper[4840]: I0311 09:10:40.088492 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8748n48z" event={"ID":"2fb836d4-fee3-4607-b48b-c2ea1d889ec5","Type":"ContainerDied","Data":"42e2272da8799d747d2885202ae693b4a8ee693f7935aa4f2dc14e8e5cd410ff"} Mar 11 09:10:40 crc kubenswrapper[4840]: I0311 09:10:40.088564 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="42e2272da8799d747d2885202ae693b4a8ee693f7935aa4f2dc14e8e5cd410ff" Mar 11 09:10:40 crc kubenswrapper[4840]: I0311 09:10:40.088664 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8748n48z" Mar 11 09:10:42 crc kubenswrapper[4840]: I0311 09:10:42.623143 4840 scope.go:117] "RemoveContainer" containerID="674c771977115f112b5ec8199caeeb0514a701f72a3d596a1b732a6efbf3a84c" Mar 11 09:10:43 crc kubenswrapper[4840]: I0311 09:10:43.966129 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-796d4cfff4-qzjs5"] Mar 11 09:10:43 crc kubenswrapper[4840]: E0311 09:10:43.966544 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fb836d4-fee3-4607-b48b-c2ea1d889ec5" containerName="extract" Mar 11 09:10:43 crc kubenswrapper[4840]: I0311 09:10:43.966566 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fb836d4-fee3-4607-b48b-c2ea1d889ec5" containerName="extract" Mar 11 09:10:43 crc kubenswrapper[4840]: E0311 09:10:43.966583 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fb836d4-fee3-4607-b48b-c2ea1d889ec5" containerName="util" Mar 11 09:10:43 crc kubenswrapper[4840]: I0311 09:10:43.966591 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fb836d4-fee3-4607-b48b-c2ea1d889ec5" containerName="util" Mar 11 09:10:43 crc kubenswrapper[4840]: E0311 09:10:43.966599 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fb836d4-fee3-4607-b48b-c2ea1d889ec5" containerName="pull" Mar 11 09:10:43 crc kubenswrapper[4840]: I0311 09:10:43.966608 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fb836d4-fee3-4607-b48b-c2ea1d889ec5" containerName="pull" Mar 11 09:10:43 crc kubenswrapper[4840]: I0311 09:10:43.966748 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fb836d4-fee3-4607-b48b-c2ea1d889ec5" containerName="extract" Mar 11 09:10:43 crc kubenswrapper[4840]: I0311 09:10:43.967236 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-796d4cfff4-qzjs5" Mar 11 09:10:43 crc kubenswrapper[4840]: I0311 09:10:43.969520 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Mar 11 09:10:43 crc kubenswrapper[4840]: I0311 09:10:43.970952 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Mar 11 09:10:43 crc kubenswrapper[4840]: I0311 09:10:43.979063 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-rrpv9" Mar 11 09:10:43 crc kubenswrapper[4840]: I0311 09:10:43.979757 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-796d4cfff4-qzjs5"] Mar 11 09:10:44 crc kubenswrapper[4840]: I0311 09:10:44.111026 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsxwr\" (UniqueName: \"kubernetes.io/projected/e5d81b10-00d4-4dde-81cd-8adcdb671e0f-kube-api-access-rsxwr\") pod \"nmstate-operator-796d4cfff4-qzjs5\" (UID: \"e5d81b10-00d4-4dde-81cd-8adcdb671e0f\") " pod="openshift-nmstate/nmstate-operator-796d4cfff4-qzjs5" Mar 11 09:10:44 crc kubenswrapper[4840]: I0311 09:10:44.212605 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rsxwr\" (UniqueName: \"kubernetes.io/projected/e5d81b10-00d4-4dde-81cd-8adcdb671e0f-kube-api-access-rsxwr\") pod \"nmstate-operator-796d4cfff4-qzjs5\" (UID: \"e5d81b10-00d4-4dde-81cd-8adcdb671e0f\") " pod="openshift-nmstate/nmstate-operator-796d4cfff4-qzjs5" Mar 11 09:10:44 crc kubenswrapper[4840]: I0311 09:10:44.237671 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rsxwr\" (UniqueName: \"kubernetes.io/projected/e5d81b10-00d4-4dde-81cd-8adcdb671e0f-kube-api-access-rsxwr\") pod \"nmstate-operator-796d4cfff4-qzjs5\" (UID: 
\"e5d81b10-00d4-4dde-81cd-8adcdb671e0f\") " pod="openshift-nmstate/nmstate-operator-796d4cfff4-qzjs5" Mar 11 09:10:44 crc kubenswrapper[4840]: I0311 09:10:44.286814 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-796d4cfff4-qzjs5" Mar 11 09:10:44 crc kubenswrapper[4840]: I0311 09:10:44.521660 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-796d4cfff4-qzjs5"] Mar 11 09:10:45 crc kubenswrapper[4840]: I0311 09:10:45.124724 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-796d4cfff4-qzjs5" event={"ID":"e5d81b10-00d4-4dde-81cd-8adcdb671e0f","Type":"ContainerStarted","Data":"4a4e42c1a55cac6d3f345fd5015de0891988461c77f9173cd0d45ff7d6e03513"} Mar 11 09:10:45 crc kubenswrapper[4840]: I0311 09:10:45.169546 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hclbz" Mar 11 09:10:45 crc kubenswrapper[4840]: I0311 09:10:45.169606 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hclbz" Mar 11 09:10:46 crc kubenswrapper[4840]: I0311 09:10:46.215703 4840 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hclbz" podUID="bd1bb53a-5918-4117-a515-6d42e66bc7f1" containerName="registry-server" probeResult="failure" output=< Mar 11 09:10:46 crc kubenswrapper[4840]: timeout: failed to connect service ":50051" within 1s Mar 11 09:10:46 crc kubenswrapper[4840]: > Mar 11 09:10:47 crc kubenswrapper[4840]: I0311 09:10:47.140039 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-796d4cfff4-qzjs5" event={"ID":"e5d81b10-00d4-4dde-81cd-8adcdb671e0f","Type":"ContainerStarted","Data":"c433ef1e54dc6e38def8a2f080f5c77a66a2258321a362ffd2e5df6427a00b9d"} Mar 11 09:10:47 crc kubenswrapper[4840]: I0311 09:10:47.169842 4840 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-796d4cfff4-qzjs5" podStartSLOduration=1.730568723 podStartE2EDuration="4.169816832s" podCreationTimestamp="2026-03-11 09:10:43 +0000 UTC" firstStartedPulling="2026-03-11 09:10:44.535813277 +0000 UTC m=+843.201483092" lastFinishedPulling="2026-03-11 09:10:46.975061396 +0000 UTC m=+845.640731201" observedRunningTime="2026-03-11 09:10:47.164325624 +0000 UTC m=+845.829995459" watchObservedRunningTime="2026-03-11 09:10:47.169816832 +0000 UTC m=+845.835486647" Mar 11 09:10:53 crc kubenswrapper[4840]: I0311 09:10:53.519690 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-9b8c8685d-wb5bs"] Mar 11 09:10:53 crc kubenswrapper[4840]: I0311 09:10:53.520930 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-wb5bs" Mar 11 09:10:53 crc kubenswrapper[4840]: I0311 09:10:53.524590 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-44wmv" Mar 11 09:10:53 crc kubenswrapper[4840]: I0311 09:10:53.541199 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-9b8c8685d-wb5bs"] Mar 11 09:10:53 crc kubenswrapper[4840]: I0311 09:10:53.581240 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szvx8\" (UniqueName: \"kubernetes.io/projected/f9d3793f-35dd-4f5f-8d7c-d10c5cede8c6-kube-api-access-szvx8\") pod \"nmstate-metrics-9b8c8685d-wb5bs\" (UID: \"f9d3793f-35dd-4f5f-8d7c-d10c5cede8c6\") " pod="openshift-nmstate/nmstate-metrics-9b8c8685d-wb5bs" Mar 11 09:10:53 crc kubenswrapper[4840]: I0311 09:10:53.589906 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-5f558f5558-gjg8z"] Mar 11 09:10:53 crc kubenswrapper[4840]: I0311 09:10:53.591030 4840 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f558f5558-gjg8z" Mar 11 09:10:53 crc kubenswrapper[4840]: I0311 09:10:53.605913 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-9cwqf"] Mar 11 09:10:53 crc kubenswrapper[4840]: I0311 09:10:53.606909 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-9cwqf" Mar 11 09:10:53 crc kubenswrapper[4840]: I0311 09:10:53.615849 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Mar 11 09:10:53 crc kubenswrapper[4840]: I0311 09:10:53.635139 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f558f5558-gjg8z"] Mar 11 09:10:53 crc kubenswrapper[4840]: I0311 09:10:53.682868 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szvx8\" (UniqueName: \"kubernetes.io/projected/f9d3793f-35dd-4f5f-8d7c-d10c5cede8c6-kube-api-access-szvx8\") pod \"nmstate-metrics-9b8c8685d-wb5bs\" (UID: \"f9d3793f-35dd-4f5f-8d7c-d10c5cede8c6\") " pod="openshift-nmstate/nmstate-metrics-9b8c8685d-wb5bs" Mar 11 09:10:53 crc kubenswrapper[4840]: I0311 09:10:53.725790 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szvx8\" (UniqueName: \"kubernetes.io/projected/f9d3793f-35dd-4f5f-8d7c-d10c5cede8c6-kube-api-access-szvx8\") pod \"nmstate-metrics-9b8c8685d-wb5bs\" (UID: \"f9d3793f-35dd-4f5f-8d7c-d10c5cede8c6\") " pod="openshift-nmstate/nmstate-metrics-9b8c8685d-wb5bs" Mar 11 09:10:53 crc kubenswrapper[4840]: I0311 09:10:53.772126 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-86f58fcf4-9m2wq"] Mar 11 09:10:53 crc kubenswrapper[4840]: I0311 09:10:53.774487 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-9m2wq" Mar 11 09:10:53 crc kubenswrapper[4840]: I0311 09:10:53.781288 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-lfvxl" Mar 11 09:10:53 crc kubenswrapper[4840]: I0311 09:10:53.781676 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Mar 11 09:10:53 crc kubenswrapper[4840]: I0311 09:10:53.782100 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Mar 11 09:10:53 crc kubenswrapper[4840]: I0311 09:10:53.785970 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5j7gd\" (UniqueName: \"kubernetes.io/projected/bdc2a628-53c9-40c3-b438-9e8e059a93bc-kube-api-access-5j7gd\") pod \"nmstate-handler-9cwqf\" (UID: \"bdc2a628-53c9-40c3-b438-9e8e059a93bc\") " pod="openshift-nmstate/nmstate-handler-9cwqf" Mar 11 09:10:53 crc kubenswrapper[4840]: I0311 09:10:53.786064 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/efd9bdd7-8e04-433a-83d9-bdfcb094d74b-tls-key-pair\") pod \"nmstate-webhook-5f558f5558-gjg8z\" (UID: \"efd9bdd7-8e04-433a-83d9-bdfcb094d74b\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-gjg8z" Mar 11 09:10:53 crc kubenswrapper[4840]: I0311 09:10:53.786112 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/bdc2a628-53c9-40c3-b438-9e8e059a93bc-ovs-socket\") pod \"nmstate-handler-9cwqf\" (UID: \"bdc2a628-53c9-40c3-b438-9e8e059a93bc\") " pod="openshift-nmstate/nmstate-handler-9cwqf" Mar 11 09:10:53 crc kubenswrapper[4840]: I0311 09:10:53.787200 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/bdc2a628-53c9-40c3-b438-9e8e059a93bc-dbus-socket\") pod \"nmstate-handler-9cwqf\" (UID: \"bdc2a628-53c9-40c3-b438-9e8e059a93bc\") " pod="openshift-nmstate/nmstate-handler-9cwqf" Mar 11 09:10:53 crc kubenswrapper[4840]: I0311 09:10:53.787484 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srg8q\" (UniqueName: \"kubernetes.io/projected/efd9bdd7-8e04-433a-83d9-bdfcb094d74b-kube-api-access-srg8q\") pod \"nmstate-webhook-5f558f5558-gjg8z\" (UID: \"efd9bdd7-8e04-433a-83d9-bdfcb094d74b\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-gjg8z" Mar 11 09:10:53 crc kubenswrapper[4840]: I0311 09:10:53.787549 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/bdc2a628-53c9-40c3-b438-9e8e059a93bc-nmstate-lock\") pod \"nmstate-handler-9cwqf\" (UID: \"bdc2a628-53c9-40c3-b438-9e8e059a93bc\") " pod="openshift-nmstate/nmstate-handler-9cwqf" Mar 11 09:10:53 crc kubenswrapper[4840]: I0311 09:10:53.807832 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-86f58fcf4-9m2wq"] Mar 11 09:10:53 crc kubenswrapper[4840]: I0311 09:10:53.842086 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-wb5bs" Mar 11 09:10:53 crc kubenswrapper[4840]: I0311 09:10:53.890278 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/bdc2a628-53c9-40c3-b438-9e8e059a93bc-ovs-socket\") pod \"nmstate-handler-9cwqf\" (UID: \"bdc2a628-53c9-40c3-b438-9e8e059a93bc\") " pod="openshift-nmstate/nmstate-handler-9cwqf" Mar 11 09:10:53 crc kubenswrapper[4840]: I0311 09:10:53.890322 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/bdc2a628-53c9-40c3-b438-9e8e059a93bc-dbus-socket\") pod \"nmstate-handler-9cwqf\" (UID: \"bdc2a628-53c9-40c3-b438-9e8e059a93bc\") " pod="openshift-nmstate/nmstate-handler-9cwqf" Mar 11 09:10:53 crc kubenswrapper[4840]: I0311 09:10:53.890355 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/52d4c8cc-931e-4994-a9b3-b8b93bc67084-plugin-serving-cert\") pod \"nmstate-console-plugin-86f58fcf4-9m2wq\" (UID: \"52d4c8cc-931e-4994-a9b3-b8b93bc67084\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-9m2wq" Mar 11 09:10:53 crc kubenswrapper[4840]: I0311 09:10:53.890384 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srg8q\" (UniqueName: \"kubernetes.io/projected/efd9bdd7-8e04-433a-83d9-bdfcb094d74b-kube-api-access-srg8q\") pod \"nmstate-webhook-5f558f5558-gjg8z\" (UID: \"efd9bdd7-8e04-433a-83d9-bdfcb094d74b\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-gjg8z" Mar 11 09:10:53 crc kubenswrapper[4840]: I0311 09:10:53.890402 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/52d4c8cc-931e-4994-a9b3-b8b93bc67084-nginx-conf\") pod 
\"nmstate-console-plugin-86f58fcf4-9m2wq\" (UID: \"52d4c8cc-931e-4994-a9b3-b8b93bc67084\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-9m2wq" Mar 11 09:10:53 crc kubenswrapper[4840]: I0311 09:10:53.890424 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/bdc2a628-53c9-40c3-b438-9e8e059a93bc-nmstate-lock\") pod \"nmstate-handler-9cwqf\" (UID: \"bdc2a628-53c9-40c3-b438-9e8e059a93bc\") " pod="openshift-nmstate/nmstate-handler-9cwqf" Mar 11 09:10:53 crc kubenswrapper[4840]: I0311 09:10:53.890453 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5j7gd\" (UniqueName: \"kubernetes.io/projected/bdc2a628-53c9-40c3-b438-9e8e059a93bc-kube-api-access-5j7gd\") pod \"nmstate-handler-9cwqf\" (UID: \"bdc2a628-53c9-40c3-b438-9e8e059a93bc\") " pod="openshift-nmstate/nmstate-handler-9cwqf" Mar 11 09:10:53 crc kubenswrapper[4840]: I0311 09:10:53.890486 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7v52d\" (UniqueName: \"kubernetes.io/projected/52d4c8cc-931e-4994-a9b3-b8b93bc67084-kube-api-access-7v52d\") pod \"nmstate-console-plugin-86f58fcf4-9m2wq\" (UID: \"52d4c8cc-931e-4994-a9b3-b8b93bc67084\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-9m2wq" Mar 11 09:10:53 crc kubenswrapper[4840]: I0311 09:10:53.890514 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/efd9bdd7-8e04-433a-83d9-bdfcb094d74b-tls-key-pair\") pod \"nmstate-webhook-5f558f5558-gjg8z\" (UID: \"efd9bdd7-8e04-433a-83d9-bdfcb094d74b\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-gjg8z" Mar 11 09:10:53 crc kubenswrapper[4840]: I0311 09:10:53.891979 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: 
\"kubernetes.io/host-path/bdc2a628-53c9-40c3-b438-9e8e059a93bc-ovs-socket\") pod \"nmstate-handler-9cwqf\" (UID: \"bdc2a628-53c9-40c3-b438-9e8e059a93bc\") " pod="openshift-nmstate/nmstate-handler-9cwqf" Mar 11 09:10:53 crc kubenswrapper[4840]: I0311 09:10:53.892245 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/bdc2a628-53c9-40c3-b438-9e8e059a93bc-dbus-socket\") pod \"nmstate-handler-9cwqf\" (UID: \"bdc2a628-53c9-40c3-b438-9e8e059a93bc\") " pod="openshift-nmstate/nmstate-handler-9cwqf" Mar 11 09:10:53 crc kubenswrapper[4840]: I0311 09:10:53.892296 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/bdc2a628-53c9-40c3-b438-9e8e059a93bc-nmstate-lock\") pod \"nmstate-handler-9cwqf\" (UID: \"bdc2a628-53c9-40c3-b438-9e8e059a93bc\") " pod="openshift-nmstate/nmstate-handler-9cwqf" Mar 11 09:10:53 crc kubenswrapper[4840]: I0311 09:10:53.908804 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/efd9bdd7-8e04-433a-83d9-bdfcb094d74b-tls-key-pair\") pod \"nmstate-webhook-5f558f5558-gjg8z\" (UID: \"efd9bdd7-8e04-433a-83d9-bdfcb094d74b\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-gjg8z" Mar 11 09:10:53 crc kubenswrapper[4840]: I0311 09:10:53.926514 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srg8q\" (UniqueName: \"kubernetes.io/projected/efd9bdd7-8e04-433a-83d9-bdfcb094d74b-kube-api-access-srg8q\") pod \"nmstate-webhook-5f558f5558-gjg8z\" (UID: \"efd9bdd7-8e04-433a-83d9-bdfcb094d74b\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-gjg8z" Mar 11 09:10:53 crc kubenswrapper[4840]: I0311 09:10:53.937357 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5j7gd\" (UniqueName: 
\"kubernetes.io/projected/bdc2a628-53c9-40c3-b438-9e8e059a93bc-kube-api-access-5j7gd\") pod \"nmstate-handler-9cwqf\" (UID: \"bdc2a628-53c9-40c3-b438-9e8e059a93bc\") " pod="openshift-nmstate/nmstate-handler-9cwqf" Mar 11 09:10:53 crc kubenswrapper[4840]: I0311 09:10:53.988409 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-694c9dc6c5-ljqk2"] Mar 11 09:10:53 crc kubenswrapper[4840]: I0311 09:10:53.989428 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-694c9dc6c5-ljqk2" Mar 11 09:10:53 crc kubenswrapper[4840]: I0311 09:10:53.991795 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/047a0b61-b67f-45f2-9d2f-6ef8ac799d1a-console-config\") pod \"console-694c9dc6c5-ljqk2\" (UID: \"047a0b61-b67f-45f2-9d2f-6ef8ac799d1a\") " pod="openshift-console/console-694c9dc6c5-ljqk2" Mar 11 09:10:53 crc kubenswrapper[4840]: I0311 09:10:53.991850 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/047a0b61-b67f-45f2-9d2f-6ef8ac799d1a-trusted-ca-bundle\") pod \"console-694c9dc6c5-ljqk2\" (UID: \"047a0b61-b67f-45f2-9d2f-6ef8ac799d1a\") " pod="openshift-console/console-694c9dc6c5-ljqk2" Mar 11 09:10:53 crc kubenswrapper[4840]: I0311 09:10:53.991903 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7v52d\" (UniqueName: \"kubernetes.io/projected/52d4c8cc-931e-4994-a9b3-b8b93bc67084-kube-api-access-7v52d\") pod \"nmstate-console-plugin-86f58fcf4-9m2wq\" (UID: \"52d4c8cc-931e-4994-a9b3-b8b93bc67084\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-9m2wq" Mar 11 09:10:53 crc kubenswrapper[4840]: I0311 09:10:53.991951 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" 
(UniqueName: \"kubernetes.io/configmap/047a0b61-b67f-45f2-9d2f-6ef8ac799d1a-service-ca\") pod \"console-694c9dc6c5-ljqk2\" (UID: \"047a0b61-b67f-45f2-9d2f-6ef8ac799d1a\") " pod="openshift-console/console-694c9dc6c5-ljqk2" Mar 11 09:10:53 crc kubenswrapper[4840]: I0311 09:10:53.992005 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/047a0b61-b67f-45f2-9d2f-6ef8ac799d1a-oauth-serving-cert\") pod \"console-694c9dc6c5-ljqk2\" (UID: \"047a0b61-b67f-45f2-9d2f-6ef8ac799d1a\") " pod="openshift-console/console-694c9dc6c5-ljqk2" Mar 11 09:10:53 crc kubenswrapper[4840]: I0311 09:10:53.992047 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbjpz\" (UniqueName: \"kubernetes.io/projected/047a0b61-b67f-45f2-9d2f-6ef8ac799d1a-kube-api-access-tbjpz\") pod \"console-694c9dc6c5-ljqk2\" (UID: \"047a0b61-b67f-45f2-9d2f-6ef8ac799d1a\") " pod="openshift-console/console-694c9dc6c5-ljqk2" Mar 11 09:10:53 crc kubenswrapper[4840]: I0311 09:10:53.992093 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/52d4c8cc-931e-4994-a9b3-b8b93bc67084-plugin-serving-cert\") pod \"nmstate-console-plugin-86f58fcf4-9m2wq\" (UID: \"52d4c8cc-931e-4994-a9b3-b8b93bc67084\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-9m2wq" Mar 11 09:10:53 crc kubenswrapper[4840]: I0311 09:10:53.992116 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/047a0b61-b67f-45f2-9d2f-6ef8ac799d1a-console-oauth-config\") pod \"console-694c9dc6c5-ljqk2\" (UID: \"047a0b61-b67f-45f2-9d2f-6ef8ac799d1a\") " pod="openshift-console/console-694c9dc6c5-ljqk2" Mar 11 09:10:53 crc kubenswrapper[4840]: I0311 09:10:53.992133 4840 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/047a0b61-b67f-45f2-9d2f-6ef8ac799d1a-console-serving-cert\") pod \"console-694c9dc6c5-ljqk2\" (UID: \"047a0b61-b67f-45f2-9d2f-6ef8ac799d1a\") " pod="openshift-console/console-694c9dc6c5-ljqk2" Mar 11 09:10:53 crc kubenswrapper[4840]: I0311 09:10:53.992160 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/52d4c8cc-931e-4994-a9b3-b8b93bc67084-nginx-conf\") pod \"nmstate-console-plugin-86f58fcf4-9m2wq\" (UID: \"52d4c8cc-931e-4994-a9b3-b8b93bc67084\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-9m2wq" Mar 11 09:10:53 crc kubenswrapper[4840]: I0311 09:10:53.993252 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/52d4c8cc-931e-4994-a9b3-b8b93bc67084-nginx-conf\") pod \"nmstate-console-plugin-86f58fcf4-9m2wq\" (UID: \"52d4c8cc-931e-4994-a9b3-b8b93bc67084\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-9m2wq" Mar 11 09:10:53 crc kubenswrapper[4840]: I0311 09:10:53.999303 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/52d4c8cc-931e-4994-a9b3-b8b93bc67084-plugin-serving-cert\") pod \"nmstate-console-plugin-86f58fcf4-9m2wq\" (UID: \"52d4c8cc-931e-4994-a9b3-b8b93bc67084\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-9m2wq" Mar 11 09:10:54 crc kubenswrapper[4840]: I0311 09:10:54.018378 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7v52d\" (UniqueName: \"kubernetes.io/projected/52d4c8cc-931e-4994-a9b3-b8b93bc67084-kube-api-access-7v52d\") pod \"nmstate-console-plugin-86f58fcf4-9m2wq\" (UID: \"52d4c8cc-931e-4994-a9b3-b8b93bc67084\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-9m2wq" Mar 11 09:10:54 crc 
kubenswrapper[4840]: I0311 09:10:54.037856 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-694c9dc6c5-ljqk2"] Mar 11 09:10:54 crc kubenswrapper[4840]: I0311 09:10:54.094804 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbjpz\" (UniqueName: \"kubernetes.io/projected/047a0b61-b67f-45f2-9d2f-6ef8ac799d1a-kube-api-access-tbjpz\") pod \"console-694c9dc6c5-ljqk2\" (UID: \"047a0b61-b67f-45f2-9d2f-6ef8ac799d1a\") " pod="openshift-console/console-694c9dc6c5-ljqk2" Mar 11 09:10:54 crc kubenswrapper[4840]: I0311 09:10:54.094887 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/047a0b61-b67f-45f2-9d2f-6ef8ac799d1a-console-oauth-config\") pod \"console-694c9dc6c5-ljqk2\" (UID: \"047a0b61-b67f-45f2-9d2f-6ef8ac799d1a\") " pod="openshift-console/console-694c9dc6c5-ljqk2" Mar 11 09:10:54 crc kubenswrapper[4840]: I0311 09:10:54.094913 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/047a0b61-b67f-45f2-9d2f-6ef8ac799d1a-console-serving-cert\") pod \"console-694c9dc6c5-ljqk2\" (UID: \"047a0b61-b67f-45f2-9d2f-6ef8ac799d1a\") " pod="openshift-console/console-694c9dc6c5-ljqk2" Mar 11 09:10:54 crc kubenswrapper[4840]: I0311 09:10:54.095110 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/047a0b61-b67f-45f2-9d2f-6ef8ac799d1a-console-config\") pod \"console-694c9dc6c5-ljqk2\" (UID: \"047a0b61-b67f-45f2-9d2f-6ef8ac799d1a\") " pod="openshift-console/console-694c9dc6c5-ljqk2" Mar 11 09:10:54 crc kubenswrapper[4840]: I0311 09:10:54.098570 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/047a0b61-b67f-45f2-9d2f-6ef8ac799d1a-trusted-ca-bundle\") 
pod \"console-694c9dc6c5-ljqk2\" (UID: \"047a0b61-b67f-45f2-9d2f-6ef8ac799d1a\") " pod="openshift-console/console-694c9dc6c5-ljqk2" Mar 11 09:10:54 crc kubenswrapper[4840]: I0311 09:10:54.098850 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/047a0b61-b67f-45f2-9d2f-6ef8ac799d1a-console-config\") pod \"console-694c9dc6c5-ljqk2\" (UID: \"047a0b61-b67f-45f2-9d2f-6ef8ac799d1a\") " pod="openshift-console/console-694c9dc6c5-ljqk2" Mar 11 09:10:54 crc kubenswrapper[4840]: I0311 09:10:54.099159 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/047a0b61-b67f-45f2-9d2f-6ef8ac799d1a-service-ca\") pod \"console-694c9dc6c5-ljqk2\" (UID: \"047a0b61-b67f-45f2-9d2f-6ef8ac799d1a\") " pod="openshift-console/console-694c9dc6c5-ljqk2" Mar 11 09:10:54 crc kubenswrapper[4840]: I0311 09:10:54.099258 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/047a0b61-b67f-45f2-9d2f-6ef8ac799d1a-oauth-serving-cert\") pod \"console-694c9dc6c5-ljqk2\" (UID: \"047a0b61-b67f-45f2-9d2f-6ef8ac799d1a\") " pod="openshift-console/console-694c9dc6c5-ljqk2" Mar 11 09:10:54 crc kubenswrapper[4840]: I0311 09:10:54.100010 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/047a0b61-b67f-45f2-9d2f-6ef8ac799d1a-service-ca\") pod \"console-694c9dc6c5-ljqk2\" (UID: \"047a0b61-b67f-45f2-9d2f-6ef8ac799d1a\") " pod="openshift-console/console-694c9dc6c5-ljqk2" Mar 11 09:10:54 crc kubenswrapper[4840]: I0311 09:10:54.100217 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/047a0b61-b67f-45f2-9d2f-6ef8ac799d1a-oauth-serving-cert\") pod \"console-694c9dc6c5-ljqk2\" (UID: 
\"047a0b61-b67f-45f2-9d2f-6ef8ac799d1a\") " pod="openshift-console/console-694c9dc6c5-ljqk2" Mar 11 09:10:54 crc kubenswrapper[4840]: I0311 09:10:54.100234 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/047a0b61-b67f-45f2-9d2f-6ef8ac799d1a-trusted-ca-bundle\") pod \"console-694c9dc6c5-ljqk2\" (UID: \"047a0b61-b67f-45f2-9d2f-6ef8ac799d1a\") " pod="openshift-console/console-694c9dc6c5-ljqk2" Mar 11 09:10:54 crc kubenswrapper[4840]: I0311 09:10:54.106548 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-9m2wq" Mar 11 09:10:54 crc kubenswrapper[4840]: I0311 09:10:54.106557 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/047a0b61-b67f-45f2-9d2f-6ef8ac799d1a-console-oauth-config\") pod \"console-694c9dc6c5-ljqk2\" (UID: \"047a0b61-b67f-45f2-9d2f-6ef8ac799d1a\") " pod="openshift-console/console-694c9dc6c5-ljqk2" Mar 11 09:10:54 crc kubenswrapper[4840]: I0311 09:10:54.108883 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/047a0b61-b67f-45f2-9d2f-6ef8ac799d1a-console-serving-cert\") pod \"console-694c9dc6c5-ljqk2\" (UID: \"047a0b61-b67f-45f2-9d2f-6ef8ac799d1a\") " pod="openshift-console/console-694c9dc6c5-ljqk2" Mar 11 09:10:54 crc kubenswrapper[4840]: I0311 09:10:54.121542 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tbjpz\" (UniqueName: \"kubernetes.io/projected/047a0b61-b67f-45f2-9d2f-6ef8ac799d1a-kube-api-access-tbjpz\") pod \"console-694c9dc6c5-ljqk2\" (UID: \"047a0b61-b67f-45f2-9d2f-6ef8ac799d1a\") " pod="openshift-console/console-694c9dc6c5-ljqk2" Mar 11 09:10:54 crc kubenswrapper[4840]: I0311 09:10:54.162550 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-nmstate/nmstate-metrics-9b8c8685d-wb5bs"] Mar 11 09:10:54 crc kubenswrapper[4840]: W0311 09:10:54.167441 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf9d3793f_35dd_4f5f_8d7c_d10c5cede8c6.slice/crio-6348115a9e9de9a91579c4cca850551ead815807e18913288e0d0df3135483b9 WatchSource:0}: Error finding container 6348115a9e9de9a91579c4cca850551ead815807e18913288e0d0df3135483b9: Status 404 returned error can't find the container with id 6348115a9e9de9a91579c4cca850551ead815807e18913288e0d0df3135483b9 Mar 11 09:10:54 crc kubenswrapper[4840]: I0311 09:10:54.182928 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-wb5bs" event={"ID":"f9d3793f-35dd-4f5f-8d7c-d10c5cede8c6","Type":"ContainerStarted","Data":"6348115a9e9de9a91579c4cca850551ead815807e18913288e0d0df3135483b9"} Mar 11 09:10:54 crc kubenswrapper[4840]: I0311 09:10:54.209017 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f558f5558-gjg8z" Mar 11 09:10:54 crc kubenswrapper[4840]: I0311 09:10:54.227136 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-9cwqf" Mar 11 09:10:54 crc kubenswrapper[4840]: I0311 09:10:54.313145 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-694c9dc6c5-ljqk2" Mar 11 09:10:54 crc kubenswrapper[4840]: I0311 09:10:54.414401 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f558f5558-gjg8z"] Mar 11 09:10:54 crc kubenswrapper[4840]: W0311 09:10:54.421093 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podefd9bdd7_8e04_433a_83d9_bdfcb094d74b.slice/crio-92da0595140f349a3a73fbf47b07dd8b1c7d3a5a0e9fc85045f7ae91b86a6a57 WatchSource:0}: Error finding container 92da0595140f349a3a73fbf47b07dd8b1c7d3a5a0e9fc85045f7ae91b86a6a57: Status 404 returned error can't find the container with id 92da0595140f349a3a73fbf47b07dd8b1c7d3a5a0e9fc85045f7ae91b86a6a57 Mar 11 09:10:54 crc kubenswrapper[4840]: I0311 09:10:54.513749 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-694c9dc6c5-ljqk2"] Mar 11 09:10:54 crc kubenswrapper[4840]: W0311 09:10:54.519239 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod047a0b61_b67f_45f2_9d2f_6ef8ac799d1a.slice/crio-49b82869ac661fa8a809be6fc024598c5cc57b206c297968cb044b1b6ed772f6 WatchSource:0}: Error finding container 49b82869ac661fa8a809be6fc024598c5cc57b206c297968cb044b1b6ed772f6: Status 404 returned error can't find the container with id 49b82869ac661fa8a809be6fc024598c5cc57b206c297968cb044b1b6ed772f6 Mar 11 09:10:54 crc kubenswrapper[4840]: I0311 09:10:54.539807 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-86f58fcf4-9m2wq"] Mar 11 09:10:54 crc kubenswrapper[4840]: W0311 09:10:54.542868 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod52d4c8cc_931e_4994_a9b3_b8b93bc67084.slice/crio-22686a0d9f013f7a5fb64e92793b4cf5ebcf3047a3aba02c4391d4794a97be5d 
WatchSource:0}: Error finding container 22686a0d9f013f7a5fb64e92793b4cf5ebcf3047a3aba02c4391d4794a97be5d: Status 404 returned error can't find the container with id 22686a0d9f013f7a5fb64e92793b4cf5ebcf3047a3aba02c4391d4794a97be5d Mar 11 09:10:55 crc kubenswrapper[4840]: I0311 09:10:55.192754 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f558f5558-gjg8z" event={"ID":"efd9bdd7-8e04-433a-83d9-bdfcb094d74b","Type":"ContainerStarted","Data":"92da0595140f349a3a73fbf47b07dd8b1c7d3a5a0e9fc85045f7ae91b86a6a57"} Mar 11 09:10:55 crc kubenswrapper[4840]: I0311 09:10:55.194215 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-9m2wq" event={"ID":"52d4c8cc-931e-4994-a9b3-b8b93bc67084","Type":"ContainerStarted","Data":"22686a0d9f013f7a5fb64e92793b4cf5ebcf3047a3aba02c4391d4794a97be5d"} Mar 11 09:10:55 crc kubenswrapper[4840]: I0311 09:10:55.196307 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-694c9dc6c5-ljqk2" event={"ID":"047a0b61-b67f-45f2-9d2f-6ef8ac799d1a","Type":"ContainerStarted","Data":"9ac6fa9bedc122d367bce564b26b2a96dcc257e35b567f0d36fe4750e189da97"} Mar 11 09:10:55 crc kubenswrapper[4840]: I0311 09:10:55.196334 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-694c9dc6c5-ljqk2" event={"ID":"047a0b61-b67f-45f2-9d2f-6ef8ac799d1a","Type":"ContainerStarted","Data":"49b82869ac661fa8a809be6fc024598c5cc57b206c297968cb044b1b6ed772f6"} Mar 11 09:10:55 crc kubenswrapper[4840]: I0311 09:10:55.198648 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-9cwqf" event={"ID":"bdc2a628-53c9-40c3-b438-9e8e059a93bc","Type":"ContainerStarted","Data":"fc68ba351ea36988e953984f01af3573bf64d8b85156a8df0e940d411c359318"} Mar 11 09:10:55 crc kubenswrapper[4840]: I0311 09:10:55.232070 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-operators-hclbz" Mar 11 09:10:55 crc kubenswrapper[4840]: I0311 09:10:55.255672 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-694c9dc6c5-ljqk2" podStartSLOduration=2.25564523 podStartE2EDuration="2.25564523s" podCreationTimestamp="2026-03-11 09:10:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:10:55.227760959 +0000 UTC m=+853.893430824" watchObservedRunningTime="2026-03-11 09:10:55.25564523 +0000 UTC m=+853.921315045" Mar 11 09:10:55 crc kubenswrapper[4840]: I0311 09:10:55.272635 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hclbz" Mar 11 09:10:55 crc kubenswrapper[4840]: I0311 09:10:55.470026 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hclbz"] Mar 11 09:10:57 crc kubenswrapper[4840]: I0311 09:10:57.213675 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-hclbz" podUID="bd1bb53a-5918-4117-a515-6d42e66bc7f1" containerName="registry-server" containerID="cri-o://5aa065b07f5bdcb8961678078d3cc804f31fd937c74f42fc771357c03d15802f" gracePeriod=2 Mar 11 09:10:57 crc kubenswrapper[4840]: I0311 09:10:57.447265 4840 patch_prober.go:28] interesting pod/machine-config-daemon-brtht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 11 09:10:57 crc kubenswrapper[4840]: I0311 09:10:57.447741 4840 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" containerName="machine-config-daemon" probeResult="failure" 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 11 09:10:57 crc kubenswrapper[4840]: I0311 09:10:57.447793 4840 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-brtht" Mar 11 09:10:57 crc kubenswrapper[4840]: I0311 09:10:57.448533 4840 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5d930828edaf48a40ab8d839e51f7f6d23db61f827df30c4134bd6083d7cbb22"} pod="openshift-machine-config-operator/machine-config-daemon-brtht" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 11 09:10:57 crc kubenswrapper[4840]: I0311 09:10:57.448602 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" containerName="machine-config-daemon" containerID="cri-o://5d930828edaf48a40ab8d839e51f7f6d23db61f827df30c4134bd6083d7cbb22" gracePeriod=600 Mar 11 09:10:57 crc kubenswrapper[4840]: I0311 09:10:57.608755 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hclbz" Mar 11 09:10:57 crc kubenswrapper[4840]: I0311 09:10:57.754049 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd1bb53a-5918-4117-a515-6d42e66bc7f1-utilities\") pod \"bd1bb53a-5918-4117-a515-6d42e66bc7f1\" (UID: \"bd1bb53a-5918-4117-a515-6d42e66bc7f1\") " Mar 11 09:10:57 crc kubenswrapper[4840]: I0311 09:10:57.754223 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd1bb53a-5918-4117-a515-6d42e66bc7f1-catalog-content\") pod \"bd1bb53a-5918-4117-a515-6d42e66bc7f1\" (UID: \"bd1bb53a-5918-4117-a515-6d42e66bc7f1\") " Mar 11 09:10:57 crc kubenswrapper[4840]: I0311 09:10:57.754258 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hxlxm\" (UniqueName: \"kubernetes.io/projected/bd1bb53a-5918-4117-a515-6d42e66bc7f1-kube-api-access-hxlxm\") pod \"bd1bb53a-5918-4117-a515-6d42e66bc7f1\" (UID: \"bd1bb53a-5918-4117-a515-6d42e66bc7f1\") " Mar 11 09:10:57 crc kubenswrapper[4840]: I0311 09:10:57.755184 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd1bb53a-5918-4117-a515-6d42e66bc7f1-utilities" (OuterVolumeSpecName: "utilities") pod "bd1bb53a-5918-4117-a515-6d42e66bc7f1" (UID: "bd1bb53a-5918-4117-a515-6d42e66bc7f1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:10:57 crc kubenswrapper[4840]: I0311 09:10:57.763261 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd1bb53a-5918-4117-a515-6d42e66bc7f1-kube-api-access-hxlxm" (OuterVolumeSpecName: "kube-api-access-hxlxm") pod "bd1bb53a-5918-4117-a515-6d42e66bc7f1" (UID: "bd1bb53a-5918-4117-a515-6d42e66bc7f1"). InnerVolumeSpecName "kube-api-access-hxlxm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:10:57 crc kubenswrapper[4840]: I0311 09:10:57.857325 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hxlxm\" (UniqueName: \"kubernetes.io/projected/bd1bb53a-5918-4117-a515-6d42e66bc7f1-kube-api-access-hxlxm\") on node \"crc\" DevicePath \"\"" Mar 11 09:10:57 crc kubenswrapper[4840]: I0311 09:10:57.857393 4840 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd1bb53a-5918-4117-a515-6d42e66bc7f1-utilities\") on node \"crc\" DevicePath \"\"" Mar 11 09:10:57 crc kubenswrapper[4840]: I0311 09:10:57.906168 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd1bb53a-5918-4117-a515-6d42e66bc7f1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bd1bb53a-5918-4117-a515-6d42e66bc7f1" (UID: "bd1bb53a-5918-4117-a515-6d42e66bc7f1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:10:57 crc kubenswrapper[4840]: I0311 09:10:57.959379 4840 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd1bb53a-5918-4117-a515-6d42e66bc7f1-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 11 09:10:58 crc kubenswrapper[4840]: I0311 09:10:58.225816 4840 generic.go:334] "Generic (PLEG): container finished" podID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" containerID="5d930828edaf48a40ab8d839e51f7f6d23db61f827df30c4134bd6083d7cbb22" exitCode=0 Mar 11 09:10:58 crc kubenswrapper[4840]: I0311 09:10:58.225910 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-brtht" event={"ID":"8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d","Type":"ContainerDied","Data":"5d930828edaf48a40ab8d839e51f7f6d23db61f827df30c4134bd6083d7cbb22"} Mar 11 09:10:58 crc kubenswrapper[4840]: I0311 09:10:58.225985 4840 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-brtht" event={"ID":"8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d","Type":"ContainerStarted","Data":"f08eeaa4fe8ff05d2389b41d94a12307326129a99c2a57e3c9c13f2ab4a219eb"} Mar 11 09:10:58 crc kubenswrapper[4840]: I0311 09:10:58.226031 4840 scope.go:117] "RemoveContainer" containerID="29a4ee15afa23fdb14b92f2a19d6918007b0fac2dd41b03b6d3af26018e2aa34" Mar 11 09:10:58 crc kubenswrapper[4840]: I0311 09:10:58.228344 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-9m2wq" event={"ID":"52d4c8cc-931e-4994-a9b3-b8b93bc67084","Type":"ContainerStarted","Data":"6f5d7c67c7069817e58e9b93b28800ac5e2029cb76254f001d1a600676311cb6"} Mar 11 09:10:58 crc kubenswrapper[4840]: I0311 09:10:58.237985 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-9cwqf" event={"ID":"bdc2a628-53c9-40c3-b438-9e8e059a93bc","Type":"ContainerStarted","Data":"3259879ad0c8a273ff3e858d6ca73d8bba8da28831a7f86c56815741c931f818"} Mar 11 09:10:58 crc kubenswrapper[4840]: I0311 09:10:58.238183 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-9cwqf" Mar 11 09:10:58 crc kubenswrapper[4840]: I0311 09:10:58.241442 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-wb5bs" event={"ID":"f9d3793f-35dd-4f5f-8d7c-d10c5cede8c6","Type":"ContainerStarted","Data":"7dd771d6204ac463a0b472efd69be5415fb01d22cbfab1bc276403c02bce1a88"} Mar 11 09:10:58 crc kubenswrapper[4840]: I0311 09:10:58.243873 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f558f5558-gjg8z" event={"ID":"efd9bdd7-8e04-433a-83d9-bdfcb094d74b","Type":"ContainerStarted","Data":"70d0dff69c086f0965ce8c3b6ad873be8b641dd92e90193f21ba6638b019f351"} Mar 11 09:10:58 crc kubenswrapper[4840]: I0311 09:10:58.244741 4840 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-5f558f5558-gjg8z" Mar 11 09:10:58 crc kubenswrapper[4840]: I0311 09:10:58.249853 4840 generic.go:334] "Generic (PLEG): container finished" podID="bd1bb53a-5918-4117-a515-6d42e66bc7f1" containerID="5aa065b07f5bdcb8961678078d3cc804f31fd937c74f42fc771357c03d15802f" exitCode=0 Mar 11 09:10:58 crc kubenswrapper[4840]: I0311 09:10:58.250008 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hclbz" event={"ID":"bd1bb53a-5918-4117-a515-6d42e66bc7f1","Type":"ContainerDied","Data":"5aa065b07f5bdcb8961678078d3cc804f31fd937c74f42fc771357c03d15802f"} Mar 11 09:10:58 crc kubenswrapper[4840]: I0311 09:10:58.250048 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hclbz" event={"ID":"bd1bb53a-5918-4117-a515-6d42e66bc7f1","Type":"ContainerDied","Data":"3413b08acd7ee8e4742df46288c2df806525c4c3b59d6463d7998ddfae939b90"} Mar 11 09:10:58 crc kubenswrapper[4840]: I0311 09:10:58.250154 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hclbz" Mar 11 09:10:58 crc kubenswrapper[4840]: I0311 09:10:58.283954 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-5f558f5558-gjg8z" podStartSLOduration=2.4315054099999998 podStartE2EDuration="5.283922057s" podCreationTimestamp="2026-03-11 09:10:53 +0000 UTC" firstStartedPulling="2026-03-11 09:10:54.424800413 +0000 UTC m=+853.090470228" lastFinishedPulling="2026-03-11 09:10:57.27721706 +0000 UTC m=+855.942886875" observedRunningTime="2026-03-11 09:10:58.282505262 +0000 UTC m=+856.948175117" watchObservedRunningTime="2026-03-11 09:10:58.283922057 +0000 UTC m=+856.949591882" Mar 11 09:10:58 crc kubenswrapper[4840]: I0311 09:10:58.284026 4840 scope.go:117] "RemoveContainer" containerID="5aa065b07f5bdcb8961678078d3cc804f31fd937c74f42fc771357c03d15802f" Mar 11 09:10:58 crc kubenswrapper[4840]: I0311 09:10:58.315173 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-9cwqf" podStartSLOduration=2.309351501 podStartE2EDuration="5.315123052s" podCreationTimestamp="2026-03-11 09:10:53 +0000 UTC" firstStartedPulling="2026-03-11 09:10:54.260207996 +0000 UTC m=+852.925877811" lastFinishedPulling="2026-03-11 09:10:57.265979547 +0000 UTC m=+855.931649362" observedRunningTime="2026-03-11 09:10:58.31188868 +0000 UTC m=+856.977558495" watchObservedRunningTime="2026-03-11 09:10:58.315123052 +0000 UTC m=+856.980792877" Mar 11 09:10:58 crc kubenswrapper[4840]: I0311 09:10:58.334120 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-9m2wq" podStartSLOduration=2.616534093 podStartE2EDuration="5.334092919s" podCreationTimestamp="2026-03-11 09:10:53 +0000 UTC" firstStartedPulling="2026-03-11 09:10:54.546828491 +0000 UTC m=+853.212498306" lastFinishedPulling="2026-03-11 09:10:57.264387317 +0000 UTC m=+855.930057132" 
observedRunningTime="2026-03-11 09:10:58.331428182 +0000 UTC m=+856.997098007" watchObservedRunningTime="2026-03-11 09:10:58.334092919 +0000 UTC m=+856.999762734" Mar 11 09:10:58 crc kubenswrapper[4840]: I0311 09:10:58.337767 4840 scope.go:117] "RemoveContainer" containerID="709b43a2082d709691b8bf214faba8454d497130664e7bc3d75b80871addaadf" Mar 11 09:10:58 crc kubenswrapper[4840]: I0311 09:10:58.364884 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hclbz"] Mar 11 09:10:58 crc kubenswrapper[4840]: I0311 09:10:58.372518 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-hclbz"] Mar 11 09:10:58 crc kubenswrapper[4840]: I0311 09:10:58.376676 4840 scope.go:117] "RemoveContainer" containerID="86c4f2e417a9361cc7b8e70669883682dae4d4199b90f2944bbde71303ea39f7" Mar 11 09:10:58 crc kubenswrapper[4840]: I0311 09:10:58.394097 4840 scope.go:117] "RemoveContainer" containerID="5aa065b07f5bdcb8961678078d3cc804f31fd937c74f42fc771357c03d15802f" Mar 11 09:10:58 crc kubenswrapper[4840]: E0311 09:10:58.395082 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5aa065b07f5bdcb8961678078d3cc804f31fd937c74f42fc771357c03d15802f\": container with ID starting with 5aa065b07f5bdcb8961678078d3cc804f31fd937c74f42fc771357c03d15802f not found: ID does not exist" containerID="5aa065b07f5bdcb8961678078d3cc804f31fd937c74f42fc771357c03d15802f" Mar 11 09:10:58 crc kubenswrapper[4840]: I0311 09:10:58.395126 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5aa065b07f5bdcb8961678078d3cc804f31fd937c74f42fc771357c03d15802f"} err="failed to get container status \"5aa065b07f5bdcb8961678078d3cc804f31fd937c74f42fc771357c03d15802f\": rpc error: code = NotFound desc = could not find container \"5aa065b07f5bdcb8961678078d3cc804f31fd937c74f42fc771357c03d15802f\": container with ID starting 
with 5aa065b07f5bdcb8961678078d3cc804f31fd937c74f42fc771357c03d15802f not found: ID does not exist" Mar 11 09:10:58 crc kubenswrapper[4840]: I0311 09:10:58.395150 4840 scope.go:117] "RemoveContainer" containerID="709b43a2082d709691b8bf214faba8454d497130664e7bc3d75b80871addaadf" Mar 11 09:10:58 crc kubenswrapper[4840]: E0311 09:10:58.395403 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"709b43a2082d709691b8bf214faba8454d497130664e7bc3d75b80871addaadf\": container with ID starting with 709b43a2082d709691b8bf214faba8454d497130664e7bc3d75b80871addaadf not found: ID does not exist" containerID="709b43a2082d709691b8bf214faba8454d497130664e7bc3d75b80871addaadf" Mar 11 09:10:58 crc kubenswrapper[4840]: I0311 09:10:58.395428 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"709b43a2082d709691b8bf214faba8454d497130664e7bc3d75b80871addaadf"} err="failed to get container status \"709b43a2082d709691b8bf214faba8454d497130664e7bc3d75b80871addaadf\": rpc error: code = NotFound desc = could not find container \"709b43a2082d709691b8bf214faba8454d497130664e7bc3d75b80871addaadf\": container with ID starting with 709b43a2082d709691b8bf214faba8454d497130664e7bc3d75b80871addaadf not found: ID does not exist" Mar 11 09:10:58 crc kubenswrapper[4840]: I0311 09:10:58.395446 4840 scope.go:117] "RemoveContainer" containerID="86c4f2e417a9361cc7b8e70669883682dae4d4199b90f2944bbde71303ea39f7" Mar 11 09:10:58 crc kubenswrapper[4840]: E0311 09:10:58.395744 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86c4f2e417a9361cc7b8e70669883682dae4d4199b90f2944bbde71303ea39f7\": container with ID starting with 86c4f2e417a9361cc7b8e70669883682dae4d4199b90f2944bbde71303ea39f7 not found: ID does not exist" containerID="86c4f2e417a9361cc7b8e70669883682dae4d4199b90f2944bbde71303ea39f7" Mar 11 09:10:58 
crc kubenswrapper[4840]: I0311 09:10:58.395773 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86c4f2e417a9361cc7b8e70669883682dae4d4199b90f2944bbde71303ea39f7"} err="failed to get container status \"86c4f2e417a9361cc7b8e70669883682dae4d4199b90f2944bbde71303ea39f7\": rpc error: code = NotFound desc = could not find container \"86c4f2e417a9361cc7b8e70669883682dae4d4199b90f2944bbde71303ea39f7\": container with ID starting with 86c4f2e417a9361cc7b8e70669883682dae4d4199b90f2944bbde71303ea39f7 not found: ID does not exist"
Mar 11 09:11:00 crc kubenswrapper[4840]: I0311 09:11:00.320146 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd1bb53a-5918-4117-a515-6d42e66bc7f1" path="/var/lib/kubelet/pods/bd1bb53a-5918-4117-a515-6d42e66bc7f1/volumes"
Mar 11 09:11:01 crc kubenswrapper[4840]: I0311 09:11:01.325628 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-wb5bs" event={"ID":"f9d3793f-35dd-4f5f-8d7c-d10c5cede8c6","Type":"ContainerStarted","Data":"27fc10a23907b2bac316db25398898a2297f681c9777a52ac9d1378327bb00eb"}
Mar 11 09:11:01 crc kubenswrapper[4840]: I0311 09:11:01.348052 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-wb5bs" podStartSLOduration=2.132598137 podStartE2EDuration="8.348027195s" podCreationTimestamp="2026-03-11 09:10:53 +0000 UTC" firstStartedPulling="2026-03-11 09:10:54.170745527 +0000 UTC m=+852.836415342" lastFinishedPulling="2026-03-11 09:11:00.386174575 +0000 UTC m=+859.051844400" observedRunningTime="2026-03-11 09:11:01.347467551 +0000 UTC m=+860.013137406" watchObservedRunningTime="2026-03-11 09:11:01.348027195 +0000 UTC m=+860.013697010"
Mar 11 09:11:04 crc kubenswrapper[4840]: I0311 09:11:04.261768 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-9cwqf"
Mar 11 09:11:04 crc kubenswrapper[4840]: I0311 09:11:04.314216 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-694c9dc6c5-ljqk2"
Mar 11 09:11:04 crc kubenswrapper[4840]: I0311 09:11:04.314279 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-694c9dc6c5-ljqk2"
Mar 11 09:11:04 crc kubenswrapper[4840]: I0311 09:11:04.320720 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-694c9dc6c5-ljqk2"
Mar 11 09:11:04 crc kubenswrapper[4840]: I0311 09:11:04.363595 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-694c9dc6c5-ljqk2"
Mar 11 09:11:04 crc kubenswrapper[4840]: I0311 09:11:04.435725 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-xkq7s"]
Mar 11 09:11:14 crc kubenswrapper[4840]: I0311 09:11:14.219315 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-5f558f5558-gjg8z"
Mar 11 09:11:26 crc kubenswrapper[4840]: I0311 09:11:26.979184 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1d2bvf"]
Mar 11 09:11:26 crc kubenswrapper[4840]: E0311 09:11:26.980035 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd1bb53a-5918-4117-a515-6d42e66bc7f1" containerName="extract-content"
Mar 11 09:11:26 crc kubenswrapper[4840]: I0311 09:11:26.980049 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd1bb53a-5918-4117-a515-6d42e66bc7f1" containerName="extract-content"
Mar 11 09:11:26 crc kubenswrapper[4840]: E0311 09:11:26.980063 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd1bb53a-5918-4117-a515-6d42e66bc7f1" containerName="extract-utilities"
Mar 11 09:11:26 crc kubenswrapper[4840]: I0311 09:11:26.980071 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd1bb53a-5918-4117-a515-6d42e66bc7f1" containerName="extract-utilities"
Mar 11 09:11:26 crc kubenswrapper[4840]: E0311 09:11:26.980081 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd1bb53a-5918-4117-a515-6d42e66bc7f1" containerName="registry-server"
Mar 11 09:11:26 crc kubenswrapper[4840]: I0311 09:11:26.980087 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd1bb53a-5918-4117-a515-6d42e66bc7f1" containerName="registry-server"
Mar 11 09:11:26 crc kubenswrapper[4840]: I0311 09:11:26.980184 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd1bb53a-5918-4117-a515-6d42e66bc7f1" containerName="registry-server"
Mar 11 09:11:26 crc kubenswrapper[4840]: I0311 09:11:26.980994 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1d2bvf"
Mar 11 09:11:26 crc kubenswrapper[4840]: I0311 09:11:26.983017 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Mar 11 09:11:26 crc kubenswrapper[4840]: I0311 09:11:26.989192 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1d2bvf"]
Mar 11 09:11:27 crc kubenswrapper[4840]: I0311 09:11:27.063644 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/31a664e3-80ea-4e78-a87f-3257129bc45a-bundle\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1d2bvf\" (UID: \"31a664e3-80ea-4e78-a87f-3257129bc45a\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1d2bvf"
Mar 11 09:11:27 crc kubenswrapper[4840]: I0311 09:11:27.063732 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/31a664e3-80ea-4e78-a87f-3257129bc45a-util\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1d2bvf\" (UID: \"31a664e3-80ea-4e78-a87f-3257129bc45a\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1d2bvf"
Mar 11 09:11:27 crc kubenswrapper[4840]: I0311 09:11:27.063792 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxq86\" (UniqueName: \"kubernetes.io/projected/31a664e3-80ea-4e78-a87f-3257129bc45a-kube-api-access-qxq86\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1d2bvf\" (UID: \"31a664e3-80ea-4e78-a87f-3257129bc45a\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1d2bvf"
Mar 11 09:11:27 crc kubenswrapper[4840]: I0311 09:11:27.164767 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxq86\" (UniqueName: \"kubernetes.io/projected/31a664e3-80ea-4e78-a87f-3257129bc45a-kube-api-access-qxq86\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1d2bvf\" (UID: \"31a664e3-80ea-4e78-a87f-3257129bc45a\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1d2bvf"
Mar 11 09:11:27 crc kubenswrapper[4840]: I0311 09:11:27.164859 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/31a664e3-80ea-4e78-a87f-3257129bc45a-bundle\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1d2bvf\" (UID: \"31a664e3-80ea-4e78-a87f-3257129bc45a\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1d2bvf"
Mar 11 09:11:27 crc kubenswrapper[4840]: I0311 09:11:27.164915 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/31a664e3-80ea-4e78-a87f-3257129bc45a-util\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1d2bvf\" (UID: \"31a664e3-80ea-4e78-a87f-3257129bc45a\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1d2bvf"
Mar 11 09:11:27 crc kubenswrapper[4840]: I0311 09:11:27.165646 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/31a664e3-80ea-4e78-a87f-3257129bc45a-bundle\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1d2bvf\" (UID: \"31a664e3-80ea-4e78-a87f-3257129bc45a\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1d2bvf"
Mar 11 09:11:27 crc kubenswrapper[4840]: I0311 09:11:27.165703 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/31a664e3-80ea-4e78-a87f-3257129bc45a-util\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1d2bvf\" (UID: \"31a664e3-80ea-4e78-a87f-3257129bc45a\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1d2bvf"
Mar 11 09:11:27 crc kubenswrapper[4840]: I0311 09:11:27.184599 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxq86\" (UniqueName: \"kubernetes.io/projected/31a664e3-80ea-4e78-a87f-3257129bc45a-kube-api-access-qxq86\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1d2bvf\" (UID: \"31a664e3-80ea-4e78-a87f-3257129bc45a\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1d2bvf"
Mar 11 09:11:27 crc kubenswrapper[4840]: I0311 09:11:27.299145 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1d2bvf"
Mar 11 09:11:27 crc kubenswrapper[4840]: I0311 09:11:27.547252 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1d2bvf"]
Mar 11 09:11:28 crc kubenswrapper[4840]: I0311 09:11:28.515282 4840 generic.go:334] "Generic (PLEG): container finished" podID="31a664e3-80ea-4e78-a87f-3257129bc45a" containerID="c394e154a77e244411c795e2974fc4ee2a6d44703a42d50a58224f220950936e" exitCode=0
Mar 11 09:11:28 crc kubenswrapper[4840]: I0311 09:11:28.515348 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1d2bvf" event={"ID":"31a664e3-80ea-4e78-a87f-3257129bc45a","Type":"ContainerDied","Data":"c394e154a77e244411c795e2974fc4ee2a6d44703a42d50a58224f220950936e"}
Mar 11 09:11:28 crc kubenswrapper[4840]: I0311 09:11:28.515804 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1d2bvf" event={"ID":"31a664e3-80ea-4e78-a87f-3257129bc45a","Type":"ContainerStarted","Data":"29393f43dc70f6e5aeef6280766be1ac8399a5c20ccfb82314e0e7812c93dd68"}
Mar 11 09:11:28 crc kubenswrapper[4840]: I0311 09:11:28.524595 4840 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Mar 11 09:11:29 crc kubenswrapper[4840]: I0311 09:11:29.482393 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-xkq7s" podUID="5dc5ef77-d18a-4474-a523-473f27166095" containerName="console" containerID="cri-o://df68a8e392c8a43765a966fb9a267c99c8a89529df63a671b25f0f11998125d8" gracePeriod=15
Mar 11 09:11:29 crc kubenswrapper[4840]: I0311 09:11:29.844523 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-xkq7s_5dc5ef77-d18a-4474-a523-473f27166095/console/0.log"
Mar 11 09:11:29 crc kubenswrapper[4840]: I0311 09:11:29.845420 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-xkq7s"
Mar 11 09:11:29 crc kubenswrapper[4840]: I0311 09:11:29.915260 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5dc5ef77-d18a-4474-a523-473f27166095-console-serving-cert\") pod \"5dc5ef77-d18a-4474-a523-473f27166095\" (UID: \"5dc5ef77-d18a-4474-a523-473f27166095\") "
Mar 11 09:11:29 crc kubenswrapper[4840]: I0311 09:11:29.915687 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zcbfq\" (UniqueName: \"kubernetes.io/projected/5dc5ef77-d18a-4474-a523-473f27166095-kube-api-access-zcbfq\") pod \"5dc5ef77-d18a-4474-a523-473f27166095\" (UID: \"5dc5ef77-d18a-4474-a523-473f27166095\") "
Mar 11 09:11:29 crc kubenswrapper[4840]: I0311 09:11:29.915805 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5dc5ef77-d18a-4474-a523-473f27166095-trusted-ca-bundle\") pod \"5dc5ef77-d18a-4474-a523-473f27166095\" (UID: \"5dc5ef77-d18a-4474-a523-473f27166095\") "
Mar 11 09:11:29 crc kubenswrapper[4840]: I0311 09:11:29.915893 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5dc5ef77-d18a-4474-a523-473f27166095-service-ca\") pod \"5dc5ef77-d18a-4474-a523-473f27166095\" (UID: \"5dc5ef77-d18a-4474-a523-473f27166095\") "
Mar 11 09:11:29 crc kubenswrapper[4840]: I0311 09:11:29.915984 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5dc5ef77-d18a-4474-a523-473f27166095-oauth-serving-cert\") pod \"5dc5ef77-d18a-4474-a523-473f27166095\" (UID: \"5dc5ef77-d18a-4474-a523-473f27166095\") "
Mar 11 09:11:29 crc kubenswrapper[4840]: I0311 09:11:29.916092 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5dc5ef77-d18a-4474-a523-473f27166095-console-config\") pod \"5dc5ef77-d18a-4474-a523-473f27166095\" (UID: \"5dc5ef77-d18a-4474-a523-473f27166095\") "
Mar 11 09:11:29 crc kubenswrapper[4840]: I0311 09:11:29.916199 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5dc5ef77-d18a-4474-a523-473f27166095-console-oauth-config\") pod \"5dc5ef77-d18a-4474-a523-473f27166095\" (UID: \"5dc5ef77-d18a-4474-a523-473f27166095\") "
Mar 11 09:11:29 crc kubenswrapper[4840]: I0311 09:11:29.917545 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5dc5ef77-d18a-4474-a523-473f27166095-service-ca" (OuterVolumeSpecName: "service-ca") pod "5dc5ef77-d18a-4474-a523-473f27166095" (UID: "5dc5ef77-d18a-4474-a523-473f27166095"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 11 09:11:29 crc kubenswrapper[4840]: I0311 09:11:29.917764 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5dc5ef77-d18a-4474-a523-473f27166095-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "5dc5ef77-d18a-4474-a523-473f27166095" (UID: "5dc5ef77-d18a-4474-a523-473f27166095"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 11 09:11:29 crc kubenswrapper[4840]: I0311 09:11:29.917963 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5dc5ef77-d18a-4474-a523-473f27166095-console-config" (OuterVolumeSpecName: "console-config") pod "5dc5ef77-d18a-4474-a523-473f27166095" (UID: "5dc5ef77-d18a-4474-a523-473f27166095"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 11 09:11:29 crc kubenswrapper[4840]: I0311 09:11:29.918039 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5dc5ef77-d18a-4474-a523-473f27166095-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "5dc5ef77-d18a-4474-a523-473f27166095" (UID: "5dc5ef77-d18a-4474-a523-473f27166095"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 11 09:11:29 crc kubenswrapper[4840]: I0311 09:11:29.924517 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5dc5ef77-d18a-4474-a523-473f27166095-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "5dc5ef77-d18a-4474-a523-473f27166095" (UID: "5dc5ef77-d18a-4474-a523-473f27166095"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 11 09:11:29 crc kubenswrapper[4840]: I0311 09:11:29.926823 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5dc5ef77-d18a-4474-a523-473f27166095-kube-api-access-zcbfq" (OuterVolumeSpecName: "kube-api-access-zcbfq") pod "5dc5ef77-d18a-4474-a523-473f27166095" (UID: "5dc5ef77-d18a-4474-a523-473f27166095"). InnerVolumeSpecName "kube-api-access-zcbfq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 11 09:11:29 crc kubenswrapper[4840]: I0311 09:11:29.927534 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5dc5ef77-d18a-4474-a523-473f27166095-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "5dc5ef77-d18a-4474-a523-473f27166095" (UID: "5dc5ef77-d18a-4474-a523-473f27166095"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 11 09:11:30 crc kubenswrapper[4840]: I0311 09:11:30.016968 4840 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5dc5ef77-d18a-4474-a523-473f27166095-console-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 11 09:11:30 crc kubenswrapper[4840]: I0311 09:11:30.017029 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zcbfq\" (UniqueName: \"kubernetes.io/projected/5dc5ef77-d18a-4474-a523-473f27166095-kube-api-access-zcbfq\") on node \"crc\" DevicePath \"\""
Mar 11 09:11:30 crc kubenswrapper[4840]: I0311 09:11:30.017040 4840 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5dc5ef77-d18a-4474-a523-473f27166095-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 11 09:11:30 crc kubenswrapper[4840]: I0311 09:11:30.017051 4840 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5dc5ef77-d18a-4474-a523-473f27166095-service-ca\") on node \"crc\" DevicePath \"\""
Mar 11 09:11:30 crc kubenswrapper[4840]: I0311 09:11:30.017061 4840 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5dc5ef77-d18a-4474-a523-473f27166095-oauth-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 11 09:11:30 crc kubenswrapper[4840]: I0311 09:11:30.017071 4840 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5dc5ef77-d18a-4474-a523-473f27166095-console-config\") on node \"crc\" DevicePath \"\""
Mar 11 09:11:30 crc kubenswrapper[4840]: I0311 09:11:30.017080 4840 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5dc5ef77-d18a-4474-a523-473f27166095-console-oauth-config\") on node \"crc\" DevicePath \"\""
Mar 11 09:11:30 crc kubenswrapper[4840]: I0311 09:11:30.532617 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-xkq7s_5dc5ef77-d18a-4474-a523-473f27166095/console/0.log"
Mar 11 09:11:30 crc kubenswrapper[4840]: I0311 09:11:30.532679 4840 generic.go:334] "Generic (PLEG): container finished" podID="5dc5ef77-d18a-4474-a523-473f27166095" containerID="df68a8e392c8a43765a966fb9a267c99c8a89529df63a671b25f0f11998125d8" exitCode=2
Mar 11 09:11:30 crc kubenswrapper[4840]: I0311 09:11:30.532713 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-xkq7s" event={"ID":"5dc5ef77-d18a-4474-a523-473f27166095","Type":"ContainerDied","Data":"df68a8e392c8a43765a966fb9a267c99c8a89529df63a671b25f0f11998125d8"}
Mar 11 09:11:30 crc kubenswrapper[4840]: I0311 09:11:30.532743 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-xkq7s" event={"ID":"5dc5ef77-d18a-4474-a523-473f27166095","Type":"ContainerDied","Data":"f2ee17afeca58eab14b40b421909a77ddf225bae5a23626ae5fd3ead80633d3b"}
Mar 11 09:11:30 crc kubenswrapper[4840]: I0311 09:11:30.532762 4840 scope.go:117] "RemoveContainer" containerID="df68a8e392c8a43765a966fb9a267c99c8a89529df63a671b25f0f11998125d8"
Mar 11 09:11:30 crc kubenswrapper[4840]: I0311 09:11:30.532755 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-xkq7s"
Mar 11 09:11:30 crc kubenswrapper[4840]: I0311 09:11:30.559625 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-xkq7s"]
Mar 11 09:11:30 crc kubenswrapper[4840]: I0311 09:11:30.563264 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-xkq7s"]
Mar 11 09:11:31 crc kubenswrapper[4840]: I0311 09:11:31.253054 4840 scope.go:117] "RemoveContainer" containerID="df68a8e392c8a43765a966fb9a267c99c8a89529df63a671b25f0f11998125d8"
Mar 11 09:11:31 crc kubenswrapper[4840]: E0311 09:11:31.253825 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df68a8e392c8a43765a966fb9a267c99c8a89529df63a671b25f0f11998125d8\": container with ID starting with df68a8e392c8a43765a966fb9a267c99c8a89529df63a671b25f0f11998125d8 not found: ID does not exist" containerID="df68a8e392c8a43765a966fb9a267c99c8a89529df63a671b25f0f11998125d8"
Mar 11 09:11:31 crc kubenswrapper[4840]: I0311 09:11:31.253881 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df68a8e392c8a43765a966fb9a267c99c8a89529df63a671b25f0f11998125d8"} err="failed to get container status \"df68a8e392c8a43765a966fb9a267c99c8a89529df63a671b25f0f11998125d8\": rpc error: code = NotFound desc = could not find container \"df68a8e392c8a43765a966fb9a267c99c8a89529df63a671b25f0f11998125d8\": container with ID starting with df68a8e392c8a43765a966fb9a267c99c8a89529df63a671b25f0f11998125d8 not found: ID does not exist"
Mar 11 09:11:32 crc kubenswrapper[4840]: I0311 09:11:32.067537 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5dc5ef77-d18a-4474-a523-473f27166095" path="/var/lib/kubelet/pods/5dc5ef77-d18a-4474-a523-473f27166095/volumes"
Mar 11 09:11:32 crc kubenswrapper[4840]: I0311 09:11:32.552836 4840 generic.go:334] "Generic (PLEG): container finished" podID="31a664e3-80ea-4e78-a87f-3257129bc45a" containerID="76ab72bffba49106add1aab0dc52e560c516074adf0ec100a5956cc69841e065" exitCode=0
Mar 11 09:11:32 crc kubenswrapper[4840]: I0311 09:11:32.553702 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1d2bvf" event={"ID":"31a664e3-80ea-4e78-a87f-3257129bc45a","Type":"ContainerDied","Data":"76ab72bffba49106add1aab0dc52e560c516074adf0ec100a5956cc69841e065"}
Mar 11 09:11:33 crc kubenswrapper[4840]: I0311 09:11:33.564561 4840 generic.go:334] "Generic (PLEG): container finished" podID="31a664e3-80ea-4e78-a87f-3257129bc45a" containerID="890adc8e5a2486970ec1b12c99f0c86414556cc638d5e4b86c8550f19a3e0bc4" exitCode=0
Mar 11 09:11:33 crc kubenswrapper[4840]: I0311 09:11:33.564622 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1d2bvf" event={"ID":"31a664e3-80ea-4e78-a87f-3257129bc45a","Type":"ContainerDied","Data":"890adc8e5a2486970ec1b12c99f0c86414556cc638d5e4b86c8550f19a3e0bc4"}
Mar 11 09:11:34 crc kubenswrapper[4840]: I0311 09:11:34.822581 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1d2bvf"
Mar 11 09:11:34 crc kubenswrapper[4840]: I0311 09:11:34.893675 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/31a664e3-80ea-4e78-a87f-3257129bc45a-bundle\") pod \"31a664e3-80ea-4e78-a87f-3257129bc45a\" (UID: \"31a664e3-80ea-4e78-a87f-3257129bc45a\") "
Mar 11 09:11:34 crc kubenswrapper[4840]: I0311 09:11:34.893739 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qxq86\" (UniqueName: \"kubernetes.io/projected/31a664e3-80ea-4e78-a87f-3257129bc45a-kube-api-access-qxq86\") pod \"31a664e3-80ea-4e78-a87f-3257129bc45a\" (UID: \"31a664e3-80ea-4e78-a87f-3257129bc45a\") "
Mar 11 09:11:34 crc kubenswrapper[4840]: I0311 09:11:34.893760 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/31a664e3-80ea-4e78-a87f-3257129bc45a-util\") pod \"31a664e3-80ea-4e78-a87f-3257129bc45a\" (UID: \"31a664e3-80ea-4e78-a87f-3257129bc45a\") "
Mar 11 09:11:34 crc kubenswrapper[4840]: I0311 09:11:34.894883 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31a664e3-80ea-4e78-a87f-3257129bc45a-bundle" (OuterVolumeSpecName: "bundle") pod "31a664e3-80ea-4e78-a87f-3257129bc45a" (UID: "31a664e3-80ea-4e78-a87f-3257129bc45a"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 11 09:11:34 crc kubenswrapper[4840]: I0311 09:11:34.901724 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31a664e3-80ea-4e78-a87f-3257129bc45a-kube-api-access-qxq86" (OuterVolumeSpecName: "kube-api-access-qxq86") pod "31a664e3-80ea-4e78-a87f-3257129bc45a" (UID: "31a664e3-80ea-4e78-a87f-3257129bc45a"). InnerVolumeSpecName "kube-api-access-qxq86". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 11 09:11:34 crc kubenswrapper[4840]: I0311 09:11:34.908435 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31a664e3-80ea-4e78-a87f-3257129bc45a-util" (OuterVolumeSpecName: "util") pod "31a664e3-80ea-4e78-a87f-3257129bc45a" (UID: "31a664e3-80ea-4e78-a87f-3257129bc45a"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 11 09:11:34 crc kubenswrapper[4840]: I0311 09:11:34.994697 4840 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/31a664e3-80ea-4e78-a87f-3257129bc45a-bundle\") on node \"crc\" DevicePath \"\""
Mar 11 09:11:34 crc kubenswrapper[4840]: I0311 09:11:34.995041 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qxq86\" (UniqueName: \"kubernetes.io/projected/31a664e3-80ea-4e78-a87f-3257129bc45a-kube-api-access-qxq86\") on node \"crc\" DevicePath \"\""
Mar 11 09:11:34 crc kubenswrapper[4840]: I0311 09:11:34.995054 4840 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/31a664e3-80ea-4e78-a87f-3257129bc45a-util\") on node \"crc\" DevicePath \"\""
Mar 11 09:11:35 crc kubenswrapper[4840]: I0311 09:11:35.580898 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1d2bvf" event={"ID":"31a664e3-80ea-4e78-a87f-3257129bc45a","Type":"ContainerDied","Data":"29393f43dc70f6e5aeef6280766be1ac8399a5c20ccfb82314e0e7812c93dd68"}
Mar 11 09:11:35 crc kubenswrapper[4840]: I0311 09:11:35.580950 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="29393f43dc70f6e5aeef6280766be1ac8399a5c20ccfb82314e0e7812c93dd68"
Mar 11 09:11:35 crc kubenswrapper[4840]: I0311 09:11:35.581103 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1d2bvf"
Mar 11 09:11:45 crc kubenswrapper[4840]: I0311 09:11:45.041976 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-6c8c488f96-z6m7f"]
Mar 11 09:11:45 crc kubenswrapper[4840]: E0311 09:11:45.043189 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31a664e3-80ea-4e78-a87f-3257129bc45a" containerName="util"
Mar 11 09:11:45 crc kubenswrapper[4840]: I0311 09:11:45.043211 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="31a664e3-80ea-4e78-a87f-3257129bc45a" containerName="util"
Mar 11 09:11:45 crc kubenswrapper[4840]: E0311 09:11:45.043244 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31a664e3-80ea-4e78-a87f-3257129bc45a" containerName="extract"
Mar 11 09:11:45 crc kubenswrapper[4840]: I0311 09:11:45.043253 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="31a664e3-80ea-4e78-a87f-3257129bc45a" containerName="extract"
Mar 11 09:11:45 crc kubenswrapper[4840]: E0311 09:11:45.043268 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5dc5ef77-d18a-4474-a523-473f27166095" containerName="console"
Mar 11 09:11:45 crc kubenswrapper[4840]: I0311 09:11:45.043275 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="5dc5ef77-d18a-4474-a523-473f27166095" containerName="console"
Mar 11 09:11:45 crc kubenswrapper[4840]: E0311 09:11:45.043287 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31a664e3-80ea-4e78-a87f-3257129bc45a" containerName="pull"
Mar 11 09:11:45 crc kubenswrapper[4840]: I0311 09:11:45.043293 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="31a664e3-80ea-4e78-a87f-3257129bc45a" containerName="pull"
Mar 11 09:11:45 crc kubenswrapper[4840]: I0311 09:11:45.043435 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="5dc5ef77-d18a-4474-a523-473f27166095" containerName="console"
Mar 11 09:11:45 crc kubenswrapper[4840]: I0311 09:11:45.043458 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="31a664e3-80ea-4e78-a87f-3257129bc45a" containerName="extract"
Mar 11 09:11:45 crc kubenswrapper[4840]: I0311 09:11:45.044164 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6c8c488f96-z6m7f"
Mar 11 09:11:45 crc kubenswrapper[4840]: I0311 09:11:45.047070 4840 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert"
Mar 11 09:11:45 crc kubenswrapper[4840]: I0311 09:11:45.048801 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt"
Mar 11 09:11:45 crc kubenswrapper[4840]: I0311 09:11:45.049051 4840 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert"
Mar 11 09:11:45 crc kubenswrapper[4840]: I0311 09:11:45.049624 4840 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-k7tdc"
Mar 11 09:11:45 crc kubenswrapper[4840]: I0311 09:11:45.049936 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt"
Mar 11 09:11:45 crc kubenswrapper[4840]: I0311 09:11:45.073267 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6c8c488f96-z6m7f"]
Mar 11 09:11:45 crc kubenswrapper[4840]: I0311 09:11:45.230514 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6fe83f11-7be3-4047-a551-4b1eb34a4345-apiservice-cert\") pod \"metallb-operator-controller-manager-6c8c488f96-z6m7f\" (UID: \"6fe83f11-7be3-4047-a551-4b1eb34a4345\") " pod="metallb-system/metallb-operator-controller-manager-6c8c488f96-z6m7f"
Mar 11 09:11:45 crc kubenswrapper[4840]: I0311 09:11:45.230693 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6fe83f11-7be3-4047-a551-4b1eb34a4345-webhook-cert\") pod \"metallb-operator-controller-manager-6c8c488f96-z6m7f\" (UID: \"6fe83f11-7be3-4047-a551-4b1eb34a4345\") " pod="metallb-system/metallb-operator-controller-manager-6c8c488f96-z6m7f"
Mar 11 09:11:45 crc kubenswrapper[4840]: I0311 09:11:45.230753 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4szv\" (UniqueName: \"kubernetes.io/projected/6fe83f11-7be3-4047-a551-4b1eb34a4345-kube-api-access-r4szv\") pod \"metallb-operator-controller-manager-6c8c488f96-z6m7f\" (UID: \"6fe83f11-7be3-4047-a551-4b1eb34a4345\") " pod="metallb-system/metallb-operator-controller-manager-6c8c488f96-z6m7f"
Mar 11 09:11:45 crc kubenswrapper[4840]: I0311 09:11:45.331802 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6fe83f11-7be3-4047-a551-4b1eb34a4345-webhook-cert\") pod \"metallb-operator-controller-manager-6c8c488f96-z6m7f\" (UID: \"6fe83f11-7be3-4047-a551-4b1eb34a4345\") " pod="metallb-system/metallb-operator-controller-manager-6c8c488f96-z6m7f"
Mar 11 09:11:45 crc kubenswrapper[4840]: I0311 09:11:45.331862 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4szv\" (UniqueName: \"kubernetes.io/projected/6fe83f11-7be3-4047-a551-4b1eb34a4345-kube-api-access-r4szv\") pod \"metallb-operator-controller-manager-6c8c488f96-z6m7f\" (UID: \"6fe83f11-7be3-4047-a551-4b1eb34a4345\") " pod="metallb-system/metallb-operator-controller-manager-6c8c488f96-z6m7f"
Mar 11 09:11:45 crc kubenswrapper[4840]: I0311 09:11:45.331903 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6fe83f11-7be3-4047-a551-4b1eb34a4345-apiservice-cert\") pod \"metallb-operator-controller-manager-6c8c488f96-z6m7f\" (UID: \"6fe83f11-7be3-4047-a551-4b1eb34a4345\") " pod="metallb-system/metallb-operator-controller-manager-6c8c488f96-z6m7f"
Mar 11 09:11:45 crc kubenswrapper[4840]: I0311 09:11:45.339908 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6fe83f11-7be3-4047-a551-4b1eb34a4345-webhook-cert\") pod \"metallb-operator-controller-manager-6c8c488f96-z6m7f\" (UID: \"6fe83f11-7be3-4047-a551-4b1eb34a4345\") " pod="metallb-system/metallb-operator-controller-manager-6c8c488f96-z6m7f"
Mar 11 09:11:45 crc kubenswrapper[4840]: I0311 09:11:45.341292 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6fe83f11-7be3-4047-a551-4b1eb34a4345-apiservice-cert\") pod \"metallb-operator-controller-manager-6c8c488f96-z6m7f\" (UID: \"6fe83f11-7be3-4047-a551-4b1eb34a4345\") " pod="metallb-system/metallb-operator-controller-manager-6c8c488f96-z6m7f"
Mar 11 09:11:45 crc kubenswrapper[4840]: I0311 09:11:45.365046 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4szv\" (UniqueName: \"kubernetes.io/projected/6fe83f11-7be3-4047-a551-4b1eb34a4345-kube-api-access-r4szv\") pod \"metallb-operator-controller-manager-6c8c488f96-z6m7f\" (UID: \"6fe83f11-7be3-4047-a551-4b1eb34a4345\") " pod="metallb-system/metallb-operator-controller-manager-6c8c488f96-z6m7f"
Mar 11 09:11:45 crc kubenswrapper[4840]: I0311 09:11:45.395053 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-dd6c58799-hd789"]
Mar 11 09:11:45 crc kubenswrapper[4840]: I0311 09:11:45.396213 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-dd6c58799-hd789"
Mar 11 09:11:45 crc kubenswrapper[4840]: I0311 09:11:45.398250 4840 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Mar 11 09:11:45 crc kubenswrapper[4840]: I0311 09:11:45.398898 4840 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert"
Mar 11 09:11:45 crc kubenswrapper[4840]: I0311 09:11:45.399132 4840 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-xqdpx"
Mar 11 09:11:45 crc kubenswrapper[4840]: I0311 09:11:45.459820 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-dd6c58799-hd789"]
Mar 11 09:11:45 crc kubenswrapper[4840]: I0311 09:11:45.534555 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/dbd13783-14a3-4a72-aff6-0367320d9baf-apiservice-cert\") pod \"metallb-operator-webhook-server-dd6c58799-hd789\" (UID: \"dbd13783-14a3-4a72-aff6-0367320d9baf\") " pod="metallb-system/metallb-operator-webhook-server-dd6c58799-hd789"
Mar 11 09:11:45 crc kubenswrapper[4840]: I0311 09:11:45.534957 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/dbd13783-14a3-4a72-aff6-0367320d9baf-webhook-cert\") pod \"metallb-operator-webhook-server-dd6c58799-hd789\" (UID: \"dbd13783-14a3-4a72-aff6-0367320d9baf\") " pod="metallb-system/metallb-operator-webhook-server-dd6c58799-hd789"
Mar 11 09:11:45 crc kubenswrapper[4840]: I0311 09:11:45.535144 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nq95q\" (UniqueName: \"kubernetes.io/projected/dbd13783-14a3-4a72-aff6-0367320d9baf-kube-api-access-nq95q\") pod \"metallb-operator-webhook-server-dd6c58799-hd789\" (UID: \"dbd13783-14a3-4a72-aff6-0367320d9baf\") " pod="metallb-system/metallb-operator-webhook-server-dd6c58799-hd789"
Mar 11 09:11:45 crc kubenswrapper[4840]: I0311 09:11:45.637088 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/dbd13783-14a3-4a72-aff6-0367320d9baf-apiservice-cert\") pod \"metallb-operator-webhook-server-dd6c58799-hd789\" (UID: \"dbd13783-14a3-4a72-aff6-0367320d9baf\") " pod="metallb-system/metallb-operator-webhook-server-dd6c58799-hd789"
Mar 11 09:11:45 crc kubenswrapper[4840]: I0311 09:11:45.637209 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/dbd13783-14a3-4a72-aff6-0367320d9baf-webhook-cert\") pod \"metallb-operator-webhook-server-dd6c58799-hd789\" (UID: \"dbd13783-14a3-4a72-aff6-0367320d9baf\") " pod="metallb-system/metallb-operator-webhook-server-dd6c58799-hd789"
Mar 11 09:11:45 crc kubenswrapper[4840]: I0311 09:11:45.637246 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nq95q\" (UniqueName: \"kubernetes.io/projected/dbd13783-14a3-4a72-aff6-0367320d9baf-kube-api-access-nq95q\") pod \"metallb-operator-webhook-server-dd6c58799-hd789\" (UID: \"dbd13783-14a3-4a72-aff6-0367320d9baf\") " pod="metallb-system/metallb-operator-webhook-server-dd6c58799-hd789"
Mar 11 09:11:45 crc kubenswrapper[4840]: I0311 09:11:45.643141 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/dbd13783-14a3-4a72-aff6-0367320d9baf-apiservice-cert\") pod \"metallb-operator-webhook-server-dd6c58799-hd789\" (UID: \"dbd13783-14a3-4a72-aff6-0367320d9baf\") " pod="metallb-system/metallb-operator-webhook-server-dd6c58799-hd789"
Mar 11 09:11:45 crc kubenswrapper[4840]: I0311 09:11:45.653003 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/dbd13783-14a3-4a72-aff6-0367320d9baf-webhook-cert\") pod \"metallb-operator-webhook-server-dd6c58799-hd789\" (UID: \"dbd13783-14a3-4a72-aff6-0367320d9baf\") " pod="metallb-system/metallb-operator-webhook-server-dd6c58799-hd789"
Mar 11 09:11:45 crc kubenswrapper[4840]: I0311 09:11:45.653952 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nq95q\" (UniqueName: \"kubernetes.io/projected/dbd13783-14a3-4a72-aff6-0367320d9baf-kube-api-access-nq95q\") pod \"metallb-operator-webhook-server-dd6c58799-hd789\" (UID: \"dbd13783-14a3-4a72-aff6-0367320d9baf\") " pod="metallb-system/metallb-operator-webhook-server-dd6c58799-hd789"
Mar 11 09:11:45 crc kubenswrapper[4840]: I0311 09:11:45.664577 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6c8c488f96-z6m7f"
Mar 11 09:11:45 crc kubenswrapper[4840]: I0311 09:11:45.715948 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-dd6c58799-hd789" Mar 11 09:11:46 crc kubenswrapper[4840]: I0311 09:11:46.007822 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6c8c488f96-z6m7f"] Mar 11 09:11:46 crc kubenswrapper[4840]: W0311 09:11:46.017185 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6fe83f11_7be3_4047_a551_4b1eb34a4345.slice/crio-cf369697ade44baae9a55803e30eee27d0a8ddd1a4ed10add37a85d7c6dd1993 WatchSource:0}: Error finding container cf369697ade44baae9a55803e30eee27d0a8ddd1a4ed10add37a85d7c6dd1993: Status 404 returned error can't find the container with id cf369697ade44baae9a55803e30eee27d0a8ddd1a4ed10add37a85d7c6dd1993 Mar 11 09:11:46 crc kubenswrapper[4840]: I0311 09:11:46.137295 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qjw6z"] Mar 11 09:11:46 crc kubenswrapper[4840]: I0311 09:11:46.140249 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qjw6z" Mar 11 09:11:46 crc kubenswrapper[4840]: I0311 09:11:46.152392 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qjw6z"] Mar 11 09:11:46 crc kubenswrapper[4840]: I0311 09:11:46.247813 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec07ea0d-1107-4bdd-900d-4ed4b6b3ae49-utilities\") pod \"community-operators-qjw6z\" (UID: \"ec07ea0d-1107-4bdd-900d-4ed4b6b3ae49\") " pod="openshift-marketplace/community-operators-qjw6z" Mar 11 09:11:46 crc kubenswrapper[4840]: I0311 09:11:46.247892 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec07ea0d-1107-4bdd-900d-4ed4b6b3ae49-catalog-content\") pod \"community-operators-qjw6z\" (UID: \"ec07ea0d-1107-4bdd-900d-4ed4b6b3ae49\") " pod="openshift-marketplace/community-operators-qjw6z" Mar 11 09:11:46 crc kubenswrapper[4840]: I0311 09:11:46.247921 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxl6p\" (UniqueName: \"kubernetes.io/projected/ec07ea0d-1107-4bdd-900d-4ed4b6b3ae49-kube-api-access-gxl6p\") pod \"community-operators-qjw6z\" (UID: \"ec07ea0d-1107-4bdd-900d-4ed4b6b3ae49\") " pod="openshift-marketplace/community-operators-qjw6z" Mar 11 09:11:46 crc kubenswrapper[4840]: I0311 09:11:46.296565 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-dd6c58799-hd789"] Mar 11 09:11:46 crc kubenswrapper[4840]: W0311 09:11:46.300518 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddbd13783_14a3_4a72_aff6_0367320d9baf.slice/crio-89dda3d5acc3f89814dccf3b6efb811328e75af2b64585338bb68641c777c7b0 
WatchSource:0}: Error finding container 89dda3d5acc3f89814dccf3b6efb811328e75af2b64585338bb68641c777c7b0: Status 404 returned error can't find the container with id 89dda3d5acc3f89814dccf3b6efb811328e75af2b64585338bb68641c777c7b0 Mar 11 09:11:46 crc kubenswrapper[4840]: I0311 09:11:46.348916 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec07ea0d-1107-4bdd-900d-4ed4b6b3ae49-utilities\") pod \"community-operators-qjw6z\" (UID: \"ec07ea0d-1107-4bdd-900d-4ed4b6b3ae49\") " pod="openshift-marketplace/community-operators-qjw6z" Mar 11 09:11:46 crc kubenswrapper[4840]: I0311 09:11:46.348995 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec07ea0d-1107-4bdd-900d-4ed4b6b3ae49-catalog-content\") pod \"community-operators-qjw6z\" (UID: \"ec07ea0d-1107-4bdd-900d-4ed4b6b3ae49\") " pod="openshift-marketplace/community-operators-qjw6z" Mar 11 09:11:46 crc kubenswrapper[4840]: I0311 09:11:46.349019 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxl6p\" (UniqueName: \"kubernetes.io/projected/ec07ea0d-1107-4bdd-900d-4ed4b6b3ae49-kube-api-access-gxl6p\") pod \"community-operators-qjw6z\" (UID: \"ec07ea0d-1107-4bdd-900d-4ed4b6b3ae49\") " pod="openshift-marketplace/community-operators-qjw6z" Mar 11 09:11:46 crc kubenswrapper[4840]: I0311 09:11:46.349779 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec07ea0d-1107-4bdd-900d-4ed4b6b3ae49-catalog-content\") pod \"community-operators-qjw6z\" (UID: \"ec07ea0d-1107-4bdd-900d-4ed4b6b3ae49\") " pod="openshift-marketplace/community-operators-qjw6z" Mar 11 09:11:46 crc kubenswrapper[4840]: I0311 09:11:46.349787 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/ec07ea0d-1107-4bdd-900d-4ed4b6b3ae49-utilities\") pod \"community-operators-qjw6z\" (UID: \"ec07ea0d-1107-4bdd-900d-4ed4b6b3ae49\") " pod="openshift-marketplace/community-operators-qjw6z" Mar 11 09:11:46 crc kubenswrapper[4840]: I0311 09:11:46.370512 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxl6p\" (UniqueName: \"kubernetes.io/projected/ec07ea0d-1107-4bdd-900d-4ed4b6b3ae49-kube-api-access-gxl6p\") pod \"community-operators-qjw6z\" (UID: \"ec07ea0d-1107-4bdd-900d-4ed4b6b3ae49\") " pod="openshift-marketplace/community-operators-qjw6z" Mar 11 09:11:46 crc kubenswrapper[4840]: I0311 09:11:46.462184 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qjw6z" Mar 11 09:11:46 crc kubenswrapper[4840]: I0311 09:11:46.649307 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-dd6c58799-hd789" event={"ID":"dbd13783-14a3-4a72-aff6-0367320d9baf","Type":"ContainerStarted","Data":"89dda3d5acc3f89814dccf3b6efb811328e75af2b64585338bb68641c777c7b0"} Mar 11 09:11:46 crc kubenswrapper[4840]: I0311 09:11:46.651589 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6c8c488f96-z6m7f" event={"ID":"6fe83f11-7be3-4047-a551-4b1eb34a4345","Type":"ContainerStarted","Data":"cf369697ade44baae9a55803e30eee27d0a8ddd1a4ed10add37a85d7c6dd1993"} Mar 11 09:11:46 crc kubenswrapper[4840]: I0311 09:11:46.731778 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qjw6z"] Mar 11 09:11:47 crc kubenswrapper[4840]: I0311 09:11:47.659528 4840 generic.go:334] "Generic (PLEG): container finished" podID="ec07ea0d-1107-4bdd-900d-4ed4b6b3ae49" containerID="aaa3bcc54d3f98a297c5b0df079d2159a0bb3f67a2ab715e06116de3dad27ce9" exitCode=0 Mar 11 09:11:47 crc kubenswrapper[4840]: I0311 09:11:47.659632 4840 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qjw6z" event={"ID":"ec07ea0d-1107-4bdd-900d-4ed4b6b3ae49","Type":"ContainerDied","Data":"aaa3bcc54d3f98a297c5b0df079d2159a0bb3f67a2ab715e06116de3dad27ce9"} Mar 11 09:11:47 crc kubenswrapper[4840]: I0311 09:11:47.660212 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qjw6z" event={"ID":"ec07ea0d-1107-4bdd-900d-4ed4b6b3ae49","Type":"ContainerStarted","Data":"be6267d484d95dad2e2aa733dbd4a3fedfead27d1fab54b4c7fa66fa1bd674f7"} Mar 11 09:11:49 crc kubenswrapper[4840]: I0311 09:11:49.685013 4840 generic.go:334] "Generic (PLEG): container finished" podID="ec07ea0d-1107-4bdd-900d-4ed4b6b3ae49" containerID="2cd586dbaa22befdbf71803d430430a49c52dcbfe9a5c2d55b0dd9c94366fb17" exitCode=0 Mar 11 09:11:49 crc kubenswrapper[4840]: I0311 09:11:49.685123 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qjw6z" event={"ID":"ec07ea0d-1107-4bdd-900d-4ed4b6b3ae49","Type":"ContainerDied","Data":"2cd586dbaa22befdbf71803d430430a49c52dcbfe9a5c2d55b0dd9c94366fb17"} Mar 11 09:11:50 crc kubenswrapper[4840]: I0311 09:11:50.695104 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6c8c488f96-z6m7f" event={"ID":"6fe83f11-7be3-4047-a551-4b1eb34a4345","Type":"ContainerStarted","Data":"d35d3dcde1f276d53d1aa62f75bf7ae792d40d876cf383be2288e8b142bea45f"} Mar 11 09:11:50 crc kubenswrapper[4840]: I0311 09:11:50.695690 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-6c8c488f96-z6m7f" Mar 11 09:11:51 crc kubenswrapper[4840]: I0311 09:11:51.431088 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-6c8c488f96-z6m7f" podStartSLOduration=2.030329614 podStartE2EDuration="6.431056319s" 
podCreationTimestamp="2026-03-11 09:11:45 +0000 UTC" firstStartedPulling="2026-03-11 09:11:46.02059561 +0000 UTC m=+904.686265425" lastFinishedPulling="2026-03-11 09:11:50.421322315 +0000 UTC m=+909.086992130" observedRunningTime="2026-03-11 09:11:50.71892966 +0000 UTC m=+909.384599495" watchObservedRunningTime="2026-03-11 09:11:51.431056319 +0000 UTC m=+910.096726134" Mar 11 09:11:51 crc kubenswrapper[4840]: I0311 09:11:51.433496 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-k275b"] Mar 11 09:11:51 crc kubenswrapper[4840]: I0311 09:11:51.434896 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k275b" Mar 11 09:11:51 crc kubenswrapper[4840]: I0311 09:11:51.443360 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-k275b"] Mar 11 09:11:51 crc kubenswrapper[4840]: I0311 09:11:51.543237 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd256b66-8b61-485e-b007-8e893951297c-utilities\") pod \"redhat-marketplace-k275b\" (UID: \"fd256b66-8b61-485e-b007-8e893951297c\") " pod="openshift-marketplace/redhat-marketplace-k275b" Mar 11 09:11:51 crc kubenswrapper[4840]: I0311 09:11:51.543358 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gqkz\" (UniqueName: \"kubernetes.io/projected/fd256b66-8b61-485e-b007-8e893951297c-kube-api-access-9gqkz\") pod \"redhat-marketplace-k275b\" (UID: \"fd256b66-8b61-485e-b007-8e893951297c\") " pod="openshift-marketplace/redhat-marketplace-k275b" Mar 11 09:11:51 crc kubenswrapper[4840]: I0311 09:11:51.543383 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/fd256b66-8b61-485e-b007-8e893951297c-catalog-content\") pod \"redhat-marketplace-k275b\" (UID: \"fd256b66-8b61-485e-b007-8e893951297c\") " pod="openshift-marketplace/redhat-marketplace-k275b" Mar 11 09:11:51 crc kubenswrapper[4840]: I0311 09:11:51.644554 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9gqkz\" (UniqueName: \"kubernetes.io/projected/fd256b66-8b61-485e-b007-8e893951297c-kube-api-access-9gqkz\") pod \"redhat-marketplace-k275b\" (UID: \"fd256b66-8b61-485e-b007-8e893951297c\") " pod="openshift-marketplace/redhat-marketplace-k275b" Mar 11 09:11:51 crc kubenswrapper[4840]: I0311 09:11:51.644623 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd256b66-8b61-485e-b007-8e893951297c-catalog-content\") pod \"redhat-marketplace-k275b\" (UID: \"fd256b66-8b61-485e-b007-8e893951297c\") " pod="openshift-marketplace/redhat-marketplace-k275b" Mar 11 09:11:51 crc kubenswrapper[4840]: I0311 09:11:51.644669 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd256b66-8b61-485e-b007-8e893951297c-utilities\") pod \"redhat-marketplace-k275b\" (UID: \"fd256b66-8b61-485e-b007-8e893951297c\") " pod="openshift-marketplace/redhat-marketplace-k275b" Mar 11 09:11:51 crc kubenswrapper[4840]: I0311 09:11:51.645200 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd256b66-8b61-485e-b007-8e893951297c-utilities\") pod \"redhat-marketplace-k275b\" (UID: \"fd256b66-8b61-485e-b007-8e893951297c\") " pod="openshift-marketplace/redhat-marketplace-k275b" Mar 11 09:11:51 crc kubenswrapper[4840]: I0311 09:11:51.645744 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/fd256b66-8b61-485e-b007-8e893951297c-catalog-content\") pod \"redhat-marketplace-k275b\" (UID: \"fd256b66-8b61-485e-b007-8e893951297c\") " pod="openshift-marketplace/redhat-marketplace-k275b" Mar 11 09:11:51 crc kubenswrapper[4840]: I0311 09:11:51.675321 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gqkz\" (UniqueName: \"kubernetes.io/projected/fd256b66-8b61-485e-b007-8e893951297c-kube-api-access-9gqkz\") pod \"redhat-marketplace-k275b\" (UID: \"fd256b66-8b61-485e-b007-8e893951297c\") " pod="openshift-marketplace/redhat-marketplace-k275b" Mar 11 09:11:51 crc kubenswrapper[4840]: I0311 09:11:51.757165 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k275b" Mar 11 09:11:52 crc kubenswrapper[4840]: I0311 09:11:52.857518 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-k275b"] Mar 11 09:11:52 crc kubenswrapper[4840]: W0311 09:11:52.889732 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfd256b66_8b61_485e_b007_8e893951297c.slice/crio-cf99e6b3526030cb475ff0fcfdebdcb50d021afe9d938dd04277cf2a745d1ca0 WatchSource:0}: Error finding container cf99e6b3526030cb475ff0fcfdebdcb50d021afe9d938dd04277cf2a745d1ca0: Status 404 returned error can't find the container with id cf99e6b3526030cb475ff0fcfdebdcb50d021afe9d938dd04277cf2a745d1ca0 Mar 11 09:11:53 crc kubenswrapper[4840]: I0311 09:11:53.726323 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qjw6z" event={"ID":"ec07ea0d-1107-4bdd-900d-4ed4b6b3ae49","Type":"ContainerStarted","Data":"8fe1565e15d6ac051d92233efe31c7997fed2ae69d3ac16776b643aab9bc6c04"} Mar 11 09:11:53 crc kubenswrapper[4840]: I0311 09:11:53.728351 4840 generic.go:334] "Generic (PLEG): container finished" 
podID="fd256b66-8b61-485e-b007-8e893951297c" containerID="81dcaf80f1edf94b74661307487334c5d3bad541d178024c8540972a213e2d24" exitCode=0 Mar 11 09:11:53 crc kubenswrapper[4840]: I0311 09:11:53.728430 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k275b" event={"ID":"fd256b66-8b61-485e-b007-8e893951297c","Type":"ContainerDied","Data":"81dcaf80f1edf94b74661307487334c5d3bad541d178024c8540972a213e2d24"} Mar 11 09:11:53 crc kubenswrapper[4840]: I0311 09:11:53.728461 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k275b" event={"ID":"fd256b66-8b61-485e-b007-8e893951297c","Type":"ContainerStarted","Data":"cf99e6b3526030cb475ff0fcfdebdcb50d021afe9d938dd04277cf2a745d1ca0"} Mar 11 09:11:53 crc kubenswrapper[4840]: I0311 09:11:53.730364 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-dd6c58799-hd789" event={"ID":"dbd13783-14a3-4a72-aff6-0367320d9baf","Type":"ContainerStarted","Data":"929a122e99eb1b17df4d22aae235814a5186a27a5043d44f66cd092d3928b239"} Mar 11 09:11:53 crc kubenswrapper[4840]: I0311 09:11:53.730520 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-dd6c58799-hd789" Mar 11 09:11:53 crc kubenswrapper[4840]: I0311 09:11:53.760512 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-qjw6z" podStartSLOduration=2.801506887 podStartE2EDuration="7.760483213s" podCreationTimestamp="2026-03-11 09:11:46 +0000 UTC" firstStartedPulling="2026-03-11 09:11:47.662067204 +0000 UTC m=+906.327737019" lastFinishedPulling="2026-03-11 09:11:52.62104353 +0000 UTC m=+911.286713345" observedRunningTime="2026-03-11 09:11:53.759410856 +0000 UTC m=+912.425080671" watchObservedRunningTime="2026-03-11 09:11:53.760483213 +0000 UTC m=+912.426153028" Mar 11 09:11:53 crc kubenswrapper[4840]: I0311 
09:11:53.777081 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-dd6c58799-hd789" podStartSLOduration=2.42283613 podStartE2EDuration="8.777058908s" podCreationTimestamp="2026-03-11 09:11:45 +0000 UTC" firstStartedPulling="2026-03-11 09:11:46.304059375 +0000 UTC m=+904.969729190" lastFinishedPulling="2026-03-11 09:11:52.658282153 +0000 UTC m=+911.323951968" observedRunningTime="2026-03-11 09:11:53.775189521 +0000 UTC m=+912.440859336" watchObservedRunningTime="2026-03-11 09:11:53.777058908 +0000 UTC m=+912.442728723" Mar 11 09:11:54 crc kubenswrapper[4840]: I0311 09:11:54.740130 4840 generic.go:334] "Generic (PLEG): container finished" podID="fd256b66-8b61-485e-b007-8e893951297c" containerID="e60211822facabb0133d5bff6c0edf0444200c7736fd3f0493de5113f4e76734" exitCode=0 Mar 11 09:11:54 crc kubenswrapper[4840]: I0311 09:11:54.740265 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k275b" event={"ID":"fd256b66-8b61-485e-b007-8e893951297c","Type":"ContainerDied","Data":"e60211822facabb0133d5bff6c0edf0444200c7736fd3f0493de5113f4e76734"} Mar 11 09:11:55 crc kubenswrapper[4840]: I0311 09:11:55.750079 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k275b" event={"ID":"fd256b66-8b61-485e-b007-8e893951297c","Type":"ContainerStarted","Data":"48542e9d58fba9806cc457e0a36c4d6ef5de6365697d0ef5e534ca0ad5265fcd"} Mar 11 09:11:55 crc kubenswrapper[4840]: I0311 09:11:55.767432 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-k275b" podStartSLOduration=3.384383542 podStartE2EDuration="4.767413189s" podCreationTimestamp="2026-03-11 09:11:51 +0000 UTC" firstStartedPulling="2026-03-11 09:11:53.729653901 +0000 UTC m=+912.395323716" lastFinishedPulling="2026-03-11 09:11:55.112683548 +0000 UTC m=+913.778353363" observedRunningTime="2026-03-11 
09:11:55.76586154 +0000 UTC m=+914.431531355" watchObservedRunningTime="2026-03-11 09:11:55.767413189 +0000 UTC m=+914.433083004" Mar 11 09:11:56 crc kubenswrapper[4840]: I0311 09:11:56.462601 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-qjw6z" Mar 11 09:11:56 crc kubenswrapper[4840]: I0311 09:11:56.462677 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-qjw6z" Mar 11 09:11:56 crc kubenswrapper[4840]: I0311 09:11:56.512568 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-qjw6z" Mar 11 09:12:00 crc kubenswrapper[4840]: I0311 09:12:00.302255 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29553672-mwv76"] Mar 11 09:12:00 crc kubenswrapper[4840]: I0311 09:12:00.306661 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553672-mwv76" Mar 11 09:12:00 crc kubenswrapper[4840]: I0311 09:12:00.313433 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-q6lwc" Mar 11 09:12:00 crc kubenswrapper[4840]: I0311 09:12:00.313869 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 11 09:12:00 crc kubenswrapper[4840]: I0311 09:12:00.314072 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 11 09:12:00 crc kubenswrapper[4840]: I0311 09:12:00.343699 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553672-mwv76"] Mar 11 09:12:00 crc kubenswrapper[4840]: I0311 09:12:00.480437 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sth44\" (UniqueName: 
\"kubernetes.io/projected/1cef34d8-a6f9-4ee3-b869-686486675141-kube-api-access-sth44\") pod \"auto-csr-approver-29553672-mwv76\" (UID: \"1cef34d8-a6f9-4ee3-b869-686486675141\") " pod="openshift-infra/auto-csr-approver-29553672-mwv76" Mar 11 09:12:00 crc kubenswrapper[4840]: I0311 09:12:00.581786 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sth44\" (UniqueName: \"kubernetes.io/projected/1cef34d8-a6f9-4ee3-b869-686486675141-kube-api-access-sth44\") pod \"auto-csr-approver-29553672-mwv76\" (UID: \"1cef34d8-a6f9-4ee3-b869-686486675141\") " pod="openshift-infra/auto-csr-approver-29553672-mwv76" Mar 11 09:12:00 crc kubenswrapper[4840]: I0311 09:12:00.629316 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sth44\" (UniqueName: \"kubernetes.io/projected/1cef34d8-a6f9-4ee3-b869-686486675141-kube-api-access-sth44\") pod \"auto-csr-approver-29553672-mwv76\" (UID: \"1cef34d8-a6f9-4ee3-b869-686486675141\") " pod="openshift-infra/auto-csr-approver-29553672-mwv76" Mar 11 09:12:00 crc kubenswrapper[4840]: I0311 09:12:00.635946 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553672-mwv76" Mar 11 09:12:00 crc kubenswrapper[4840]: I0311 09:12:00.917272 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553672-mwv76"] Mar 11 09:12:01 crc kubenswrapper[4840]: I0311 09:12:01.757880 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-k275b" Mar 11 09:12:01 crc kubenswrapper[4840]: I0311 09:12:01.758329 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-k275b" Mar 11 09:12:01 crc kubenswrapper[4840]: I0311 09:12:01.801085 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553672-mwv76" event={"ID":"1cef34d8-a6f9-4ee3-b869-686486675141","Type":"ContainerStarted","Data":"b2abf2512343217a5de645ec80780a87867a3f31fb5ee03607250e7f37ee1e8f"} Mar 11 09:12:01 crc kubenswrapper[4840]: I0311 09:12:01.812390 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-k275b" Mar 11 09:12:01 crc kubenswrapper[4840]: I0311 09:12:01.877449 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-k275b" Mar 11 09:12:02 crc kubenswrapper[4840]: I0311 09:12:02.806972 4840 generic.go:334] "Generic (PLEG): container finished" podID="1cef34d8-a6f9-4ee3-b869-686486675141" containerID="e9e548f7515506ca605317929e764ac11891e7792857753f0c1300d4de020361" exitCode=0 Mar 11 09:12:02 crc kubenswrapper[4840]: I0311 09:12:02.808226 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553672-mwv76" event={"ID":"1cef34d8-a6f9-4ee3-b869-686486675141","Type":"ContainerDied","Data":"e9e548f7515506ca605317929e764ac11891e7792857753f0c1300d4de020361"} Mar 11 09:12:04 crc kubenswrapper[4840]: I0311 09:12:04.091842 4840 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553672-mwv76" Mar 11 09:12:04 crc kubenswrapper[4840]: I0311 09:12:04.123885 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-k275b"] Mar 11 09:12:04 crc kubenswrapper[4840]: I0311 09:12:04.124212 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-k275b" podUID="fd256b66-8b61-485e-b007-8e893951297c" containerName="registry-server" containerID="cri-o://48542e9d58fba9806cc457e0a36c4d6ef5de6365697d0ef5e534ca0ad5265fcd" gracePeriod=2 Mar 11 09:12:04 crc kubenswrapper[4840]: I0311 09:12:04.139819 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sth44\" (UniqueName: \"kubernetes.io/projected/1cef34d8-a6f9-4ee3-b869-686486675141-kube-api-access-sth44\") pod \"1cef34d8-a6f9-4ee3-b869-686486675141\" (UID: \"1cef34d8-a6f9-4ee3-b869-686486675141\") " Mar 11 09:12:04 crc kubenswrapper[4840]: I0311 09:12:04.152704 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1cef34d8-a6f9-4ee3-b869-686486675141-kube-api-access-sth44" (OuterVolumeSpecName: "kube-api-access-sth44") pod "1cef34d8-a6f9-4ee3-b869-686486675141" (UID: "1cef34d8-a6f9-4ee3-b869-686486675141"). InnerVolumeSpecName "kube-api-access-sth44". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:12:04 crc kubenswrapper[4840]: I0311 09:12:04.241301 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sth44\" (UniqueName: \"kubernetes.io/projected/1cef34d8-a6f9-4ee3-b869-686486675141-kube-api-access-sth44\") on node \"crc\" DevicePath \"\"" Mar 11 09:12:04 crc kubenswrapper[4840]: I0311 09:12:04.820840 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553672-mwv76" event={"ID":"1cef34d8-a6f9-4ee3-b869-686486675141","Type":"ContainerDied","Data":"b2abf2512343217a5de645ec80780a87867a3f31fb5ee03607250e7f37ee1e8f"} Mar 11 09:12:04 crc kubenswrapper[4840]: I0311 09:12:04.820891 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b2abf2512343217a5de645ec80780a87867a3f31fb5ee03607250e7f37ee1e8f" Mar 11 09:12:04 crc kubenswrapper[4840]: I0311 09:12:04.820952 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553672-mwv76" Mar 11 09:12:04 crc kubenswrapper[4840]: I0311 09:12:04.825235 4840 generic.go:334] "Generic (PLEG): container finished" podID="fd256b66-8b61-485e-b007-8e893951297c" containerID="48542e9d58fba9806cc457e0a36c4d6ef5de6365697d0ef5e534ca0ad5265fcd" exitCode=0 Mar 11 09:12:04 crc kubenswrapper[4840]: I0311 09:12:04.825289 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k275b" event={"ID":"fd256b66-8b61-485e-b007-8e893951297c","Type":"ContainerDied","Data":"48542e9d58fba9806cc457e0a36c4d6ef5de6365697d0ef5e534ca0ad5265fcd"} Mar 11 09:12:04 crc kubenswrapper[4840]: I0311 09:12:04.985212 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k275b" Mar 11 09:12:05 crc kubenswrapper[4840]: I0311 09:12:05.050362 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd256b66-8b61-485e-b007-8e893951297c-catalog-content\") pod \"fd256b66-8b61-485e-b007-8e893951297c\" (UID: \"fd256b66-8b61-485e-b007-8e893951297c\") " Mar 11 09:12:05 crc kubenswrapper[4840]: I0311 09:12:05.050434 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd256b66-8b61-485e-b007-8e893951297c-utilities\") pod \"fd256b66-8b61-485e-b007-8e893951297c\" (UID: \"fd256b66-8b61-485e-b007-8e893951297c\") " Mar 11 09:12:05 crc kubenswrapper[4840]: I0311 09:12:05.050500 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9gqkz\" (UniqueName: \"kubernetes.io/projected/fd256b66-8b61-485e-b007-8e893951297c-kube-api-access-9gqkz\") pod \"fd256b66-8b61-485e-b007-8e893951297c\" (UID: \"fd256b66-8b61-485e-b007-8e893951297c\") " Mar 11 09:12:05 crc kubenswrapper[4840]: I0311 09:12:05.051658 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd256b66-8b61-485e-b007-8e893951297c-utilities" (OuterVolumeSpecName: "utilities") pod "fd256b66-8b61-485e-b007-8e893951297c" (UID: "fd256b66-8b61-485e-b007-8e893951297c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:12:05 crc kubenswrapper[4840]: I0311 09:12:05.054630 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd256b66-8b61-485e-b007-8e893951297c-kube-api-access-9gqkz" (OuterVolumeSpecName: "kube-api-access-9gqkz") pod "fd256b66-8b61-485e-b007-8e893951297c" (UID: "fd256b66-8b61-485e-b007-8e893951297c"). InnerVolumeSpecName "kube-api-access-9gqkz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:12:05 crc kubenswrapper[4840]: I0311 09:12:05.077201 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd256b66-8b61-485e-b007-8e893951297c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fd256b66-8b61-485e-b007-8e893951297c" (UID: "fd256b66-8b61-485e-b007-8e893951297c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:12:05 crc kubenswrapper[4840]: I0311 09:12:05.148185 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29553666-tmkcj"] Mar 11 09:12:05 crc kubenswrapper[4840]: I0311 09:12:05.152044 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29553666-tmkcj"] Mar 11 09:12:05 crc kubenswrapper[4840]: I0311 09:12:05.152436 4840 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd256b66-8b61-485e-b007-8e893951297c-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 11 09:12:05 crc kubenswrapper[4840]: I0311 09:12:05.152529 4840 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd256b66-8b61-485e-b007-8e893951297c-utilities\") on node \"crc\" DevicePath \"\"" Mar 11 09:12:05 crc kubenswrapper[4840]: I0311 09:12:05.152541 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9gqkz\" (UniqueName: \"kubernetes.io/projected/fd256b66-8b61-485e-b007-8e893951297c-kube-api-access-9gqkz\") on node \"crc\" DevicePath \"\"" Mar 11 09:12:05 crc kubenswrapper[4840]: I0311 09:12:05.731219 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-dd6c58799-hd789" Mar 11 09:12:05 crc kubenswrapper[4840]: I0311 09:12:05.832860 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-k275b" event={"ID":"fd256b66-8b61-485e-b007-8e893951297c","Type":"ContainerDied","Data":"cf99e6b3526030cb475ff0fcfdebdcb50d021afe9d938dd04277cf2a745d1ca0"} Mar 11 09:12:05 crc kubenswrapper[4840]: I0311 09:12:05.832918 4840 scope.go:117] "RemoveContainer" containerID="48542e9d58fba9806cc457e0a36c4d6ef5de6365697d0ef5e534ca0ad5265fcd" Mar 11 09:12:05 crc kubenswrapper[4840]: I0311 09:12:05.833076 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k275b" Mar 11 09:12:05 crc kubenswrapper[4840]: I0311 09:12:05.859590 4840 scope.go:117] "RemoveContainer" containerID="e60211822facabb0133d5bff6c0edf0444200c7736fd3f0493de5113f4e76734" Mar 11 09:12:05 crc kubenswrapper[4840]: I0311 09:12:05.877232 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-k275b"] Mar 11 09:12:05 crc kubenswrapper[4840]: I0311 09:12:05.880627 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-k275b"] Mar 11 09:12:05 crc kubenswrapper[4840]: I0311 09:12:05.887680 4840 scope.go:117] "RemoveContainer" containerID="81dcaf80f1edf94b74661307487334c5d3bad541d178024c8540972a213e2d24" Mar 11 09:12:06 crc kubenswrapper[4840]: I0311 09:12:06.067853 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5adfca7-ee6d-4948-a79a-15d42015ba8b" path="/var/lib/kubelet/pods/a5adfca7-ee6d-4948-a79a-15d42015ba8b/volumes" Mar 11 09:12:06 crc kubenswrapper[4840]: I0311 09:12:06.068802 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd256b66-8b61-485e-b007-8e893951297c" path="/var/lib/kubelet/pods/fd256b66-8b61-485e-b007-8e893951297c/volumes" Mar 11 09:12:06 crc kubenswrapper[4840]: I0311 09:12:06.506660 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qjw6z" Mar 11 09:12:09 crc 
kubenswrapper[4840]: I0311 09:12:09.124307 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qjw6z"] Mar 11 09:12:09 crc kubenswrapper[4840]: I0311 09:12:09.125129 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-qjw6z" podUID="ec07ea0d-1107-4bdd-900d-4ed4b6b3ae49" containerName="registry-server" containerID="cri-o://8fe1565e15d6ac051d92233efe31c7997fed2ae69d3ac16776b643aab9bc6c04" gracePeriod=2 Mar 11 09:12:09 crc kubenswrapper[4840]: I0311 09:12:09.534546 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qjw6z" Mar 11 09:12:09 crc kubenswrapper[4840]: I0311 09:12:09.621963 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec07ea0d-1107-4bdd-900d-4ed4b6b3ae49-catalog-content\") pod \"ec07ea0d-1107-4bdd-900d-4ed4b6b3ae49\" (UID: \"ec07ea0d-1107-4bdd-900d-4ed4b6b3ae49\") " Mar 11 09:12:09 crc kubenswrapper[4840]: I0311 09:12:09.622553 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec07ea0d-1107-4bdd-900d-4ed4b6b3ae49-utilities\") pod \"ec07ea0d-1107-4bdd-900d-4ed4b6b3ae49\" (UID: \"ec07ea0d-1107-4bdd-900d-4ed4b6b3ae49\") " Mar 11 09:12:09 crc kubenswrapper[4840]: I0311 09:12:09.622590 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gxl6p\" (UniqueName: \"kubernetes.io/projected/ec07ea0d-1107-4bdd-900d-4ed4b6b3ae49-kube-api-access-gxl6p\") pod \"ec07ea0d-1107-4bdd-900d-4ed4b6b3ae49\" (UID: \"ec07ea0d-1107-4bdd-900d-4ed4b6b3ae49\") " Mar 11 09:12:09 crc kubenswrapper[4840]: I0311 09:12:09.624272 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/ec07ea0d-1107-4bdd-900d-4ed4b6b3ae49-utilities" (OuterVolumeSpecName: "utilities") pod "ec07ea0d-1107-4bdd-900d-4ed4b6b3ae49" (UID: "ec07ea0d-1107-4bdd-900d-4ed4b6b3ae49"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:12:09 crc kubenswrapper[4840]: I0311 09:12:09.629251 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec07ea0d-1107-4bdd-900d-4ed4b6b3ae49-kube-api-access-gxl6p" (OuterVolumeSpecName: "kube-api-access-gxl6p") pod "ec07ea0d-1107-4bdd-900d-4ed4b6b3ae49" (UID: "ec07ea0d-1107-4bdd-900d-4ed4b6b3ae49"). InnerVolumeSpecName "kube-api-access-gxl6p". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:12:09 crc kubenswrapper[4840]: I0311 09:12:09.672158 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec07ea0d-1107-4bdd-900d-4ed4b6b3ae49-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ec07ea0d-1107-4bdd-900d-4ed4b6b3ae49" (UID: "ec07ea0d-1107-4bdd-900d-4ed4b6b3ae49"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:12:09 crc kubenswrapper[4840]: I0311 09:12:09.723974 4840 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec07ea0d-1107-4bdd-900d-4ed4b6b3ae49-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 11 09:12:09 crc kubenswrapper[4840]: I0311 09:12:09.724026 4840 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec07ea0d-1107-4bdd-900d-4ed4b6b3ae49-utilities\") on node \"crc\" DevicePath \"\"" Mar 11 09:12:09 crc kubenswrapper[4840]: I0311 09:12:09.724040 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gxl6p\" (UniqueName: \"kubernetes.io/projected/ec07ea0d-1107-4bdd-900d-4ed4b6b3ae49-kube-api-access-gxl6p\") on node \"crc\" DevicePath \"\"" Mar 11 09:12:09 crc kubenswrapper[4840]: I0311 09:12:09.864427 4840 generic.go:334] "Generic (PLEG): container finished" podID="ec07ea0d-1107-4bdd-900d-4ed4b6b3ae49" containerID="8fe1565e15d6ac051d92233efe31c7997fed2ae69d3ac16776b643aab9bc6c04" exitCode=0 Mar 11 09:12:09 crc kubenswrapper[4840]: I0311 09:12:09.864504 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qjw6z" event={"ID":"ec07ea0d-1107-4bdd-900d-4ed4b6b3ae49","Type":"ContainerDied","Data":"8fe1565e15d6ac051d92233efe31c7997fed2ae69d3ac16776b643aab9bc6c04"} Mar 11 09:12:09 crc kubenswrapper[4840]: I0311 09:12:09.864521 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qjw6z" Mar 11 09:12:09 crc kubenswrapper[4840]: I0311 09:12:09.864557 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qjw6z" event={"ID":"ec07ea0d-1107-4bdd-900d-4ed4b6b3ae49","Type":"ContainerDied","Data":"be6267d484d95dad2e2aa733dbd4a3fedfead27d1fab54b4c7fa66fa1bd674f7"} Mar 11 09:12:09 crc kubenswrapper[4840]: I0311 09:12:09.864585 4840 scope.go:117] "RemoveContainer" containerID="8fe1565e15d6ac051d92233efe31c7997fed2ae69d3ac16776b643aab9bc6c04" Mar 11 09:12:09 crc kubenswrapper[4840]: I0311 09:12:09.883750 4840 scope.go:117] "RemoveContainer" containerID="2cd586dbaa22befdbf71803d430430a49c52dcbfe9a5c2d55b0dd9c94366fb17" Mar 11 09:12:09 crc kubenswrapper[4840]: I0311 09:12:09.899411 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qjw6z"] Mar 11 09:12:09 crc kubenswrapper[4840]: I0311 09:12:09.905833 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-qjw6z"] Mar 11 09:12:09 crc kubenswrapper[4840]: I0311 09:12:09.918701 4840 scope.go:117] "RemoveContainer" containerID="aaa3bcc54d3f98a297c5b0df079d2159a0bb3f67a2ab715e06116de3dad27ce9" Mar 11 09:12:09 crc kubenswrapper[4840]: I0311 09:12:09.937222 4840 scope.go:117] "RemoveContainer" containerID="8fe1565e15d6ac051d92233efe31c7997fed2ae69d3ac16776b643aab9bc6c04" Mar 11 09:12:09 crc kubenswrapper[4840]: E0311 09:12:09.937968 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8fe1565e15d6ac051d92233efe31c7997fed2ae69d3ac16776b643aab9bc6c04\": container with ID starting with 8fe1565e15d6ac051d92233efe31c7997fed2ae69d3ac16776b643aab9bc6c04 not found: ID does not exist" containerID="8fe1565e15d6ac051d92233efe31c7997fed2ae69d3ac16776b643aab9bc6c04" Mar 11 09:12:09 crc kubenswrapper[4840]: I0311 09:12:09.938010 4840 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8fe1565e15d6ac051d92233efe31c7997fed2ae69d3ac16776b643aab9bc6c04"} err="failed to get container status \"8fe1565e15d6ac051d92233efe31c7997fed2ae69d3ac16776b643aab9bc6c04\": rpc error: code = NotFound desc = could not find container \"8fe1565e15d6ac051d92233efe31c7997fed2ae69d3ac16776b643aab9bc6c04\": container with ID starting with 8fe1565e15d6ac051d92233efe31c7997fed2ae69d3ac16776b643aab9bc6c04 not found: ID does not exist" Mar 11 09:12:09 crc kubenswrapper[4840]: I0311 09:12:09.938038 4840 scope.go:117] "RemoveContainer" containerID="2cd586dbaa22befdbf71803d430430a49c52dcbfe9a5c2d55b0dd9c94366fb17" Mar 11 09:12:09 crc kubenswrapper[4840]: E0311 09:12:09.938391 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2cd586dbaa22befdbf71803d430430a49c52dcbfe9a5c2d55b0dd9c94366fb17\": container with ID starting with 2cd586dbaa22befdbf71803d430430a49c52dcbfe9a5c2d55b0dd9c94366fb17 not found: ID does not exist" containerID="2cd586dbaa22befdbf71803d430430a49c52dcbfe9a5c2d55b0dd9c94366fb17" Mar 11 09:12:09 crc kubenswrapper[4840]: I0311 09:12:09.938419 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2cd586dbaa22befdbf71803d430430a49c52dcbfe9a5c2d55b0dd9c94366fb17"} err="failed to get container status \"2cd586dbaa22befdbf71803d430430a49c52dcbfe9a5c2d55b0dd9c94366fb17\": rpc error: code = NotFound desc = could not find container \"2cd586dbaa22befdbf71803d430430a49c52dcbfe9a5c2d55b0dd9c94366fb17\": container with ID starting with 2cd586dbaa22befdbf71803d430430a49c52dcbfe9a5c2d55b0dd9c94366fb17 not found: ID does not exist" Mar 11 09:12:09 crc kubenswrapper[4840]: I0311 09:12:09.938439 4840 scope.go:117] "RemoveContainer" containerID="aaa3bcc54d3f98a297c5b0df079d2159a0bb3f67a2ab715e06116de3dad27ce9" Mar 11 09:12:09 crc kubenswrapper[4840]: E0311 
09:12:09.938839 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aaa3bcc54d3f98a297c5b0df079d2159a0bb3f67a2ab715e06116de3dad27ce9\": container with ID starting with aaa3bcc54d3f98a297c5b0df079d2159a0bb3f67a2ab715e06116de3dad27ce9 not found: ID does not exist" containerID="aaa3bcc54d3f98a297c5b0df079d2159a0bb3f67a2ab715e06116de3dad27ce9" Mar 11 09:12:09 crc kubenswrapper[4840]: I0311 09:12:09.938866 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aaa3bcc54d3f98a297c5b0df079d2159a0bb3f67a2ab715e06116de3dad27ce9"} err="failed to get container status \"aaa3bcc54d3f98a297c5b0df079d2159a0bb3f67a2ab715e06116de3dad27ce9\": rpc error: code = NotFound desc = could not find container \"aaa3bcc54d3f98a297c5b0df079d2159a0bb3f67a2ab715e06116de3dad27ce9\": container with ID starting with aaa3bcc54d3f98a297c5b0df079d2159a0bb3f67a2ab715e06116de3dad27ce9 not found: ID does not exist" Mar 11 09:12:10 crc kubenswrapper[4840]: I0311 09:12:10.068563 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec07ea0d-1107-4bdd-900d-4ed4b6b3ae49" path="/var/lib/kubelet/pods/ec07ea0d-1107-4bdd-900d-4ed4b6b3ae49/volumes" Mar 11 09:12:25 crc kubenswrapper[4840]: I0311 09:12:25.667796 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-6c8c488f96-z6m7f" Mar 11 09:12:26 crc kubenswrapper[4840]: I0311 09:12:26.463331 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-xp9gd"] Mar 11 09:12:26 crc kubenswrapper[4840]: E0311 09:12:26.464015 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec07ea0d-1107-4bdd-900d-4ed4b6b3ae49" containerName="extract-content" Mar 11 09:12:26 crc kubenswrapper[4840]: I0311 09:12:26.464042 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec07ea0d-1107-4bdd-900d-4ed4b6b3ae49" 
containerName="extract-content" Mar 11 09:12:26 crc kubenswrapper[4840]: E0311 09:12:26.464079 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cef34d8-a6f9-4ee3-b869-686486675141" containerName="oc" Mar 11 09:12:26 crc kubenswrapper[4840]: I0311 09:12:26.464088 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cef34d8-a6f9-4ee3-b869-686486675141" containerName="oc" Mar 11 09:12:26 crc kubenswrapper[4840]: E0311 09:12:26.464099 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec07ea0d-1107-4bdd-900d-4ed4b6b3ae49" containerName="registry-server" Mar 11 09:12:26 crc kubenswrapper[4840]: I0311 09:12:26.464107 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec07ea0d-1107-4bdd-900d-4ed4b6b3ae49" containerName="registry-server" Mar 11 09:12:26 crc kubenswrapper[4840]: E0311 09:12:26.464131 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec07ea0d-1107-4bdd-900d-4ed4b6b3ae49" containerName="extract-utilities" Mar 11 09:12:26 crc kubenswrapper[4840]: I0311 09:12:26.464140 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec07ea0d-1107-4bdd-900d-4ed4b6b3ae49" containerName="extract-utilities" Mar 11 09:12:26 crc kubenswrapper[4840]: E0311 09:12:26.464158 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd256b66-8b61-485e-b007-8e893951297c" containerName="extract-utilities" Mar 11 09:12:26 crc kubenswrapper[4840]: I0311 09:12:26.464166 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd256b66-8b61-485e-b007-8e893951297c" containerName="extract-utilities" Mar 11 09:12:26 crc kubenswrapper[4840]: E0311 09:12:26.464177 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd256b66-8b61-485e-b007-8e893951297c" containerName="extract-content" Mar 11 09:12:26 crc kubenswrapper[4840]: I0311 09:12:26.464185 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd256b66-8b61-485e-b007-8e893951297c" containerName="extract-content" 
Mar 11 09:12:26 crc kubenswrapper[4840]: E0311 09:12:26.464205 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd256b66-8b61-485e-b007-8e893951297c" containerName="registry-server" Mar 11 09:12:26 crc kubenswrapper[4840]: I0311 09:12:26.464213 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd256b66-8b61-485e-b007-8e893951297c" containerName="registry-server" Mar 11 09:12:26 crc kubenswrapper[4840]: I0311 09:12:26.464453 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd256b66-8b61-485e-b007-8e893951297c" containerName="registry-server" Mar 11 09:12:26 crc kubenswrapper[4840]: I0311 09:12:26.464502 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="1cef34d8-a6f9-4ee3-b869-686486675141" containerName="oc" Mar 11 09:12:26 crc kubenswrapper[4840]: I0311 09:12:26.464519 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec07ea0d-1107-4bdd-900d-4ed4b6b3ae49" containerName="registry-server" Mar 11 09:12:26 crc kubenswrapper[4840]: I0311 09:12:26.490448 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-xp9gd" Mar 11 09:12:26 crc kubenswrapper[4840]: I0311 09:12:26.495411 4840 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-5f695" Mar 11 09:12:26 crc kubenswrapper[4840]: I0311 09:12:26.495953 4840 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Mar 11 09:12:26 crc kubenswrapper[4840]: I0311 09:12:26.508708 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Mar 11 09:12:26 crc kubenswrapper[4840]: I0311 09:12:26.535941 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-bcc4b6f68-qjvc2"] Mar 11 09:12:26 crc kubenswrapper[4840]: I0311 09:12:26.537365 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-qjvc2" Mar 11 09:12:26 crc kubenswrapper[4840]: I0311 09:12:26.542705 4840 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Mar 11 09:12:26 crc kubenswrapper[4840]: I0311 09:12:26.557253 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-bcc4b6f68-qjvc2"] Mar 11 09:12:26 crc kubenswrapper[4840]: I0311 09:12:26.567545 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/76a3df80-6e5e-4618-8dc4-2b697c4f73b8-reloader\") pod \"frr-k8s-xp9gd\" (UID: \"76a3df80-6e5e-4618-8dc4-2b697c4f73b8\") " pod="metallb-system/frr-k8s-xp9gd" Mar 11 09:12:26 crc kubenswrapper[4840]: I0311 09:12:26.567619 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47v5k\" (UniqueName: \"kubernetes.io/projected/8389fd54-9d72-452c-b3e3-df6a7d0808ab-kube-api-access-47v5k\") pod \"frr-k8s-webhook-server-bcc4b6f68-qjvc2\" (UID: \"8389fd54-9d72-452c-b3e3-df6a7d0808ab\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-qjvc2" Mar 11 09:12:26 crc kubenswrapper[4840]: I0311 09:12:26.567669 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qg8pq\" (UniqueName: \"kubernetes.io/projected/76a3df80-6e5e-4618-8dc4-2b697c4f73b8-kube-api-access-qg8pq\") pod \"frr-k8s-xp9gd\" (UID: \"76a3df80-6e5e-4618-8dc4-2b697c4f73b8\") " pod="metallb-system/frr-k8s-xp9gd" Mar 11 09:12:26 crc kubenswrapper[4840]: I0311 09:12:26.567737 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/76a3df80-6e5e-4618-8dc4-2b697c4f73b8-metrics\") pod \"frr-k8s-xp9gd\" (UID: \"76a3df80-6e5e-4618-8dc4-2b697c4f73b8\") " 
pod="metallb-system/frr-k8s-xp9gd" Mar 11 09:12:26 crc kubenswrapper[4840]: I0311 09:12:26.567760 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8389fd54-9d72-452c-b3e3-df6a7d0808ab-cert\") pod \"frr-k8s-webhook-server-bcc4b6f68-qjvc2\" (UID: \"8389fd54-9d72-452c-b3e3-df6a7d0808ab\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-qjvc2" Mar 11 09:12:26 crc kubenswrapper[4840]: I0311 09:12:26.567789 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/76a3df80-6e5e-4618-8dc4-2b697c4f73b8-frr-sockets\") pod \"frr-k8s-xp9gd\" (UID: \"76a3df80-6e5e-4618-8dc4-2b697c4f73b8\") " pod="metallb-system/frr-k8s-xp9gd" Mar 11 09:12:26 crc kubenswrapper[4840]: I0311 09:12:26.567812 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/76a3df80-6e5e-4618-8dc4-2b697c4f73b8-metrics-certs\") pod \"frr-k8s-xp9gd\" (UID: \"76a3df80-6e5e-4618-8dc4-2b697c4f73b8\") " pod="metallb-system/frr-k8s-xp9gd" Mar 11 09:12:26 crc kubenswrapper[4840]: I0311 09:12:26.567832 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/76a3df80-6e5e-4618-8dc4-2b697c4f73b8-frr-startup\") pod \"frr-k8s-xp9gd\" (UID: \"76a3df80-6e5e-4618-8dc4-2b697c4f73b8\") " pod="metallb-system/frr-k8s-xp9gd" Mar 11 09:12:26 crc kubenswrapper[4840]: I0311 09:12:26.567856 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/76a3df80-6e5e-4618-8dc4-2b697c4f73b8-frr-conf\") pod \"frr-k8s-xp9gd\" (UID: \"76a3df80-6e5e-4618-8dc4-2b697c4f73b8\") " pod="metallb-system/frr-k8s-xp9gd" Mar 11 09:12:26 crc kubenswrapper[4840]: I0311 
09:12:26.616558 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-sn7q8"] Mar 11 09:12:26 crc kubenswrapper[4840]: I0311 09:12:26.617623 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-sn7q8" Mar 11 09:12:26 crc kubenswrapper[4840]: I0311 09:12:26.622173 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Mar 11 09:12:26 crc kubenswrapper[4840]: I0311 09:12:26.622927 4840 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Mar 11 09:12:26 crc kubenswrapper[4840]: I0311 09:12:26.623114 4840 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Mar 11 09:12:26 crc kubenswrapper[4840]: I0311 09:12:26.623255 4840 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-lbxb2" Mar 11 09:12:26 crc kubenswrapper[4840]: I0311 09:12:26.634024 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-7bb4cc7c98-6fr82"] Mar 11 09:12:26 crc kubenswrapper[4840]: I0311 09:12:26.635031 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-7bb4cc7c98-6fr82" Mar 11 09:12:26 crc kubenswrapper[4840]: I0311 09:12:26.653796 4840 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Mar 11 09:12:26 crc kubenswrapper[4840]: I0311 09:12:26.668879 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/76a3df80-6e5e-4618-8dc4-2b697c4f73b8-metrics-certs\") pod \"frr-k8s-xp9gd\" (UID: \"76a3df80-6e5e-4618-8dc4-2b697c4f73b8\") " pod="metallb-system/frr-k8s-xp9gd" Mar 11 09:12:26 crc kubenswrapper[4840]: I0311 09:12:26.668948 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/76a3df80-6e5e-4618-8dc4-2b697c4f73b8-frr-startup\") pod \"frr-k8s-xp9gd\" (UID: \"76a3df80-6e5e-4618-8dc4-2b697c4f73b8\") " pod="metallb-system/frr-k8s-xp9gd" Mar 11 09:12:26 crc kubenswrapper[4840]: I0311 09:12:26.668978 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppsqs\" (UniqueName: \"kubernetes.io/projected/6955be99-a9ae-4c67-a9c8-4e7fe6f909de-kube-api-access-ppsqs\") pod \"speaker-sn7q8\" (UID: \"6955be99-a9ae-4c67-a9c8-4e7fe6f909de\") " pod="metallb-system/speaker-sn7q8" Mar 11 09:12:26 crc kubenswrapper[4840]: I0311 09:12:26.668999 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/76a3df80-6e5e-4618-8dc4-2b697c4f73b8-frr-conf\") pod \"frr-k8s-xp9gd\" (UID: \"76a3df80-6e5e-4618-8dc4-2b697c4f73b8\") " pod="metallb-system/frr-k8s-xp9gd" Mar 11 09:12:26 crc kubenswrapper[4840]: I0311 09:12:26.669037 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/6955be99-a9ae-4c67-a9c8-4e7fe6f909de-metallb-excludel2\") pod 
\"speaker-sn7q8\" (UID: \"6955be99-a9ae-4c67-a9c8-4e7fe6f909de\") " pod="metallb-system/speaker-sn7q8" Mar 11 09:12:26 crc kubenswrapper[4840]: I0311 09:12:26.669064 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/76a3df80-6e5e-4618-8dc4-2b697c4f73b8-reloader\") pod \"frr-k8s-xp9gd\" (UID: \"76a3df80-6e5e-4618-8dc4-2b697c4f73b8\") " pod="metallb-system/frr-k8s-xp9gd" Mar 11 09:12:26 crc kubenswrapper[4840]: I0311 09:12:26.669094 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47v5k\" (UniqueName: \"kubernetes.io/projected/8389fd54-9d72-452c-b3e3-df6a7d0808ab-kube-api-access-47v5k\") pod \"frr-k8s-webhook-server-bcc4b6f68-qjvc2\" (UID: \"8389fd54-9d72-452c-b3e3-df6a7d0808ab\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-qjvc2" Mar 11 09:12:26 crc kubenswrapper[4840]: I0311 09:12:26.669114 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qg8pq\" (UniqueName: \"kubernetes.io/projected/76a3df80-6e5e-4618-8dc4-2b697c4f73b8-kube-api-access-qg8pq\") pod \"frr-k8s-xp9gd\" (UID: \"76a3df80-6e5e-4618-8dc4-2b697c4f73b8\") " pod="metallb-system/frr-k8s-xp9gd" Mar 11 09:12:26 crc kubenswrapper[4840]: I0311 09:12:26.669139 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8a61c288-882f-44a9-a206-2cefcfa66c5c-metrics-certs\") pod \"controller-7bb4cc7c98-6fr82\" (UID: \"8a61c288-882f-44a9-a206-2cefcfa66c5c\") " pod="metallb-system/controller-7bb4cc7c98-6fr82" Mar 11 09:12:26 crc kubenswrapper[4840]: I0311 09:12:26.669161 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6955be99-a9ae-4c67-a9c8-4e7fe6f909de-metrics-certs\") pod \"speaker-sn7q8\" (UID: 
\"6955be99-a9ae-4c67-a9c8-4e7fe6f909de\") " pod="metallb-system/speaker-sn7q8" Mar 11 09:12:26 crc kubenswrapper[4840]: I0311 09:12:26.669178 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8a61c288-882f-44a9-a206-2cefcfa66c5c-cert\") pod \"controller-7bb4cc7c98-6fr82\" (UID: \"8a61c288-882f-44a9-a206-2cefcfa66c5c\") " pod="metallb-system/controller-7bb4cc7c98-6fr82" Mar 11 09:12:26 crc kubenswrapper[4840]: I0311 09:12:26.669193 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/6955be99-a9ae-4c67-a9c8-4e7fe6f909de-memberlist\") pod \"speaker-sn7q8\" (UID: \"6955be99-a9ae-4c67-a9c8-4e7fe6f909de\") " pod="metallb-system/speaker-sn7q8" Mar 11 09:12:26 crc kubenswrapper[4840]: I0311 09:12:26.669212 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfbh4\" (UniqueName: \"kubernetes.io/projected/8a61c288-882f-44a9-a206-2cefcfa66c5c-kube-api-access-dfbh4\") pod \"controller-7bb4cc7c98-6fr82\" (UID: \"8a61c288-882f-44a9-a206-2cefcfa66c5c\") " pod="metallb-system/controller-7bb4cc7c98-6fr82" Mar 11 09:12:26 crc kubenswrapper[4840]: I0311 09:12:26.669230 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/76a3df80-6e5e-4618-8dc4-2b697c4f73b8-metrics\") pod \"frr-k8s-xp9gd\" (UID: \"76a3df80-6e5e-4618-8dc4-2b697c4f73b8\") " pod="metallb-system/frr-k8s-xp9gd" Mar 11 09:12:26 crc kubenswrapper[4840]: I0311 09:12:26.669247 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8389fd54-9d72-452c-b3e3-df6a7d0808ab-cert\") pod \"frr-k8s-webhook-server-bcc4b6f68-qjvc2\" (UID: \"8389fd54-9d72-452c-b3e3-df6a7d0808ab\") " 
pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-qjvc2" Mar 11 09:12:26 crc kubenswrapper[4840]: I0311 09:12:26.669274 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/76a3df80-6e5e-4618-8dc4-2b697c4f73b8-frr-sockets\") pod \"frr-k8s-xp9gd\" (UID: \"76a3df80-6e5e-4618-8dc4-2b697c4f73b8\") " pod="metallb-system/frr-k8s-xp9gd" Mar 11 09:12:26 crc kubenswrapper[4840]: I0311 09:12:26.669790 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/76a3df80-6e5e-4618-8dc4-2b697c4f73b8-frr-sockets\") pod \"frr-k8s-xp9gd\" (UID: \"76a3df80-6e5e-4618-8dc4-2b697c4f73b8\") " pod="metallb-system/frr-k8s-xp9gd" Mar 11 09:12:26 crc kubenswrapper[4840]: I0311 09:12:26.670007 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/76a3df80-6e5e-4618-8dc4-2b697c4f73b8-frr-conf\") pod \"frr-k8s-xp9gd\" (UID: \"76a3df80-6e5e-4618-8dc4-2b697c4f73b8\") " pod="metallb-system/frr-k8s-xp9gd" Mar 11 09:12:26 crc kubenswrapper[4840]: I0311 09:12:26.670346 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/76a3df80-6e5e-4618-8dc4-2b697c4f73b8-reloader\") pod \"frr-k8s-xp9gd\" (UID: \"76a3df80-6e5e-4618-8dc4-2b697c4f73b8\") " pod="metallb-system/frr-k8s-xp9gd" Mar 11 09:12:26 crc kubenswrapper[4840]: E0311 09:12:26.670677 4840 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Mar 11 09:12:26 crc kubenswrapper[4840]: I0311 09:12:26.670729 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/76a3df80-6e5e-4618-8dc4-2b697c4f73b8-metrics\") pod \"frr-k8s-xp9gd\" (UID: \"76a3df80-6e5e-4618-8dc4-2b697c4f73b8\") " pod="metallb-system/frr-k8s-xp9gd" Mar 11 
09:12:26 crc kubenswrapper[4840]: E0311 09:12:26.670785 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8389fd54-9d72-452c-b3e3-df6a7d0808ab-cert podName:8389fd54-9d72-452c-b3e3-df6a7d0808ab nodeName:}" failed. No retries permitted until 2026-03-11 09:12:27.170760192 +0000 UTC m=+945.836430007 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8389fd54-9d72-452c-b3e3-df6a7d0808ab-cert") pod "frr-k8s-webhook-server-bcc4b6f68-qjvc2" (UID: "8389fd54-9d72-452c-b3e3-df6a7d0808ab") : secret "frr-k8s-webhook-server-cert" not found Mar 11 09:12:26 crc kubenswrapper[4840]: I0311 09:12:26.671783 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/76a3df80-6e5e-4618-8dc4-2b697c4f73b8-frr-startup\") pod \"frr-k8s-xp9gd\" (UID: \"76a3df80-6e5e-4618-8dc4-2b697c4f73b8\") " pod="metallb-system/frr-k8s-xp9gd" Mar 11 09:12:26 crc kubenswrapper[4840]: I0311 09:12:26.679925 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-7bb4cc7c98-6fr82"] Mar 11 09:12:26 crc kubenswrapper[4840]: I0311 09:12:26.689598 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/76a3df80-6e5e-4618-8dc4-2b697c4f73b8-metrics-certs\") pod \"frr-k8s-xp9gd\" (UID: \"76a3df80-6e5e-4618-8dc4-2b697c4f73b8\") " pod="metallb-system/frr-k8s-xp9gd" Mar 11 09:12:26 crc kubenswrapper[4840]: I0311 09:12:26.699738 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47v5k\" (UniqueName: \"kubernetes.io/projected/8389fd54-9d72-452c-b3e3-df6a7d0808ab-kube-api-access-47v5k\") pod \"frr-k8s-webhook-server-bcc4b6f68-qjvc2\" (UID: \"8389fd54-9d72-452c-b3e3-df6a7d0808ab\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-qjvc2" Mar 11 09:12:26 crc kubenswrapper[4840]: I0311 09:12:26.727130 4840 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qg8pq\" (UniqueName: \"kubernetes.io/projected/76a3df80-6e5e-4618-8dc4-2b697c4f73b8-kube-api-access-qg8pq\") pod \"frr-k8s-xp9gd\" (UID: \"76a3df80-6e5e-4618-8dc4-2b697c4f73b8\") " pod="metallb-system/frr-k8s-xp9gd" Mar 11 09:12:26 crc kubenswrapper[4840]: I0311 09:12:26.770615 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/6955be99-a9ae-4c67-a9c8-4e7fe6f909de-metallb-excludel2\") pod \"speaker-sn7q8\" (UID: \"6955be99-a9ae-4c67-a9c8-4e7fe6f909de\") " pod="metallb-system/speaker-sn7q8" Mar 11 09:12:26 crc kubenswrapper[4840]: I0311 09:12:26.770695 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8a61c288-882f-44a9-a206-2cefcfa66c5c-metrics-certs\") pod \"controller-7bb4cc7c98-6fr82\" (UID: \"8a61c288-882f-44a9-a206-2cefcfa66c5c\") " pod="metallb-system/controller-7bb4cc7c98-6fr82" Mar 11 09:12:26 crc kubenswrapper[4840]: I0311 09:12:26.770718 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6955be99-a9ae-4c67-a9c8-4e7fe6f909de-metrics-certs\") pod \"speaker-sn7q8\" (UID: \"6955be99-a9ae-4c67-a9c8-4e7fe6f909de\") " pod="metallb-system/speaker-sn7q8" Mar 11 09:12:26 crc kubenswrapper[4840]: I0311 09:12:26.770738 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8a61c288-882f-44a9-a206-2cefcfa66c5c-cert\") pod \"controller-7bb4cc7c98-6fr82\" (UID: \"8a61c288-882f-44a9-a206-2cefcfa66c5c\") " pod="metallb-system/controller-7bb4cc7c98-6fr82" Mar 11 09:12:26 crc kubenswrapper[4840]: I0311 09:12:26.770753 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: 
\"kubernetes.io/secret/6955be99-a9ae-4c67-a9c8-4e7fe6f909de-memberlist\") pod \"speaker-sn7q8\" (UID: \"6955be99-a9ae-4c67-a9c8-4e7fe6f909de\") " pod="metallb-system/speaker-sn7q8" Mar 11 09:12:26 crc kubenswrapper[4840]: I0311 09:12:26.770775 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfbh4\" (UniqueName: \"kubernetes.io/projected/8a61c288-882f-44a9-a206-2cefcfa66c5c-kube-api-access-dfbh4\") pod \"controller-7bb4cc7c98-6fr82\" (UID: \"8a61c288-882f-44a9-a206-2cefcfa66c5c\") " pod="metallb-system/controller-7bb4cc7c98-6fr82" Mar 11 09:12:26 crc kubenswrapper[4840]: I0311 09:12:26.770832 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ppsqs\" (UniqueName: \"kubernetes.io/projected/6955be99-a9ae-4c67-a9c8-4e7fe6f909de-kube-api-access-ppsqs\") pod \"speaker-sn7q8\" (UID: \"6955be99-a9ae-4c67-a9c8-4e7fe6f909de\") " pod="metallb-system/speaker-sn7q8" Mar 11 09:12:26 crc kubenswrapper[4840]: E0311 09:12:26.771494 4840 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Mar 11 09:12:26 crc kubenswrapper[4840]: I0311 09:12:26.771625 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/6955be99-a9ae-4c67-a9c8-4e7fe6f909de-metallb-excludel2\") pod \"speaker-sn7q8\" (UID: \"6955be99-a9ae-4c67-a9c8-4e7fe6f909de\") " pod="metallb-system/speaker-sn7q8" Mar 11 09:12:26 crc kubenswrapper[4840]: E0311 09:12:26.771619 4840 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Mar 11 09:12:26 crc kubenswrapper[4840]: E0311 09:12:26.771807 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a61c288-882f-44a9-a206-2cefcfa66c5c-metrics-certs podName:8a61c288-882f-44a9-a206-2cefcfa66c5c nodeName:}" failed. 
No retries permitted until 2026-03-11 09:12:27.271775733 +0000 UTC m=+945.937445668 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8a61c288-882f-44a9-a206-2cefcfa66c5c-metrics-certs") pod "controller-7bb4cc7c98-6fr82" (UID: "8a61c288-882f-44a9-a206-2cefcfa66c5c") : secret "controller-certs-secret" not found Mar 11 09:12:26 crc kubenswrapper[4840]: E0311 09:12:26.771919 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6955be99-a9ae-4c67-a9c8-4e7fe6f909de-memberlist podName:6955be99-a9ae-4c67-a9c8-4e7fe6f909de nodeName:}" failed. No retries permitted until 2026-03-11 09:12:27.271895026 +0000 UTC m=+945.937565031 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/6955be99-a9ae-4c67-a9c8-4e7fe6f909de-memberlist") pod "speaker-sn7q8" (UID: "6955be99-a9ae-4c67-a9c8-4e7fe6f909de") : secret "metallb-memberlist" not found Mar 11 09:12:26 crc kubenswrapper[4840]: I0311 09:12:26.775870 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8a61c288-882f-44a9-a206-2cefcfa66c5c-cert\") pod \"controller-7bb4cc7c98-6fr82\" (UID: \"8a61c288-882f-44a9-a206-2cefcfa66c5c\") " pod="metallb-system/controller-7bb4cc7c98-6fr82" Mar 11 09:12:26 crc kubenswrapper[4840]: I0311 09:12:26.786451 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6955be99-a9ae-4c67-a9c8-4e7fe6f909de-metrics-certs\") pod \"speaker-sn7q8\" (UID: \"6955be99-a9ae-4c67-a9c8-4e7fe6f909de\") " pod="metallb-system/speaker-sn7q8" Mar 11 09:12:26 crc kubenswrapper[4840]: I0311 09:12:26.793817 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ppsqs\" (UniqueName: \"kubernetes.io/projected/6955be99-a9ae-4c67-a9c8-4e7fe6f909de-kube-api-access-ppsqs\") pod 
\"speaker-sn7q8\" (UID: \"6955be99-a9ae-4c67-a9c8-4e7fe6f909de\") " pod="metallb-system/speaker-sn7q8" Mar 11 09:12:26 crc kubenswrapper[4840]: I0311 09:12:26.794160 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfbh4\" (UniqueName: \"kubernetes.io/projected/8a61c288-882f-44a9-a206-2cefcfa66c5c-kube-api-access-dfbh4\") pod \"controller-7bb4cc7c98-6fr82\" (UID: \"8a61c288-882f-44a9-a206-2cefcfa66c5c\") " pod="metallb-system/controller-7bb4cc7c98-6fr82" Mar 11 09:12:26 crc kubenswrapper[4840]: I0311 09:12:26.816941 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-xp9gd" Mar 11 09:12:26 crc kubenswrapper[4840]: I0311 09:12:26.965624 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xp9gd" event={"ID":"76a3df80-6e5e-4618-8dc4-2b697c4f73b8","Type":"ContainerStarted","Data":"ec771a9edcee6c19c2d5c495ad22c31d40d55af99dd1c904c6a4d2168c846336"} Mar 11 09:12:27 crc kubenswrapper[4840]: I0311 09:12:27.177162 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8389fd54-9d72-452c-b3e3-df6a7d0808ab-cert\") pod \"frr-k8s-webhook-server-bcc4b6f68-qjvc2\" (UID: \"8389fd54-9d72-452c-b3e3-df6a7d0808ab\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-qjvc2" Mar 11 09:12:27 crc kubenswrapper[4840]: I0311 09:12:27.182686 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8389fd54-9d72-452c-b3e3-df6a7d0808ab-cert\") pod \"frr-k8s-webhook-server-bcc4b6f68-qjvc2\" (UID: \"8389fd54-9d72-452c-b3e3-df6a7d0808ab\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-qjvc2" Mar 11 09:12:27 crc kubenswrapper[4840]: I0311 09:12:27.278637 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/8a61c288-882f-44a9-a206-2cefcfa66c5c-metrics-certs\") pod \"controller-7bb4cc7c98-6fr82\" (UID: \"8a61c288-882f-44a9-a206-2cefcfa66c5c\") " pod="metallb-system/controller-7bb4cc7c98-6fr82" Mar 11 09:12:27 crc kubenswrapper[4840]: I0311 09:12:27.278707 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/6955be99-a9ae-4c67-a9c8-4e7fe6f909de-memberlist\") pod \"speaker-sn7q8\" (UID: \"6955be99-a9ae-4c67-a9c8-4e7fe6f909de\") " pod="metallb-system/speaker-sn7q8" Mar 11 09:12:27 crc kubenswrapper[4840]: E0311 09:12:27.278917 4840 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Mar 11 09:12:27 crc kubenswrapper[4840]: E0311 09:12:27.278978 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6955be99-a9ae-4c67-a9c8-4e7fe6f909de-memberlist podName:6955be99-a9ae-4c67-a9c8-4e7fe6f909de nodeName:}" failed. No retries permitted until 2026-03-11 09:12:28.278962058 +0000 UTC m=+946.944631873 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/6955be99-a9ae-4c67-a9c8-4e7fe6f909de-memberlist") pod "speaker-sn7q8" (UID: "6955be99-a9ae-4c67-a9c8-4e7fe6f909de") : secret "metallb-memberlist" not found Mar 11 09:12:27 crc kubenswrapper[4840]: I0311 09:12:27.283098 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8a61c288-882f-44a9-a206-2cefcfa66c5c-metrics-certs\") pod \"controller-7bb4cc7c98-6fr82\" (UID: \"8a61c288-882f-44a9-a206-2cefcfa66c5c\") " pod="metallb-system/controller-7bb4cc7c98-6fr82" Mar 11 09:12:27 crc kubenswrapper[4840]: I0311 09:12:27.462734 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-qjvc2" Mar 11 09:12:27 crc kubenswrapper[4840]: I0311 09:12:27.554058 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-7bb4cc7c98-6fr82" Mar 11 09:12:27 crc kubenswrapper[4840]: I0311 09:12:27.696035 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-bcc4b6f68-qjvc2"] Mar 11 09:12:27 crc kubenswrapper[4840]: W0311 09:12:27.708537 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8389fd54_9d72_452c_b3e3_df6a7d0808ab.slice/crio-9204bdcc9610c5ac7f58a75e69d221f65e0e14dd9b36ada6acb6fb144ac886cd WatchSource:0}: Error finding container 9204bdcc9610c5ac7f58a75e69d221f65e0e14dd9b36ada6acb6fb144ac886cd: Status 404 returned error can't find the container with id 9204bdcc9610c5ac7f58a75e69d221f65e0e14dd9b36ada6acb6fb144ac886cd Mar 11 09:12:27 crc kubenswrapper[4840]: I0311 09:12:27.981882 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-qjvc2" event={"ID":"8389fd54-9d72-452c-b3e3-df6a7d0808ab","Type":"ContainerStarted","Data":"9204bdcc9610c5ac7f58a75e69d221f65e0e14dd9b36ada6acb6fb144ac886cd"} Mar 11 09:12:27 crc kubenswrapper[4840]: I0311 09:12:27.987259 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-7bb4cc7c98-6fr82"] Mar 11 09:12:27 crc kubenswrapper[4840]: W0311 09:12:27.996657 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8a61c288_882f_44a9_a206_2cefcfa66c5c.slice/crio-8234565caa0fded637fc729103a7e8a99112a52def47795090e4138ec473f506 WatchSource:0}: Error finding container 8234565caa0fded637fc729103a7e8a99112a52def47795090e4138ec473f506: Status 404 returned error can't find the container with id 
8234565caa0fded637fc729103a7e8a99112a52def47795090e4138ec473f506 Mar 11 09:12:28 crc kubenswrapper[4840]: I0311 09:12:28.300507 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/6955be99-a9ae-4c67-a9c8-4e7fe6f909de-memberlist\") pod \"speaker-sn7q8\" (UID: \"6955be99-a9ae-4c67-a9c8-4e7fe6f909de\") " pod="metallb-system/speaker-sn7q8" Mar 11 09:12:28 crc kubenswrapper[4840]: I0311 09:12:28.308439 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/6955be99-a9ae-4c67-a9c8-4e7fe6f909de-memberlist\") pod \"speaker-sn7q8\" (UID: \"6955be99-a9ae-4c67-a9c8-4e7fe6f909de\") " pod="metallb-system/speaker-sn7q8" Mar 11 09:12:28 crc kubenswrapper[4840]: I0311 09:12:28.434174 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-sn7q8" Mar 11 09:12:28 crc kubenswrapper[4840]: W0311 09:12:28.458654 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6955be99_a9ae_4c67_a9c8_4e7fe6f909de.slice/crio-e840f10ee85b16d48dcebcddd135c4913d4bd8fec8b528f89ada01ba2febc666 WatchSource:0}: Error finding container e840f10ee85b16d48dcebcddd135c4913d4bd8fec8b528f89ada01ba2febc666: Status 404 returned error can't find the container with id e840f10ee85b16d48dcebcddd135c4913d4bd8fec8b528f89ada01ba2febc666 Mar 11 09:12:28 crc kubenswrapper[4840]: I0311 09:12:28.994951 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-7bb4cc7c98-6fr82" event={"ID":"8a61c288-882f-44a9-a206-2cefcfa66c5c","Type":"ContainerStarted","Data":"796ca3d4d2b135e3ed60b830e3e5874ae3db9378298f7004bfa87fd97206ceb9"} Mar 11 09:12:28 crc kubenswrapper[4840]: I0311 09:12:28.995451 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-7bb4cc7c98-6fr82" 
event={"ID":"8a61c288-882f-44a9-a206-2cefcfa66c5c","Type":"ContainerStarted","Data":"62619d5c136f02eaed87c4fe8f2e50d0609d014eee418844765f0fcca0e03a9d"} Mar 11 09:12:28 crc kubenswrapper[4840]: I0311 09:12:28.995477 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-7bb4cc7c98-6fr82" event={"ID":"8a61c288-882f-44a9-a206-2cefcfa66c5c","Type":"ContainerStarted","Data":"8234565caa0fded637fc729103a7e8a99112a52def47795090e4138ec473f506"} Mar 11 09:12:28 crc kubenswrapper[4840]: I0311 09:12:28.995777 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-7bb4cc7c98-6fr82" Mar 11 09:12:29 crc kubenswrapper[4840]: I0311 09:12:29.001245 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-sn7q8" event={"ID":"6955be99-a9ae-4c67-a9c8-4e7fe6f909de","Type":"ContainerStarted","Data":"b2c183ba00a9b533eb00381c69a82d466200a76b9c57ae757d833f467c982b32"} Mar 11 09:12:29 crc kubenswrapper[4840]: I0311 09:12:29.001310 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-sn7q8" event={"ID":"6955be99-a9ae-4c67-a9c8-4e7fe6f909de","Type":"ContainerStarted","Data":"e840f10ee85b16d48dcebcddd135c4913d4bd8fec8b528f89ada01ba2febc666"} Mar 11 09:12:29 crc kubenswrapper[4840]: I0311 09:12:29.028798 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-7bb4cc7c98-6fr82" podStartSLOduration=3.028775733 podStartE2EDuration="3.028775733s" podCreationTimestamp="2026-03-11 09:12:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:12:29.023977643 +0000 UTC m=+947.689647458" watchObservedRunningTime="2026-03-11 09:12:29.028775733 +0000 UTC m=+947.694445548" Mar 11 09:12:30 crc kubenswrapper[4840]: I0311 09:12:30.027449 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-sn7q8" 
event={"ID":"6955be99-a9ae-4c67-a9c8-4e7fe6f909de","Type":"ContainerStarted","Data":"21d5d963308ab7ab1af06458f975acedc74e7eab7fc35a30c3c510b73f7f042b"} Mar 11 09:12:30 crc kubenswrapper[4840]: I0311 09:12:30.052148 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-sn7q8" podStartSLOduration=4.052125638 podStartE2EDuration="4.052125638s" podCreationTimestamp="2026-03-11 09:12:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:12:30.050722643 +0000 UTC m=+948.716392458" watchObservedRunningTime="2026-03-11 09:12:30.052125638 +0000 UTC m=+948.717795453" Mar 11 09:12:31 crc kubenswrapper[4840]: I0311 09:12:31.037502 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-sn7q8" Mar 11 09:12:36 crc kubenswrapper[4840]: I0311 09:12:36.103724 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-qjvc2" event={"ID":"8389fd54-9d72-452c-b3e3-df6a7d0808ab","Type":"ContainerStarted","Data":"1bbefea7a9fcf296c88e4f5ceb2a8996a36f60a16a38a845db97a334c3edf585"} Mar 11 09:12:36 crc kubenswrapper[4840]: I0311 09:12:36.104489 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-qjvc2" Mar 11 09:12:36 crc kubenswrapper[4840]: I0311 09:12:36.105705 4840 generic.go:334] "Generic (PLEG): container finished" podID="76a3df80-6e5e-4618-8dc4-2b697c4f73b8" containerID="30e31aaf9c762352a56e87cea2affe17e0fb8813ecf8fe69e94de9086baaa3f0" exitCode=0 Mar 11 09:12:36 crc kubenswrapper[4840]: I0311 09:12:36.105744 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xp9gd" event={"ID":"76a3df80-6e5e-4618-8dc4-2b697c4f73b8","Type":"ContainerDied","Data":"30e31aaf9c762352a56e87cea2affe17e0fb8813ecf8fe69e94de9086baaa3f0"} Mar 11 09:12:36 crc kubenswrapper[4840]: 
I0311 09:12:36.124583 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-qjvc2" podStartSLOduration=2.727043595 podStartE2EDuration="10.124560548s" podCreationTimestamp="2026-03-11 09:12:26 +0000 UTC" firstStartedPulling="2026-03-11 09:12:27.710286474 +0000 UTC m=+946.375956289" lastFinishedPulling="2026-03-11 09:12:35.107803417 +0000 UTC m=+953.773473242" observedRunningTime="2026-03-11 09:12:36.120207929 +0000 UTC m=+954.785877744" watchObservedRunningTime="2026-03-11 09:12:36.124560548 +0000 UTC m=+954.790230363" Mar 11 09:12:37 crc kubenswrapper[4840]: I0311 09:12:37.116661 4840 generic.go:334] "Generic (PLEG): container finished" podID="76a3df80-6e5e-4618-8dc4-2b697c4f73b8" containerID="aeacbe401bb4f2f8d5306ae68016bad0f851092734393b782aa77ebe9618bc94" exitCode=0 Mar 11 09:12:37 crc kubenswrapper[4840]: I0311 09:12:37.116717 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xp9gd" event={"ID":"76a3df80-6e5e-4618-8dc4-2b697c4f73b8","Type":"ContainerDied","Data":"aeacbe401bb4f2f8d5306ae68016bad0f851092734393b782aa77ebe9618bc94"} Mar 11 09:12:38 crc kubenswrapper[4840]: I0311 09:12:38.124626 4840 generic.go:334] "Generic (PLEG): container finished" podID="76a3df80-6e5e-4618-8dc4-2b697c4f73b8" containerID="9bcf975287a514e718aff8fa9df09e0af3c0690ea33f2cdb4b505b17c5e7d5eb" exitCode=0 Mar 11 09:12:38 crc kubenswrapper[4840]: I0311 09:12:38.124706 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xp9gd" event={"ID":"76a3df80-6e5e-4618-8dc4-2b697c4f73b8","Type":"ContainerDied","Data":"9bcf975287a514e718aff8fa9df09e0af3c0690ea33f2cdb4b505b17c5e7d5eb"} Mar 11 09:12:38 crc kubenswrapper[4840]: I0311 09:12:38.437925 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-sn7q8" Mar 11 09:12:39 crc kubenswrapper[4840]: I0311 09:12:39.142800 4840 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="metallb-system/frr-k8s-xp9gd" event={"ID":"76a3df80-6e5e-4618-8dc4-2b697c4f73b8","Type":"ContainerStarted","Data":"544b9dd1bd4e3c8f55a8b3097601e4f72618f3175ca859abf9b206fd45bbb110"} Mar 11 09:12:39 crc kubenswrapper[4840]: I0311 09:12:39.143354 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xp9gd" event={"ID":"76a3df80-6e5e-4618-8dc4-2b697c4f73b8","Type":"ContainerStarted","Data":"8d1367fbdeffed5352a565c1afcf01ea3aae70fdda8c0e77a011ae5dab55fd2e"} Mar 11 09:12:39 crc kubenswrapper[4840]: I0311 09:12:39.143374 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xp9gd" event={"ID":"76a3df80-6e5e-4618-8dc4-2b697c4f73b8","Type":"ContainerStarted","Data":"cdcac22905134a435d24f71c485f2ed43194b5acb6e35b2d3d60fcbb473ac30e"} Mar 11 09:12:39 crc kubenswrapper[4840]: I0311 09:12:39.143385 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xp9gd" event={"ID":"76a3df80-6e5e-4618-8dc4-2b697c4f73b8","Type":"ContainerStarted","Data":"7546c67687590509418bfc675f3eef378ec2a66c76af1ddc3df7e4cc1c01ff48"} Mar 11 09:12:39 crc kubenswrapper[4840]: I0311 09:12:39.143395 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xp9gd" event={"ID":"76a3df80-6e5e-4618-8dc4-2b697c4f73b8","Type":"ContainerStarted","Data":"7b37d8e00511e0f9964bf35815aeaa467590802ac35275eb77f24e21aaecba4b"} Mar 11 09:12:39 crc kubenswrapper[4840]: I0311 09:12:39.963065 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5v5ks9"] Mar 11 09:12:39 crc kubenswrapper[4840]: I0311 09:12:39.964831 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5v5ks9" Mar 11 09:12:39 crc kubenswrapper[4840]: I0311 09:12:39.966982 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Mar 11 09:12:39 crc kubenswrapper[4840]: I0311 09:12:39.975827 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5v5ks9"] Mar 11 09:12:39 crc kubenswrapper[4840]: I0311 09:12:39.979493 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fdc33192-4b69-4f48-95bd-8b1aa1cc7135-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5v5ks9\" (UID: \"fdc33192-4b69-4f48-95bd-8b1aa1cc7135\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5v5ks9" Mar 11 09:12:39 crc kubenswrapper[4840]: I0311 09:12:39.979565 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2d48r\" (UniqueName: \"kubernetes.io/projected/fdc33192-4b69-4f48-95bd-8b1aa1cc7135-kube-api-access-2d48r\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5v5ks9\" (UID: \"fdc33192-4b69-4f48-95bd-8b1aa1cc7135\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5v5ks9" Mar 11 09:12:39 crc kubenswrapper[4840]: I0311 09:12:39.979645 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fdc33192-4b69-4f48-95bd-8b1aa1cc7135-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5v5ks9\" (UID: \"fdc33192-4b69-4f48-95bd-8b1aa1cc7135\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5v5ks9" Mar 11 09:12:40 crc kubenswrapper[4840]: 
I0311 09:12:40.082143 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fdc33192-4b69-4f48-95bd-8b1aa1cc7135-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5v5ks9\" (UID: \"fdc33192-4b69-4f48-95bd-8b1aa1cc7135\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5v5ks9" Mar 11 09:12:40 crc kubenswrapper[4840]: I0311 09:12:40.082205 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2d48r\" (UniqueName: \"kubernetes.io/projected/fdc33192-4b69-4f48-95bd-8b1aa1cc7135-kube-api-access-2d48r\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5v5ks9\" (UID: \"fdc33192-4b69-4f48-95bd-8b1aa1cc7135\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5v5ks9" Mar 11 09:12:40 crc kubenswrapper[4840]: I0311 09:12:40.082308 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fdc33192-4b69-4f48-95bd-8b1aa1cc7135-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5v5ks9\" (UID: \"fdc33192-4b69-4f48-95bd-8b1aa1cc7135\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5v5ks9" Mar 11 09:12:40 crc kubenswrapper[4840]: I0311 09:12:40.083185 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fdc33192-4b69-4f48-95bd-8b1aa1cc7135-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5v5ks9\" (UID: \"fdc33192-4b69-4f48-95bd-8b1aa1cc7135\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5v5ks9" Mar 11 09:12:40 crc kubenswrapper[4840]: I0311 09:12:40.083458 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/fdc33192-4b69-4f48-95bd-8b1aa1cc7135-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5v5ks9\" (UID: \"fdc33192-4b69-4f48-95bd-8b1aa1cc7135\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5v5ks9" Mar 11 09:12:40 crc kubenswrapper[4840]: I0311 09:12:40.104033 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2d48r\" (UniqueName: \"kubernetes.io/projected/fdc33192-4b69-4f48-95bd-8b1aa1cc7135-kube-api-access-2d48r\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5v5ks9\" (UID: \"fdc33192-4b69-4f48-95bd-8b1aa1cc7135\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5v5ks9" Mar 11 09:12:40 crc kubenswrapper[4840]: I0311 09:12:40.154981 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-xp9gd" event={"ID":"76a3df80-6e5e-4618-8dc4-2b697c4f73b8","Type":"ContainerStarted","Data":"2c1eedd1931701fcd91e04a6e8ea799ddd5dabf064543c078d6643811914ad60"} Mar 11 09:12:40 crc kubenswrapper[4840]: I0311 09:12:40.156235 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-xp9gd" Mar 11 09:12:40 crc kubenswrapper[4840]: I0311 09:12:40.183858 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-xp9gd" podStartSLOduration=6.050589791 podStartE2EDuration="14.183833735s" podCreationTimestamp="2026-03-11 09:12:26 +0000 UTC" firstStartedPulling="2026-03-11 09:12:26.954688125 +0000 UTC m=+945.620357940" lastFinishedPulling="2026-03-11 09:12:35.087932069 +0000 UTC m=+953.753601884" observedRunningTime="2026-03-11 09:12:40.179348253 +0000 UTC m=+958.845018068" watchObservedRunningTime="2026-03-11 09:12:40.183833735 +0000 UTC m=+958.849503550" Mar 11 09:12:40 crc kubenswrapper[4840]: I0311 09:12:40.288428 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5v5ks9" Mar 11 09:12:40 crc kubenswrapper[4840]: I0311 09:12:40.671566 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5v5ks9"] Mar 11 09:12:40 crc kubenswrapper[4840]: W0311 09:12:40.677302 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfdc33192_4b69_4f48_95bd_8b1aa1cc7135.slice/crio-a8b5faf0974b2d4a98ae9474ef569c1b958b14751a109f155e2208f67bd58b27 WatchSource:0}: Error finding container a8b5faf0974b2d4a98ae9474ef569c1b958b14751a109f155e2208f67bd58b27: Status 404 returned error can't find the container with id a8b5faf0974b2d4a98ae9474ef569c1b958b14751a109f155e2208f67bd58b27 Mar 11 09:12:41 crc kubenswrapper[4840]: I0311 09:12:41.168306 4840 generic.go:334] "Generic (PLEG): container finished" podID="fdc33192-4b69-4f48-95bd-8b1aa1cc7135" containerID="8f9fc4c88660b5697eee8814e8ca19890362c3e1556329932bb1d3c02c9c6b8a" exitCode=0 Mar 11 09:12:41 crc kubenswrapper[4840]: I0311 09:12:41.168456 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5v5ks9" event={"ID":"fdc33192-4b69-4f48-95bd-8b1aa1cc7135","Type":"ContainerDied","Data":"8f9fc4c88660b5697eee8814e8ca19890362c3e1556329932bb1d3c02c9c6b8a"} Mar 11 09:12:41 crc kubenswrapper[4840]: I0311 09:12:41.168912 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5v5ks9" event={"ID":"fdc33192-4b69-4f48-95bd-8b1aa1cc7135","Type":"ContainerStarted","Data":"a8b5faf0974b2d4a98ae9474ef569c1b958b14751a109f155e2208f67bd58b27"} Mar 11 09:12:41 crc kubenswrapper[4840]: I0311 09:12:41.817531 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="metallb-system/frr-k8s-xp9gd"
Mar 11 09:12:41 crc kubenswrapper[4840]: I0311 09:12:41.859710 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-xp9gd"
Mar 11 09:12:42 crc kubenswrapper[4840]: I0311 09:12:42.734022 4840 scope.go:117] "RemoveContainer" containerID="0b7a1a6e54452669c82f91cbe0b81febfe1a6fdf13e22497f30a83e7972ff16c"
Mar 11 09:12:45 crc kubenswrapper[4840]: I0311 09:12:45.219831 4840 generic.go:334] "Generic (PLEG): container finished" podID="fdc33192-4b69-4f48-95bd-8b1aa1cc7135" containerID="f3e9b54e40e3bba6af93206c3f501c9e55128e41f857c7ec958ce919787f21d8" exitCode=0
Mar 11 09:12:45 crc kubenswrapper[4840]: I0311 09:12:45.219892 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5v5ks9" event={"ID":"fdc33192-4b69-4f48-95bd-8b1aa1cc7135","Type":"ContainerDied","Data":"f3e9b54e40e3bba6af93206c3f501c9e55128e41f857c7ec958ce919787f21d8"}
Mar 11 09:12:46 crc kubenswrapper[4840]: I0311 09:12:46.230137 4840 generic.go:334] "Generic (PLEG): container finished" podID="fdc33192-4b69-4f48-95bd-8b1aa1cc7135" containerID="495b3afe8f52e49c1baffa06b0a1b716c9a24e8c9b3bf3862816d29b2d3834a4" exitCode=0
Mar 11 09:12:46 crc kubenswrapper[4840]: I0311 09:12:46.230244 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5v5ks9" event={"ID":"fdc33192-4b69-4f48-95bd-8b1aa1cc7135","Type":"ContainerDied","Data":"495b3afe8f52e49c1baffa06b0a1b716c9a24e8c9b3bf3862816d29b2d3834a4"}
Mar 11 09:12:47 crc kubenswrapper[4840]: I0311 09:12:47.472158 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-qjvc2"
Mar 11 09:12:47 crc kubenswrapper[4840]: I0311 09:12:47.488365 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5v5ks9"
Mar 11 09:12:47 crc kubenswrapper[4840]: I0311 09:12:47.491187 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fdc33192-4b69-4f48-95bd-8b1aa1cc7135-util\") pod \"fdc33192-4b69-4f48-95bd-8b1aa1cc7135\" (UID: \"fdc33192-4b69-4f48-95bd-8b1aa1cc7135\") "
Mar 11 09:12:47 crc kubenswrapper[4840]: I0311 09:12:47.491266 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fdc33192-4b69-4f48-95bd-8b1aa1cc7135-bundle\") pod \"fdc33192-4b69-4f48-95bd-8b1aa1cc7135\" (UID: \"fdc33192-4b69-4f48-95bd-8b1aa1cc7135\") "
Mar 11 09:12:47 crc kubenswrapper[4840]: I0311 09:12:47.491302 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d48r\" (UniqueName: \"kubernetes.io/projected/fdc33192-4b69-4f48-95bd-8b1aa1cc7135-kube-api-access-2d48r\") pod \"fdc33192-4b69-4f48-95bd-8b1aa1cc7135\" (UID: \"fdc33192-4b69-4f48-95bd-8b1aa1cc7135\") "
Mar 11 09:12:47 crc kubenswrapper[4840]: I0311 09:12:47.496206 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fdc33192-4b69-4f48-95bd-8b1aa1cc7135-bundle" (OuterVolumeSpecName: "bundle") pod "fdc33192-4b69-4f48-95bd-8b1aa1cc7135" (UID: "fdc33192-4b69-4f48-95bd-8b1aa1cc7135"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 11 09:12:47 crc kubenswrapper[4840]: I0311 09:12:47.505842 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdc33192-4b69-4f48-95bd-8b1aa1cc7135-kube-api-access-2d48r" (OuterVolumeSpecName: "kube-api-access-2d48r") pod "fdc33192-4b69-4f48-95bd-8b1aa1cc7135" (UID: "fdc33192-4b69-4f48-95bd-8b1aa1cc7135"). InnerVolumeSpecName "kube-api-access-2d48r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 11 09:12:47 crc kubenswrapper[4840]: I0311 09:12:47.525299 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fdc33192-4b69-4f48-95bd-8b1aa1cc7135-util" (OuterVolumeSpecName: "util") pod "fdc33192-4b69-4f48-95bd-8b1aa1cc7135" (UID: "fdc33192-4b69-4f48-95bd-8b1aa1cc7135"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 11 09:12:47 crc kubenswrapper[4840]: I0311 09:12:47.561564 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-7bb4cc7c98-6fr82"
Mar 11 09:12:47 crc kubenswrapper[4840]: I0311 09:12:47.593444 4840 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fdc33192-4b69-4f48-95bd-8b1aa1cc7135-util\") on node \"crc\" DevicePath \"\""
Mar 11 09:12:47 crc kubenswrapper[4840]: I0311 09:12:47.593546 4840 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fdc33192-4b69-4f48-95bd-8b1aa1cc7135-bundle\") on node \"crc\" DevicePath \"\""
Mar 11 09:12:47 crc kubenswrapper[4840]: I0311 09:12:47.593601 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d48r\" (UniqueName: \"kubernetes.io/projected/fdc33192-4b69-4f48-95bd-8b1aa1cc7135-kube-api-access-2d48r\") on node \"crc\" DevicePath \"\""
Mar 11 09:12:48 crc kubenswrapper[4840]: I0311 09:12:48.246293 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5v5ks9" event={"ID":"fdc33192-4b69-4f48-95bd-8b1aa1cc7135","Type":"ContainerDied","Data":"a8b5faf0974b2d4a98ae9474ef569c1b958b14751a109f155e2208f67bd58b27"}
Mar 11 09:12:48 crc kubenswrapper[4840]: I0311 09:12:48.246632 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a8b5faf0974b2d4a98ae9474ef569c1b958b14751a109f155e2208f67bd58b27"
Mar 11 09:12:48 crc kubenswrapper[4840]: I0311 09:12:48.246378 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5v5ks9"
Mar 11 09:12:53 crc kubenswrapper[4840]: I0311 09:12:53.489332 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-hqml9"]
Mar 11 09:12:53 crc kubenswrapper[4840]: E0311 09:12:53.490318 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdc33192-4b69-4f48-95bd-8b1aa1cc7135" containerName="extract"
Mar 11 09:12:53 crc kubenswrapper[4840]: I0311 09:12:53.490334 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdc33192-4b69-4f48-95bd-8b1aa1cc7135" containerName="extract"
Mar 11 09:12:53 crc kubenswrapper[4840]: E0311 09:12:53.490352 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdc33192-4b69-4f48-95bd-8b1aa1cc7135" containerName="pull"
Mar 11 09:12:53 crc kubenswrapper[4840]: I0311 09:12:53.490359 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdc33192-4b69-4f48-95bd-8b1aa1cc7135" containerName="pull"
Mar 11 09:12:53 crc kubenswrapper[4840]: E0311 09:12:53.490374 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdc33192-4b69-4f48-95bd-8b1aa1cc7135" containerName="util"
Mar 11 09:12:53 crc kubenswrapper[4840]: I0311 09:12:53.490382 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdc33192-4b69-4f48-95bd-8b1aa1cc7135" containerName="util"
Mar 11 09:12:53 crc kubenswrapper[4840]: I0311 09:12:53.490531 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdc33192-4b69-4f48-95bd-8b1aa1cc7135" containerName="extract"
Mar 11 09:12:53 crc kubenswrapper[4840]: I0311 09:12:53.491161 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-hqml9"
Mar 11 09:12:53 crc kubenswrapper[4840]: I0311 09:12:53.494325 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt"
Mar 11 09:12:53 crc kubenswrapper[4840]: I0311 09:12:53.495072 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt"
Mar 11 09:12:53 crc kubenswrapper[4840]: I0311 09:12:53.495864 4840 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager-operator"/"cert-manager-operator-controller-manager-dockercfg-8fhcf"
Mar 11 09:12:53 crc kubenswrapper[4840]: I0311 09:12:53.510606 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-hqml9"]
Mar 11 09:12:53 crc kubenswrapper[4840]: I0311 09:12:53.566547 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tg4p2\" (UniqueName: \"kubernetes.io/projected/c1a0a2fa-23e3-4b56-aa93-4119cf5c82cf-kube-api-access-tg4p2\") pod \"cert-manager-operator-controller-manager-66c8bdd694-hqml9\" (UID: \"c1a0a2fa-23e3-4b56-aa93-4119cf5c82cf\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-hqml9"
Mar 11 09:12:53 crc kubenswrapper[4840]: I0311 09:12:53.566616 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c1a0a2fa-23e3-4b56-aa93-4119cf5c82cf-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-hqml9\" (UID: \"c1a0a2fa-23e3-4b56-aa93-4119cf5c82cf\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-hqml9"
Mar 11 09:12:53 crc kubenswrapper[4840]: I0311 09:12:53.671059 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tg4p2\" (UniqueName: \"kubernetes.io/projected/c1a0a2fa-23e3-4b56-aa93-4119cf5c82cf-kube-api-access-tg4p2\") pod \"cert-manager-operator-controller-manager-66c8bdd694-hqml9\" (UID: \"c1a0a2fa-23e3-4b56-aa93-4119cf5c82cf\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-hqml9"
Mar 11 09:12:53 crc kubenswrapper[4840]: I0311 09:12:53.671143 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c1a0a2fa-23e3-4b56-aa93-4119cf5c82cf-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-hqml9\" (UID: \"c1a0a2fa-23e3-4b56-aa93-4119cf5c82cf\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-hqml9"
Mar 11 09:12:53 crc kubenswrapper[4840]: I0311 09:12:53.671709 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c1a0a2fa-23e3-4b56-aa93-4119cf5c82cf-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-hqml9\" (UID: \"c1a0a2fa-23e3-4b56-aa93-4119cf5c82cf\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-hqml9"
Mar 11 09:12:53 crc kubenswrapper[4840]: I0311 09:12:53.691069 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tg4p2\" (UniqueName: \"kubernetes.io/projected/c1a0a2fa-23e3-4b56-aa93-4119cf5c82cf-kube-api-access-tg4p2\") pod \"cert-manager-operator-controller-manager-66c8bdd694-hqml9\" (UID: \"c1a0a2fa-23e3-4b56-aa93-4119cf5c82cf\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-hqml9"
Mar 11 09:12:53 crc kubenswrapper[4840]: I0311 09:12:53.809764 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-hqml9"
Mar 11 09:12:54 crc kubenswrapper[4840]: I0311 09:12:54.298568 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-hqml9"]
Mar 11 09:12:54 crc kubenswrapper[4840]: W0311 09:12:54.306123 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc1a0a2fa_23e3_4b56_aa93_4119cf5c82cf.slice/crio-d3ace5217c2af1274fa0bd936fa75641f666d87b6a686ba04de3ef1aff963288 WatchSource:0}: Error finding container d3ace5217c2af1274fa0bd936fa75641f666d87b6a686ba04de3ef1aff963288: Status 404 returned error can't find the container with id d3ace5217c2af1274fa0bd936fa75641f666d87b6a686ba04de3ef1aff963288
Mar 11 09:12:55 crc kubenswrapper[4840]: I0311 09:12:55.294569 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-hqml9" event={"ID":"c1a0a2fa-23e3-4b56-aa93-4119cf5c82cf","Type":"ContainerStarted","Data":"d3ace5217c2af1274fa0bd936fa75641f666d87b6a686ba04de3ef1aff963288"}
Mar 11 09:12:56 crc kubenswrapper[4840]: I0311 09:12:56.829842 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-xp9gd"
Mar 11 09:12:57 crc kubenswrapper[4840]: I0311 09:12:57.445857 4840 patch_prober.go:28] interesting pod/machine-config-daemon-brtht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 11 09:12:57 crc kubenswrapper[4840]: I0311 09:12:57.445932 4840 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 11 09:12:58 crc kubenswrapper[4840]: I0311 09:12:58.316304 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-hqml9" event={"ID":"c1a0a2fa-23e3-4b56-aa93-4119cf5c82cf","Type":"ContainerStarted","Data":"1186f5d85d3652fccf05db1f2471a4e74839e960318f4e2135f5d144b3106518"}
Mar 11 09:12:58 crc kubenswrapper[4840]: I0311 09:12:58.338977 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-hqml9" podStartSLOduration=1.7486044139999999 podStartE2EDuration="5.338956506s" podCreationTimestamp="2026-03-11 09:12:53 +0000 UTC" firstStartedPulling="2026-03-11 09:12:54.308756826 +0000 UTC m=+972.974426641" lastFinishedPulling="2026-03-11 09:12:57.899108918 +0000 UTC m=+976.564778733" observedRunningTime="2026-03-11 09:12:58.337270794 +0000 UTC m=+977.002940629" watchObservedRunningTime="2026-03-11 09:12:58.338956506 +0000 UTC m=+977.004626321"
Mar 11 09:13:02 crc kubenswrapper[4840]: I0311 09:13:02.334930 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-hmd2q"]
Mar 11 09:13:02 crc kubenswrapper[4840]: I0311 09:13:02.336085 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-hmd2q"
Mar 11 09:13:02 crc kubenswrapper[4840]: I0311 09:13:02.340042 4840 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-dx4cg"
Mar 11 09:13:02 crc kubenswrapper[4840]: I0311 09:13:02.340202 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt"
Mar 11 09:13:02 crc kubenswrapper[4840]: I0311 09:13:02.341635 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt"
Mar 11 09:13:02 crc kubenswrapper[4840]: I0311 09:13:02.355305 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-hmd2q"]
Mar 11 09:13:02 crc kubenswrapper[4840]: I0311 09:13:02.536830 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7d51cb99-73ad-47da-88f0-906a4fb160c4-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-hmd2q\" (UID: \"7d51cb99-73ad-47da-88f0-906a4fb160c4\") " pod="cert-manager/cert-manager-webhook-6888856db4-hmd2q"
Mar 11 09:13:02 crc kubenswrapper[4840]: I0311 09:13:02.536876 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhtw5\" (UniqueName: \"kubernetes.io/projected/7d51cb99-73ad-47da-88f0-906a4fb160c4-kube-api-access-jhtw5\") pod \"cert-manager-webhook-6888856db4-hmd2q\" (UID: \"7d51cb99-73ad-47da-88f0-906a4fb160c4\") " pod="cert-manager/cert-manager-webhook-6888856db4-hmd2q"
Mar 11 09:13:02 crc kubenswrapper[4840]: I0311 09:13:02.638103 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhtw5\" (UniqueName: \"kubernetes.io/projected/7d51cb99-73ad-47da-88f0-906a4fb160c4-kube-api-access-jhtw5\") pod \"cert-manager-webhook-6888856db4-hmd2q\" (UID: \"7d51cb99-73ad-47da-88f0-906a4fb160c4\") " pod="cert-manager/cert-manager-webhook-6888856db4-hmd2q"
Mar 11 09:13:02 crc kubenswrapper[4840]: I0311 09:13:02.638601 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7d51cb99-73ad-47da-88f0-906a4fb160c4-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-hmd2q\" (UID: \"7d51cb99-73ad-47da-88f0-906a4fb160c4\") " pod="cert-manager/cert-manager-webhook-6888856db4-hmd2q"
Mar 11 09:13:02 crc kubenswrapper[4840]: I0311 09:13:02.663577 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7d51cb99-73ad-47da-88f0-906a4fb160c4-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-hmd2q\" (UID: \"7d51cb99-73ad-47da-88f0-906a4fb160c4\") " pod="cert-manager/cert-manager-webhook-6888856db4-hmd2q"
Mar 11 09:13:02 crc kubenswrapper[4840]: I0311 09:13:02.664774 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhtw5\" (UniqueName: \"kubernetes.io/projected/7d51cb99-73ad-47da-88f0-906a4fb160c4-kube-api-access-jhtw5\") pod \"cert-manager-webhook-6888856db4-hmd2q\" (UID: \"7d51cb99-73ad-47da-88f0-906a4fb160c4\") " pod="cert-manager/cert-manager-webhook-6888856db4-hmd2q"
Mar 11 09:13:02 crc kubenswrapper[4840]: I0311 09:13:02.953362 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-hmd2q"
Mar 11 09:13:03 crc kubenswrapper[4840]: I0311 09:13:03.078984 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-6gktm"]
Mar 11 09:13:03 crc kubenswrapper[4840]: I0311 09:13:03.080517 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-6gktm"
Mar 11 09:13:03 crc kubenswrapper[4840]: I0311 09:13:03.082674 4840 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-pg98t"
Mar 11 09:13:03 crc kubenswrapper[4840]: I0311 09:13:03.096791 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-6gktm"]
Mar 11 09:13:03 crc kubenswrapper[4840]: I0311 09:13:03.247344 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/494dd1d0-27c8-4680-9012-2e598d171b99-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-6gktm\" (UID: \"494dd1d0-27c8-4680-9012-2e598d171b99\") " pod="cert-manager/cert-manager-cainjector-5545bd876-6gktm"
Mar 11 09:13:03 crc kubenswrapper[4840]: I0311 09:13:03.247715 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlh9g\" (UniqueName: \"kubernetes.io/projected/494dd1d0-27c8-4680-9012-2e598d171b99-kube-api-access-xlh9g\") pod \"cert-manager-cainjector-5545bd876-6gktm\" (UID: \"494dd1d0-27c8-4680-9012-2e598d171b99\") " pod="cert-manager/cert-manager-cainjector-5545bd876-6gktm"
Mar 11 09:13:03 crc kubenswrapper[4840]: I0311 09:13:03.308620 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-hmd2q"]
Mar 11 09:13:03 crc kubenswrapper[4840]: I0311 09:13:03.350239 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlh9g\" (UniqueName: \"kubernetes.io/projected/494dd1d0-27c8-4680-9012-2e598d171b99-kube-api-access-xlh9g\") pod \"cert-manager-cainjector-5545bd876-6gktm\" (UID: \"494dd1d0-27c8-4680-9012-2e598d171b99\") " pod="cert-manager/cert-manager-cainjector-5545bd876-6gktm"
Mar 11 09:13:03 crc kubenswrapper[4840]: I0311 09:13:03.350781 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/494dd1d0-27c8-4680-9012-2e598d171b99-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-6gktm\" (UID: \"494dd1d0-27c8-4680-9012-2e598d171b99\") " pod="cert-manager/cert-manager-cainjector-5545bd876-6gktm"
Mar 11 09:13:03 crc kubenswrapper[4840]: I0311 09:13:03.352327 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-hmd2q" event={"ID":"7d51cb99-73ad-47da-88f0-906a4fb160c4","Type":"ContainerStarted","Data":"6b31bd74998bdf97d53d5999b73583ba50d55ee2930346bf97ff0694ce315604"}
Mar 11 09:13:03 crc kubenswrapper[4840]: I0311 09:13:03.372751 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/494dd1d0-27c8-4680-9012-2e598d171b99-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-6gktm\" (UID: \"494dd1d0-27c8-4680-9012-2e598d171b99\") " pod="cert-manager/cert-manager-cainjector-5545bd876-6gktm"
Mar 11 09:13:03 crc kubenswrapper[4840]: I0311 09:13:03.374434 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlh9g\" (UniqueName: \"kubernetes.io/projected/494dd1d0-27c8-4680-9012-2e598d171b99-kube-api-access-xlh9g\") pod \"cert-manager-cainjector-5545bd876-6gktm\" (UID: \"494dd1d0-27c8-4680-9012-2e598d171b99\") " pod="cert-manager/cert-manager-cainjector-5545bd876-6gktm"
Mar 11 09:13:03 crc kubenswrapper[4840]: I0311 09:13:03.396529 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-6gktm"
Mar 11 09:13:03 crc kubenswrapper[4840]: I0311 09:13:03.512328 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-nk8xz"]
Mar 11 09:13:03 crc kubenswrapper[4840]: I0311 09:13:03.513578 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nk8xz"
Mar 11 09:13:03 crc kubenswrapper[4840]: I0311 09:13:03.526777 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nk8xz"]
Mar 11 09:13:03 crc kubenswrapper[4840]: I0311 09:13:03.661049 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ebe7653-5dbc-4f86-af79-ce2d25726aa9-utilities\") pod \"certified-operators-nk8xz\" (UID: \"6ebe7653-5dbc-4f86-af79-ce2d25726aa9\") " pod="openshift-marketplace/certified-operators-nk8xz"
Mar 11 09:13:03 crc kubenswrapper[4840]: I0311 09:13:03.661146 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ebe7653-5dbc-4f86-af79-ce2d25726aa9-catalog-content\") pod \"certified-operators-nk8xz\" (UID: \"6ebe7653-5dbc-4f86-af79-ce2d25726aa9\") " pod="openshift-marketplace/certified-operators-nk8xz"
Mar 11 09:13:03 crc kubenswrapper[4840]: I0311 09:13:03.661176 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8f92\" (UniqueName: \"kubernetes.io/projected/6ebe7653-5dbc-4f86-af79-ce2d25726aa9-kube-api-access-w8f92\") pod \"certified-operators-nk8xz\" (UID: \"6ebe7653-5dbc-4f86-af79-ce2d25726aa9\") " pod="openshift-marketplace/certified-operators-nk8xz"
Mar 11 09:13:03 crc kubenswrapper[4840]: I0311 09:13:03.762118 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ebe7653-5dbc-4f86-af79-ce2d25726aa9-utilities\") pod \"certified-operators-nk8xz\" (UID: \"6ebe7653-5dbc-4f86-af79-ce2d25726aa9\") " pod="openshift-marketplace/certified-operators-nk8xz"
Mar 11 09:13:03 crc kubenswrapper[4840]: I0311 09:13:03.762173 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ebe7653-5dbc-4f86-af79-ce2d25726aa9-catalog-content\") pod \"certified-operators-nk8xz\" (UID: \"6ebe7653-5dbc-4f86-af79-ce2d25726aa9\") " pod="openshift-marketplace/certified-operators-nk8xz"
Mar 11 09:13:03 crc kubenswrapper[4840]: I0311 09:13:03.762199 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8f92\" (UniqueName: \"kubernetes.io/projected/6ebe7653-5dbc-4f86-af79-ce2d25726aa9-kube-api-access-w8f92\") pod \"certified-operators-nk8xz\" (UID: \"6ebe7653-5dbc-4f86-af79-ce2d25726aa9\") " pod="openshift-marketplace/certified-operators-nk8xz"
Mar 11 09:13:03 crc kubenswrapper[4840]: I0311 09:13:03.762903 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ebe7653-5dbc-4f86-af79-ce2d25726aa9-catalog-content\") pod \"certified-operators-nk8xz\" (UID: \"6ebe7653-5dbc-4f86-af79-ce2d25726aa9\") " pod="openshift-marketplace/certified-operators-nk8xz"
Mar 11 09:13:03 crc kubenswrapper[4840]: I0311 09:13:03.762959 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ebe7653-5dbc-4f86-af79-ce2d25726aa9-utilities\") pod \"certified-operators-nk8xz\" (UID: \"6ebe7653-5dbc-4f86-af79-ce2d25726aa9\") " pod="openshift-marketplace/certified-operators-nk8xz"
Mar 11 09:13:03 crc kubenswrapper[4840]: I0311 09:13:03.785167 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8f92\" (UniqueName: \"kubernetes.io/projected/6ebe7653-5dbc-4f86-af79-ce2d25726aa9-kube-api-access-w8f92\") pod \"certified-operators-nk8xz\" (UID: \"6ebe7653-5dbc-4f86-af79-ce2d25726aa9\") " pod="openshift-marketplace/certified-operators-nk8xz"
Mar 11 09:13:03 crc kubenswrapper[4840]: I0311 09:13:03.847145 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nk8xz"
Mar 11 09:13:03 crc kubenswrapper[4840]: I0311 09:13:03.907204 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-6gktm"]
Mar 11 09:13:04 crc kubenswrapper[4840]: I0311 09:13:04.348265 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nk8xz"]
Mar 11 09:13:04 crc kubenswrapper[4840]: W0311 09:13:04.355926 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6ebe7653_5dbc_4f86_af79_ce2d25726aa9.slice/crio-16fc2a043cb899b88168c2747b17a410f030a2f9faeecae1843ba3175a1c5df5 WatchSource:0}: Error finding container 16fc2a043cb899b88168c2747b17a410f030a2f9faeecae1843ba3175a1c5df5: Status 404 returned error can't find the container with id 16fc2a043cb899b88168c2747b17a410f030a2f9faeecae1843ba3175a1c5df5
Mar 11 09:13:04 crc kubenswrapper[4840]: I0311 09:13:04.359388 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-6gktm" event={"ID":"494dd1d0-27c8-4680-9012-2e598d171b99","Type":"ContainerStarted","Data":"458c5129b5d7399bc32e50d70cc5ad163346ccf0706c20f8bddcb49bb5abe634"}
Mar 11 09:13:05 crc kubenswrapper[4840]: I0311 09:13:05.370966 4840 generic.go:334] "Generic (PLEG): container finished" podID="6ebe7653-5dbc-4f86-af79-ce2d25726aa9" containerID="1aa05e8e6a5c676816a0b2abf66ca4a80c2029fcb3e195e1155383b5ba51ec1c" exitCode=0
Mar 11 09:13:05 crc kubenswrapper[4840]: I0311 09:13:05.371332 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nk8xz" event={"ID":"6ebe7653-5dbc-4f86-af79-ce2d25726aa9","Type":"ContainerDied","Data":"1aa05e8e6a5c676816a0b2abf66ca4a80c2029fcb3e195e1155383b5ba51ec1c"}
Mar 11 09:13:05 crc kubenswrapper[4840]: I0311 09:13:05.371380 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nk8xz" event={"ID":"6ebe7653-5dbc-4f86-af79-ce2d25726aa9","Type":"ContainerStarted","Data":"16fc2a043cb899b88168c2747b17a410f030a2f9faeecae1843ba3175a1c5df5"}
Mar 11 09:13:07 crc kubenswrapper[4840]: I0311 09:13:07.399611 4840 generic.go:334] "Generic (PLEG): container finished" podID="6ebe7653-5dbc-4f86-af79-ce2d25726aa9" containerID="6f4117326fda221fc0ee94a46eb25b0160e8c3a1c8e3f6da4bbf3d445ce76ced" exitCode=0
Mar 11 09:13:07 crc kubenswrapper[4840]: I0311 09:13:07.399726 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nk8xz" event={"ID":"6ebe7653-5dbc-4f86-af79-ce2d25726aa9","Type":"ContainerDied","Data":"6f4117326fda221fc0ee94a46eb25b0160e8c3a1c8e3f6da4bbf3d445ce76ced"}
Mar 11 09:13:09 crc kubenswrapper[4840]: I0311 09:13:09.416000 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-6gktm" event={"ID":"494dd1d0-27c8-4680-9012-2e598d171b99","Type":"ContainerStarted","Data":"5dbc75f55136bec617ebebbb1c9721d15b8dbb3e10b1dafb03907e6d0fac6f8e"}
Mar 11 09:13:09 crc kubenswrapper[4840]: I0311 09:13:09.420296 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nk8xz" event={"ID":"6ebe7653-5dbc-4f86-af79-ce2d25726aa9","Type":"ContainerStarted","Data":"8ac22889665d7c3bfcc2c2cdfa21e62d2b8699789cd58144dfa0c01184b386aa"}
Mar 11 09:13:09 crc kubenswrapper[4840]: I0311 09:13:09.423167 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-hmd2q" event={"ID":"7d51cb99-73ad-47da-88f0-906a4fb160c4","Type":"ContainerStarted","Data":"7f4474e52f625206ab31c5c4d36d54f06f6d1a951231aeb1457839b0fbbdcfad"}
Mar 11 09:13:09 crc kubenswrapper[4840]: I0311 09:13:09.424091 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-6888856db4-hmd2q"
Mar 11 09:13:09 crc kubenswrapper[4840]: I0311 09:13:09.432704 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-5545bd876-6gktm" podStartSLOduration=1.690932978 podStartE2EDuration="6.432680342s" podCreationTimestamp="2026-03-11 09:13:03 +0000 UTC" firstStartedPulling="2026-03-11 09:13:03.942648673 +0000 UTC m=+982.608318498" lastFinishedPulling="2026-03-11 09:13:08.684396047 +0000 UTC m=+987.350065862" observedRunningTime="2026-03-11 09:13:09.430364685 +0000 UTC m=+988.096034500" watchObservedRunningTime="2026-03-11 09:13:09.432680342 +0000 UTC m=+988.098350167"
Mar 11 09:13:09 crc kubenswrapper[4840]: I0311 09:13:09.456216 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-nk8xz" podStartSLOduration=2.7091487340000002 podStartE2EDuration="6.456190181s" podCreationTimestamp="2026-03-11 09:13:03 +0000 UTC" firstStartedPulling="2026-03-11 09:13:05.373769793 +0000 UTC m=+984.039439608" lastFinishedPulling="2026-03-11 09:13:09.12081124 +0000 UTC m=+987.786481055" observedRunningTime="2026-03-11 09:13:09.452943 +0000 UTC m=+988.118612815" watchObservedRunningTime="2026-03-11 09:13:09.456190181 +0000 UTC m=+988.121859996"
Mar 11 09:13:09 crc kubenswrapper[4840]: I0311 09:13:09.484864 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-6888856db4-hmd2q" podStartSLOduration=2.148028178 podStartE2EDuration="7.484829929s" podCreationTimestamp="2026-03-11 09:13:02 +0000 UTC" firstStartedPulling="2026-03-11 09:13:03.31779331 +0000 UTC m=+981.983463125" lastFinishedPulling="2026-03-11 09:13:08.654595071 +0000 UTC m=+987.320264876" observedRunningTime="2026-03-11 09:13:09.475740091 +0000 UTC m=+988.141409906" watchObservedRunningTime="2026-03-11 09:13:09.484829929 +0000 UTC m=+988.150499744"
Mar 11 09:13:13 crc kubenswrapper[4840]: I0311 09:13:13.852126 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-nk8xz"
Mar 11 09:13:13 crc kubenswrapper[4840]: I0311 09:13:13.852684 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-nk8xz"
Mar 11 09:13:13 crc kubenswrapper[4840]: I0311 09:13:13.904394 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-nk8xz"
Mar 11 09:13:14 crc kubenswrapper[4840]: I0311 09:13:14.506044 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-nk8xz"
Mar 11 09:13:14 crc kubenswrapper[4840]: I0311 09:13:14.558892 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nk8xz"]
Mar 11 09:13:16 crc kubenswrapper[4840]: I0311 09:13:16.471518 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-nk8xz" podUID="6ebe7653-5dbc-4f86-af79-ce2d25726aa9" containerName="registry-server" containerID="cri-o://8ac22889665d7c3bfcc2c2cdfa21e62d2b8699789cd58144dfa0c01184b386aa" gracePeriod=2
Mar 11 09:13:16 crc kubenswrapper[4840]: I0311 09:13:16.929531 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nk8xz"
Mar 11 09:13:17 crc kubenswrapper[4840]: I0311 09:13:17.075816 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ebe7653-5dbc-4f86-af79-ce2d25726aa9-catalog-content\") pod \"6ebe7653-5dbc-4f86-af79-ce2d25726aa9\" (UID: \"6ebe7653-5dbc-4f86-af79-ce2d25726aa9\") "
Mar 11 09:13:17 crc kubenswrapper[4840]: I0311 09:13:17.075869 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w8f92\" (UniqueName: \"kubernetes.io/projected/6ebe7653-5dbc-4f86-af79-ce2d25726aa9-kube-api-access-w8f92\") pod \"6ebe7653-5dbc-4f86-af79-ce2d25726aa9\" (UID: \"6ebe7653-5dbc-4f86-af79-ce2d25726aa9\") "
Mar 11 09:13:17 crc kubenswrapper[4840]: I0311 09:13:17.075927 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ebe7653-5dbc-4f86-af79-ce2d25726aa9-utilities\") pod \"6ebe7653-5dbc-4f86-af79-ce2d25726aa9\" (UID: \"6ebe7653-5dbc-4f86-af79-ce2d25726aa9\") "
Mar 11 09:13:17 crc kubenswrapper[4840]: I0311 09:13:17.076854 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ebe7653-5dbc-4f86-af79-ce2d25726aa9-utilities" (OuterVolumeSpecName: "utilities") pod "6ebe7653-5dbc-4f86-af79-ce2d25726aa9" (UID: "6ebe7653-5dbc-4f86-af79-ce2d25726aa9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 11 09:13:17 crc kubenswrapper[4840]: I0311 09:13:17.082450 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ebe7653-5dbc-4f86-af79-ce2d25726aa9-kube-api-access-w8f92" (OuterVolumeSpecName: "kube-api-access-w8f92") pod "6ebe7653-5dbc-4f86-af79-ce2d25726aa9" (UID: "6ebe7653-5dbc-4f86-af79-ce2d25726aa9"). InnerVolumeSpecName "kube-api-access-w8f92". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 11 09:13:17 crc kubenswrapper[4840]: I0311 09:13:17.138085 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ebe7653-5dbc-4f86-af79-ce2d25726aa9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6ebe7653-5dbc-4f86-af79-ce2d25726aa9" (UID: "6ebe7653-5dbc-4f86-af79-ce2d25726aa9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 11 09:13:17 crc kubenswrapper[4840]: I0311 09:13:17.177565 4840 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ebe7653-5dbc-4f86-af79-ce2d25726aa9-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 11 09:13:17 crc kubenswrapper[4840]: I0311 09:13:17.177612 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w8f92\" (UniqueName: \"kubernetes.io/projected/6ebe7653-5dbc-4f86-af79-ce2d25726aa9-kube-api-access-w8f92\") on node \"crc\" DevicePath \"\""
Mar 11 09:13:17 crc kubenswrapper[4840]: I0311 09:13:17.177631 4840 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ebe7653-5dbc-4f86-af79-ce2d25726aa9-utilities\") on node \"crc\" DevicePath \"\""
Mar 11 09:13:17 crc kubenswrapper[4840]: I0311 09:13:17.484938 4840 generic.go:334] "Generic (PLEG): container finished" podID="6ebe7653-5dbc-4f86-af79-ce2d25726aa9" containerID="8ac22889665d7c3bfcc2c2cdfa21e62d2b8699789cd58144dfa0c01184b386aa" exitCode=0
Mar 11 09:13:17 crc kubenswrapper[4840]: I0311 09:13:17.485014 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nk8xz" event={"ID":"6ebe7653-5dbc-4f86-af79-ce2d25726aa9","Type":"ContainerDied","Data":"8ac22889665d7c3bfcc2c2cdfa21e62d2b8699789cd58144dfa0c01184b386aa"}
Mar 11 09:13:17 crc kubenswrapper[4840]: I0311 09:13:17.485055 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nk8xz" event={"ID":"6ebe7653-5dbc-4f86-af79-ce2d25726aa9","Type":"ContainerDied","Data":"16fc2a043cb899b88168c2747b17a410f030a2f9faeecae1843ba3175a1c5df5"}
Mar 11 09:13:17 crc kubenswrapper[4840]: I0311 09:13:17.485082 4840 scope.go:117] "RemoveContainer" containerID="8ac22889665d7c3bfcc2c2cdfa21e62d2b8699789cd58144dfa0c01184b386aa"
Mar 11 09:13:17 crc kubenswrapper[4840]: I0311 09:13:17.485363 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nk8xz"
Mar 11 09:13:17 crc kubenswrapper[4840]: I0311 09:13:17.518773 4840 scope.go:117] "RemoveContainer" containerID="6f4117326fda221fc0ee94a46eb25b0160e8c3a1c8e3f6da4bbf3d445ce76ced"
Mar 11 09:13:17 crc kubenswrapper[4840]: I0311 09:13:17.540790 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nk8xz"]
Mar 11 09:13:17 crc kubenswrapper[4840]: I0311 09:13:17.550244 4840 scope.go:117] "RemoveContainer" containerID="1aa05e8e6a5c676816a0b2abf66ca4a80c2029fcb3e195e1155383b5ba51ec1c"
Mar 11 09:13:17 crc kubenswrapper[4840]: I0311 09:13:17.552407 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-nk8xz"]
Mar 11 09:13:17 crc kubenswrapper[4840]: I0311 09:13:17.586404 4840 scope.go:117] "RemoveContainer" containerID="8ac22889665d7c3bfcc2c2cdfa21e62d2b8699789cd58144dfa0c01184b386aa"
Mar 11 09:13:17 crc kubenswrapper[4840]: E0311 09:13:17.587114 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ac22889665d7c3bfcc2c2cdfa21e62d2b8699789cd58144dfa0c01184b386aa\": container with ID starting with 8ac22889665d7c3bfcc2c2cdfa21e62d2b8699789cd58144dfa0c01184b386aa not found: ID does not exist" containerID="8ac22889665d7c3bfcc2c2cdfa21e62d2b8699789cd58144dfa0c01184b386aa"
Mar 11 09:13:17 crc kubenswrapper[4840]: I0311
09:13:17.587176 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ac22889665d7c3bfcc2c2cdfa21e62d2b8699789cd58144dfa0c01184b386aa"} err="failed to get container status \"8ac22889665d7c3bfcc2c2cdfa21e62d2b8699789cd58144dfa0c01184b386aa\": rpc error: code = NotFound desc = could not find container \"8ac22889665d7c3bfcc2c2cdfa21e62d2b8699789cd58144dfa0c01184b386aa\": container with ID starting with 8ac22889665d7c3bfcc2c2cdfa21e62d2b8699789cd58144dfa0c01184b386aa not found: ID does not exist" Mar 11 09:13:17 crc kubenswrapper[4840]: I0311 09:13:17.587216 4840 scope.go:117] "RemoveContainer" containerID="6f4117326fda221fc0ee94a46eb25b0160e8c3a1c8e3f6da4bbf3d445ce76ced" Mar 11 09:13:17 crc kubenswrapper[4840]: E0311 09:13:17.588061 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f4117326fda221fc0ee94a46eb25b0160e8c3a1c8e3f6da4bbf3d445ce76ced\": container with ID starting with 6f4117326fda221fc0ee94a46eb25b0160e8c3a1c8e3f6da4bbf3d445ce76ced not found: ID does not exist" containerID="6f4117326fda221fc0ee94a46eb25b0160e8c3a1c8e3f6da4bbf3d445ce76ced" Mar 11 09:13:17 crc kubenswrapper[4840]: I0311 09:13:17.588135 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f4117326fda221fc0ee94a46eb25b0160e8c3a1c8e3f6da4bbf3d445ce76ced"} err="failed to get container status \"6f4117326fda221fc0ee94a46eb25b0160e8c3a1c8e3f6da4bbf3d445ce76ced\": rpc error: code = NotFound desc = could not find container \"6f4117326fda221fc0ee94a46eb25b0160e8c3a1c8e3f6da4bbf3d445ce76ced\": container with ID starting with 6f4117326fda221fc0ee94a46eb25b0160e8c3a1c8e3f6da4bbf3d445ce76ced not found: ID does not exist" Mar 11 09:13:17 crc kubenswrapper[4840]: I0311 09:13:17.588160 4840 scope.go:117] "RemoveContainer" containerID="1aa05e8e6a5c676816a0b2abf66ca4a80c2029fcb3e195e1155383b5ba51ec1c" Mar 11 09:13:17 crc 
kubenswrapper[4840]: E0311 09:13:17.588541 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1aa05e8e6a5c676816a0b2abf66ca4a80c2029fcb3e195e1155383b5ba51ec1c\": container with ID starting with 1aa05e8e6a5c676816a0b2abf66ca4a80c2029fcb3e195e1155383b5ba51ec1c not found: ID does not exist" containerID="1aa05e8e6a5c676816a0b2abf66ca4a80c2029fcb3e195e1155383b5ba51ec1c" Mar 11 09:13:17 crc kubenswrapper[4840]: I0311 09:13:17.588579 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1aa05e8e6a5c676816a0b2abf66ca4a80c2029fcb3e195e1155383b5ba51ec1c"} err="failed to get container status \"1aa05e8e6a5c676816a0b2abf66ca4a80c2029fcb3e195e1155383b5ba51ec1c\": rpc error: code = NotFound desc = could not find container \"1aa05e8e6a5c676816a0b2abf66ca4a80c2029fcb3e195e1155383b5ba51ec1c\": container with ID starting with 1aa05e8e6a5c676816a0b2abf66ca4a80c2029fcb3e195e1155383b5ba51ec1c not found: ID does not exist" Mar 11 09:13:17 crc kubenswrapper[4840]: I0311 09:13:17.957067 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-6888856db4-hmd2q" Mar 11 09:13:18 crc kubenswrapper[4840]: I0311 09:13:18.067067 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ebe7653-5dbc-4f86-af79-ce2d25726aa9" path="/var/lib/kubelet/pods/6ebe7653-5dbc-4f86-af79-ce2d25726aa9/volumes" Mar 11 09:13:20 crc kubenswrapper[4840]: I0311 09:13:20.987077 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-545d4d4674-mm7xm"] Mar 11 09:13:20 crc kubenswrapper[4840]: E0311 09:13:20.987718 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ebe7653-5dbc-4f86-af79-ce2d25726aa9" containerName="extract-utilities" Mar 11 09:13:20 crc kubenswrapper[4840]: I0311 09:13:20.987735 4840 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="6ebe7653-5dbc-4f86-af79-ce2d25726aa9" containerName="extract-utilities" Mar 11 09:13:20 crc kubenswrapper[4840]: E0311 09:13:20.987752 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ebe7653-5dbc-4f86-af79-ce2d25726aa9" containerName="registry-server" Mar 11 09:13:20 crc kubenswrapper[4840]: I0311 09:13:20.987761 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ebe7653-5dbc-4f86-af79-ce2d25726aa9" containerName="registry-server" Mar 11 09:13:20 crc kubenswrapper[4840]: E0311 09:13:20.987795 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ebe7653-5dbc-4f86-af79-ce2d25726aa9" containerName="extract-content" Mar 11 09:13:20 crc kubenswrapper[4840]: I0311 09:13:20.987805 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ebe7653-5dbc-4f86-af79-ce2d25726aa9" containerName="extract-content" Mar 11 09:13:20 crc kubenswrapper[4840]: I0311 09:13:20.987986 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ebe7653-5dbc-4f86-af79-ce2d25726aa9" containerName="registry-server" Mar 11 09:13:20 crc kubenswrapper[4840]: I0311 09:13:20.988532 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-mm7xm" Mar 11 09:13:20 crc kubenswrapper[4840]: I0311 09:13:20.995778 4840 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-pbgwg" Mar 11 09:13:21 crc kubenswrapper[4840]: I0311 09:13:21.004867 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-mm7xm"] Mar 11 09:13:21 crc kubenswrapper[4840]: I0311 09:13:21.140252 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qttjq\" (UniqueName: \"kubernetes.io/projected/e30eac2b-bf79-4cae-8551-d114587a58bc-kube-api-access-qttjq\") pod \"cert-manager-545d4d4674-mm7xm\" (UID: \"e30eac2b-bf79-4cae-8551-d114587a58bc\") " pod="cert-manager/cert-manager-545d4d4674-mm7xm" Mar 11 09:13:21 crc kubenswrapper[4840]: I0311 09:13:21.140692 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e30eac2b-bf79-4cae-8551-d114587a58bc-bound-sa-token\") pod \"cert-manager-545d4d4674-mm7xm\" (UID: \"e30eac2b-bf79-4cae-8551-d114587a58bc\") " pod="cert-manager/cert-manager-545d4d4674-mm7xm" Mar 11 09:13:21 crc kubenswrapper[4840]: I0311 09:13:21.242593 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qttjq\" (UniqueName: \"kubernetes.io/projected/e30eac2b-bf79-4cae-8551-d114587a58bc-kube-api-access-qttjq\") pod \"cert-manager-545d4d4674-mm7xm\" (UID: \"e30eac2b-bf79-4cae-8551-d114587a58bc\") " pod="cert-manager/cert-manager-545d4d4674-mm7xm" Mar 11 09:13:21 crc kubenswrapper[4840]: I0311 09:13:21.243094 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e30eac2b-bf79-4cae-8551-d114587a58bc-bound-sa-token\") pod \"cert-manager-545d4d4674-mm7xm\" (UID: 
\"e30eac2b-bf79-4cae-8551-d114587a58bc\") " pod="cert-manager/cert-manager-545d4d4674-mm7xm" Mar 11 09:13:21 crc kubenswrapper[4840]: I0311 09:13:21.272334 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qttjq\" (UniqueName: \"kubernetes.io/projected/e30eac2b-bf79-4cae-8551-d114587a58bc-kube-api-access-qttjq\") pod \"cert-manager-545d4d4674-mm7xm\" (UID: \"e30eac2b-bf79-4cae-8551-d114587a58bc\") " pod="cert-manager/cert-manager-545d4d4674-mm7xm" Mar 11 09:13:21 crc kubenswrapper[4840]: I0311 09:13:21.273610 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e30eac2b-bf79-4cae-8551-d114587a58bc-bound-sa-token\") pod \"cert-manager-545d4d4674-mm7xm\" (UID: \"e30eac2b-bf79-4cae-8551-d114587a58bc\") " pod="cert-manager/cert-manager-545d4d4674-mm7xm" Mar 11 09:13:21 crc kubenswrapper[4840]: I0311 09:13:21.318285 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-mm7xm" Mar 11 09:13:21 crc kubenswrapper[4840]: I0311 09:13:21.766057 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-mm7xm"] Mar 11 09:13:22 crc kubenswrapper[4840]: I0311 09:13:22.524583 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-mm7xm" event={"ID":"e30eac2b-bf79-4cae-8551-d114587a58bc","Type":"ContainerStarted","Data":"2806826cd95d62d17536f6dd9d6e63ed4d31cf8941fbbe0b25b92c53f6db1060"} Mar 11 09:13:22 crc kubenswrapper[4840]: I0311 09:13:22.524649 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-mm7xm" event={"ID":"e30eac2b-bf79-4cae-8551-d114587a58bc","Type":"ContainerStarted","Data":"fdd6450fd4c48f8eb4c89c56ce5547495877512215b332399ff0e0cce986d2fd"} Mar 11 09:13:22 crc kubenswrapper[4840]: I0311 09:13:22.554898 4840 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="cert-manager/cert-manager-545d4d4674-mm7xm" podStartSLOduration=2.554866476 podStartE2EDuration="2.554866476s" podCreationTimestamp="2026-03-11 09:13:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:13:22.543171303 +0000 UTC m=+1001.208841128" watchObservedRunningTime="2026-03-11 09:13:22.554866476 +0000 UTC m=+1001.220536311" Mar 11 09:13:27 crc kubenswrapper[4840]: I0311 09:13:27.445792 4840 patch_prober.go:28] interesting pod/machine-config-daemon-brtht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 11 09:13:27 crc kubenswrapper[4840]: I0311 09:13:27.446238 4840 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 11 09:13:31 crc kubenswrapper[4840]: I0311 09:13:31.306229 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-84xtq"] Mar 11 09:13:31 crc kubenswrapper[4840]: I0311 09:13:31.307865 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-84xtq" Mar 11 09:13:31 crc kubenswrapper[4840]: I0311 09:13:31.309839 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Mar 11 09:13:31 crc kubenswrapper[4840]: I0311 09:13:31.309909 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-wrvpk" Mar 11 09:13:31 crc kubenswrapper[4840]: I0311 09:13:31.309943 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Mar 11 09:13:31 crc kubenswrapper[4840]: I0311 09:13:31.332644 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-84xtq"] Mar 11 09:13:31 crc kubenswrapper[4840]: I0311 09:13:31.395059 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbgp7\" (UniqueName: \"kubernetes.io/projected/90a98a79-3efc-40c9-8d79-b67a7688d7c4-kube-api-access-hbgp7\") pod \"openstack-operator-index-84xtq\" (UID: \"90a98a79-3efc-40c9-8d79-b67a7688d7c4\") " pod="openstack-operators/openstack-operator-index-84xtq" Mar 11 09:13:31 crc kubenswrapper[4840]: I0311 09:13:31.496510 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbgp7\" (UniqueName: \"kubernetes.io/projected/90a98a79-3efc-40c9-8d79-b67a7688d7c4-kube-api-access-hbgp7\") pod \"openstack-operator-index-84xtq\" (UID: \"90a98a79-3efc-40c9-8d79-b67a7688d7c4\") " pod="openstack-operators/openstack-operator-index-84xtq" Mar 11 09:13:31 crc kubenswrapper[4840]: I0311 09:13:31.534767 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbgp7\" (UniqueName: \"kubernetes.io/projected/90a98a79-3efc-40c9-8d79-b67a7688d7c4-kube-api-access-hbgp7\") pod \"openstack-operator-index-84xtq\" (UID: 
\"90a98a79-3efc-40c9-8d79-b67a7688d7c4\") " pod="openstack-operators/openstack-operator-index-84xtq" Mar 11 09:13:31 crc kubenswrapper[4840]: I0311 09:13:31.628075 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-84xtq" Mar 11 09:13:31 crc kubenswrapper[4840]: I0311 09:13:31.842900 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-84xtq"] Mar 11 09:13:32 crc kubenswrapper[4840]: I0311 09:13:32.595339 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-84xtq" event={"ID":"90a98a79-3efc-40c9-8d79-b67a7688d7c4","Type":"ContainerStarted","Data":"b168d8685151e3c1ce1a520aeeb96ea53ce0e807b8ef1620b8c470e9f33588dd"} Mar 11 09:13:33 crc kubenswrapper[4840]: I0311 09:13:33.474165 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-84xtq"] Mar 11 09:13:33 crc kubenswrapper[4840]: I0311 09:13:33.889761 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-f5l8j"] Mar 11 09:13:33 crc kubenswrapper[4840]: I0311 09:13:33.891187 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-f5l8j" Mar 11 09:13:33 crc kubenswrapper[4840]: I0311 09:13:33.896636 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-f5l8j"] Mar 11 09:13:34 crc kubenswrapper[4840]: I0311 09:13:34.032984 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mg4s\" (UniqueName: \"kubernetes.io/projected/35846f03-87ef-4a54-9386-2080ff604a86-kube-api-access-2mg4s\") pod \"openstack-operator-index-f5l8j\" (UID: \"35846f03-87ef-4a54-9386-2080ff604a86\") " pod="openstack-operators/openstack-operator-index-f5l8j" Mar 11 09:13:34 crc kubenswrapper[4840]: I0311 09:13:34.134647 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mg4s\" (UniqueName: \"kubernetes.io/projected/35846f03-87ef-4a54-9386-2080ff604a86-kube-api-access-2mg4s\") pod \"openstack-operator-index-f5l8j\" (UID: \"35846f03-87ef-4a54-9386-2080ff604a86\") " pod="openstack-operators/openstack-operator-index-f5l8j" Mar 11 09:13:34 crc kubenswrapper[4840]: I0311 09:13:34.155147 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mg4s\" (UniqueName: \"kubernetes.io/projected/35846f03-87ef-4a54-9386-2080ff604a86-kube-api-access-2mg4s\") pod \"openstack-operator-index-f5l8j\" (UID: \"35846f03-87ef-4a54-9386-2080ff604a86\") " pod="openstack-operators/openstack-operator-index-f5l8j" Mar 11 09:13:34 crc kubenswrapper[4840]: I0311 09:13:34.260951 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-f5l8j" Mar 11 09:13:34 crc kubenswrapper[4840]: I0311 09:13:34.471964 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-f5l8j"] Mar 11 09:13:34 crc kubenswrapper[4840]: I0311 09:13:34.611812 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-f5l8j" event={"ID":"35846f03-87ef-4a54-9386-2080ff604a86","Type":"ContainerStarted","Data":"239a6d12455fec023bcdaa78bd921bb2f24bf1c6204b0ced49ee27223c350e1f"} Mar 11 09:13:34 crc kubenswrapper[4840]: I0311 09:13:34.612952 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-84xtq" event={"ID":"90a98a79-3efc-40c9-8d79-b67a7688d7c4","Type":"ContainerStarted","Data":"5f1d8d029c479ba241c542b0cea60e44563bda38d8b51fa4f15b9f9dd9a94c6f"} Mar 11 09:13:34 crc kubenswrapper[4840]: I0311 09:13:34.613162 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-84xtq" podUID="90a98a79-3efc-40c9-8d79-b67a7688d7c4" containerName="registry-server" containerID="cri-o://5f1d8d029c479ba241c542b0cea60e44563bda38d8b51fa4f15b9f9dd9a94c6f" gracePeriod=2 Mar 11 09:13:34 crc kubenswrapper[4840]: I0311 09:13:34.634383 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-84xtq" podStartSLOduration=1.8940385480000002 podStartE2EDuration="3.634363146s" podCreationTimestamp="2026-03-11 09:13:31 +0000 UTC" firstStartedPulling="2026-03-11 09:13:31.853818211 +0000 UTC m=+1010.519488026" lastFinishedPulling="2026-03-11 09:13:33.594142809 +0000 UTC m=+1012.259812624" observedRunningTime="2026-03-11 09:13:34.63330765 +0000 UTC m=+1013.298977515" watchObservedRunningTime="2026-03-11 09:13:34.634363146 +0000 UTC m=+1013.300032981" Mar 11 09:13:35 crc kubenswrapper[4840]: I0311 09:13:35.092102 4840 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-84xtq" Mar 11 09:13:35 crc kubenswrapper[4840]: I0311 09:13:35.267295 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hbgp7\" (UniqueName: \"kubernetes.io/projected/90a98a79-3efc-40c9-8d79-b67a7688d7c4-kube-api-access-hbgp7\") pod \"90a98a79-3efc-40c9-8d79-b67a7688d7c4\" (UID: \"90a98a79-3efc-40c9-8d79-b67a7688d7c4\") " Mar 11 09:13:35 crc kubenswrapper[4840]: I0311 09:13:35.275829 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90a98a79-3efc-40c9-8d79-b67a7688d7c4-kube-api-access-hbgp7" (OuterVolumeSpecName: "kube-api-access-hbgp7") pod "90a98a79-3efc-40c9-8d79-b67a7688d7c4" (UID: "90a98a79-3efc-40c9-8d79-b67a7688d7c4"). InnerVolumeSpecName "kube-api-access-hbgp7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:13:35 crc kubenswrapper[4840]: I0311 09:13:35.369179 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hbgp7\" (UniqueName: \"kubernetes.io/projected/90a98a79-3efc-40c9-8d79-b67a7688d7c4-kube-api-access-hbgp7\") on node \"crc\" DevicePath \"\"" Mar 11 09:13:35 crc kubenswrapper[4840]: I0311 09:13:35.621935 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-f5l8j" event={"ID":"35846f03-87ef-4a54-9386-2080ff604a86","Type":"ContainerStarted","Data":"f5bae8a4e993bd2454153767b66100d2c355ec341fbd417ace7fa5695e58cca9"} Mar 11 09:13:35 crc kubenswrapper[4840]: I0311 09:13:35.624342 4840 generic.go:334] "Generic (PLEG): container finished" podID="90a98a79-3efc-40c9-8d79-b67a7688d7c4" containerID="5f1d8d029c479ba241c542b0cea60e44563bda38d8b51fa4f15b9f9dd9a94c6f" exitCode=0 Mar 11 09:13:35 crc kubenswrapper[4840]: I0311 09:13:35.624410 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/openstack-operator-index-84xtq" event={"ID":"90a98a79-3efc-40c9-8d79-b67a7688d7c4","Type":"ContainerDied","Data":"5f1d8d029c479ba241c542b0cea60e44563bda38d8b51fa4f15b9f9dd9a94c6f"} Mar 11 09:13:35 crc kubenswrapper[4840]: I0311 09:13:35.624585 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-84xtq" event={"ID":"90a98a79-3efc-40c9-8d79-b67a7688d7c4","Type":"ContainerDied","Data":"b168d8685151e3c1ce1a520aeeb96ea53ce0e807b8ef1620b8c470e9f33588dd"} Mar 11 09:13:35 crc kubenswrapper[4840]: I0311 09:13:35.624641 4840 scope.go:117] "RemoveContainer" containerID="5f1d8d029c479ba241c542b0cea60e44563bda38d8b51fa4f15b9f9dd9a94c6f" Mar 11 09:13:35 crc kubenswrapper[4840]: I0311 09:13:35.624431 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-84xtq" Mar 11 09:13:35 crc kubenswrapper[4840]: I0311 09:13:35.648458 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-f5l8j" podStartSLOduration=2.190643022 podStartE2EDuration="2.64843882s" podCreationTimestamp="2026-03-11 09:13:33 +0000 UTC" firstStartedPulling="2026-03-11 09:13:34.48241172 +0000 UTC m=+1013.148081535" lastFinishedPulling="2026-03-11 09:13:34.940207518 +0000 UTC m=+1013.605877333" observedRunningTime="2026-03-11 09:13:35.642156873 +0000 UTC m=+1014.307826688" watchObservedRunningTime="2026-03-11 09:13:35.64843882 +0000 UTC m=+1014.314108635" Mar 11 09:13:35 crc kubenswrapper[4840]: I0311 09:13:35.658086 4840 scope.go:117] "RemoveContainer" containerID="5f1d8d029c479ba241c542b0cea60e44563bda38d8b51fa4f15b9f9dd9a94c6f" Mar 11 09:13:35 crc kubenswrapper[4840]: E0311 09:13:35.658635 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f1d8d029c479ba241c542b0cea60e44563bda38d8b51fa4f15b9f9dd9a94c6f\": container with 
ID starting with 5f1d8d029c479ba241c542b0cea60e44563bda38d8b51fa4f15b9f9dd9a94c6f not found: ID does not exist" containerID="5f1d8d029c479ba241c542b0cea60e44563bda38d8b51fa4f15b9f9dd9a94c6f" Mar 11 09:13:35 crc kubenswrapper[4840]: I0311 09:13:35.658686 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f1d8d029c479ba241c542b0cea60e44563bda38d8b51fa4f15b9f9dd9a94c6f"} err="failed to get container status \"5f1d8d029c479ba241c542b0cea60e44563bda38d8b51fa4f15b9f9dd9a94c6f\": rpc error: code = NotFound desc = could not find container \"5f1d8d029c479ba241c542b0cea60e44563bda38d8b51fa4f15b9f9dd9a94c6f\": container with ID starting with 5f1d8d029c479ba241c542b0cea60e44563bda38d8b51fa4f15b9f9dd9a94c6f not found: ID does not exist" Mar 11 09:13:35 crc kubenswrapper[4840]: I0311 09:13:35.659370 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-84xtq"] Mar 11 09:13:35 crc kubenswrapper[4840]: I0311 09:13:35.663887 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-84xtq"] Mar 11 09:13:36 crc kubenswrapper[4840]: I0311 09:13:36.074796 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90a98a79-3efc-40c9-8d79-b67a7688d7c4" path="/var/lib/kubelet/pods/90a98a79-3efc-40c9-8d79-b67a7688d7c4/volumes" Mar 11 09:13:44 crc kubenswrapper[4840]: I0311 09:13:44.261696 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-f5l8j" Mar 11 09:13:44 crc kubenswrapper[4840]: I0311 09:13:44.262705 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-f5l8j" Mar 11 09:13:44 crc kubenswrapper[4840]: I0311 09:13:44.300733 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-f5l8j" Mar 11 09:13:44 crc 
kubenswrapper[4840]: I0311 09:13:44.721897 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-f5l8j" Mar 11 09:13:50 crc kubenswrapper[4840]: I0311 09:13:50.530250 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/951b5e6f67e727348308cf03f2b463d3e2bd27386b453f6699e03a49ba5nhq6"] Mar 11 09:13:50 crc kubenswrapper[4840]: E0311 09:13:50.531236 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90a98a79-3efc-40c9-8d79-b67a7688d7c4" containerName="registry-server" Mar 11 09:13:50 crc kubenswrapper[4840]: I0311 09:13:50.531256 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="90a98a79-3efc-40c9-8d79-b67a7688d7c4" containerName="registry-server" Mar 11 09:13:50 crc kubenswrapper[4840]: I0311 09:13:50.531425 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="90a98a79-3efc-40c9-8d79-b67a7688d7c4" containerName="registry-server" Mar 11 09:13:50 crc kubenswrapper[4840]: I0311 09:13:50.532801 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/951b5e6f67e727348308cf03f2b463d3e2bd27386b453f6699e03a49ba5nhq6" Mar 11 09:13:50 crc kubenswrapper[4840]: I0311 09:13:50.536176 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-x2tnv" Mar 11 09:13:50 crc kubenswrapper[4840]: I0311 09:13:50.542812 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/951b5e6f67e727348308cf03f2b463d3e2bd27386b453f6699e03a49ba5nhq6"] Mar 11 09:13:50 crc kubenswrapper[4840]: I0311 09:13:50.601642 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkd9k\" (UniqueName: \"kubernetes.io/projected/76328eae-3dfe-4246-8633-2b53684e8312-kube-api-access-bkd9k\") pod \"951b5e6f67e727348308cf03f2b463d3e2bd27386b453f6699e03a49ba5nhq6\" (UID: \"76328eae-3dfe-4246-8633-2b53684e8312\") " pod="openstack-operators/951b5e6f67e727348308cf03f2b463d3e2bd27386b453f6699e03a49ba5nhq6" Mar 11 09:13:50 crc kubenswrapper[4840]: I0311 09:13:50.601720 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/76328eae-3dfe-4246-8633-2b53684e8312-util\") pod \"951b5e6f67e727348308cf03f2b463d3e2bd27386b453f6699e03a49ba5nhq6\" (UID: \"76328eae-3dfe-4246-8633-2b53684e8312\") " pod="openstack-operators/951b5e6f67e727348308cf03f2b463d3e2bd27386b453f6699e03a49ba5nhq6" Mar 11 09:13:50 crc kubenswrapper[4840]: I0311 09:13:50.601745 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/76328eae-3dfe-4246-8633-2b53684e8312-bundle\") pod \"951b5e6f67e727348308cf03f2b463d3e2bd27386b453f6699e03a49ba5nhq6\" (UID: \"76328eae-3dfe-4246-8633-2b53684e8312\") " pod="openstack-operators/951b5e6f67e727348308cf03f2b463d3e2bd27386b453f6699e03a49ba5nhq6" Mar 11 09:13:50 crc kubenswrapper[4840]: I0311 
09:13:50.703118 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/76328eae-3dfe-4246-8633-2b53684e8312-bundle\") pod \"951b5e6f67e727348308cf03f2b463d3e2bd27386b453f6699e03a49ba5nhq6\" (UID: \"76328eae-3dfe-4246-8633-2b53684e8312\") " pod="openstack-operators/951b5e6f67e727348308cf03f2b463d3e2bd27386b453f6699e03a49ba5nhq6" Mar 11 09:13:50 crc kubenswrapper[4840]: I0311 09:13:50.703268 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkd9k\" (UniqueName: \"kubernetes.io/projected/76328eae-3dfe-4246-8633-2b53684e8312-kube-api-access-bkd9k\") pod \"951b5e6f67e727348308cf03f2b463d3e2bd27386b453f6699e03a49ba5nhq6\" (UID: \"76328eae-3dfe-4246-8633-2b53684e8312\") " pod="openstack-operators/951b5e6f67e727348308cf03f2b463d3e2bd27386b453f6699e03a49ba5nhq6" Mar 11 09:13:50 crc kubenswrapper[4840]: I0311 09:13:50.703328 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/76328eae-3dfe-4246-8633-2b53684e8312-util\") pod \"951b5e6f67e727348308cf03f2b463d3e2bd27386b453f6699e03a49ba5nhq6\" (UID: \"76328eae-3dfe-4246-8633-2b53684e8312\") " pod="openstack-operators/951b5e6f67e727348308cf03f2b463d3e2bd27386b453f6699e03a49ba5nhq6" Mar 11 09:13:50 crc kubenswrapper[4840]: I0311 09:13:50.703749 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/76328eae-3dfe-4246-8633-2b53684e8312-bundle\") pod \"951b5e6f67e727348308cf03f2b463d3e2bd27386b453f6699e03a49ba5nhq6\" (UID: \"76328eae-3dfe-4246-8633-2b53684e8312\") " pod="openstack-operators/951b5e6f67e727348308cf03f2b463d3e2bd27386b453f6699e03a49ba5nhq6" Mar 11 09:13:50 crc kubenswrapper[4840]: I0311 09:13:50.703923 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/76328eae-3dfe-4246-8633-2b53684e8312-util\") pod \"951b5e6f67e727348308cf03f2b463d3e2bd27386b453f6699e03a49ba5nhq6\" (UID: \"76328eae-3dfe-4246-8633-2b53684e8312\") " pod="openstack-operators/951b5e6f67e727348308cf03f2b463d3e2bd27386b453f6699e03a49ba5nhq6" Mar 11 09:13:50 crc kubenswrapper[4840]: I0311 09:13:50.739760 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkd9k\" (UniqueName: \"kubernetes.io/projected/76328eae-3dfe-4246-8633-2b53684e8312-kube-api-access-bkd9k\") pod \"951b5e6f67e727348308cf03f2b463d3e2bd27386b453f6699e03a49ba5nhq6\" (UID: \"76328eae-3dfe-4246-8633-2b53684e8312\") " pod="openstack-operators/951b5e6f67e727348308cf03f2b463d3e2bd27386b453f6699e03a49ba5nhq6" Mar 11 09:13:50 crc kubenswrapper[4840]: I0311 09:13:50.854534 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/951b5e6f67e727348308cf03f2b463d3e2bd27386b453f6699e03a49ba5nhq6" Mar 11 09:13:51 crc kubenswrapper[4840]: I0311 09:13:51.318762 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/951b5e6f67e727348308cf03f2b463d3e2bd27386b453f6699e03a49ba5nhq6"] Mar 11 09:13:51 crc kubenswrapper[4840]: W0311 09:13:51.323306 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod76328eae_3dfe_4246_8633_2b53684e8312.slice/crio-b9b84abb971ca587a5b02bde47a3f5ceec8e4393f35a179dfb0caf35cfb68525 WatchSource:0}: Error finding container b9b84abb971ca587a5b02bde47a3f5ceec8e4393f35a179dfb0caf35cfb68525: Status 404 returned error can't find the container with id b9b84abb971ca587a5b02bde47a3f5ceec8e4393f35a179dfb0caf35cfb68525 Mar 11 09:13:51 crc kubenswrapper[4840]: I0311 09:13:51.743503 4840 generic.go:334] "Generic (PLEG): container finished" podID="76328eae-3dfe-4246-8633-2b53684e8312" containerID="fbeb6d8ffda7529fd829000d1ad5499b4d36b53ca40434830670c1a11309c727" exitCode=0 Mar 11 
09:13:51 crc kubenswrapper[4840]: I0311 09:13:51.743580 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/951b5e6f67e727348308cf03f2b463d3e2bd27386b453f6699e03a49ba5nhq6" event={"ID":"76328eae-3dfe-4246-8633-2b53684e8312","Type":"ContainerDied","Data":"fbeb6d8ffda7529fd829000d1ad5499b4d36b53ca40434830670c1a11309c727"} Mar 11 09:13:51 crc kubenswrapper[4840]: I0311 09:13:51.743611 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/951b5e6f67e727348308cf03f2b463d3e2bd27386b453f6699e03a49ba5nhq6" event={"ID":"76328eae-3dfe-4246-8633-2b53684e8312","Type":"ContainerStarted","Data":"b9b84abb971ca587a5b02bde47a3f5ceec8e4393f35a179dfb0caf35cfb68525"} Mar 11 09:13:53 crc kubenswrapper[4840]: I0311 09:13:53.759759 4840 generic.go:334] "Generic (PLEG): container finished" podID="76328eae-3dfe-4246-8633-2b53684e8312" containerID="f22c62bb6f6669d0f3c555b23bfb2e97911d780f30efb53cff42527617334811" exitCode=0 Mar 11 09:13:53 crc kubenswrapper[4840]: I0311 09:13:53.759833 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/951b5e6f67e727348308cf03f2b463d3e2bd27386b453f6699e03a49ba5nhq6" event={"ID":"76328eae-3dfe-4246-8633-2b53684e8312","Type":"ContainerDied","Data":"f22c62bb6f6669d0f3c555b23bfb2e97911d780f30efb53cff42527617334811"} Mar 11 09:13:54 crc kubenswrapper[4840]: I0311 09:13:54.771535 4840 generic.go:334] "Generic (PLEG): container finished" podID="76328eae-3dfe-4246-8633-2b53684e8312" containerID="d9063268a4e9138a8fb0cb6c2b89b502fa87dafb03678fc8b1e7742f58110068" exitCode=0 Mar 11 09:13:54 crc kubenswrapper[4840]: I0311 09:13:54.771683 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/951b5e6f67e727348308cf03f2b463d3e2bd27386b453f6699e03a49ba5nhq6" event={"ID":"76328eae-3dfe-4246-8633-2b53684e8312","Type":"ContainerDied","Data":"d9063268a4e9138a8fb0cb6c2b89b502fa87dafb03678fc8b1e7742f58110068"} Mar 11 09:13:56 crc kubenswrapper[4840]: I0311 09:13:56.067709 
4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/951b5e6f67e727348308cf03f2b463d3e2bd27386b453f6699e03a49ba5nhq6" Mar 11 09:13:56 crc kubenswrapper[4840]: I0311 09:13:56.188954 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/76328eae-3dfe-4246-8633-2b53684e8312-bundle\") pod \"76328eae-3dfe-4246-8633-2b53684e8312\" (UID: \"76328eae-3dfe-4246-8633-2b53684e8312\") " Mar 11 09:13:56 crc kubenswrapper[4840]: I0311 09:13:56.189385 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bkd9k\" (UniqueName: \"kubernetes.io/projected/76328eae-3dfe-4246-8633-2b53684e8312-kube-api-access-bkd9k\") pod \"76328eae-3dfe-4246-8633-2b53684e8312\" (UID: \"76328eae-3dfe-4246-8633-2b53684e8312\") " Mar 11 09:13:56 crc kubenswrapper[4840]: I0311 09:13:56.189411 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/76328eae-3dfe-4246-8633-2b53684e8312-util\") pod \"76328eae-3dfe-4246-8633-2b53684e8312\" (UID: \"76328eae-3dfe-4246-8633-2b53684e8312\") " Mar 11 09:13:56 crc kubenswrapper[4840]: I0311 09:13:56.190059 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76328eae-3dfe-4246-8633-2b53684e8312-bundle" (OuterVolumeSpecName: "bundle") pod "76328eae-3dfe-4246-8633-2b53684e8312" (UID: "76328eae-3dfe-4246-8633-2b53684e8312"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:13:56 crc kubenswrapper[4840]: I0311 09:13:56.196294 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76328eae-3dfe-4246-8633-2b53684e8312-kube-api-access-bkd9k" (OuterVolumeSpecName: "kube-api-access-bkd9k") pod "76328eae-3dfe-4246-8633-2b53684e8312" (UID: "76328eae-3dfe-4246-8633-2b53684e8312"). 
InnerVolumeSpecName "kube-api-access-bkd9k". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:13:56 crc kubenswrapper[4840]: I0311 09:13:56.204607 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76328eae-3dfe-4246-8633-2b53684e8312-util" (OuterVolumeSpecName: "util") pod "76328eae-3dfe-4246-8633-2b53684e8312" (UID: "76328eae-3dfe-4246-8633-2b53684e8312"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:13:56 crc kubenswrapper[4840]: I0311 09:13:56.291215 4840 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/76328eae-3dfe-4246-8633-2b53684e8312-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 09:13:56 crc kubenswrapper[4840]: I0311 09:13:56.291261 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bkd9k\" (UniqueName: \"kubernetes.io/projected/76328eae-3dfe-4246-8633-2b53684e8312-kube-api-access-bkd9k\") on node \"crc\" DevicePath \"\"" Mar 11 09:13:56 crc kubenswrapper[4840]: I0311 09:13:56.291278 4840 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/76328eae-3dfe-4246-8633-2b53684e8312-util\") on node \"crc\" DevicePath \"\"" Mar 11 09:13:56 crc kubenswrapper[4840]: I0311 09:13:56.789758 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/951b5e6f67e727348308cf03f2b463d3e2bd27386b453f6699e03a49ba5nhq6" event={"ID":"76328eae-3dfe-4246-8633-2b53684e8312","Type":"ContainerDied","Data":"b9b84abb971ca587a5b02bde47a3f5ceec8e4393f35a179dfb0caf35cfb68525"} Mar 11 09:13:56 crc kubenswrapper[4840]: I0311 09:13:56.789820 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9b84abb971ca587a5b02bde47a3f5ceec8e4393f35a179dfb0caf35cfb68525" Mar 11 09:13:56 crc kubenswrapper[4840]: I0311 09:13:56.789851 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/951b5e6f67e727348308cf03f2b463d3e2bd27386b453f6699e03a49ba5nhq6" Mar 11 09:13:57 crc kubenswrapper[4840]: I0311 09:13:57.445749 4840 patch_prober.go:28] interesting pod/machine-config-daemon-brtht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 11 09:13:57 crc kubenswrapper[4840]: I0311 09:13:57.445824 4840 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 11 09:13:57 crc kubenswrapper[4840]: I0311 09:13:57.445872 4840 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-brtht" Mar 11 09:13:57 crc kubenswrapper[4840]: I0311 09:13:57.446632 4840 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f08eeaa4fe8ff05d2389b41d94a12307326129a99c2a57e3c9c13f2ab4a219eb"} pod="openshift-machine-config-operator/machine-config-daemon-brtht" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 11 09:13:57 crc kubenswrapper[4840]: I0311 09:13:57.446692 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" containerName="machine-config-daemon" containerID="cri-o://f08eeaa4fe8ff05d2389b41d94a12307326129a99c2a57e3c9c13f2ab4a219eb" gracePeriod=600 Mar 11 09:13:57 crc kubenswrapper[4840]: I0311 09:13:57.798265 4840 generic.go:334] "Generic (PLEG): 
container finished" podID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" containerID="f08eeaa4fe8ff05d2389b41d94a12307326129a99c2a57e3c9c13f2ab4a219eb" exitCode=0 Mar 11 09:13:57 crc kubenswrapper[4840]: I0311 09:13:57.798440 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-brtht" event={"ID":"8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d","Type":"ContainerDied","Data":"f08eeaa4fe8ff05d2389b41d94a12307326129a99c2a57e3c9c13f2ab4a219eb"} Mar 11 09:13:57 crc kubenswrapper[4840]: I0311 09:13:57.798825 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-brtht" event={"ID":"8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d","Type":"ContainerStarted","Data":"c4693d2b28f962f129bc06a9e78798a61e61b356cfdcc19696be10d164614b1d"} Mar 11 09:13:57 crc kubenswrapper[4840]: I0311 09:13:57.798845 4840 scope.go:117] "RemoveContainer" containerID="5d930828edaf48a40ab8d839e51f7f6d23db61f827df30c4134bd6083d7cbb22" Mar 11 09:14:00 crc kubenswrapper[4840]: I0311 09:14:00.140178 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29553674-khft7"] Mar 11 09:14:00 crc kubenswrapper[4840]: E0311 09:14:00.140873 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76328eae-3dfe-4246-8633-2b53684e8312" containerName="extract" Mar 11 09:14:00 crc kubenswrapper[4840]: I0311 09:14:00.140886 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="76328eae-3dfe-4246-8633-2b53684e8312" containerName="extract" Mar 11 09:14:00 crc kubenswrapper[4840]: E0311 09:14:00.140903 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76328eae-3dfe-4246-8633-2b53684e8312" containerName="util" Mar 11 09:14:00 crc kubenswrapper[4840]: I0311 09:14:00.140911 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="76328eae-3dfe-4246-8633-2b53684e8312" containerName="util" Mar 11 09:14:00 crc kubenswrapper[4840]: E0311 09:14:00.141011 
4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76328eae-3dfe-4246-8633-2b53684e8312" containerName="pull" Mar 11 09:14:00 crc kubenswrapper[4840]: I0311 09:14:00.141019 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="76328eae-3dfe-4246-8633-2b53684e8312" containerName="pull" Mar 11 09:14:00 crc kubenswrapper[4840]: I0311 09:14:00.141134 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="76328eae-3dfe-4246-8633-2b53684e8312" containerName="extract" Mar 11 09:14:00 crc kubenswrapper[4840]: I0311 09:14:00.141595 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553674-khft7" Mar 11 09:14:00 crc kubenswrapper[4840]: I0311 09:14:00.144813 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 11 09:14:00 crc kubenswrapper[4840]: I0311 09:14:00.145117 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-q6lwc" Mar 11 09:14:00 crc kubenswrapper[4840]: I0311 09:14:00.146312 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 11 09:14:00 crc kubenswrapper[4840]: I0311 09:14:00.154455 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553674-khft7"] Mar 11 09:14:00 crc kubenswrapper[4840]: I0311 09:14:00.250885 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xb475\" (UniqueName: \"kubernetes.io/projected/219c315e-397b-4f3a-b7b3-cded51345c0a-kube-api-access-xb475\") pod \"auto-csr-approver-29553674-khft7\" (UID: \"219c315e-397b-4f3a-b7b3-cded51345c0a\") " pod="openshift-infra/auto-csr-approver-29553674-khft7" Mar 11 09:14:00 crc kubenswrapper[4840]: I0311 09:14:00.352530 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-xb475\" (UniqueName: \"kubernetes.io/projected/219c315e-397b-4f3a-b7b3-cded51345c0a-kube-api-access-xb475\") pod \"auto-csr-approver-29553674-khft7\" (UID: \"219c315e-397b-4f3a-b7b3-cded51345c0a\") " pod="openshift-infra/auto-csr-approver-29553674-khft7" Mar 11 09:14:00 crc kubenswrapper[4840]: I0311 09:14:00.383840 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xb475\" (UniqueName: \"kubernetes.io/projected/219c315e-397b-4f3a-b7b3-cded51345c0a-kube-api-access-xb475\") pod \"auto-csr-approver-29553674-khft7\" (UID: \"219c315e-397b-4f3a-b7b3-cded51345c0a\") " pod="openshift-infra/auto-csr-approver-29553674-khft7" Mar 11 09:14:00 crc kubenswrapper[4840]: I0311 09:14:00.491323 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553674-khft7" Mar 11 09:14:00 crc kubenswrapper[4840]: I0311 09:14:00.934612 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553674-khft7"] Mar 11 09:14:01 crc kubenswrapper[4840]: I0311 09:14:01.828850 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553674-khft7" event={"ID":"219c315e-397b-4f3a-b7b3-cded51345c0a","Type":"ContainerStarted","Data":"79ba8dcccbd219f1fc47bc793b9f9f30c94fb22388bf57094771e3d26e31822b"} Mar 11 09:14:02 crc kubenswrapper[4840]: I0311 09:14:02.838749 4840 generic.go:334] "Generic (PLEG): container finished" podID="219c315e-397b-4f3a-b7b3-cded51345c0a" containerID="afb3af5f23f2ecb925672e6812c11a09a9f8523eb61ce4ca8277b8f0ad16e3b3" exitCode=0 Mar 11 09:14:02 crc kubenswrapper[4840]: I0311 09:14:02.838829 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553674-khft7" event={"ID":"219c315e-397b-4f3a-b7b3-cded51345c0a","Type":"ContainerDied","Data":"afb3af5f23f2ecb925672e6812c11a09a9f8523eb61ce4ca8277b8f0ad16e3b3"} Mar 11 09:14:03 crc kubenswrapper[4840]: I0311 
09:14:03.249270 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-6cf8df7788-wznnz"] Mar 11 09:14:03 crc kubenswrapper[4840]: I0311 09:14:03.250284 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-6cf8df7788-wznnz" Mar 11 09:14:03 crc kubenswrapper[4840]: I0311 09:14:03.254086 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-lsdv7" Mar 11 09:14:03 crc kubenswrapper[4840]: I0311 09:14:03.355695 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-6cf8df7788-wznnz"] Mar 11 09:14:03 crc kubenswrapper[4840]: I0311 09:14:03.395561 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8wxr\" (UniqueName: \"kubernetes.io/projected/a4fa1ab4-5b35-460c-a350-ba40ed046fe5-kube-api-access-b8wxr\") pod \"openstack-operator-controller-init-6cf8df7788-wznnz\" (UID: \"a4fa1ab4-5b35-460c-a350-ba40ed046fe5\") " pod="openstack-operators/openstack-operator-controller-init-6cf8df7788-wznnz" Mar 11 09:14:03 crc kubenswrapper[4840]: I0311 09:14:03.497175 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8wxr\" (UniqueName: \"kubernetes.io/projected/a4fa1ab4-5b35-460c-a350-ba40ed046fe5-kube-api-access-b8wxr\") pod \"openstack-operator-controller-init-6cf8df7788-wznnz\" (UID: \"a4fa1ab4-5b35-460c-a350-ba40ed046fe5\") " pod="openstack-operators/openstack-operator-controller-init-6cf8df7788-wznnz" Mar 11 09:14:03 crc kubenswrapper[4840]: I0311 09:14:03.522480 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8wxr\" (UniqueName: \"kubernetes.io/projected/a4fa1ab4-5b35-460c-a350-ba40ed046fe5-kube-api-access-b8wxr\") pod 
\"openstack-operator-controller-init-6cf8df7788-wznnz\" (UID: \"a4fa1ab4-5b35-460c-a350-ba40ed046fe5\") " pod="openstack-operators/openstack-operator-controller-init-6cf8df7788-wznnz" Mar 11 09:14:03 crc kubenswrapper[4840]: I0311 09:14:03.571445 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-6cf8df7788-wznnz" Mar 11 09:14:04 crc kubenswrapper[4840]: I0311 09:14:04.045733 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-6cf8df7788-wznnz"] Mar 11 09:14:04 crc kubenswrapper[4840]: W0311 09:14:04.050954 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda4fa1ab4_5b35_460c_a350_ba40ed046fe5.slice/crio-5691c1da5405075c722ed1d445be2e2f93c9124c534fb00801532dcb8bd4ec62 WatchSource:0}: Error finding container 5691c1da5405075c722ed1d445be2e2f93c9124c534fb00801532dcb8bd4ec62: Status 404 returned error can't find the container with id 5691c1da5405075c722ed1d445be2e2f93c9124c534fb00801532dcb8bd4ec62 Mar 11 09:14:04 crc kubenswrapper[4840]: I0311 09:14:04.111780 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553674-khft7" Mar 11 09:14:04 crc kubenswrapper[4840]: I0311 09:14:04.210382 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xb475\" (UniqueName: \"kubernetes.io/projected/219c315e-397b-4f3a-b7b3-cded51345c0a-kube-api-access-xb475\") pod \"219c315e-397b-4f3a-b7b3-cded51345c0a\" (UID: \"219c315e-397b-4f3a-b7b3-cded51345c0a\") " Mar 11 09:14:04 crc kubenswrapper[4840]: I0311 09:14:04.217538 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/219c315e-397b-4f3a-b7b3-cded51345c0a-kube-api-access-xb475" (OuterVolumeSpecName: "kube-api-access-xb475") pod "219c315e-397b-4f3a-b7b3-cded51345c0a" (UID: "219c315e-397b-4f3a-b7b3-cded51345c0a"). InnerVolumeSpecName "kube-api-access-xb475". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:14:04 crc kubenswrapper[4840]: I0311 09:14:04.312799 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xb475\" (UniqueName: \"kubernetes.io/projected/219c315e-397b-4f3a-b7b3-cded51345c0a-kube-api-access-xb475\") on node \"crc\" DevicePath \"\"" Mar 11 09:14:04 crc kubenswrapper[4840]: I0311 09:14:04.857553 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553674-khft7" event={"ID":"219c315e-397b-4f3a-b7b3-cded51345c0a","Type":"ContainerDied","Data":"79ba8dcccbd219f1fc47bc793b9f9f30c94fb22388bf57094771e3d26e31822b"} Mar 11 09:14:04 crc kubenswrapper[4840]: I0311 09:14:04.857609 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79ba8dcccbd219f1fc47bc793b9f9f30c94fb22388bf57094771e3d26e31822b" Mar 11 09:14:04 crc kubenswrapper[4840]: I0311 09:14:04.857686 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553674-khft7" Mar 11 09:14:04 crc kubenswrapper[4840]: I0311 09:14:04.860350 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-6cf8df7788-wznnz" event={"ID":"a4fa1ab4-5b35-460c-a350-ba40ed046fe5","Type":"ContainerStarted","Data":"5691c1da5405075c722ed1d445be2e2f93c9124c534fb00801532dcb8bd4ec62"} Mar 11 09:14:05 crc kubenswrapper[4840]: I0311 09:14:05.188058 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29553668-rcg6x"] Mar 11 09:14:05 crc kubenswrapper[4840]: I0311 09:14:05.192427 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29553668-rcg6x"] Mar 11 09:14:06 crc kubenswrapper[4840]: I0311 09:14:06.069689 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f37e6cb1-3d5e-47d7-8829-263a9aab83f8" path="/var/lib/kubelet/pods/f37e6cb1-3d5e-47d7-8829-263a9aab83f8/volumes" Mar 11 09:14:08 crc kubenswrapper[4840]: I0311 09:14:08.956922 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-6cf8df7788-wznnz" event={"ID":"a4fa1ab4-5b35-460c-a350-ba40ed046fe5","Type":"ContainerStarted","Data":"a0e2571d46d30c1f9f29e0f91898d4e4554a8ba71497a517fccee66a2abc7c78"} Mar 11 09:14:08 crc kubenswrapper[4840]: I0311 09:14:08.958810 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-6cf8df7788-wznnz" Mar 11 09:14:09 crc kubenswrapper[4840]: I0311 09:14:09.002046 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-6cf8df7788-wznnz" podStartSLOduration=1.424649232 podStartE2EDuration="6.002022683s" podCreationTimestamp="2026-03-11 09:14:03 +0000 UTC" firstStartedPulling="2026-03-11 09:14:04.053131558 +0000 UTC m=+1042.718801373" 
lastFinishedPulling="2026-03-11 09:14:08.630505009 +0000 UTC m=+1047.296174824" observedRunningTime="2026-03-11 09:14:08.997704035 +0000 UTC m=+1047.663373840" watchObservedRunningTime="2026-03-11 09:14:09.002022683 +0000 UTC m=+1047.667692498" Mar 11 09:14:13 crc kubenswrapper[4840]: I0311 09:14:13.575506 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-6cf8df7788-wznnz" Mar 11 09:14:33 crc kubenswrapper[4840]: I0311 09:14:33.845044 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-677bd678f7-kkplv"] Mar 11 09:14:33 crc kubenswrapper[4840]: E0311 09:14:33.846342 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="219c315e-397b-4f3a-b7b3-cded51345c0a" containerName="oc" Mar 11 09:14:33 crc kubenswrapper[4840]: I0311 09:14:33.846360 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="219c315e-397b-4f3a-b7b3-cded51345c0a" containerName="oc" Mar 11 09:14:33 crc kubenswrapper[4840]: I0311 09:14:33.846515 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="219c315e-397b-4f3a-b7b3-cded51345c0a" containerName="oc" Mar 11 09:14:33 crc kubenswrapper[4840]: I0311 09:14:33.847177 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-kkplv" Mar 11 09:14:33 crc kubenswrapper[4840]: I0311 09:14:33.850866 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-ssdqg" Mar 11 09:14:33 crc kubenswrapper[4840]: I0311 09:14:33.851723 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-984cd4dcf-mngxm"] Mar 11 09:14:33 crc kubenswrapper[4840]: I0311 09:14:33.852764 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-mngxm" Mar 11 09:14:33 crc kubenswrapper[4840]: I0311 09:14:33.855098 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-sj9pg" Mar 11 09:14:33 crc kubenswrapper[4840]: I0311 09:14:33.857713 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-677bd678f7-kkplv"] Mar 11 09:14:33 crc kubenswrapper[4840]: I0311 09:14:33.916171 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-66d56f6ff4-tjvvr"] Mar 11 09:14:33 crc kubenswrapper[4840]: I0311 09:14:33.917374 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-tjvvr" Mar 11 09:14:33 crc kubenswrapper[4840]: I0311 09:14:33.919813 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-944lv\" (UniqueName: \"kubernetes.io/projected/2332f92a-b46b-4f63-83f9-f48ea29492b9-kube-api-access-944lv\") pod \"cinder-operator-controller-manager-984cd4dcf-mngxm\" (UID: \"2332f92a-b46b-4f63-83f9-f48ea29492b9\") " pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-mngxm" Mar 11 09:14:33 crc kubenswrapper[4840]: I0311 09:14:33.919874 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6b4n\" (UniqueName: \"kubernetes.io/projected/91aae815-00f1-46d8-8709-f212ab049fdf-kube-api-access-c6b4n\") pod \"barbican-operator-controller-manager-677bd678f7-kkplv\" (UID: \"91aae815-00f1-46d8-8709-f212ab049fdf\") " pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-kkplv" Mar 11 09:14:33 crc kubenswrapper[4840]: I0311 09:14:33.920884 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/cinder-operator-controller-manager-984cd4dcf-mngxm"] Mar 11 09:14:33 crc kubenswrapper[4840]: I0311 09:14:33.925030 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-5964f64c48-qbmr9"] Mar 11 09:14:33 crc kubenswrapper[4840]: I0311 09:14:33.926068 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-5964f64c48-qbmr9" Mar 11 09:14:33 crc kubenswrapper[4840]: I0311 09:14:33.930308 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-62c6h" Mar 11 09:14:33 crc kubenswrapper[4840]: I0311 09:14:33.931133 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-c56xf" Mar 11 09:14:33 crc kubenswrapper[4840]: I0311 09:14:33.952544 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-5964f64c48-qbmr9"] Mar 11 09:14:33 crc kubenswrapper[4840]: I0311 09:14:33.964332 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-77b6666d85-6gn84"] Mar 11 09:14:33 crc kubenswrapper[4840]: I0311 09:14:33.965579 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-6gn84" Mar 11 09:14:33 crc kubenswrapper[4840]: I0311 09:14:33.968369 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-lhqtg" Mar 11 09:14:33 crc kubenswrapper[4840]: I0311 09:14:33.969224 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-66d56f6ff4-tjvvr"] Mar 11 09:14:33 crc kubenswrapper[4840]: I0311 09:14:33.977027 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-77b6666d85-6gn84"] Mar 11 09:14:33 crc kubenswrapper[4840]: I0311 09:14:33.995872 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-6d9d6b584d-k7b9q"] Mar 11 09:14:33 crc kubenswrapper[4840]: I0311 09:14:33.996995 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-k7b9q" Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.001435 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-4844m" Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.022125 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvmt8\" (UniqueName: \"kubernetes.io/projected/9ba35a68-fbec-4de0-a84a-8f879b9906e5-kube-api-access-vvmt8\") pod \"designate-operator-controller-manager-66d56f6ff4-tjvvr\" (UID: \"9ba35a68-fbec-4de0-a84a-8f879b9906e5\") " pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-tjvvr" Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.022214 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-944lv\" (UniqueName: \"kubernetes.io/projected/2332f92a-b46b-4f63-83f9-f48ea29492b9-kube-api-access-944lv\") pod \"cinder-operator-controller-manager-984cd4dcf-mngxm\" (UID: \"2332f92a-b46b-4f63-83f9-f48ea29492b9\") " pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-mngxm" Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.022241 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dspns\" (UniqueName: \"kubernetes.io/projected/32013686-938e-476d-b215-0bb597f780da-kube-api-access-dspns\") pod \"heat-operator-controller-manager-77b6666d85-6gn84\" (UID: \"32013686-938e-476d-b215-0bb597f780da\") " pod="openstack-operators/heat-operator-controller-manager-77b6666d85-6gn84" Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.022260 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vzbt\" (UniqueName: 
\"kubernetes.io/projected/7f7b0431-153a-48e0-8523-1db25d309919-kube-api-access-6vzbt\") pod \"glance-operator-controller-manager-5964f64c48-qbmr9\" (UID: \"7f7b0431-153a-48e0-8523-1db25d309919\") " pod="openstack-operators/glance-operator-controller-manager-5964f64c48-qbmr9" Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.022306 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6b4n\" (UniqueName: \"kubernetes.io/projected/91aae815-00f1-46d8-8709-f212ab049fdf-kube-api-access-c6b4n\") pod \"barbican-operator-controller-manager-677bd678f7-kkplv\" (UID: \"91aae815-00f1-46d8-8709-f212ab049fdf\") " pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-kkplv" Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.022336 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qp7tq\" (UniqueName: \"kubernetes.io/projected/93e6c54e-ac8e-4cec-a872-6e5204f0afdb-kube-api-access-qp7tq\") pod \"horizon-operator-controller-manager-6d9d6b584d-k7b9q\" (UID: \"93e6c54e-ac8e-4cec-a872-6e5204f0afdb\") " pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-k7b9q" Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.024590 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-6d9d6b584d-k7b9q"] Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.050513 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-5995f4446f-xldht"] Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.051393 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-5995f4446f-xldht" Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.052954 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-944lv\" (UniqueName: \"kubernetes.io/projected/2332f92a-b46b-4f63-83f9-f48ea29492b9-kube-api-access-944lv\") pod \"cinder-operator-controller-manager-984cd4dcf-mngxm\" (UID: \"2332f92a-b46b-4f63-83f9-f48ea29492b9\") " pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-mngxm" Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.055132 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6bbb499bbc-9dffx"] Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.056319 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-9dffx" Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.059968 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6b4n\" (UniqueName: \"kubernetes.io/projected/91aae815-00f1-46d8-8709-f212ab049fdf-kube-api-access-c6b4n\") pod \"barbican-operator-controller-manager-677bd678f7-kkplv\" (UID: \"91aae815-00f1-46d8-8709-f212ab049fdf\") " pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-kkplv" Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.060924 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.061261 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-nhd2w" Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.062026 4840 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-n2pp2" Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.080179 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-5995f4446f-xldht"] Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.080691 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6bbb499bbc-9dffx"] Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.088294 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-684f77d66d-bzvzx"] Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.089755 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-bzvzx" Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.100407 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-684f77d66d-bzvzx"] Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.111248 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-fsqzg" Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.128820 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvmt8\" (UniqueName: \"kubernetes.io/projected/9ba35a68-fbec-4de0-a84a-8f879b9906e5-kube-api-access-vvmt8\") pod \"designate-operator-controller-manager-66d56f6ff4-tjvvr\" (UID: \"9ba35a68-fbec-4de0-a84a-8f879b9906e5\") " pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-tjvvr" Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.128877 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68xvf\" (UniqueName: 
\"kubernetes.io/projected/086eae6a-0cbe-4a9a-884b-272239b8d302-kube-api-access-68xvf\") pod \"keystone-operator-controller-manager-684f77d66d-bzvzx\" (UID: \"086eae6a-0cbe-4a9a-884b-272239b8d302\") " pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-bzvzx" Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.128926 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmzgw\" (UniqueName: \"kubernetes.io/projected/ed2a6fa8-2915-4d07-b54d-b274a742c5a7-kube-api-access-bmzgw\") pod \"infra-operator-controller-manager-5995f4446f-xldht\" (UID: \"ed2a6fa8-2915-4d07-b54d-b274a742c5a7\") " pod="openstack-operators/infra-operator-controller-manager-5995f4446f-xldht" Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.128949 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dspns\" (UniqueName: \"kubernetes.io/projected/32013686-938e-476d-b215-0bb597f780da-kube-api-access-dspns\") pod \"heat-operator-controller-manager-77b6666d85-6gn84\" (UID: \"32013686-938e-476d-b215-0bb597f780da\") " pod="openstack-operators/heat-operator-controller-manager-77b6666d85-6gn84" Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.128973 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vzbt\" (UniqueName: \"kubernetes.io/projected/7f7b0431-153a-48e0-8523-1db25d309919-kube-api-access-6vzbt\") pod \"glance-operator-controller-manager-5964f64c48-qbmr9\" (UID: \"7f7b0431-153a-48e0-8523-1db25d309919\") " pod="openstack-operators/glance-operator-controller-manager-5964f64c48-qbmr9" Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.128994 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4qzn\" (UniqueName: \"kubernetes.io/projected/b3f01dca-eeb5-40bf-bddb-2fe256ee64f8-kube-api-access-z4qzn\") pod 
\"ironic-operator-controller-manager-6bbb499bbc-9dffx\" (UID: \"b3f01dca-eeb5-40bf-bddb-2fe256ee64f8\") " pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-9dffx" Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.129030 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qp7tq\" (UniqueName: \"kubernetes.io/projected/93e6c54e-ac8e-4cec-a872-6e5204f0afdb-kube-api-access-qp7tq\") pod \"horizon-operator-controller-manager-6d9d6b584d-k7b9q\" (UID: \"93e6c54e-ac8e-4cec-a872-6e5204f0afdb\") " pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-k7b9q" Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.129056 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ed2a6fa8-2915-4d07-b54d-b274a742c5a7-cert\") pod \"infra-operator-controller-manager-5995f4446f-xldht\" (UID: \"ed2a6fa8-2915-4d07-b54d-b274a742c5a7\") " pod="openstack-operators/infra-operator-controller-manager-5995f4446f-xldht" Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.136837 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-68f45f9d9f-pqcxj"] Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.139809 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-pqcxj" Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.143707 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-6prkg" Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.156856 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-658d4cdd5-gg7rz"] Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.157862 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-gg7rz" Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.161951 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-444dp" Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.167001 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-kkplv" Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.171499 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vzbt\" (UniqueName: \"kubernetes.io/projected/7f7b0431-153a-48e0-8523-1db25d309919-kube-api-access-6vzbt\") pod \"glance-operator-controller-manager-5964f64c48-qbmr9\" (UID: \"7f7b0431-153a-48e0-8523-1db25d309919\") " pod="openstack-operators/glance-operator-controller-manager-5964f64c48-qbmr9" Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.174908 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-658d4cdd5-gg7rz"] Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.176146 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dspns\" (UniqueName: \"kubernetes.io/projected/32013686-938e-476d-b215-0bb597f780da-kube-api-access-dspns\") pod \"heat-operator-controller-manager-77b6666d85-6gn84\" (UID: \"32013686-938e-476d-b215-0bb597f780da\") " pod="openstack-operators/heat-operator-controller-manager-77b6666d85-6gn84" Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.179963 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-mngxm" Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.180742 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-776c5696bf-pgbkn"] Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.182005 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-pgbkn" Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.188628 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-68f45f9d9f-pqcxj"] Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.188989 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-kvczf" Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.191114 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvmt8\" (UniqueName: \"kubernetes.io/projected/9ba35a68-fbec-4de0-a84a-8f879b9906e5-kube-api-access-vvmt8\") pod \"designate-operator-controller-manager-66d56f6ff4-tjvvr\" (UID: \"9ba35a68-fbec-4de0-a84a-8f879b9906e5\") " pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-tjvvr" Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.207090 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qp7tq\" (UniqueName: \"kubernetes.io/projected/93e6c54e-ac8e-4cec-a872-6e5204f0afdb-kube-api-access-qp7tq\") pod \"horizon-operator-controller-manager-6d9d6b584d-k7b9q\" (UID: \"93e6c54e-ac8e-4cec-a872-6e5204f0afdb\") " pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-k7b9q" Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.240118 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ed2a6fa8-2915-4d07-b54d-b274a742c5a7-cert\") pod \"infra-operator-controller-manager-5995f4446f-xldht\" (UID: \"ed2a6fa8-2915-4d07-b54d-b274a742c5a7\") " pod="openstack-operators/infra-operator-controller-manager-5995f4446f-xldht" Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.240245 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-68xvf\" (UniqueName: \"kubernetes.io/projected/086eae6a-0cbe-4a9a-884b-272239b8d302-kube-api-access-68xvf\") pod \"keystone-operator-controller-manager-684f77d66d-bzvzx\" (UID: \"086eae6a-0cbe-4a9a-884b-272239b8d302\") " pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-bzvzx" Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.240313 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmzgw\" (UniqueName: \"kubernetes.io/projected/ed2a6fa8-2915-4d07-b54d-b274a742c5a7-kube-api-access-bmzgw\") pod \"infra-operator-controller-manager-5995f4446f-xldht\" (UID: \"ed2a6fa8-2915-4d07-b54d-b274a742c5a7\") " pod="openstack-operators/infra-operator-controller-manager-5995f4446f-xldht" Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.240376 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4qzn\" (UniqueName: \"kubernetes.io/projected/b3f01dca-eeb5-40bf-bddb-2fe256ee64f8-kube-api-access-z4qzn\") pod \"ironic-operator-controller-manager-6bbb499bbc-9dffx\" (UID: \"b3f01dca-eeb5-40bf-bddb-2fe256ee64f8\") " pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-9dffx" Mar 11 09:14:34 crc kubenswrapper[4840]: E0311 09:14:34.241819 4840 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 11 09:14:34 crc kubenswrapper[4840]: E0311 09:14:34.242008 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed2a6fa8-2915-4d07-b54d-b274a742c5a7-cert podName:ed2a6fa8-2915-4d07-b54d-b274a742c5a7 nodeName:}" failed. No retries permitted until 2026-03-11 09:14:34.741858093 +0000 UTC m=+1073.407527908 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ed2a6fa8-2915-4d07-b54d-b274a742c5a7-cert") pod "infra-operator-controller-manager-5995f4446f-xldht" (UID: "ed2a6fa8-2915-4d07-b54d-b274a742c5a7") : secret "infra-operator-webhook-server-cert" not found Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.245925 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-tjvvr" Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.248317 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-776c5696bf-pgbkn"] Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.280221 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-5964f64c48-qbmr9" Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.289702 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmzgw\" (UniqueName: \"kubernetes.io/projected/ed2a6fa8-2915-4d07-b54d-b274a742c5a7-kube-api-access-bmzgw\") pod \"infra-operator-controller-manager-5995f4446f-xldht\" (UID: \"ed2a6fa8-2915-4d07-b54d-b274a742c5a7\") " pod="openstack-operators/infra-operator-controller-manager-5995f4446f-xldht" Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.290423 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-6gn84" Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.309383 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68xvf\" (UniqueName: \"kubernetes.io/projected/086eae6a-0cbe-4a9a-884b-272239b8d302-kube-api-access-68xvf\") pod \"keystone-operator-controller-manager-684f77d66d-bzvzx\" (UID: \"086eae6a-0cbe-4a9a-884b-272239b8d302\") " pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-bzvzx" Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.316335 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4qzn\" (UniqueName: \"kubernetes.io/projected/b3f01dca-eeb5-40bf-bddb-2fe256ee64f8-kube-api-access-z4qzn\") pod \"ironic-operator-controller-manager-6bbb499bbc-9dffx\" (UID: \"b3f01dca-eeb5-40bf-bddb-2fe256ee64f8\") " pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-9dffx" Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.336575 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-569cc54c5-q7hvp"] Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.338321 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-569cc54c5-q7hvp" Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.342271 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-k7b9q" Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.343174 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xww92\" (UniqueName: \"kubernetes.io/projected/6d12563d-1416-4fd9-b38d-40bdadc53b40-kube-api-access-xww92\") pod \"manila-operator-controller-manager-68f45f9d9f-pqcxj\" (UID: \"6d12563d-1416-4fd9-b38d-40bdadc53b40\") " pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-pqcxj" Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.343291 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jks2\" (UniqueName: \"kubernetes.io/projected/d5f70a0c-43ba-4cb0-b66b-a24b3e861b56-kube-api-access-5jks2\") pod \"mariadb-operator-controller-manager-658d4cdd5-gg7rz\" (UID: \"d5f70a0c-43ba-4cb0-b66b-a24b3e861b56\") " pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-gg7rz" Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.343322 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghpz7\" (UniqueName: \"kubernetes.io/projected/9170b899-2f0e-498c-893d-fd8b64eb96c6-kube-api-access-ghpz7\") pod \"neutron-operator-controller-manager-776c5696bf-pgbkn\" (UID: \"9170b899-2f0e-498c-893d-fd8b64eb96c6\") " pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-pgbkn" Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.348397 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-zspz8" Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.365108 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-bzvzx" Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.394915 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-wbkbm"] Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.396623 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-wbkbm" Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.402216 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-xbhkq" Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.414357 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-569cc54c5-q7hvp"] Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.442196 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-wbkbm"] Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.445927 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xww92\" (UniqueName: \"kubernetes.io/projected/6d12563d-1416-4fd9-b38d-40bdadc53b40-kube-api-access-xww92\") pod \"manila-operator-controller-manager-68f45f9d9f-pqcxj\" (UID: \"6d12563d-1416-4fd9-b38d-40bdadc53b40\") " pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-pqcxj" Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.445999 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mrjg\" (UniqueName: \"kubernetes.io/projected/b14fee95-3d63-402a-ae0a-d3f74415f59b-kube-api-access-9mrjg\") pod \"nova-operator-controller-manager-569cc54c5-q7hvp\" (UID: \"b14fee95-3d63-402a-ae0a-d3f74415f59b\") " 
pod="openstack-operators/nova-operator-controller-manager-569cc54c5-q7hvp" Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.446078 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jks2\" (UniqueName: \"kubernetes.io/projected/d5f70a0c-43ba-4cb0-b66b-a24b3e861b56-kube-api-access-5jks2\") pod \"mariadb-operator-controller-manager-658d4cdd5-gg7rz\" (UID: \"d5f70a0c-43ba-4cb0-b66b-a24b3e861b56\") " pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-gg7rz" Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.446129 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghpz7\" (UniqueName: \"kubernetes.io/projected/9170b899-2f0e-498c-893d-fd8b64eb96c6-kube-api-access-ghpz7\") pod \"neutron-operator-controller-manager-776c5696bf-pgbkn\" (UID: \"9170b899-2f0e-498c-893d-fd8b64eb96c6\") " pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-pgbkn" Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.462442 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-9dffx" Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.471681 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6647d7885fb7jnm"] Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.473400 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6647d7885fb7jnm" Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.474690 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xww92\" (UniqueName: \"kubernetes.io/projected/6d12563d-1416-4fd9-b38d-40bdadc53b40-kube-api-access-xww92\") pod \"manila-operator-controller-manager-68f45f9d9f-pqcxj\" (UID: \"6d12563d-1416-4fd9-b38d-40bdadc53b40\") " pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-pqcxj" Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.476046 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghpz7\" (UniqueName: \"kubernetes.io/projected/9170b899-2f0e-498c-893d-fd8b64eb96c6-kube-api-access-ghpz7\") pod \"neutron-operator-controller-manager-776c5696bf-pgbkn\" (UID: \"9170b899-2f0e-498c-893d-fd8b64eb96c6\") " pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-pgbkn" Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.477626 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-j5plh" Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.481549 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.482340 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jks2\" (UniqueName: \"kubernetes.io/projected/d5f70a0c-43ba-4cb0-b66b-a24b3e861b56-kube-api-access-5jks2\") pod \"mariadb-operator-controller-manager-658d4cdd5-gg7rz\" (UID: \"d5f70a0c-43ba-4cb0-b66b-a24b3e861b56\") " pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-gg7rz" Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.482824 4840 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack-operators/ovn-operator-controller-manager-bbc5b68f9-stv5z"] Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.492751 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-stv5z" Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.500192 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-sn8qz" Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.513609 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-bbc5b68f9-stv5z"] Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.521993 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-574d45c66c-5dxn9"] Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.523479 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-5dxn9" Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.531630 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-zln79" Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.546540 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-j6nng"] Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.548254 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkdz2\" (UniqueName: \"kubernetes.io/projected/4ed5f2e7-087f-466d-85b3-9088ab43b410-kube-api-access-fkdz2\") pod \"octavia-operator-controller-manager-5f4f55cb5c-wbkbm\" (UID: \"4ed5f2e7-087f-466d-85b3-9088ab43b410\") " pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-wbkbm" Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.548324 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-j6nng" Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.548334 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/97bcd81b-f45e-4a98-9079-000fdf4cc50f-cert\") pod \"openstack-baremetal-operator-controller-manager-6647d7885fb7jnm\" (UID: \"97bcd81b-f45e-4a98-9079-000fdf4cc50f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6647d7885fb7jnm" Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.548405 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbx9l\" (UniqueName: \"kubernetes.io/projected/506cf57c-03be-4949-9037-2e806f8b3896-kube-api-access-xbx9l\") pod \"ovn-operator-controller-manager-bbc5b68f9-stv5z\" (UID: \"506cf57c-03be-4949-9037-2e806f8b3896\") " pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-stv5z" Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.548432 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5hrp\" (UniqueName: \"kubernetes.io/projected/97bcd81b-f45e-4a98-9079-000fdf4cc50f-kube-api-access-v5hrp\") pod \"openstack-baremetal-operator-controller-manager-6647d7885fb7jnm\" (UID: \"97bcd81b-f45e-4a98-9079-000fdf4cc50f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6647d7885fb7jnm" Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.548488 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mrjg\" (UniqueName: \"kubernetes.io/projected/b14fee95-3d63-402a-ae0a-d3f74415f59b-kube-api-access-9mrjg\") pod \"nova-operator-controller-manager-569cc54c5-q7hvp\" (UID: \"b14fee95-3d63-402a-ae0a-d3f74415f59b\") " pod="openstack-operators/nova-operator-controller-manager-569cc54c5-q7hvp" Mar 
11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.553446 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-gwrpf" Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.567367 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-pqcxj" Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.575955 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-677c674df7-6jvcx"] Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.577331 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-677c674df7-6jvcx" Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.590812 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-574d45c66c-5dxn9"] Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.601621 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-tmgs7" Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.625638 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6647d7885fb7jnm"] Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.652920 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbx9l\" (UniqueName: \"kubernetes.io/projected/506cf57c-03be-4949-9037-2e806f8b3896-kube-api-access-xbx9l\") pod \"ovn-operator-controller-manager-bbc5b68f9-stv5z\" (UID: \"506cf57c-03be-4949-9037-2e806f8b3896\") " pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-stv5z" Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.653172 4840 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-v5hrp\" (UniqueName: \"kubernetes.io/projected/97bcd81b-f45e-4a98-9079-000fdf4cc50f-kube-api-access-v5hrp\") pod \"openstack-baremetal-operator-controller-manager-6647d7885fb7jnm\" (UID: \"97bcd81b-f45e-4a98-9079-000fdf4cc50f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6647d7885fb7jnm" Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.653280 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zvdd\" (UniqueName: \"kubernetes.io/projected/d58a257e-f4f2-48cd-8c89-e0034e37092c-kube-api-access-2zvdd\") pod \"placement-operator-controller-manager-574d45c66c-5dxn9\" (UID: \"d58a257e-f4f2-48cd-8c89-e0034e37092c\") " pod="openstack-operators/placement-operator-controller-manager-574d45c66c-5dxn9" Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.653436 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvrqm\" (UniqueName: \"kubernetes.io/projected/fd5bb41b-837d-473d-9718-56f2247fadcb-kube-api-access-lvrqm\") pod \"swift-operator-controller-manager-677c674df7-6jvcx\" (UID: \"fd5bb41b-837d-473d-9718-56f2247fadcb\") " pod="openstack-operators/swift-operator-controller-manager-677c674df7-6jvcx" Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.653552 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkdz2\" (UniqueName: \"kubernetes.io/projected/4ed5f2e7-087f-466d-85b3-9088ab43b410-kube-api-access-fkdz2\") pod \"octavia-operator-controller-manager-5f4f55cb5c-wbkbm\" (UID: \"4ed5f2e7-087f-466d-85b3-9088ab43b410\") " pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-wbkbm" Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.653829 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s89bp\" 
(UniqueName: \"kubernetes.io/projected/1fb68385-1f54-4612-8bfc-a4bb2e535600-kube-api-access-s89bp\") pod \"telemetry-operator-controller-manager-6cd66dbd4b-j6nng\" (UID: \"1fb68385-1f54-4612-8bfc-a4bb2e535600\") " pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-j6nng" Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.653949 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/97bcd81b-f45e-4a98-9079-000fdf4cc50f-cert\") pod \"openstack-baremetal-operator-controller-manager-6647d7885fb7jnm\" (UID: \"97bcd81b-f45e-4a98-9079-000fdf4cc50f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6647d7885fb7jnm" Mar 11 09:14:34 crc kubenswrapper[4840]: E0311 09:14:34.654319 4840 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.655690 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mrjg\" (UniqueName: \"kubernetes.io/projected/b14fee95-3d63-402a-ae0a-d3f74415f59b-kube-api-access-9mrjg\") pod \"nova-operator-controller-manager-569cc54c5-q7hvp\" (UID: \"b14fee95-3d63-402a-ae0a-d3f74415f59b\") " pod="openstack-operators/nova-operator-controller-manager-569cc54c5-q7hvp" Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.664311 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-gg7rz" Mar 11 09:14:34 crc kubenswrapper[4840]: E0311 09:14:34.664408 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/97bcd81b-f45e-4a98-9079-000fdf4cc50f-cert podName:97bcd81b-f45e-4a98-9079-000fdf4cc50f nodeName:}" failed. 
No retries permitted until 2026-03-11 09:14:35.154438596 +0000 UTC m=+1073.820108411 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/97bcd81b-f45e-4a98-9079-000fdf4cc50f-cert") pod "openstack-baremetal-operator-controller-manager-6647d7885fb7jnm" (UID: "97bcd81b-f45e-4a98-9079-000fdf4cc50f") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.671530 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5c5cb9c4d7-vxcc4"] Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.672737 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-vxcc4" Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.688609 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-m5svm" Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.692459 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-j6nng"] Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.694420 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-pgbkn" Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.717672 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-677c674df7-6jvcx"] Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.718246 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-569cc54c5-q7hvp" Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.739795 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5hrp\" (UniqueName: \"kubernetes.io/projected/97bcd81b-f45e-4a98-9079-000fdf4cc50f-kube-api-access-v5hrp\") pod \"openstack-baremetal-operator-controller-manager-6647d7885fb7jnm\" (UID: \"97bcd81b-f45e-4a98-9079-000fdf4cc50f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6647d7885fb7jnm" Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.740436 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkdz2\" (UniqueName: \"kubernetes.io/projected/4ed5f2e7-087f-466d-85b3-9088ab43b410-kube-api-access-fkdz2\") pod \"octavia-operator-controller-manager-5f4f55cb5c-wbkbm\" (UID: \"4ed5f2e7-087f-466d-85b3-9088ab43b410\") " pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-wbkbm" Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.751144 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbx9l\" (UniqueName: \"kubernetes.io/projected/506cf57c-03be-4949-9037-2e806f8b3896-kube-api-access-xbx9l\") pod \"ovn-operator-controller-manager-bbc5b68f9-stv5z\" (UID: \"506cf57c-03be-4949-9037-2e806f8b3896\") " pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-stv5z" Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.755147 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5sbv\" (UniqueName: \"kubernetes.io/projected/ec409cfe-4999-48d4-93f0-cbf22595667e-kube-api-access-r5sbv\") pod \"test-operator-controller-manager-5c5cb9c4d7-vxcc4\" (UID: \"ec409cfe-4999-48d4-93f0-cbf22595667e\") " pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-vxcc4" Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 
09:14:34.755221 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2zvdd\" (UniqueName: \"kubernetes.io/projected/d58a257e-f4f2-48cd-8c89-e0034e37092c-kube-api-access-2zvdd\") pod \"placement-operator-controller-manager-574d45c66c-5dxn9\" (UID: \"d58a257e-f4f2-48cd-8c89-e0034e37092c\") " pod="openstack-operators/placement-operator-controller-manager-574d45c66c-5dxn9" Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.755254 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ed2a6fa8-2915-4d07-b54d-b274a742c5a7-cert\") pod \"infra-operator-controller-manager-5995f4446f-xldht\" (UID: \"ed2a6fa8-2915-4d07-b54d-b274a742c5a7\") " pod="openstack-operators/infra-operator-controller-manager-5995f4446f-xldht" Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.755285 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvrqm\" (UniqueName: \"kubernetes.io/projected/fd5bb41b-837d-473d-9718-56f2247fadcb-kube-api-access-lvrqm\") pod \"swift-operator-controller-manager-677c674df7-6jvcx\" (UID: \"fd5bb41b-837d-473d-9718-56f2247fadcb\") " pod="openstack-operators/swift-operator-controller-manager-677c674df7-6jvcx" Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.755305 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s89bp\" (UniqueName: \"kubernetes.io/projected/1fb68385-1f54-4612-8bfc-a4bb2e535600-kube-api-access-s89bp\") pod \"telemetry-operator-controller-manager-6cd66dbd4b-j6nng\" (UID: \"1fb68385-1f54-4612-8bfc-a4bb2e535600\") " pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-j6nng" Mar 11 09:14:34 crc kubenswrapper[4840]: E0311 09:14:34.755981 4840 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 11 09:14:34 crc 
kubenswrapper[4840]: E0311 09:14:34.756026 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed2a6fa8-2915-4d07-b54d-b274a742c5a7-cert podName:ed2a6fa8-2915-4d07-b54d-b274a742c5a7 nodeName:}" failed. No retries permitted until 2026-03-11 09:14:35.756008392 +0000 UTC m=+1074.421678207 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ed2a6fa8-2915-4d07-b54d-b274a742c5a7-cert") pod "infra-operator-controller-manager-5995f4446f-xldht" (UID: "ed2a6fa8-2915-4d07-b54d-b274a742c5a7") : secret "infra-operator-webhook-server-cert" not found Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.756375 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5c5cb9c4d7-vxcc4"] Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.808187 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zvdd\" (UniqueName: \"kubernetes.io/projected/d58a257e-f4f2-48cd-8c89-e0034e37092c-kube-api-access-2zvdd\") pod \"placement-operator-controller-manager-574d45c66c-5dxn9\" (UID: \"d58a257e-f4f2-48cd-8c89-e0034e37092c\") " pod="openstack-operators/placement-operator-controller-manager-574d45c66c-5dxn9" Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.813039 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6dd88c6f67-dsd85"] Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.814124 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-dsd85" Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.815410 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s89bp\" (UniqueName: \"kubernetes.io/projected/1fb68385-1f54-4612-8bfc-a4bb2e535600-kube-api-access-s89bp\") pod \"telemetry-operator-controller-manager-6cd66dbd4b-j6nng\" (UID: \"1fb68385-1f54-4612-8bfc-a4bb2e535600\") " pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-j6nng" Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.827680 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-9b5xq" Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.844718 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6dd88c6f67-dsd85"] Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.862040 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqrcg\" (UniqueName: \"kubernetes.io/projected/03e30bf6-186b-4ec3-965b-c24f4e8af21b-kube-api-access-gqrcg\") pod \"watcher-operator-controller-manager-6dd88c6f67-dsd85\" (UID: \"03e30bf6-186b-4ec3-965b-c24f4e8af21b\") " pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-dsd85" Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.862217 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5sbv\" (UniqueName: \"kubernetes.io/projected/ec409cfe-4999-48d4-93f0-cbf22595667e-kube-api-access-r5sbv\") pod \"test-operator-controller-manager-5c5cb9c4d7-vxcc4\" (UID: \"ec409cfe-4999-48d4-93f0-cbf22595667e\") " pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-vxcc4" Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.873284 4840 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-lvrqm\" (UniqueName: \"kubernetes.io/projected/fd5bb41b-837d-473d-9718-56f2247fadcb-kube-api-access-lvrqm\") pod \"swift-operator-controller-manager-677c674df7-6jvcx\" (UID: \"fd5bb41b-837d-473d-9718-56f2247fadcb\") " pod="openstack-operators/swift-operator-controller-manager-677c674df7-6jvcx" Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.873810 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-stv5z" Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.918817 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5sbv\" (UniqueName: \"kubernetes.io/projected/ec409cfe-4999-48d4-93f0-cbf22595667e-kube-api-access-r5sbv\") pod \"test-operator-controller-manager-5c5cb9c4d7-vxcc4\" (UID: \"ec409cfe-4999-48d4-93f0-cbf22595667e\") " pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-vxcc4" Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.923165 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6679ddfdc7-h8cln"] Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.924199 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-6679ddfdc7-h8cln" Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.927969 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.928269 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-2cgnr" Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.928569 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.937060 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-5dxn9" Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.958485 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6679ddfdc7-h8cln"] Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.963371 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqrcg\" (UniqueName: \"kubernetes.io/projected/03e30bf6-186b-4ec3-965b-c24f4e8af21b-kube-api-access-gqrcg\") pod \"watcher-operator-controller-manager-6dd88c6f67-dsd85\" (UID: \"03e30bf6-186b-4ec3-965b-c24f4e8af21b\") " pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-dsd85" Mar 11 09:14:34 crc kubenswrapper[4840]: I0311 09:14:34.972886 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-j6nng" Mar 11 09:14:35 crc kubenswrapper[4840]: I0311 09:14:35.039491 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqrcg\" (UniqueName: \"kubernetes.io/projected/03e30bf6-186b-4ec3-965b-c24f4e8af21b-kube-api-access-gqrcg\") pod \"watcher-operator-controller-manager-6dd88c6f67-dsd85\" (UID: \"03e30bf6-186b-4ec3-965b-c24f4e8af21b\") " pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-dsd85" Mar 11 09:14:35 crc kubenswrapper[4840]: I0311 09:14:35.040628 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-wbkbm" Mar 11 09:14:35 crc kubenswrapper[4840]: I0311 09:14:35.041546 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-677c674df7-6jvcx" Mar 11 09:14:35 crc kubenswrapper[4840]: I0311 09:14:35.066148 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-vxcc4" Mar 11 09:14:35 crc kubenswrapper[4840]: I0311 09:14:35.087938 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdxlt\" (UniqueName: \"kubernetes.io/projected/a121ac36-d9c4-4837-b075-57588b36c8ec-kube-api-access-fdxlt\") pod \"openstack-operator-controller-manager-6679ddfdc7-h8cln\" (UID: \"a121ac36-d9c4-4837-b075-57588b36c8ec\") " pod="openstack-operators/openstack-operator-controller-manager-6679ddfdc7-h8cln" Mar 11 09:14:35 crc kubenswrapper[4840]: I0311 09:14:35.088023 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a121ac36-d9c4-4837-b075-57588b36c8ec-webhook-certs\") pod \"openstack-operator-controller-manager-6679ddfdc7-h8cln\" (UID: \"a121ac36-d9c4-4837-b075-57588b36c8ec\") " pod="openstack-operators/openstack-operator-controller-manager-6679ddfdc7-h8cln" Mar 11 09:14:35 crc kubenswrapper[4840]: I0311 09:14:35.088110 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a121ac36-d9c4-4837-b075-57588b36c8ec-metrics-certs\") pod \"openstack-operator-controller-manager-6679ddfdc7-h8cln\" (UID: \"a121ac36-d9c4-4837-b075-57588b36c8ec\") " pod="openstack-operators/openstack-operator-controller-manager-6679ddfdc7-h8cln" Mar 11 09:14:35 crc kubenswrapper[4840]: I0311 09:14:35.183415 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-984cd4dcf-mngxm"] Mar 11 09:14:35 crc kubenswrapper[4840]: I0311 09:14:35.184041 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-dsd85" Mar 11 09:14:35 crc kubenswrapper[4840]: I0311 09:14:35.190409 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a121ac36-d9c4-4837-b075-57588b36c8ec-metrics-certs\") pod \"openstack-operator-controller-manager-6679ddfdc7-h8cln\" (UID: \"a121ac36-d9c4-4837-b075-57588b36c8ec\") " pod="openstack-operators/openstack-operator-controller-manager-6679ddfdc7-h8cln" Mar 11 09:14:35 crc kubenswrapper[4840]: I0311 09:14:35.190544 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/97bcd81b-f45e-4a98-9079-000fdf4cc50f-cert\") pod \"openstack-baremetal-operator-controller-manager-6647d7885fb7jnm\" (UID: \"97bcd81b-f45e-4a98-9079-000fdf4cc50f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6647d7885fb7jnm" Mar 11 09:14:35 crc kubenswrapper[4840]: I0311 09:14:35.190614 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdxlt\" (UniqueName: \"kubernetes.io/projected/a121ac36-d9c4-4837-b075-57588b36c8ec-kube-api-access-fdxlt\") pod \"openstack-operator-controller-manager-6679ddfdc7-h8cln\" (UID: \"a121ac36-d9c4-4837-b075-57588b36c8ec\") " pod="openstack-operators/openstack-operator-controller-manager-6679ddfdc7-h8cln" Mar 11 09:14:35 crc kubenswrapper[4840]: I0311 09:14:35.190656 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a121ac36-d9c4-4837-b075-57588b36c8ec-webhook-certs\") pod \"openstack-operator-controller-manager-6679ddfdc7-h8cln\" (UID: \"a121ac36-d9c4-4837-b075-57588b36c8ec\") " pod="openstack-operators/openstack-operator-controller-manager-6679ddfdc7-h8cln" Mar 11 09:14:35 crc kubenswrapper[4840]: E0311 09:14:35.190744 4840 secret.go:188] Couldn't 
get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 11 09:14:35 crc kubenswrapper[4840]: E0311 09:14:35.190940 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a121ac36-d9c4-4837-b075-57588b36c8ec-metrics-certs podName:a121ac36-d9c4-4837-b075-57588b36c8ec nodeName:}" failed. No retries permitted until 2026-03-11 09:14:35.690923225 +0000 UTC m=+1074.356593040 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a121ac36-d9c4-4837-b075-57588b36c8ec-metrics-certs") pod "openstack-operator-controller-manager-6679ddfdc7-h8cln" (UID: "a121ac36-d9c4-4837-b075-57588b36c8ec") : secret "metrics-server-cert" not found Mar 11 09:14:35 crc kubenswrapper[4840]: E0311 09:14:35.190842 4840 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 11 09:14:35 crc kubenswrapper[4840]: E0311 09:14:35.191002 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a121ac36-d9c4-4837-b075-57588b36c8ec-webhook-certs podName:a121ac36-d9c4-4837-b075-57588b36c8ec nodeName:}" failed. No retries permitted until 2026-03-11 09:14:35.690975196 +0000 UTC m=+1074.356645011 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a121ac36-d9c4-4837-b075-57588b36c8ec-webhook-certs") pod "openstack-operator-controller-manager-6679ddfdc7-h8cln" (UID: "a121ac36-d9c4-4837-b075-57588b36c8ec") : secret "webhook-server-cert" not found Mar 11 09:14:35 crc kubenswrapper[4840]: E0311 09:14:35.190887 4840 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 11 09:14:35 crc kubenswrapper[4840]: E0311 09:14:35.191030 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/97bcd81b-f45e-4a98-9079-000fdf4cc50f-cert podName:97bcd81b-f45e-4a98-9079-000fdf4cc50f nodeName:}" failed. No retries permitted until 2026-03-11 09:14:36.191025588 +0000 UTC m=+1074.856695403 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/97bcd81b-f45e-4a98-9079-000fdf4cc50f-cert") pod "openstack-baremetal-operator-controller-manager-6647d7885fb7jnm" (UID: "97bcd81b-f45e-4a98-9079-000fdf4cc50f") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 11 09:14:35 crc kubenswrapper[4840]: I0311 09:14:35.193069 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2jbdn"] Mar 11 09:14:35 crc kubenswrapper[4840]: I0311 09:14:35.194401 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2jbdn" Mar 11 09:14:35 crc kubenswrapper[4840]: I0311 09:14:35.202915 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-z85gz" Mar 11 09:14:35 crc kubenswrapper[4840]: I0311 09:14:35.215866 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2jbdn"] Mar 11 09:14:35 crc kubenswrapper[4840]: I0311 09:14:35.232531 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdxlt\" (UniqueName: \"kubernetes.io/projected/a121ac36-d9c4-4837-b075-57588b36c8ec-kube-api-access-fdxlt\") pod \"openstack-operator-controller-manager-6679ddfdc7-h8cln\" (UID: \"a121ac36-d9c4-4837-b075-57588b36c8ec\") " pod="openstack-operators/openstack-operator-controller-manager-6679ddfdc7-h8cln" Mar 11 09:14:35 crc kubenswrapper[4840]: I0311 09:14:35.295104 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f76tt\" (UniqueName: \"kubernetes.io/projected/3d457e03-0abd-42cf-83ed-b3e6113781ac-kube-api-access-f76tt\") pod \"rabbitmq-cluster-operator-manager-668c99d594-2jbdn\" (UID: \"3d457e03-0abd-42cf-83ed-b3e6113781ac\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2jbdn" Mar 11 09:14:35 crc kubenswrapper[4840]: I0311 09:14:35.312800 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-677bd678f7-kkplv"] Mar 11 09:14:35 crc kubenswrapper[4840]: W0311 09:14:35.339956 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9ba35a68_fbec_4de0_a84a_8f879b9906e5.slice/crio-39f2e6732bfcc5228e7af92ebbedba9107368daeb38adf84d24e43abba49ba3d WatchSource:0}: Error finding container 
39f2e6732bfcc5228e7af92ebbedba9107368daeb38adf84d24e43abba49ba3d: Status 404 returned error can't find the container with id 39f2e6732bfcc5228e7af92ebbedba9107368daeb38adf84d24e43abba49ba3d Mar 11 09:14:35 crc kubenswrapper[4840]: I0311 09:14:35.343714 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-66d56f6ff4-tjvvr"] Mar 11 09:14:35 crc kubenswrapper[4840]: I0311 09:14:35.398841 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f76tt\" (UniqueName: \"kubernetes.io/projected/3d457e03-0abd-42cf-83ed-b3e6113781ac-kube-api-access-f76tt\") pod \"rabbitmq-cluster-operator-manager-668c99d594-2jbdn\" (UID: \"3d457e03-0abd-42cf-83ed-b3e6113781ac\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2jbdn" Mar 11 09:14:35 crc kubenswrapper[4840]: I0311 09:14:35.428638 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f76tt\" (UniqueName: \"kubernetes.io/projected/3d457e03-0abd-42cf-83ed-b3e6113781ac-kube-api-access-f76tt\") pod \"rabbitmq-cluster-operator-manager-668c99d594-2jbdn\" (UID: \"3d457e03-0abd-42cf-83ed-b3e6113781ac\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2jbdn" Mar 11 09:14:35 crc kubenswrapper[4840]: I0311 09:14:35.472873 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-6d9d6b584d-k7b9q"] Mar 11 09:14:35 crc kubenswrapper[4840]: I0311 09:14:35.547977 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-684f77d66d-bzvzx"] Mar 11 09:14:35 crc kubenswrapper[4840]: I0311 09:14:35.580176 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2jbdn" Mar 11 09:14:35 crc kubenswrapper[4840]: I0311 09:14:35.703551 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a121ac36-d9c4-4837-b075-57588b36c8ec-webhook-certs\") pod \"openstack-operator-controller-manager-6679ddfdc7-h8cln\" (UID: \"a121ac36-d9c4-4837-b075-57588b36c8ec\") " pod="openstack-operators/openstack-operator-controller-manager-6679ddfdc7-h8cln" Mar 11 09:14:35 crc kubenswrapper[4840]: I0311 09:14:35.703640 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a121ac36-d9c4-4837-b075-57588b36c8ec-metrics-certs\") pod \"openstack-operator-controller-manager-6679ddfdc7-h8cln\" (UID: \"a121ac36-d9c4-4837-b075-57588b36c8ec\") " pod="openstack-operators/openstack-operator-controller-manager-6679ddfdc7-h8cln" Mar 11 09:14:35 crc kubenswrapper[4840]: E0311 09:14:35.703847 4840 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 11 09:14:35 crc kubenswrapper[4840]: E0311 09:14:35.703853 4840 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 11 09:14:35 crc kubenswrapper[4840]: E0311 09:14:35.703914 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a121ac36-d9c4-4837-b075-57588b36c8ec-metrics-certs podName:a121ac36-d9c4-4837-b075-57588b36c8ec nodeName:}" failed. No retries permitted until 2026-03-11 09:14:36.703892575 +0000 UTC m=+1075.369562390 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a121ac36-d9c4-4837-b075-57588b36c8ec-metrics-certs") pod "openstack-operator-controller-manager-6679ddfdc7-h8cln" (UID: "a121ac36-d9c4-4837-b075-57588b36c8ec") : secret "metrics-server-cert" not found Mar 11 09:14:35 crc kubenswrapper[4840]: E0311 09:14:35.703937 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a121ac36-d9c4-4837-b075-57588b36c8ec-webhook-certs podName:a121ac36-d9c4-4837-b075-57588b36c8ec nodeName:}" failed. No retries permitted until 2026-03-11 09:14:36.703929336 +0000 UTC m=+1075.369599151 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a121ac36-d9c4-4837-b075-57588b36c8ec-webhook-certs") pod "openstack-operator-controller-manager-6679ddfdc7-h8cln" (UID: "a121ac36-d9c4-4837-b075-57588b36c8ec") : secret "webhook-server-cert" not found Mar 11 09:14:35 crc kubenswrapper[4840]: I0311 09:14:35.755436 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-77b6666d85-6gn84"] Mar 11 09:14:35 crc kubenswrapper[4840]: W0311 09:14:35.808664 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod32013686_938e_476d_b215_0bb597f780da.slice/crio-d6b877ba212d7bcec8bd19b9605fb60eedc3d94bfd908660b53a81d423ff61f5 WatchSource:0}: Error finding container d6b877ba212d7bcec8bd19b9605fb60eedc3d94bfd908660b53a81d423ff61f5: Status 404 returned error can't find the container with id d6b877ba212d7bcec8bd19b9605fb60eedc3d94bfd908660b53a81d423ff61f5 Mar 11 09:14:35 crc kubenswrapper[4840]: I0311 09:14:35.809741 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ed2a6fa8-2915-4d07-b54d-b274a742c5a7-cert\") pod \"infra-operator-controller-manager-5995f4446f-xldht\" (UID: 
\"ed2a6fa8-2915-4d07-b54d-b274a742c5a7\") " pod="openstack-operators/infra-operator-controller-manager-5995f4446f-xldht" Mar 11 09:14:35 crc kubenswrapper[4840]: E0311 09:14:35.810029 4840 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 11 09:14:35 crc kubenswrapper[4840]: E0311 09:14:35.810094 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed2a6fa8-2915-4d07-b54d-b274a742c5a7-cert podName:ed2a6fa8-2915-4d07-b54d-b274a742c5a7 nodeName:}" failed. No retries permitted until 2026-03-11 09:14:37.810073747 +0000 UTC m=+1076.475743562 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ed2a6fa8-2915-4d07-b54d-b274a742c5a7-cert") pod "infra-operator-controller-manager-5995f4446f-xldht" (UID: "ed2a6fa8-2915-4d07-b54d-b274a742c5a7") : secret "infra-operator-webhook-server-cert" not found Mar 11 09:14:36 crc kubenswrapper[4840]: I0311 09:14:36.009793 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-776c5696bf-pgbkn"] Mar 11 09:14:36 crc kubenswrapper[4840]: W0311 09:14:36.032826 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9170b899_2f0e_498c_893d_fd8b64eb96c6.slice/crio-9309a1e3b924de740c799aded354b3c285df8f09b53abbb2d39af5936f9f91b6 WatchSource:0}: Error finding container 9309a1e3b924de740c799aded354b3c285df8f09b53abbb2d39af5936f9f91b6: Status 404 returned error can't find the container with id 9309a1e3b924de740c799aded354b3c285df8f09b53abbb2d39af5936f9f91b6 Mar 11 09:14:36 crc kubenswrapper[4840]: I0311 09:14:36.047032 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-68f45f9d9f-pqcxj"] Mar 11 09:14:36 crc kubenswrapper[4840]: I0311 09:14:36.086505 4840 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6bbb499bbc-9dffx"] Mar 11 09:14:36 crc kubenswrapper[4840]: W0311 09:14:36.098201 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb3f01dca_eeb5_40bf_bddb_2fe256ee64f8.slice/crio-44b27386971d2a1bfc42bffd23d41a693424350ba190acaa75e09de8ee93fd1a WatchSource:0}: Error finding container 44b27386971d2a1bfc42bffd23d41a693424350ba190acaa75e09de8ee93fd1a: Status 404 returned error can't find the container with id 44b27386971d2a1bfc42bffd23d41a693424350ba190acaa75e09de8ee93fd1a Mar 11 09:14:36 crc kubenswrapper[4840]: I0311 09:14:36.195804 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-wbkbm"] Mar 11 09:14:36 crc kubenswrapper[4840]: I0311 09:14:36.226012 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-569cc54c5-q7hvp"] Mar 11 09:14:36 crc kubenswrapper[4840]: W0311 09:14:36.229734 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7f7b0431_153a_48e0_8523_1db25d309919.slice/crio-99dd6dadbdb2e2549c76bf07167c320bc578013e2c85383d329ee372aa06c549 WatchSource:0}: Error finding container 99dd6dadbdb2e2549c76bf07167c320bc578013e2c85383d329ee372aa06c549: Status 404 returned error can't find the container with id 99dd6dadbdb2e2549c76bf07167c320bc578013e2c85383d329ee372aa06c549 Mar 11 09:14:36 crc kubenswrapper[4840]: I0311 09:14:36.231092 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/97bcd81b-f45e-4a98-9079-000fdf4cc50f-cert\") pod \"openstack-baremetal-operator-controller-manager-6647d7885fb7jnm\" (UID: \"97bcd81b-f45e-4a98-9079-000fdf4cc50f\") " 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-6647d7885fb7jnm" Mar 11 09:14:36 crc kubenswrapper[4840]: I0311 09:14:36.231131 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-k7b9q" event={"ID":"93e6c54e-ac8e-4cec-a872-6e5204f0afdb","Type":"ContainerStarted","Data":"f3e7ef8afe1f66483125bcafb59eddce68aaf13370e66fa53436a1d0a5f1d046"} Mar 11 09:14:36 crc kubenswrapper[4840]: E0311 09:14:36.231803 4840 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 11 09:14:36 crc kubenswrapper[4840]: E0311 09:14:36.232085 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/97bcd81b-f45e-4a98-9079-000fdf4cc50f-cert podName:97bcd81b-f45e-4a98-9079-000fdf4cc50f nodeName:}" failed. No retries permitted until 2026-03-11 09:14:38.232005554 +0000 UTC m=+1076.897675369 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/97bcd81b-f45e-4a98-9079-000fdf4cc50f-cert") pod "openstack-baremetal-operator-controller-manager-6647d7885fb7jnm" (UID: "97bcd81b-f45e-4a98-9079-000fdf4cc50f") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 11 09:14:36 crc kubenswrapper[4840]: I0311 09:14:36.235154 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-tjvvr" event={"ID":"9ba35a68-fbec-4de0-a84a-8f879b9906e5","Type":"ContainerStarted","Data":"39f2e6732bfcc5228e7af92ebbedba9107368daeb38adf84d24e43abba49ba3d"} Mar 11 09:14:36 crc kubenswrapper[4840]: I0311 09:14:36.236667 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-5964f64c48-qbmr9"] Mar 11 09:14:36 crc kubenswrapper[4840]: I0311 09:14:36.239087 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-bzvzx" event={"ID":"086eae6a-0cbe-4a9a-884b-272239b8d302","Type":"ContainerStarted","Data":"850040d1458147237591ac68e886a00acb8585da4d7717b5adabc2700d51ef60"} Mar 11 09:14:36 crc kubenswrapper[4840]: I0311 09:14:36.245296 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-mngxm" event={"ID":"2332f92a-b46b-4f63-83f9-f48ea29492b9","Type":"ContainerStarted","Data":"c65bb8c257a512fab9938df06009d142e1233647a66d514b2eb5142832bcf12a"} Mar 11 09:14:36 crc kubenswrapper[4840]: I0311 09:14:36.247521 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-9dffx" event={"ID":"b3f01dca-eeb5-40bf-bddb-2fe256ee64f8","Type":"ContainerStarted","Data":"44b27386971d2a1bfc42bffd23d41a693424350ba190acaa75e09de8ee93fd1a"} Mar 11 09:14:36 crc kubenswrapper[4840]: I0311 09:14:36.251502 4840 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-kkplv" event={"ID":"91aae815-00f1-46d8-8709-f212ab049fdf","Type":"ContainerStarted","Data":"8a07387cdacee652c38fabf3c1e511a8a72359b8318d0f71f3d688957ef6ecfa"} Mar 11 09:14:36 crc kubenswrapper[4840]: I0311 09:14:36.254161 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-6gn84" event={"ID":"32013686-938e-476d-b215-0bb597f780da","Type":"ContainerStarted","Data":"d6b877ba212d7bcec8bd19b9605fb60eedc3d94bfd908660b53a81d423ff61f5"} Mar 11 09:14:36 crc kubenswrapper[4840]: I0311 09:14:36.263364 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-pqcxj" event={"ID":"6d12563d-1416-4fd9-b38d-40bdadc53b40","Type":"ContainerStarted","Data":"2c9e051c62eeefed44e326b5b33247a5e996fe43e5232991a3a9dd062d96cdef"} Mar 11 09:14:36 crc kubenswrapper[4840]: I0311 09:14:36.281695 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-pgbkn" event={"ID":"9170b899-2f0e-498c-893d-fd8b64eb96c6","Type":"ContainerStarted","Data":"9309a1e3b924de740c799aded354b3c285df8f09b53abbb2d39af5936f9f91b6"} Mar 11 09:14:36 crc kubenswrapper[4840]: I0311 09:14:36.374316 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-j6nng"] Mar 11 09:14:36 crc kubenswrapper[4840]: I0311 09:14:36.395600 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-677c674df7-6jvcx"] Mar 11 09:14:36 crc kubenswrapper[4840]: I0311 09:14:36.401548 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5c5cb9c4d7-vxcc4"] Mar 11 09:14:36 crc kubenswrapper[4840]: W0311 09:14:36.405999 4840 manager.go:1169] Failed to 
process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1fb68385_1f54_4612_8bfc_a4bb2e535600.slice/crio-7a9be919969c159f759e77c4e6d25c8fc375cdf9e2dc670af29f335e35c62779 WatchSource:0}: Error finding container 7a9be919969c159f759e77c4e6d25c8fc375cdf9e2dc670af29f335e35c62779: Status 404 returned error can't find the container with id 7a9be919969c159f759e77c4e6d25c8fc375cdf9e2dc670af29f335e35c62779 Mar 11 09:14:36 crc kubenswrapper[4840]: I0311 09:14:36.406716 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-658d4cdd5-gg7rz"] Mar 11 09:14:36 crc kubenswrapper[4840]: W0311 09:14:36.410235 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfd5bb41b_837d_473d_9718_56f2247fadcb.slice/crio-e2212bc9a91d4c3c268ec82b366e8d2568e6a82e96a78c3ade1e1fca9f251675 WatchSource:0}: Error finding container e2212bc9a91d4c3c268ec82b366e8d2568e6a82e96a78c3ade1e1fca9f251675: Status 404 returned error can't find the container with id e2212bc9a91d4c3c268ec82b366e8d2568e6a82e96a78c3ade1e1fca9f251675 Mar 11 09:14:36 crc kubenswrapper[4840]: I0311 09:14:36.410737 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-bbc5b68f9-stv5z"] Mar 11 09:14:36 crc kubenswrapper[4840]: W0311 09:14:36.427122 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podec409cfe_4999_48d4_93f0_cbf22595667e.slice/crio-3aed1ef26120b0e9ab4579ee3ba480dea3c8ac69e3a487d9986486d6364c4f7b WatchSource:0}: Error finding container 3aed1ef26120b0e9ab4579ee3ba480dea3c8ac69e3a487d9986486d6364c4f7b: Status 404 returned error can't find the container with id 3aed1ef26120b0e9ab4579ee3ba480dea3c8ac69e3a487d9986486d6364c4f7b Mar 11 09:14:36 crc kubenswrapper[4840]: E0311 09:14:36.444436 4840 
kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:b99cd5e08bd85c6aaf717519187ba7bfeea359e1537d43b73a7364b7c38116e2,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5jks2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-658d4cdd5-gg7rz_openstack-operators(d5f70a0c-43ba-4cb0-b66b-a24b3e861b56): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Mar 11 09:14:36 crc kubenswrapper[4840]: E0311 09:14:36.446340 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-gg7rz" podUID="d5f70a0c-43ba-4cb0-b66b-a24b3e861b56" Mar 11 09:14:36 crc kubenswrapper[4840]: I0311 09:14:36.533777 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6dd88c6f67-dsd85"] Mar 11 09:14:36 crc kubenswrapper[4840]: I0311 09:14:36.551646 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-574d45c66c-5dxn9"] Mar 11 09:14:36 crc kubenswrapper[4840]: E0311 09:14:36.556083 4840 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:4af709a2a6a1a1abb9659dbdd6fb3818122bdec7e66009fcced0bf0949f91554,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gqrcg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-6dd88c6f67-dsd85_openstack-operators(03e30bf6-186b-4ec3-965b-c24f4e8af21b): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Mar 11 09:14:36 crc kubenswrapper[4840]: E0311 09:14:36.557212 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-dsd85" podUID="03e30bf6-186b-4ec3-965b-c24f4e8af21b" Mar 11 09:14:36 crc kubenswrapper[4840]: W0311 09:14:36.558793 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3d457e03_0abd_42cf_83ed_b3e6113781ac.slice/crio-6a6bd7b891a6528412f63d18f6ed6d58098c16cfcef48170cd54a0303eeb42d9 WatchSource:0}: Error finding container 6a6bd7b891a6528412f63d18f6ed6d58098c16cfcef48170cd54a0303eeb42d9: Status 404 returned error can't find the container with id 6a6bd7b891a6528412f63d18f6ed6d58098c16cfcef48170cd54a0303eeb42d9 Mar 11 09:14:36 crc kubenswrapper[4840]: I0311 09:14:36.562454 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2jbdn"] Mar 11 09:14:36 crc kubenswrapper[4840]: E0311 09:14:36.568344 4840 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-f76tt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-2jbdn_openstack-operators(3d457e03-0abd-42cf-83ed-b3e6113781ac): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Mar 11 09:14:36 crc kubenswrapper[4840]: E0311 09:14:36.569896 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2jbdn" podUID="3d457e03-0abd-42cf-83ed-b3e6113781ac" Mar 11 09:14:36 crc kubenswrapper[4840]: E0311 09:14:36.571367 4840 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:e7e865363955c670e41b6c042c4f87abceff78f5495ba5c5c82988baad45c978,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2zvdd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-574d45c66c-5dxn9_openstack-operators(d58a257e-f4f2-48cd-8c89-e0034e37092c): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Mar 11 09:14:36 crc kubenswrapper[4840]: E0311 09:14:36.572604 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-5dxn9" podUID="d58a257e-f4f2-48cd-8c89-e0034e37092c" Mar 11 09:14:36 crc kubenswrapper[4840]: I0311 09:14:36.739437 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a121ac36-d9c4-4837-b075-57588b36c8ec-metrics-certs\") pod \"openstack-operator-controller-manager-6679ddfdc7-h8cln\" (UID: \"a121ac36-d9c4-4837-b075-57588b36c8ec\") " pod="openstack-operators/openstack-operator-controller-manager-6679ddfdc7-h8cln" Mar 11 09:14:36 crc kubenswrapper[4840]: E0311 09:14:36.739569 4840 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 11 09:14:36 crc kubenswrapper[4840]: E0311 
09:14:36.739696 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a121ac36-d9c4-4837-b075-57588b36c8ec-metrics-certs podName:a121ac36-d9c4-4837-b075-57588b36c8ec nodeName:}" failed. No retries permitted until 2026-03-11 09:14:38.739657081 +0000 UTC m=+1077.405326896 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a121ac36-d9c4-4837-b075-57588b36c8ec-metrics-certs") pod "openstack-operator-controller-manager-6679ddfdc7-h8cln" (UID: "a121ac36-d9c4-4837-b075-57588b36c8ec") : secret "metrics-server-cert" not found Mar 11 09:14:36 crc kubenswrapper[4840]: I0311 09:14:36.742893 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a121ac36-d9c4-4837-b075-57588b36c8ec-webhook-certs\") pod \"openstack-operator-controller-manager-6679ddfdc7-h8cln\" (UID: \"a121ac36-d9c4-4837-b075-57588b36c8ec\") " pod="openstack-operators/openstack-operator-controller-manager-6679ddfdc7-h8cln" Mar 11 09:14:36 crc kubenswrapper[4840]: E0311 09:14:36.744278 4840 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 11 09:14:36 crc kubenswrapper[4840]: E0311 09:14:36.744393 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a121ac36-d9c4-4837-b075-57588b36c8ec-webhook-certs podName:a121ac36-d9c4-4837-b075-57588b36c8ec nodeName:}" failed. No retries permitted until 2026-03-11 09:14:38.744365249 +0000 UTC m=+1077.410035054 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a121ac36-d9c4-4837-b075-57588b36c8ec-webhook-certs") pod "openstack-operator-controller-manager-6679ddfdc7-h8cln" (UID: "a121ac36-d9c4-4837-b075-57588b36c8ec") : secret "webhook-server-cert" not found Mar 11 09:14:37 crc kubenswrapper[4840]: I0311 09:14:37.298193 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-dsd85" event={"ID":"03e30bf6-186b-4ec3-965b-c24f4e8af21b","Type":"ContainerStarted","Data":"baae186cbd559fa5e0d9f9eae9ff07ed57484409ffebe2f0d43872c3af64df16"} Mar 11 09:14:37 crc kubenswrapper[4840]: E0311 09:14:37.302405 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:4af709a2a6a1a1abb9659dbdd6fb3818122bdec7e66009fcced0bf0949f91554\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-dsd85" podUID="03e30bf6-186b-4ec3-965b-c24f4e8af21b" Mar 11 09:14:37 crc kubenswrapper[4840]: I0311 09:14:37.303739 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-vxcc4" event={"ID":"ec409cfe-4999-48d4-93f0-cbf22595667e","Type":"ContainerStarted","Data":"3aed1ef26120b0e9ab4579ee3ba480dea3c8ac69e3a487d9986486d6364c4f7b"} Mar 11 09:14:37 crc kubenswrapper[4840]: I0311 09:14:37.323280 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-wbkbm" event={"ID":"4ed5f2e7-087f-466d-85b3-9088ab43b410","Type":"ContainerStarted","Data":"d86550d191566788647972c8730182d10443a8c29efa88cff3087db7ee91882f"} Mar 11 09:14:37 crc kubenswrapper[4840]: I0311 09:14:37.327081 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/glance-operator-controller-manager-5964f64c48-qbmr9" event={"ID":"7f7b0431-153a-48e0-8523-1db25d309919","Type":"ContainerStarted","Data":"99dd6dadbdb2e2549c76bf07167c320bc578013e2c85383d329ee372aa06c549"} Mar 11 09:14:37 crc kubenswrapper[4840]: I0311 09:14:37.330364 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-569cc54c5-q7hvp" event={"ID":"b14fee95-3d63-402a-ae0a-d3f74415f59b","Type":"ContainerStarted","Data":"0a4fcdcfd182113ae49dc62a7dc4231d461f7e14b06e742de1611b2d9883b1e5"} Mar 11 09:14:37 crc kubenswrapper[4840]: I0311 09:14:37.334714 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2jbdn" event={"ID":"3d457e03-0abd-42cf-83ed-b3e6113781ac","Type":"ContainerStarted","Data":"6a6bd7b891a6528412f63d18f6ed6d58098c16cfcef48170cd54a0303eeb42d9"} Mar 11 09:14:37 crc kubenswrapper[4840]: E0311 09:14:37.337232 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2jbdn" podUID="3d457e03-0abd-42cf-83ed-b3e6113781ac" Mar 11 09:14:37 crc kubenswrapper[4840]: I0311 09:14:37.337536 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-j6nng" event={"ID":"1fb68385-1f54-4612-8bfc-a4bb2e535600","Type":"ContainerStarted","Data":"7a9be919969c159f759e77c4e6d25c8fc375cdf9e2dc670af29f335e35c62779"} Mar 11 09:14:37 crc kubenswrapper[4840]: I0311 09:14:37.341739 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-stv5z" 
event={"ID":"506cf57c-03be-4949-9037-2e806f8b3896","Type":"ContainerStarted","Data":"458cbf6fc306c271e89bd50c6c59d4a3d9a9f7adbf4e817e8cf0c91597fcad62"} Mar 11 09:14:37 crc kubenswrapper[4840]: I0311 09:14:37.345354 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-5dxn9" event={"ID":"d58a257e-f4f2-48cd-8c89-e0034e37092c","Type":"ContainerStarted","Data":"1116d6c1858466311964c032817da11e14b6c0e7f1747f91b53f279ff7f7587c"} Mar 11 09:14:37 crc kubenswrapper[4840]: E0311 09:14:37.347834 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:e7e865363955c670e41b6c042c4f87abceff78f5495ba5c5c82988baad45c978\\\"\"" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-5dxn9" podUID="d58a257e-f4f2-48cd-8c89-e0034e37092c" Mar 11 09:14:37 crc kubenswrapper[4840]: I0311 09:14:37.349642 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-677c674df7-6jvcx" event={"ID":"fd5bb41b-837d-473d-9718-56f2247fadcb","Type":"ContainerStarted","Data":"e2212bc9a91d4c3c268ec82b366e8d2568e6a82e96a78c3ade1e1fca9f251675"} Mar 11 09:14:37 crc kubenswrapper[4840]: I0311 09:14:37.362329 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-gg7rz" event={"ID":"d5f70a0c-43ba-4cb0-b66b-a24b3e861b56","Type":"ContainerStarted","Data":"f7aaac33c01adeda5f78b8a3c8680033dd91167893dadb49afee62f5f049923f"} Mar 11 09:14:37 crc kubenswrapper[4840]: E0311 09:14:37.372086 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:b99cd5e08bd85c6aaf717519187ba7bfeea359e1537d43b73a7364b7c38116e2\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-gg7rz" podUID="d5f70a0c-43ba-4cb0-b66b-a24b3e861b56" Mar 11 09:14:37 crc kubenswrapper[4840]: I0311 09:14:37.866362 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ed2a6fa8-2915-4d07-b54d-b274a742c5a7-cert\") pod \"infra-operator-controller-manager-5995f4446f-xldht\" (UID: \"ed2a6fa8-2915-4d07-b54d-b274a742c5a7\") " pod="openstack-operators/infra-operator-controller-manager-5995f4446f-xldht" Mar 11 09:14:37 crc kubenswrapper[4840]: E0311 09:14:37.866574 4840 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 11 09:14:37 crc kubenswrapper[4840]: E0311 09:14:37.866642 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed2a6fa8-2915-4d07-b54d-b274a742c5a7-cert podName:ed2a6fa8-2915-4d07-b54d-b274a742c5a7 nodeName:}" failed. No retries permitted until 2026-03-11 09:14:41.866619833 +0000 UTC m=+1080.532289648 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ed2a6fa8-2915-4d07-b54d-b274a742c5a7-cert") pod "infra-operator-controller-manager-5995f4446f-xldht" (UID: "ed2a6fa8-2915-4d07-b54d-b274a742c5a7") : secret "infra-operator-webhook-server-cert" not found Mar 11 09:14:38 crc kubenswrapper[4840]: I0311 09:14:38.277369 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/97bcd81b-f45e-4a98-9079-000fdf4cc50f-cert\") pod \"openstack-baremetal-operator-controller-manager-6647d7885fb7jnm\" (UID: \"97bcd81b-f45e-4a98-9079-000fdf4cc50f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6647d7885fb7jnm" Mar 11 09:14:38 crc kubenswrapper[4840]: E0311 09:14:38.277734 4840 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 11 09:14:38 crc kubenswrapper[4840]: E0311 09:14:38.277792 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/97bcd81b-f45e-4a98-9079-000fdf4cc50f-cert podName:97bcd81b-f45e-4a98-9079-000fdf4cc50f nodeName:}" failed. No retries permitted until 2026-03-11 09:14:42.277771429 +0000 UTC m=+1080.943441244 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/97bcd81b-f45e-4a98-9079-000fdf4cc50f-cert") pod "openstack-baremetal-operator-controller-manager-6647d7885fb7jnm" (UID: "97bcd81b-f45e-4a98-9079-000fdf4cc50f") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 11 09:14:38 crc kubenswrapper[4840]: E0311 09:14:38.375081 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2jbdn" podUID="3d457e03-0abd-42cf-83ed-b3e6113781ac" Mar 11 09:14:38 crc kubenswrapper[4840]: E0311 09:14:38.375734 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:e7e865363955c670e41b6c042c4f87abceff78f5495ba5c5c82988baad45c978\\\"\"" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-5dxn9" podUID="d58a257e-f4f2-48cd-8c89-e0034e37092c" Mar 11 09:14:38 crc kubenswrapper[4840]: E0311 09:14:38.375924 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:4af709a2a6a1a1abb9659dbdd6fb3818122bdec7e66009fcced0bf0949f91554\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-dsd85" podUID="03e30bf6-186b-4ec3-965b-c24f4e8af21b" Mar 11 09:14:38 crc kubenswrapper[4840]: E0311 09:14:38.376112 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:b99cd5e08bd85c6aaf717519187ba7bfeea359e1537d43b73a7364b7c38116e2\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-gg7rz" podUID="d5f70a0c-43ba-4cb0-b66b-a24b3e861b56" Mar 11 09:14:38 crc kubenswrapper[4840]: I0311 09:14:38.787135 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a121ac36-d9c4-4837-b075-57588b36c8ec-metrics-certs\") pod \"openstack-operator-controller-manager-6679ddfdc7-h8cln\" (UID: \"a121ac36-d9c4-4837-b075-57588b36c8ec\") " pod="openstack-operators/openstack-operator-controller-manager-6679ddfdc7-h8cln" Mar 11 09:14:38 crc kubenswrapper[4840]: I0311 09:14:38.787312 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a121ac36-d9c4-4837-b075-57588b36c8ec-webhook-certs\") pod \"openstack-operator-controller-manager-6679ddfdc7-h8cln\" (UID: \"a121ac36-d9c4-4837-b075-57588b36c8ec\") " pod="openstack-operators/openstack-operator-controller-manager-6679ddfdc7-h8cln" Mar 11 09:14:38 crc kubenswrapper[4840]: E0311 09:14:38.787545 4840 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 11 09:14:38 crc kubenswrapper[4840]: E0311 09:14:38.787562 4840 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 11 09:14:38 crc kubenswrapper[4840]: E0311 09:14:38.787615 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a121ac36-d9c4-4837-b075-57588b36c8ec-webhook-certs podName:a121ac36-d9c4-4837-b075-57588b36c8ec nodeName:}" failed. No retries permitted until 2026-03-11 09:14:42.787590811 +0000 UTC m=+1081.453260626 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a121ac36-d9c4-4837-b075-57588b36c8ec-webhook-certs") pod "openstack-operator-controller-manager-6679ddfdc7-h8cln" (UID: "a121ac36-d9c4-4837-b075-57588b36c8ec") : secret "webhook-server-cert" not found Mar 11 09:14:38 crc kubenswrapper[4840]: E0311 09:14:38.787649 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a121ac36-d9c4-4837-b075-57588b36c8ec-metrics-certs podName:a121ac36-d9c4-4837-b075-57588b36c8ec nodeName:}" failed. No retries permitted until 2026-03-11 09:14:42.787625092 +0000 UTC m=+1081.453294957 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a121ac36-d9c4-4837-b075-57588b36c8ec-metrics-certs") pod "openstack-operator-controller-manager-6679ddfdc7-h8cln" (UID: "a121ac36-d9c4-4837-b075-57588b36c8ec") : secret "metrics-server-cert" not found Mar 11 09:14:41 crc kubenswrapper[4840]: I0311 09:14:41.947067 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ed2a6fa8-2915-4d07-b54d-b274a742c5a7-cert\") pod \"infra-operator-controller-manager-5995f4446f-xldht\" (UID: \"ed2a6fa8-2915-4d07-b54d-b274a742c5a7\") " pod="openstack-operators/infra-operator-controller-manager-5995f4446f-xldht" Mar 11 09:14:41 crc kubenswrapper[4840]: E0311 09:14:41.947267 4840 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 11 09:14:41 crc kubenswrapper[4840]: E0311 09:14:41.948198 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed2a6fa8-2915-4d07-b54d-b274a742c5a7-cert podName:ed2a6fa8-2915-4d07-b54d-b274a742c5a7 nodeName:}" failed. No retries permitted until 2026-03-11 09:14:49.948174424 +0000 UTC m=+1088.613844239 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ed2a6fa8-2915-4d07-b54d-b274a742c5a7-cert") pod "infra-operator-controller-manager-5995f4446f-xldht" (UID: "ed2a6fa8-2915-4d07-b54d-b274a742c5a7") : secret "infra-operator-webhook-server-cert" not found Mar 11 09:14:42 crc kubenswrapper[4840]: I0311 09:14:42.362543 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/97bcd81b-f45e-4a98-9079-000fdf4cc50f-cert\") pod \"openstack-baremetal-operator-controller-manager-6647d7885fb7jnm\" (UID: \"97bcd81b-f45e-4a98-9079-000fdf4cc50f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6647d7885fb7jnm" Mar 11 09:14:42 crc kubenswrapper[4840]: E0311 09:14:42.363042 4840 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 11 09:14:42 crc kubenswrapper[4840]: E0311 09:14:42.363121 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/97bcd81b-f45e-4a98-9079-000fdf4cc50f-cert podName:97bcd81b-f45e-4a98-9079-000fdf4cc50f nodeName:}" failed. No retries permitted until 2026-03-11 09:14:50.363097186 +0000 UTC m=+1089.028767001 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/97bcd81b-f45e-4a98-9079-000fdf4cc50f-cert") pod "openstack-baremetal-operator-controller-manager-6647d7885fb7jnm" (UID: "97bcd81b-f45e-4a98-9079-000fdf4cc50f") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 11 09:14:42 crc kubenswrapper[4840]: I0311 09:14:42.863190 4840 scope.go:117] "RemoveContainer" containerID="90737869b7cab0b463ad285f85abfb1ee45610be90aa4b9485a74e61608a569b" Mar 11 09:14:42 crc kubenswrapper[4840]: I0311 09:14:42.869729 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a121ac36-d9c4-4837-b075-57588b36c8ec-metrics-certs\") pod \"openstack-operator-controller-manager-6679ddfdc7-h8cln\" (UID: \"a121ac36-d9c4-4837-b075-57588b36c8ec\") " pod="openstack-operators/openstack-operator-controller-manager-6679ddfdc7-h8cln" Mar 11 09:14:42 crc kubenswrapper[4840]: I0311 09:14:42.869882 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a121ac36-d9c4-4837-b075-57588b36c8ec-webhook-certs\") pod \"openstack-operator-controller-manager-6679ddfdc7-h8cln\" (UID: \"a121ac36-d9c4-4837-b075-57588b36c8ec\") " pod="openstack-operators/openstack-operator-controller-manager-6679ddfdc7-h8cln" Mar 11 09:14:42 crc kubenswrapper[4840]: E0311 09:14:42.869946 4840 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 11 09:14:42 crc kubenswrapper[4840]: E0311 09:14:42.869984 4840 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 11 09:14:42 crc kubenswrapper[4840]: E0311 09:14:42.870041 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a121ac36-d9c4-4837-b075-57588b36c8ec-metrics-certs podName:a121ac36-d9c4-4837-b075-57588b36c8ec 
nodeName:}" failed. No retries permitted until 2026-03-11 09:14:50.870020434 +0000 UTC m=+1089.535690259 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a121ac36-d9c4-4837-b075-57588b36c8ec-metrics-certs") pod "openstack-operator-controller-manager-6679ddfdc7-h8cln" (UID: "a121ac36-d9c4-4837-b075-57588b36c8ec") : secret "metrics-server-cert" not found Mar 11 09:14:42 crc kubenswrapper[4840]: E0311 09:14:42.870058 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a121ac36-d9c4-4837-b075-57588b36c8ec-webhook-certs podName:a121ac36-d9c4-4837-b075-57588b36c8ec nodeName:}" failed. No retries permitted until 2026-03-11 09:14:50.870051975 +0000 UTC m=+1089.535721790 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a121ac36-d9c4-4837-b075-57588b36c8ec-webhook-certs") pod "openstack-operator-controller-manager-6679ddfdc7-h8cln" (UID: "a121ac36-d9c4-4837-b075-57588b36c8ec") : secret "webhook-server-cert" not found Mar 11 09:14:49 crc kubenswrapper[4840]: E0311 09:14:49.815126 4840 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:43bd420bc05b4789243740bc75f61e10c7aac7883fc2f82b2d4d50085bc96c42" Mar 11 09:14:49 crc kubenswrapper[4840]: E0311 09:14:49.816025 4840 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:43bd420bc05b4789243740bc75f61e10c7aac7883fc2f82b2d4d50085bc96c42,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-r5sbv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5c5cb9c4d7-vxcc4_openstack-operators(ec409cfe-4999-48d4-93f0-cbf22595667e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 11 09:14:49 crc kubenswrapper[4840]: E0311 09:14:49.818128 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-vxcc4" podUID="ec409cfe-4999-48d4-93f0-cbf22595667e" Mar 11 09:14:50 crc kubenswrapper[4840]: I0311 09:14:50.028978 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ed2a6fa8-2915-4d07-b54d-b274a742c5a7-cert\") pod \"infra-operator-controller-manager-5995f4446f-xldht\" (UID: \"ed2a6fa8-2915-4d07-b54d-b274a742c5a7\") " pod="openstack-operators/infra-operator-controller-manager-5995f4446f-xldht" Mar 11 09:14:50 crc kubenswrapper[4840]: E0311 09:14:50.029286 4840 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret 
"infra-operator-webhook-server-cert" not found Mar 11 09:14:50 crc kubenswrapper[4840]: E0311 09:14:50.029344 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed2a6fa8-2915-4d07-b54d-b274a742c5a7-cert podName:ed2a6fa8-2915-4d07-b54d-b274a742c5a7 nodeName:}" failed. No retries permitted until 2026-03-11 09:15:06.029323571 +0000 UTC m=+1104.694993386 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ed2a6fa8-2915-4d07-b54d-b274a742c5a7-cert") pod "infra-operator-controller-manager-5995f4446f-xldht" (UID: "ed2a6fa8-2915-4d07-b54d-b274a742c5a7") : secret "infra-operator-webhook-server-cert" not found Mar 11 09:14:50 crc kubenswrapper[4840]: I0311 09:14:50.434731 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/97bcd81b-f45e-4a98-9079-000fdf4cc50f-cert\") pod \"openstack-baremetal-operator-controller-manager-6647d7885fb7jnm\" (UID: \"97bcd81b-f45e-4a98-9079-000fdf4cc50f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6647d7885fb7jnm" Mar 11 09:14:50 crc kubenswrapper[4840]: E0311 09:14:50.434946 4840 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 11 09:14:50 crc kubenswrapper[4840]: E0311 09:14:50.435072 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/97bcd81b-f45e-4a98-9079-000fdf4cc50f-cert podName:97bcd81b-f45e-4a98-9079-000fdf4cc50f nodeName:}" failed. No retries permitted until 2026-03-11 09:15:06.435048902 +0000 UTC m=+1105.100718717 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/97bcd81b-f45e-4a98-9079-000fdf4cc50f-cert") pod "openstack-baremetal-operator-controller-manager-6647d7885fb7jnm" (UID: "97bcd81b-f45e-4a98-9079-000fdf4cc50f") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 11 09:14:50 crc kubenswrapper[4840]: E0311 09:14:50.517163 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:43bd420bc05b4789243740bc75f61e10c7aac7883fc2f82b2d4d50085bc96c42\\\"\"" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-vxcc4" podUID="ec409cfe-4999-48d4-93f0-cbf22595667e" Mar 11 09:14:50 crc kubenswrapper[4840]: I0311 09:14:50.946096 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a121ac36-d9c4-4837-b075-57588b36c8ec-webhook-certs\") pod \"openstack-operator-controller-manager-6679ddfdc7-h8cln\" (UID: \"a121ac36-d9c4-4837-b075-57588b36c8ec\") " pod="openstack-operators/openstack-operator-controller-manager-6679ddfdc7-h8cln" Mar 11 09:14:50 crc kubenswrapper[4840]: I0311 09:14:50.946232 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a121ac36-d9c4-4837-b075-57588b36c8ec-metrics-certs\") pod \"openstack-operator-controller-manager-6679ddfdc7-h8cln\" (UID: \"a121ac36-d9c4-4837-b075-57588b36c8ec\") " pod="openstack-operators/openstack-operator-controller-manager-6679ddfdc7-h8cln" Mar 11 09:14:50 crc kubenswrapper[4840]: E0311 09:14:50.946286 4840 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 11 09:14:50 crc kubenswrapper[4840]: E0311 09:14:50.946391 4840 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/a121ac36-d9c4-4837-b075-57588b36c8ec-webhook-certs podName:a121ac36-d9c4-4837-b075-57588b36c8ec nodeName:}" failed. No retries permitted until 2026-03-11 09:15:06.94636844 +0000 UTC m=+1105.612038255 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a121ac36-d9c4-4837-b075-57588b36c8ec-webhook-certs") pod "openstack-operator-controller-manager-6679ddfdc7-h8cln" (UID: "a121ac36-d9c4-4837-b075-57588b36c8ec") : secret "webhook-server-cert" not found Mar 11 09:14:50 crc kubenswrapper[4840]: E0311 09:14:50.946555 4840 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 11 09:14:50 crc kubenswrapper[4840]: E0311 09:14:50.946690 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a121ac36-d9c4-4837-b075-57588b36c8ec-metrics-certs podName:a121ac36-d9c4-4837-b075-57588b36c8ec nodeName:}" failed. No retries permitted until 2026-03-11 09:15:06.946660448 +0000 UTC m=+1105.612330423 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a121ac36-d9c4-4837-b075-57588b36c8ec-metrics-certs") pod "openstack-operator-controller-manager-6679ddfdc7-h8cln" (UID: "a121ac36-d9c4-4837-b075-57588b36c8ec") : secret "metrics-server-cert" not found Mar 11 09:14:51 crc kubenswrapper[4840]: E0311 09:14:51.563333 4840 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ironic-operator@sha256:9182d1816c6fdb093d6328f1b0bf39296b9eccfa495f35e2198ec4764fa6288f" Mar 11 09:14:51 crc kubenswrapper[4840]: E0311 09:14:51.563651 4840 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:9182d1816c6fdb093d6328f1b0bf39296b9eccfa495f35e2198ec4764fa6288f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-z4qzn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-6bbb499bbc-9dffx_openstack-operators(b3f01dca-eeb5-40bf-bddb-2fe256ee64f8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 11 09:14:51 crc kubenswrapper[4840]: E0311 09:14:51.566281 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-9dffx" podUID="b3f01dca-eeb5-40bf-bddb-2fe256ee64f8" Mar 11 09:14:52 crc kubenswrapper[4840]: E0311 09:14:52.334940 4840 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/openstack-k8s-operators/designate-operator@sha256:65d0c97340f72a8b23f8e11f4b3efcc6ad37daad9b88e24d4564383a08fa85f7" Mar 11 09:14:52 crc kubenswrapper[4840]: E0311 09:14:52.335506 4840 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:65d0c97340f72a8b23f8e11f4b3efcc6ad37daad9b88e24d4564383a08fa85f7,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vvmt8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-66d56f6ff4-tjvvr_openstack-operators(9ba35a68-fbec-4de0-a84a-8f879b9906e5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 11 09:14:52 crc kubenswrapper[4840]: E0311 09:14:52.336754 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-tjvvr" podUID="9ba35a68-fbec-4de0-a84a-8f879b9906e5" Mar 11 09:14:52 crc kubenswrapper[4840]: E0311 09:14:52.533974 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/designate-operator@sha256:65d0c97340f72a8b23f8e11f4b3efcc6ad37daad9b88e24d4564383a08fa85f7\\\"\"" pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-tjvvr" podUID="9ba35a68-fbec-4de0-a84a-8f879b9906e5" Mar 11 09:14:52 crc kubenswrapper[4840]: E0311 09:14:52.534097 4840 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:9182d1816c6fdb093d6328f1b0bf39296b9eccfa495f35e2198ec4764fa6288f\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-9dffx" podUID="b3f01dca-eeb5-40bf-bddb-2fe256ee64f8" Mar 11 09:14:53 crc kubenswrapper[4840]: E0311 09:14:53.121553 4840 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/swift-operator@sha256:c223309f51714785bd878ad04080f7428567edad793be4f992d492abd77af44c" Mar 11 09:14:53 crc kubenswrapper[4840]: E0311 09:14:53.121746 4840 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:c223309f51714785bd878ad04080f7428567edad793be4f992d492abd77af44c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lvrqm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-677c674df7-6jvcx_openstack-operators(fd5bb41b-837d-473d-9718-56f2247fadcb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 11 09:14:53 crc kubenswrapper[4840]: E0311 09:14:53.123135 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/swift-operator-controller-manager-677c674df7-6jvcx" podUID="fd5bb41b-837d-473d-9718-56f2247fadcb" Mar 11 09:14:53 crc kubenswrapper[4840]: E0311 09:14:53.553831 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/swift-operator@sha256:c223309f51714785bd878ad04080f7428567edad793be4f992d492abd77af44c\\\"\"" pod="openstack-operators/swift-operator-controller-manager-677c674df7-6jvcx" podUID="fd5bb41b-837d-473d-9718-56f2247fadcb" Mar 11 09:14:54 crc kubenswrapper[4840]: E0311 09:14:54.310556 4840 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:40b84319f2f12a1c7ee478fd86a8b1aa5ac2ea8e24f5ce0f1ca78ad879dea8ca" Mar 11 09:14:54 crc kubenswrapper[4840]: E0311 09:14:54.310737 4840 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:40b84319f2f12a1c7ee478fd86a8b1aa5ac2ea8e24f5ce0f1ca78ad879dea8ca,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-68xvf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-684f77d66d-bzvzx_openstack-operators(086eae6a-0cbe-4a9a-884b-272239b8d302): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 11 09:14:54 crc kubenswrapper[4840]: E0311 09:14:54.311942 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-bzvzx" podUID="086eae6a-0cbe-4a9a-884b-272239b8d302" Mar 11 09:14:54 crc kubenswrapper[4840]: E0311 09:14:54.563813 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:40b84319f2f12a1c7ee478fd86a8b1aa5ac2ea8e24f5ce0f1ca78ad879dea8ca\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-bzvzx" podUID="086eae6a-0cbe-4a9a-884b-272239b8d302" Mar 11 09:15:00 crc kubenswrapper[4840]: I0311 09:15:00.226821 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29553675-5pxkp"] Mar 11 09:15:00 crc kubenswrapper[4840]: I0311 09:15:00.229075 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29553675-5pxkp" Mar 11 09:15:00 crc kubenswrapper[4840]: I0311 09:15:00.231816 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 11 09:15:00 crc kubenswrapper[4840]: I0311 09:15:00.232585 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 11 09:15:00 crc kubenswrapper[4840]: I0311 09:15:00.237121 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29553675-5pxkp"] Mar 11 09:15:00 crc kubenswrapper[4840]: I0311 09:15:00.306518 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fb4c8bbf-d375-4db3-8a45-e14a7c2400e2-secret-volume\") pod \"collect-profiles-29553675-5pxkp\" (UID: \"fb4c8bbf-d375-4db3-8a45-e14a7c2400e2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553675-5pxkp" Mar 11 09:15:00 crc kubenswrapper[4840]: I0311 09:15:00.306617 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddn84\" (UniqueName: \"kubernetes.io/projected/fb4c8bbf-d375-4db3-8a45-e14a7c2400e2-kube-api-access-ddn84\") pod 
\"collect-profiles-29553675-5pxkp\" (UID: \"fb4c8bbf-d375-4db3-8a45-e14a7c2400e2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553675-5pxkp" Mar 11 09:15:00 crc kubenswrapper[4840]: I0311 09:15:00.306640 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fb4c8bbf-d375-4db3-8a45-e14a7c2400e2-config-volume\") pod \"collect-profiles-29553675-5pxkp\" (UID: \"fb4c8bbf-d375-4db3-8a45-e14a7c2400e2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553675-5pxkp" Mar 11 09:15:00 crc kubenswrapper[4840]: I0311 09:15:00.407664 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fb4c8bbf-d375-4db3-8a45-e14a7c2400e2-secret-volume\") pod \"collect-profiles-29553675-5pxkp\" (UID: \"fb4c8bbf-d375-4db3-8a45-e14a7c2400e2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553675-5pxkp" Mar 11 09:15:00 crc kubenswrapper[4840]: I0311 09:15:00.407812 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fb4c8bbf-d375-4db3-8a45-e14a7c2400e2-config-volume\") pod \"collect-profiles-29553675-5pxkp\" (UID: \"fb4c8bbf-d375-4db3-8a45-e14a7c2400e2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553675-5pxkp" Mar 11 09:15:00 crc kubenswrapper[4840]: I0311 09:15:00.407834 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddn84\" (UniqueName: \"kubernetes.io/projected/fb4c8bbf-d375-4db3-8a45-e14a7c2400e2-kube-api-access-ddn84\") pod \"collect-profiles-29553675-5pxkp\" (UID: \"fb4c8bbf-d375-4db3-8a45-e14a7c2400e2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553675-5pxkp" Mar 11 09:15:00 crc kubenswrapper[4840]: I0311 09:15:00.409283 4840 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fb4c8bbf-d375-4db3-8a45-e14a7c2400e2-config-volume\") pod \"collect-profiles-29553675-5pxkp\" (UID: \"fb4c8bbf-d375-4db3-8a45-e14a7c2400e2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553675-5pxkp" Mar 11 09:15:00 crc kubenswrapper[4840]: I0311 09:15:00.415947 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fb4c8bbf-d375-4db3-8a45-e14a7c2400e2-secret-volume\") pod \"collect-profiles-29553675-5pxkp\" (UID: \"fb4c8bbf-d375-4db3-8a45-e14a7c2400e2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553675-5pxkp" Mar 11 09:15:00 crc kubenswrapper[4840]: I0311 09:15:00.434685 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddn84\" (UniqueName: \"kubernetes.io/projected/fb4c8bbf-d375-4db3-8a45-e14a7c2400e2-kube-api-access-ddn84\") pod \"collect-profiles-29553675-5pxkp\" (UID: \"fb4c8bbf-d375-4db3-8a45-e14a7c2400e2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553675-5pxkp" Mar 11 09:15:00 crc kubenswrapper[4840]: E0311 09:15:00.458617 4840 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:2bd37bdd917e3abe72613a734ce5021330242ec8cae9b8da76c57a0765152922" Mar 11 09:15:00 crc kubenswrapper[4840]: E0311 09:15:00.458903 4840 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:2bd37bdd917e3abe72613a734ce5021330242ec8cae9b8da76c57a0765152922,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9mrjg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-569cc54c5-q7hvp_openstack-operators(b14fee95-3d63-402a-ae0a-d3f74415f59b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 11 09:15:00 crc kubenswrapper[4840]: E0311 09:15:00.460130 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-569cc54c5-q7hvp" podUID="b14fee95-3d63-402a-ae0a-d3f74415f59b" Mar 11 09:15:00 crc kubenswrapper[4840]: I0311 09:15:00.566806 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29553675-5pxkp" Mar 11 09:15:00 crc kubenswrapper[4840]: E0311 09:15:00.601794 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:2bd37bdd917e3abe72613a734ce5021330242ec8cae9b8da76c57a0765152922\\\"\"" pod="openstack-operators/nova-operator-controller-manager-569cc54c5-q7hvp" podUID="b14fee95-3d63-402a-ae0a-d3f74415f59b" Mar 11 09:15:04 crc kubenswrapper[4840]: I0311 09:15:04.332365 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29553675-5pxkp"] Mar 11 09:15:04 crc kubenswrapper[4840]: I0311 09:15:04.656064 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-gg7rz" event={"ID":"d5f70a0c-43ba-4cb0-b66b-a24b3e861b56","Type":"ContainerStarted","Data":"5ab512848c9811e8afaaa4e13834b5c29448aadde8c645defccc949b65a26e8c"} Mar 11 09:15:04 crc kubenswrapper[4840]: I0311 09:15:04.656437 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-gg7rz" Mar 11 09:15:04 crc kubenswrapper[4840]: I0311 09:15:04.662216 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2jbdn" event={"ID":"3d457e03-0abd-42cf-83ed-b3e6113781ac","Type":"ContainerStarted","Data":"acefa967c79dbf8b261ae756d94a964d3351baf1507a00c6956ec28ac6719283"} Mar 11 09:15:04 crc kubenswrapper[4840]: I0311 09:15:04.664402 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-6gn84" 
event={"ID":"32013686-938e-476d-b215-0bb597f780da","Type":"ContainerStarted","Data":"30addec48526ee32967a53ab4e1392c770663780cf5f68d292ebc54bf5c3f479"} Mar 11 09:15:04 crc kubenswrapper[4840]: I0311 09:15:04.664909 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-6gn84" Mar 11 09:15:04 crc kubenswrapper[4840]: I0311 09:15:04.666088 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-dsd85" event={"ID":"03e30bf6-186b-4ec3-965b-c24f4e8af21b","Type":"ContainerStarted","Data":"e0d0550bdb0d2b4b63da0ae3b10d4fcca46e89f57249a3706aa39bac8cfcd072"} Mar 11 09:15:04 crc kubenswrapper[4840]: I0311 09:15:04.666560 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-dsd85" Mar 11 09:15:04 crc kubenswrapper[4840]: I0311 09:15:04.668293 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-5964f64c48-qbmr9" event={"ID":"7f7b0431-153a-48e0-8523-1db25d309919","Type":"ContainerStarted","Data":"fbe55619e2763da50270cec47f4d4987a3b94946d9f83c97216476e8b8c2c1ac"} Mar 11 09:15:04 crc kubenswrapper[4840]: I0311 09:15:04.668412 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-5964f64c48-qbmr9" Mar 11 09:15:04 crc kubenswrapper[4840]: I0311 09:15:04.674422 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29553675-5pxkp" event={"ID":"fb4c8bbf-d375-4db3-8a45-e14a7c2400e2","Type":"ContainerStarted","Data":"6c0b1be5126e8d0ee313de963ae2c8496da65a2d4dd020b53c2bdafbb2244d71"} Mar 11 09:15:04 crc kubenswrapper[4840]: I0311 09:15:04.674509 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29553675-5pxkp" event={"ID":"fb4c8bbf-d375-4db3-8a45-e14a7c2400e2","Type":"ContainerStarted","Data":"50b13407f16bb741f7ada4c1a775fc71eef49b247149ce029eff434a34831b3a"} Mar 11 09:15:04 crc kubenswrapper[4840]: I0311 09:15:04.681975 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-mngxm" event={"ID":"2332f92a-b46b-4f63-83f9-f48ea29492b9","Type":"ContainerStarted","Data":"ab2a0277134c3d109766ec3c13a3f34f913aa240a99f38f6a512de28d799954a"} Mar 11 09:15:04 crc kubenswrapper[4840]: I0311 09:15:04.682095 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-mngxm" Mar 11 09:15:04 crc kubenswrapper[4840]: I0311 09:15:04.691679 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-5dxn9" event={"ID":"d58a257e-f4f2-48cd-8c89-e0034e37092c","Type":"ContainerStarted","Data":"5db4173a7613643f4fa79f778672804bf39c6b97d6ff15628217aa6fa8c6b4f4"} Mar 11 09:15:04 crc kubenswrapper[4840]: I0311 09:15:04.692025 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-5dxn9" Mar 11 09:15:04 crc kubenswrapper[4840]: I0311 09:15:04.696482 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-pgbkn" event={"ID":"9170b899-2f0e-498c-893d-fd8b64eb96c6","Type":"ContainerStarted","Data":"aeadf40a30eb757b6e5e5fdb9a3fe7d66c3052932354938e5ae43a70674e446a"} Mar 11 09:15:04 crc kubenswrapper[4840]: I0311 09:15:04.696687 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-pgbkn" Mar 11 09:15:04 crc kubenswrapper[4840]: I0311 09:15:04.701298 4840 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-k7b9q" event={"ID":"93e6c54e-ac8e-4cec-a872-6e5204f0afdb","Type":"ContainerStarted","Data":"d3dd6d9c0b35a12ffaa71610a6eb812a733a6eeb8cb224526ffc5f18af62a9d9"} Mar 11 09:15:04 crc kubenswrapper[4840]: I0311 09:15:04.701390 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-k7b9q" Mar 11 09:15:04 crc kubenswrapper[4840]: I0311 09:15:04.707046 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-gg7rz" podStartSLOduration=3.296682705 podStartE2EDuration="30.707022787s" podCreationTimestamp="2026-03-11 09:14:34 +0000 UTC" firstStartedPulling="2026-03-11 09:14:36.444285576 +0000 UTC m=+1075.109955391" lastFinishedPulling="2026-03-11 09:15:03.854625658 +0000 UTC m=+1102.520295473" observedRunningTime="2026-03-11 09:15:04.701627482 +0000 UTC m=+1103.367297307" watchObservedRunningTime="2026-03-11 09:15:04.707022787 +0000 UTC m=+1103.372692602" Mar 11 09:15:04 crc kubenswrapper[4840]: I0311 09:15:04.709555 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-kkplv" event={"ID":"91aae815-00f1-46d8-8709-f212ab049fdf","Type":"ContainerStarted","Data":"8e51497347da4c7a6ba3955a7748c7b031c76a4a77510f18addb42dabcf20b73"} Mar 11 09:15:04 crc kubenswrapper[4840]: I0311 09:15:04.709626 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-kkplv" Mar 11 09:15:04 crc kubenswrapper[4840]: I0311 09:15:04.720677 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-j6nng" 
event={"ID":"1fb68385-1f54-4612-8bfc-a4bb2e535600","Type":"ContainerStarted","Data":"5d6473ed27c8875a828c649cb5ee11e4fbbefb6724e34fe4ca019f3a81c16edb"} Mar 11 09:15:04 crc kubenswrapper[4840]: I0311 09:15:04.721430 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-j6nng" Mar 11 09:15:04 crc kubenswrapper[4840]: I0311 09:15:04.728511 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-stv5z" event={"ID":"506cf57c-03be-4949-9037-2e806f8b3896","Type":"ContainerStarted","Data":"2ace0172b0af53155323284af4a817a1d1f1e37b5fc2c19c96c3072f6e8c8f0a"} Mar 11 09:15:04 crc kubenswrapper[4840]: I0311 09:15:04.729434 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-stv5z" Mar 11 09:15:04 crc kubenswrapper[4840]: I0311 09:15:04.742342 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-pqcxj" event={"ID":"6d12563d-1416-4fd9-b38d-40bdadc53b40","Type":"ContainerStarted","Data":"cb0d4d1c5567e32f9d71fdb89c477224ebb670fb94ff005c18c9ca5024ef598e"} Mar 11 09:15:04 crc kubenswrapper[4840]: I0311 09:15:04.743173 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-pqcxj" Mar 11 09:15:04 crc kubenswrapper[4840]: I0311 09:15:04.756210 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-vxcc4" event={"ID":"ec409cfe-4999-48d4-93f0-cbf22595667e","Type":"ContainerStarted","Data":"145d92b9f4f5dd4cf7322989990376d72b80eb1311ff89cb88e7cd7478bc1cc2"} Mar 11 09:15:04 crc kubenswrapper[4840]: I0311 09:15:04.757248 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-vxcc4" Mar 11 09:15:04 crc kubenswrapper[4840]: I0311 09:15:04.763761 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-wbkbm" event={"ID":"4ed5f2e7-087f-466d-85b3-9088ab43b410","Type":"ContainerStarted","Data":"dc273f6a39c1cc56e73859018196c9405cbcb739b0c2a03f505da12f04eb922b"} Mar 11 09:15:04 crc kubenswrapper[4840]: I0311 09:15:04.764647 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-wbkbm" Mar 11 09:15:04 crc kubenswrapper[4840]: I0311 09:15:04.781597 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-5964f64c48-qbmr9" podStartSLOduration=7.589725228 podStartE2EDuration="31.781574106s" podCreationTimestamp="2026-03-11 09:14:33 +0000 UTC" firstStartedPulling="2026-03-11 09:14:36.236020555 +0000 UTC m=+1074.901690370" lastFinishedPulling="2026-03-11 09:15:00.427869443 +0000 UTC m=+1099.093539248" observedRunningTime="2026-03-11 09:15:04.780100449 +0000 UTC m=+1103.445770284" watchObservedRunningTime="2026-03-11 09:15:04.781574106 +0000 UTC m=+1103.447243921" Mar 11 09:15:04 crc kubenswrapper[4840]: I0311 09:15:04.782818 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-mngxm" podStartSLOduration=7.375110868 podStartE2EDuration="31.782808747s" podCreationTimestamp="2026-03-11 09:14:33 +0000 UTC" firstStartedPulling="2026-03-11 09:14:35.189380506 +0000 UTC m=+1073.855050321" lastFinishedPulling="2026-03-11 09:14:59.597078385 +0000 UTC m=+1098.262748200" observedRunningTime="2026-03-11 09:15:04.740014484 +0000 UTC m=+1103.405684299" watchObservedRunningTime="2026-03-11 09:15:04.782808747 +0000 UTC m=+1103.448478562" Mar 11 09:15:04 crc kubenswrapper[4840]: 
I0311 09:15:04.896932 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-5dxn9" podStartSLOduration=3.612469672 podStartE2EDuration="30.896905648s" podCreationTimestamp="2026-03-11 09:14:34 +0000 UTC" firstStartedPulling="2026-03-11 09:14:36.571150746 +0000 UTC m=+1075.236820561" lastFinishedPulling="2026-03-11 09:15:03.855586712 +0000 UTC m=+1102.521256537" observedRunningTime="2026-03-11 09:15:04.889345738 +0000 UTC m=+1103.555015563" watchObservedRunningTime="2026-03-11 09:15:04.896905648 +0000 UTC m=+1103.562575463" Mar 11 09:15:04 crc kubenswrapper[4840]: I0311 09:15:04.935588 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2jbdn" podStartSLOduration=3.649115901 podStartE2EDuration="30.935560287s" podCreationTimestamp="2026-03-11 09:14:34 +0000 UTC" firstStartedPulling="2026-03-11 09:14:36.568182242 +0000 UTC m=+1075.233852047" lastFinishedPulling="2026-03-11 09:15:03.854626618 +0000 UTC m=+1102.520296433" observedRunningTime="2026-03-11 09:15:04.93129723 +0000 UTC m=+1103.596967045" watchObservedRunningTime="2026-03-11 09:15:04.935560287 +0000 UTC m=+1103.601230102" Mar 11 09:15:05 crc kubenswrapper[4840]: I0311 09:15:05.057474 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-pgbkn" podStartSLOduration=6.665224331 podStartE2EDuration="31.057436652s" podCreationTimestamp="2026-03-11 09:14:34 +0000 UTC" firstStartedPulling="2026-03-11 09:14:36.03558785 +0000 UTC m=+1074.701257665" lastFinishedPulling="2026-03-11 09:15:00.427800181 +0000 UTC m=+1099.093469986" observedRunningTime="2026-03-11 09:15:05.015141912 +0000 UTC m=+1103.680811727" watchObservedRunningTime="2026-03-11 09:15:05.057436652 +0000 UTC m=+1103.723106467" Mar 11 09:15:05 crc kubenswrapper[4840]: I0311 
09:15:05.120693 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-6gn84" podStartSLOduration=7.510026861 podStartE2EDuration="32.120669827s" podCreationTimestamp="2026-03-11 09:14:33 +0000 UTC" firstStartedPulling="2026-03-11 09:14:35.817170495 +0000 UTC m=+1074.482840310" lastFinishedPulling="2026-03-11 09:15:00.427813461 +0000 UTC m=+1099.093483276" observedRunningTime="2026-03-11 09:15:05.070866549 +0000 UTC m=+1103.736536364" watchObservedRunningTime="2026-03-11 09:15:05.120669827 +0000 UTC m=+1103.786339642" Mar 11 09:15:05 crc kubenswrapper[4840]: I0311 09:15:05.124948 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29553675-5pxkp" podStartSLOduration=5.124939514 podStartE2EDuration="5.124939514s" podCreationTimestamp="2026-03-11 09:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:15:05.119751434 +0000 UTC m=+1103.785421249" watchObservedRunningTime="2026-03-11 09:15:05.124939514 +0000 UTC m=+1103.790609329" Mar 11 09:15:05 crc kubenswrapper[4840]: I0311 09:15:05.150255 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-dsd85" podStartSLOduration=3.7353146710000003 podStartE2EDuration="31.150232238s" podCreationTimestamp="2026-03-11 09:14:34 +0000 UTC" firstStartedPulling="2026-03-11 09:14:36.555904764 +0000 UTC m=+1075.221574569" lastFinishedPulling="2026-03-11 09:15:03.970822321 +0000 UTC m=+1102.636492136" observedRunningTime="2026-03-11 09:15:05.145388727 +0000 UTC m=+1103.811058542" watchObservedRunningTime="2026-03-11 09:15:05.150232238 +0000 UTC m=+1103.815902063" Mar 11 09:15:05 crc kubenswrapper[4840]: I0311 09:15:05.189299 4840 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-k7b9q" podStartSLOduration=7.317508385 podStartE2EDuration="32.189271277s" podCreationTimestamp="2026-03-11 09:14:33 +0000 UTC" firstStartedPulling="2026-03-11 09:14:35.556056879 +0000 UTC m=+1074.221726694" lastFinishedPulling="2026-03-11 09:15:00.427819771 +0000 UTC m=+1099.093489586" observedRunningTime="2026-03-11 09:15:05.177370599 +0000 UTC m=+1103.843040414" watchObservedRunningTime="2026-03-11 09:15:05.189271277 +0000 UTC m=+1103.854941092" Mar 11 09:15:05 crc kubenswrapper[4840]: I0311 09:15:05.220810 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-wbkbm" podStartSLOduration=6.9871081109999995 podStartE2EDuration="31.220791137s" podCreationTimestamp="2026-03-11 09:14:34 +0000 UTC" firstStartedPulling="2026-03-11 09:14:36.194134885 +0000 UTC m=+1074.859804700" lastFinishedPulling="2026-03-11 09:15:00.427817911 +0000 UTC m=+1099.093487726" observedRunningTime="2026-03-11 09:15:05.21691799 +0000 UTC m=+1103.882587795" watchObservedRunningTime="2026-03-11 09:15:05.220791137 +0000 UTC m=+1103.886460952" Mar 11 09:15:05 crc kubenswrapper[4840]: I0311 09:15:05.242090 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-kkplv" podStartSLOduration=7.960216035 podStartE2EDuration="32.2420672s" podCreationTimestamp="2026-03-11 09:14:33 +0000 UTC" firstStartedPulling="2026-03-11 09:14:35.315120058 +0000 UTC m=+1073.980789873" lastFinishedPulling="2026-03-11 09:14:59.596971223 +0000 UTC m=+1098.262641038" observedRunningTime="2026-03-11 09:15:05.241090486 +0000 UTC m=+1103.906760291" watchObservedRunningTime="2026-03-11 09:15:05.2420672 +0000 UTC m=+1103.907737015" Mar 11 09:15:05 crc kubenswrapper[4840]: I0311 09:15:05.276327 4840 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-pqcxj" podStartSLOduration=6.908274984 podStartE2EDuration="31.276299049s" podCreationTimestamp="2026-03-11 09:14:34 +0000 UTC" firstStartedPulling="2026-03-11 09:14:36.061203652 +0000 UTC m=+1074.726873467" lastFinishedPulling="2026-03-11 09:15:00.429227717 +0000 UTC m=+1099.094897532" observedRunningTime="2026-03-11 09:15:05.273761385 +0000 UTC m=+1103.939431200" watchObservedRunningTime="2026-03-11 09:15:05.276299049 +0000 UTC m=+1103.941968864" Mar 11 09:15:05 crc kubenswrapper[4840]: I0311 09:15:05.315091 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-stv5z" podStartSLOduration=7.329994366 podStartE2EDuration="31.31506001s" podCreationTimestamp="2026-03-11 09:14:34 +0000 UTC" firstStartedPulling="2026-03-11 09:14:36.443860765 +0000 UTC m=+1075.109530580" lastFinishedPulling="2026-03-11 09:15:00.428926409 +0000 UTC m=+1099.094596224" observedRunningTime="2026-03-11 09:15:05.310973918 +0000 UTC m=+1103.976643733" watchObservedRunningTime="2026-03-11 09:15:05.31506001 +0000 UTC m=+1103.980729825" Mar 11 09:15:05 crc kubenswrapper[4840]: I0311 09:15:05.379907 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-vxcc4" podStartSLOduration=3.850214761 podStartE2EDuration="31.379880145s" podCreationTimestamp="2026-03-11 09:14:34 +0000 UTC" firstStartedPulling="2026-03-11 09:14:36.441054485 +0000 UTC m=+1075.106724300" lastFinishedPulling="2026-03-11 09:15:03.970719869 +0000 UTC m=+1102.636389684" observedRunningTime="2026-03-11 09:15:05.359526685 +0000 UTC m=+1104.025196500" watchObservedRunningTime="2026-03-11 09:15:05.379880145 +0000 UTC m=+1104.045549960" Mar 11 09:15:05 crc kubenswrapper[4840]: I0311 09:15:05.417897 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-j6nng" podStartSLOduration=6.20323103 podStartE2EDuration="31.417876198s" podCreationTimestamp="2026-03-11 09:14:34 +0000 UTC" firstStartedPulling="2026-03-11 09:14:36.410360726 +0000 UTC m=+1075.076030541" lastFinishedPulling="2026-03-11 09:15:01.625005904 +0000 UTC m=+1100.290675709" observedRunningTime="2026-03-11 09:15:05.415827177 +0000 UTC m=+1104.081496992" watchObservedRunningTime="2026-03-11 09:15:05.417876198 +0000 UTC m=+1104.083546013" Mar 11 09:15:05 crc kubenswrapper[4840]: I0311 09:15:05.773857 4840 generic.go:334] "Generic (PLEG): container finished" podID="fb4c8bbf-d375-4db3-8a45-e14a7c2400e2" containerID="6c0b1be5126e8d0ee313de963ae2c8496da65a2d4dd020b53c2bdafbb2244d71" exitCode=0 Mar 11 09:15:05 crc kubenswrapper[4840]: I0311 09:15:05.773943 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29553675-5pxkp" event={"ID":"fb4c8bbf-d375-4db3-8a45-e14a7c2400e2","Type":"ContainerDied","Data":"6c0b1be5126e8d0ee313de963ae2c8496da65a2d4dd020b53c2bdafbb2244d71"} Mar 11 09:15:06 crc kubenswrapper[4840]: I0311 09:15:06.129343 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ed2a6fa8-2915-4d07-b54d-b274a742c5a7-cert\") pod \"infra-operator-controller-manager-5995f4446f-xldht\" (UID: \"ed2a6fa8-2915-4d07-b54d-b274a742c5a7\") " pod="openstack-operators/infra-operator-controller-manager-5995f4446f-xldht" Mar 11 09:15:06 crc kubenswrapper[4840]: I0311 09:15:06.136770 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ed2a6fa8-2915-4d07-b54d-b274a742c5a7-cert\") pod \"infra-operator-controller-manager-5995f4446f-xldht\" (UID: \"ed2a6fa8-2915-4d07-b54d-b274a742c5a7\") " pod="openstack-operators/infra-operator-controller-manager-5995f4446f-xldht" Mar 11 09:15:06 crc kubenswrapper[4840]: 
I0311 09:15:06.255414 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-n2pp2" Mar 11 09:15:06 crc kubenswrapper[4840]: I0311 09:15:06.263615 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-5995f4446f-xldht" Mar 11 09:15:06 crc kubenswrapper[4840]: I0311 09:15:06.436711 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/97bcd81b-f45e-4a98-9079-000fdf4cc50f-cert\") pod \"openstack-baremetal-operator-controller-manager-6647d7885fb7jnm\" (UID: \"97bcd81b-f45e-4a98-9079-000fdf4cc50f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6647d7885fb7jnm" Mar 11 09:15:06 crc kubenswrapper[4840]: I0311 09:15:06.446725 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/97bcd81b-f45e-4a98-9079-000fdf4cc50f-cert\") pod \"openstack-baremetal-operator-controller-manager-6647d7885fb7jnm\" (UID: \"97bcd81b-f45e-4a98-9079-000fdf4cc50f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6647d7885fb7jnm" Mar 11 09:15:06 crc kubenswrapper[4840]: I0311 09:15:06.599449 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-5995f4446f-xldht"] Mar 11 09:15:06 crc kubenswrapper[4840]: W0311 09:15:06.631959 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poded2a6fa8_2915_4d07_b54d_b274a742c5a7.slice/crio-b97592c8f8544c989d22b4d229cc47d46c04dcdb667fe11ccc83f2ccaeb7415d WatchSource:0}: Error finding container b97592c8f8544c989d22b4d229cc47d46c04dcdb667fe11ccc83f2ccaeb7415d: Status 404 returned error can't find the container with id b97592c8f8544c989d22b4d229cc47d46c04dcdb667fe11ccc83f2ccaeb7415d 
Mar 11 09:15:06 crc kubenswrapper[4840]: I0311 09:15:06.638080 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-j5plh"
Mar 11 09:15:06 crc kubenswrapper[4840]: I0311 09:15:06.643480 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6647d7885fb7jnm"
Mar 11 09:15:06 crc kubenswrapper[4840]: I0311 09:15:06.807187 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-tjvvr" event={"ID":"9ba35a68-fbec-4de0-a84a-8f879b9906e5","Type":"ContainerStarted","Data":"65e90d67158767220bec8c3f147c647795aa1d67a82fb9402d83d46d91fd0bec"}
Mar 11 09:15:06 crc kubenswrapper[4840]: I0311 09:15:06.809857 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-tjvvr"
Mar 11 09:15:06 crc kubenswrapper[4840]: I0311 09:15:06.815534 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-5995f4446f-xldht" event={"ID":"ed2a6fa8-2915-4d07-b54d-b274a742c5a7","Type":"ContainerStarted","Data":"b97592c8f8544c989d22b4d229cc47d46c04dcdb667fe11ccc83f2ccaeb7415d"}
Mar 11 09:15:06 crc kubenswrapper[4840]: I0311 09:15:06.846049 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-tjvvr" podStartSLOduration=2.66663733 podStartE2EDuration="33.84601848s" podCreationTimestamp="2026-03-11 09:14:33 +0000 UTC" firstStartedPulling="2026-03-11 09:14:35.347429028 +0000 UTC m=+1074.013098843" lastFinishedPulling="2026-03-11 09:15:06.526810178 +0000 UTC m=+1105.192479993" observedRunningTime="2026-03-11 09:15:06.832728716 +0000 UTC m=+1105.498398551" watchObservedRunningTime="2026-03-11 09:15:06.84601848 +0000 UTC m=+1105.511688295"
Mar 11 09:15:07 crc kubenswrapper[4840]: I0311 09:15:07.049759 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a121ac36-d9c4-4837-b075-57588b36c8ec-webhook-certs\") pod \"openstack-operator-controller-manager-6679ddfdc7-h8cln\" (UID: \"a121ac36-d9c4-4837-b075-57588b36c8ec\") " pod="openstack-operators/openstack-operator-controller-manager-6679ddfdc7-h8cln"
Mar 11 09:15:07 crc kubenswrapper[4840]: I0311 09:15:07.049859 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a121ac36-d9c4-4837-b075-57588b36c8ec-metrics-certs\") pod \"openstack-operator-controller-manager-6679ddfdc7-h8cln\" (UID: \"a121ac36-d9c4-4837-b075-57588b36c8ec\") " pod="openstack-operators/openstack-operator-controller-manager-6679ddfdc7-h8cln"
Mar 11 09:15:07 crc kubenswrapper[4840]: I0311 09:15:07.056978 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a121ac36-d9c4-4837-b075-57588b36c8ec-metrics-certs\") pod \"openstack-operator-controller-manager-6679ddfdc7-h8cln\" (UID: \"a121ac36-d9c4-4837-b075-57588b36c8ec\") " pod="openstack-operators/openstack-operator-controller-manager-6679ddfdc7-h8cln"
Mar 11 09:15:07 crc kubenswrapper[4840]: I0311 09:15:07.057269 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a121ac36-d9c4-4837-b075-57588b36c8ec-webhook-certs\") pod \"openstack-operator-controller-manager-6679ddfdc7-h8cln\" (UID: \"a121ac36-d9c4-4837-b075-57588b36c8ec\") " pod="openstack-operators/openstack-operator-controller-manager-6679ddfdc7-h8cln"
Mar 11 09:15:07 crc kubenswrapper[4840]: I0311 09:15:07.066884 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-2cgnr"
Mar 11 09:15:07 crc kubenswrapper[4840]: I0311 09:15:07.076696 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-6679ddfdc7-h8cln"
Mar 11 09:15:07 crc kubenswrapper[4840]: I0311 09:15:07.141687 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29553675-5pxkp"
Mar 11 09:15:07 crc kubenswrapper[4840]: I0311 09:15:07.174659 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6647d7885fb7jnm"]
Mar 11 09:15:07 crc kubenswrapper[4840]: I0311 09:15:07.253687 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddn84\" (UniqueName: \"kubernetes.io/projected/fb4c8bbf-d375-4db3-8a45-e14a7c2400e2-kube-api-access-ddn84\") pod \"fb4c8bbf-d375-4db3-8a45-e14a7c2400e2\" (UID: \"fb4c8bbf-d375-4db3-8a45-e14a7c2400e2\") "
Mar 11 09:15:07 crc kubenswrapper[4840]: I0311 09:15:07.254749 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fb4c8bbf-d375-4db3-8a45-e14a7c2400e2-config-volume\") pod \"fb4c8bbf-d375-4db3-8a45-e14a7c2400e2\" (UID: \"fb4c8bbf-d375-4db3-8a45-e14a7c2400e2\") "
Mar 11 09:15:07 crc kubenswrapper[4840]: I0311 09:15:07.254835 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fb4c8bbf-d375-4db3-8a45-e14a7c2400e2-secret-volume\") pod \"fb4c8bbf-d375-4db3-8a45-e14a7c2400e2\" (UID: \"fb4c8bbf-d375-4db3-8a45-e14a7c2400e2\") "
Mar 11 09:15:07 crc kubenswrapper[4840]: I0311 09:15:07.256051 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb4c8bbf-d375-4db3-8a45-e14a7c2400e2-config-volume" (OuterVolumeSpecName: "config-volume") pod "fb4c8bbf-d375-4db3-8a45-e14a7c2400e2" (UID: "fb4c8bbf-d375-4db3-8a45-e14a7c2400e2"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 11 09:15:07 crc kubenswrapper[4840]: I0311 09:15:07.258857 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb4c8bbf-d375-4db3-8a45-e14a7c2400e2-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "fb4c8bbf-d375-4db3-8a45-e14a7c2400e2" (UID: "fb4c8bbf-d375-4db3-8a45-e14a7c2400e2"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 11 09:15:07 crc kubenswrapper[4840]: I0311 09:15:07.263011 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb4c8bbf-d375-4db3-8a45-e14a7c2400e2-kube-api-access-ddn84" (OuterVolumeSpecName: "kube-api-access-ddn84") pod "fb4c8bbf-d375-4db3-8a45-e14a7c2400e2" (UID: "fb4c8bbf-d375-4db3-8a45-e14a7c2400e2"). InnerVolumeSpecName "kube-api-access-ddn84". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 11 09:15:07 crc kubenswrapper[4840]: I0311 09:15:07.355846 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ddn84\" (UniqueName: \"kubernetes.io/projected/fb4c8bbf-d375-4db3-8a45-e14a7c2400e2-kube-api-access-ddn84\") on node \"crc\" DevicePath \"\""
Mar 11 09:15:07 crc kubenswrapper[4840]: I0311 09:15:07.355886 4840 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fb4c8bbf-d375-4db3-8a45-e14a7c2400e2-config-volume\") on node \"crc\" DevicePath \"\""
Mar 11 09:15:07 crc kubenswrapper[4840]: I0311 09:15:07.355895 4840 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fb4c8bbf-d375-4db3-8a45-e14a7c2400e2-secret-volume\") on node \"crc\" DevicePath \"\""
Mar 11 09:15:07 crc kubenswrapper[4840]: I0311 09:15:07.403736 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6679ddfdc7-h8cln"]
Mar 11 09:15:07 crc kubenswrapper[4840]: W0311 09:15:07.418428 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda121ac36_d9c4_4837_b075_57588b36c8ec.slice/crio-f1797c4d1c123503dda91fd56c137e23fc2583572198dea5f6780d3e2a5f0eaa WatchSource:0}: Error finding container f1797c4d1c123503dda91fd56c137e23fc2583572198dea5f6780d3e2a5f0eaa: Status 404 returned error can't find the container with id f1797c4d1c123503dda91fd56c137e23fc2583572198dea5f6780d3e2a5f0eaa
Mar 11 09:15:07 crc kubenswrapper[4840]: I0311 09:15:07.827710 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6647d7885fb7jnm" event={"ID":"97bcd81b-f45e-4a98-9079-000fdf4cc50f","Type":"ContainerStarted","Data":"5a17245bf900b7fdb2d5505342653b19ce5dd77b0df01de3fbeb5456f1646949"}
Mar 11 09:15:07 crc kubenswrapper[4840]: I0311 09:15:07.830532 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29553675-5pxkp" event={"ID":"fb4c8bbf-d375-4db3-8a45-e14a7c2400e2","Type":"ContainerDied","Data":"50b13407f16bb741f7ada4c1a775fc71eef49b247149ce029eff434a34831b3a"}
Mar 11 09:15:07 crc kubenswrapper[4840]: I0311 09:15:07.830621 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="50b13407f16bb741f7ada4c1a775fc71eef49b247149ce029eff434a34831b3a"
Mar 11 09:15:07 crc kubenswrapper[4840]: I0311 09:15:07.830756 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29553675-5pxkp"
Mar 11 09:15:07 crc kubenswrapper[4840]: I0311 09:15:07.839072 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6679ddfdc7-h8cln" event={"ID":"a121ac36-d9c4-4837-b075-57588b36c8ec","Type":"ContainerStarted","Data":"acaa0a20301f4d389b82d68ae406cea4ed8e29db14c54ec60153b0ce34120148"}
Mar 11 09:15:07 crc kubenswrapper[4840]: I0311 09:15:07.839491 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6679ddfdc7-h8cln" event={"ID":"a121ac36-d9c4-4837-b075-57588b36c8ec","Type":"ContainerStarted","Data":"f1797c4d1c123503dda91fd56c137e23fc2583572198dea5f6780d3e2a5f0eaa"}
Mar 11 09:15:07 crc kubenswrapper[4840]: I0311 09:15:07.839511 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-6679ddfdc7-h8cln"
Mar 11 09:15:07 crc kubenswrapper[4840]: I0311 09:15:07.842966 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-677c674df7-6jvcx" event={"ID":"fd5bb41b-837d-473d-9718-56f2247fadcb","Type":"ContainerStarted","Data":"26d344717b9f19a47d455e38fc6e3a72a27651fc19b9316d66eca404dbaac797"}
Mar 11 09:15:07 crc kubenswrapper[4840]: I0311 09:15:07.843272 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-677c674df7-6jvcx"
Mar 11 09:15:07 crc kubenswrapper[4840]: I0311 09:15:07.894187 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-6679ddfdc7-h8cln" podStartSLOduration=33.894159176 podStartE2EDuration="33.894159176s" podCreationTimestamp="2026-03-11 09:14:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:15:07.86241961 +0000 UTC m=+1106.528089425" watchObservedRunningTime="2026-03-11 09:15:07.894159176 +0000 UTC m=+1106.559829001"
Mar 11 09:15:07 crc kubenswrapper[4840]: I0311 09:15:07.907269 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-677c674df7-6jvcx" podStartSLOduration=2.7219833380000003 podStartE2EDuration="33.907241084s" podCreationTimestamp="2026-03-11 09:14:34 +0000 UTC" firstStartedPulling="2026-03-11 09:14:36.413961036 +0000 UTC m=+1075.079630851" lastFinishedPulling="2026-03-11 09:15:07.599218772 +0000 UTC m=+1106.264888597" observedRunningTime="2026-03-11 09:15:07.90191123 +0000 UTC m=+1106.567581055" watchObservedRunningTime="2026-03-11 09:15:07.907241084 +0000 UTC m=+1106.572910899"
Mar 11 09:15:08 crc kubenswrapper[4840]: I0311 09:15:08.850936 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-bzvzx" event={"ID":"086eae6a-0cbe-4a9a-884b-272239b8d302","Type":"ContainerStarted","Data":"37779fa64b3786f74a60c63cc281bf957188f9317e99e38687f06f758fac5ccd"}
Mar 11 09:15:08 crc kubenswrapper[4840]: I0311 09:15:08.851505 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-bzvzx"
Mar 11 09:15:08 crc kubenswrapper[4840]: I0311 09:15:08.852108 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-9dffx" event={"ID":"b3f01dca-eeb5-40bf-bddb-2fe256ee64f8","Type":"ContainerStarted","Data":"b06c3a0aecbca7d4d27da7f12136fa99636434ad55af7d35852a66147f9e11e9"}
Mar 11 09:15:08 crc kubenswrapper[4840]: I0311 09:15:08.852539 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-9dffx"
Mar 11 09:15:08 crc kubenswrapper[4840]: I0311 09:15:08.874758 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-bzvzx" podStartSLOduration=3.010917451 podStartE2EDuration="35.874725388s" podCreationTimestamp="2026-03-11 09:14:33 +0000 UTC" firstStartedPulling="2026-03-11 09:14:35.661275886 +0000 UTC m=+1074.326945701" lastFinishedPulling="2026-03-11 09:15:08.525083823 +0000 UTC m=+1107.190753638" observedRunningTime="2026-03-11 09:15:08.867844996 +0000 UTC m=+1107.533514811" watchObservedRunningTime="2026-03-11 09:15:08.874725388 +0000 UTC m=+1107.540395203"
Mar 11 09:15:08 crc kubenswrapper[4840]: I0311 09:15:08.892576 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-9dffx" podStartSLOduration=4.139633128 podStartE2EDuration="35.892540825s" podCreationTimestamp="2026-03-11 09:14:33 +0000 UTC" firstStartedPulling="2026-03-11 09:14:36.110388865 +0000 UTC m=+1074.776058680" lastFinishedPulling="2026-03-11 09:15:07.863296562 +0000 UTC m=+1106.528966377" observedRunningTime="2026-03-11 09:15:08.891932919 +0000 UTC m=+1107.557602744" watchObservedRunningTime="2026-03-11 09:15:08.892540825 +0000 UTC m=+1107.558210640"
Mar 11 09:15:10 crc kubenswrapper[4840]: I0311 09:15:10.871311 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-5995f4446f-xldht" event={"ID":"ed2a6fa8-2915-4d07-b54d-b274a742c5a7","Type":"ContainerStarted","Data":"fa4799065257c188b0b67af3fda3a0b8905eca0315fb863d43d9704717b89d37"}
Mar 11 09:15:10 crc kubenswrapper[4840]: I0311 09:15:10.871763 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-5995f4446f-xldht"
Mar 11 09:15:10 crc kubenswrapper[4840]: I0311 09:15:10.873197 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6647d7885fb7jnm" event={"ID":"97bcd81b-f45e-4a98-9079-000fdf4cc50f","Type":"ContainerStarted","Data":"ef1c03cd307b45b36b0ac5575f98fca975ade3a5c5af49f5015c22a18f49e1c3"}
Mar 11 09:15:10 crc kubenswrapper[4840]: I0311 09:15:10.873314 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6647d7885fb7jnm"
Mar 11 09:15:10 crc kubenswrapper[4840]: I0311 09:15:10.897014 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-5995f4446f-xldht" podStartSLOduration=33.997836746 podStartE2EDuration="37.896996544s" podCreationTimestamp="2026-03-11 09:14:33 +0000 UTC" firstStartedPulling="2026-03-11 09:15:06.642045267 +0000 UTC m=+1105.307715082" lastFinishedPulling="2026-03-11 09:15:10.541205055 +0000 UTC m=+1109.206874880" observedRunningTime="2026-03-11 09:15:10.891912477 +0000 UTC m=+1109.557582292" watchObservedRunningTime="2026-03-11 09:15:10.896996544 +0000 UTC m=+1109.562666359"
Mar 11 09:15:10 crc kubenswrapper[4840]: I0311 09:15:10.934062 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6647d7885fb7jnm" podStartSLOduration=33.617269745 podStartE2EDuration="36.934041693s" podCreationTimestamp="2026-03-11 09:14:34 +0000 UTC" firstStartedPulling="2026-03-11 09:15:07.224212371 +0000 UTC m=+1105.889882186" lastFinishedPulling="2026-03-11 09:15:10.540984319 +0000 UTC m=+1109.206654134" observedRunningTime="2026-03-11 09:15:10.926636927 +0000 UTC m=+1109.592306742" watchObservedRunningTime="2026-03-11 09:15:10.934041693 +0000 UTC m=+1109.599711508"
Mar 11 09:15:11 crc kubenswrapper[4840]: I0311 09:15:11.882386 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-569cc54c5-q7hvp" event={"ID":"b14fee95-3d63-402a-ae0a-d3f74415f59b","Type":"ContainerStarted","Data":"1634c0a9ee738e19174f22efc40053930440fde4eeb2e6e8365bee1acfa66e20"}
Mar 11 09:15:11 crc kubenswrapper[4840]: I0311 09:15:11.882802 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-569cc54c5-q7hvp"
Mar 11 09:15:11 crc kubenswrapper[4840]: I0311 09:15:11.906500 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-569cc54c5-q7hvp" podStartSLOduration=2.579016204 podStartE2EDuration="37.906458991s" podCreationTimestamp="2026-03-11 09:14:34 +0000 UTC" firstStartedPulling="2026-03-11 09:14:36.191226242 +0000 UTC m=+1074.856896067" lastFinishedPulling="2026-03-11 09:15:11.518668989 +0000 UTC m=+1110.184338854" observedRunningTime="2026-03-11 09:15:11.905417345 +0000 UTC m=+1110.571087190" watchObservedRunningTime="2026-03-11 09:15:11.906458991 +0000 UTC m=+1110.572128806"
Mar 11 09:15:14 crc kubenswrapper[4840]: I0311 09:15:14.171716 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-kkplv"
Mar 11 09:15:14 crc kubenswrapper[4840]: I0311 09:15:14.185272 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-984cd4dcf-mngxm"
Mar 11 09:15:14 crc kubenswrapper[4840]: I0311 09:15:14.251614 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-66d56f6ff4-tjvvr"
Mar 11 09:15:14 crc kubenswrapper[4840]: I0311 09:15:14.299678 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-5964f64c48-qbmr9"
Mar 11 09:15:14 crc kubenswrapper[4840]: I0311 09:15:14.300155 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-77b6666d85-6gn84"
Mar 11 09:15:14 crc kubenswrapper[4840]: I0311 09:15:14.356364 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-6d9d6b584d-k7b9q"
Mar 11 09:15:14 crc kubenswrapper[4840]: I0311 09:15:14.373015 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-684f77d66d-bzvzx"
Mar 11 09:15:14 crc kubenswrapper[4840]: I0311 09:15:14.465503 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-6bbb499bbc-9dffx"
Mar 11 09:15:14 crc kubenswrapper[4840]: I0311 09:15:14.573263 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-pqcxj"
Mar 11 09:15:14 crc kubenswrapper[4840]: I0311 09:15:14.667813 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-658d4cdd5-gg7rz"
Mar 11 09:15:14 crc kubenswrapper[4840]: I0311 09:15:14.697121 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-776c5696bf-pgbkn"
Mar 11 09:15:14 crc kubenswrapper[4840]: I0311 09:15:14.876850 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-bbc5b68f9-stv5z"
Mar 11 09:15:14 crc kubenswrapper[4840]: I0311 09:15:14.940209 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-574d45c66c-5dxn9"
Mar 11 09:15:14 crc kubenswrapper[4840]: I0311 09:15:14.977462 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-6cd66dbd4b-j6nng"
Mar 11 09:15:15 crc kubenswrapper[4840]: I0311 09:15:15.045857 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-677c674df7-6jvcx"
Mar 11 09:15:15 crc kubenswrapper[4840]: I0311 09:15:15.046147 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-5f4f55cb5c-wbkbm"
Mar 11 09:15:15 crc kubenswrapper[4840]: I0311 09:15:15.084948 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-vxcc4"
Mar 11 09:15:15 crc kubenswrapper[4840]: I0311 09:15:15.188557 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-6dd88c6f67-dsd85"
Mar 11 09:15:16 crc kubenswrapper[4840]: I0311 09:15:16.271723 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-5995f4446f-xldht"
Mar 11 09:15:16 crc kubenswrapper[4840]: I0311 09:15:16.649690 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6647d7885fb7jnm"
Mar 11 09:15:17 crc kubenswrapper[4840]: I0311 09:15:17.083581 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-6679ddfdc7-h8cln"
Mar 11 09:15:24 crc kubenswrapper[4840]: I0311 09:15:24.756993 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-569cc54c5-q7hvp"
Mar 11 09:15:39 crc kubenswrapper[4840]: I0311 09:15:39.254203 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-589db6c89c-rrkdt"]
Mar 11 09:15:39 crc kubenswrapper[4840]: E0311 09:15:39.268264 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb4c8bbf-d375-4db3-8a45-e14a7c2400e2" containerName="collect-profiles"
Mar 11 09:15:39 crc kubenswrapper[4840]: I0311 09:15:39.268291 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb4c8bbf-d375-4db3-8a45-e14a7c2400e2" containerName="collect-profiles"
Mar 11 09:15:39 crc kubenswrapper[4840]: I0311 09:15:39.268436 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb4c8bbf-d375-4db3-8a45-e14a7c2400e2" containerName="collect-profiles"
Mar 11 09:15:39 crc kubenswrapper[4840]: I0311 09:15:39.269322 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-589db6c89c-rrkdt"
Mar 11 09:15:39 crc kubenswrapper[4840]: I0311 09:15:39.275743 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns"
Mar 11 09:15:39 crc kubenswrapper[4840]: I0311 09:15:39.276022 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt"
Mar 11 09:15:39 crc kubenswrapper[4840]: I0311 09:15:39.276188 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt"
Mar 11 09:15:39 crc kubenswrapper[4840]: I0311 09:15:39.276969 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-b5ktj"
Mar 11 09:15:39 crc kubenswrapper[4840]: I0311 09:15:39.277429 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-589db6c89c-rrkdt"]
Mar 11 09:15:39 crc kubenswrapper[4840]: I0311 09:15:39.340087 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86bbd886cf-nc4gg"]
Mar 11 09:15:39 crc kubenswrapper[4840]: I0311 09:15:39.341666 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86bbd886cf-nc4gg"
Mar 11 09:15:39 crc kubenswrapper[4840]: I0311 09:15:39.344305 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc"
Mar 11 09:15:39 crc kubenswrapper[4840]: I0311 09:15:39.363911 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86bbd886cf-nc4gg"]
Mar 11 09:15:39 crc kubenswrapper[4840]: I0311 09:15:39.439472 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bdbab8ba-45d2-49d1-878e-daf1bcaf3d2e-config\") pod \"dnsmasq-dns-589db6c89c-rrkdt\" (UID: \"bdbab8ba-45d2-49d1-878e-daf1bcaf3d2e\") " pod="openstack/dnsmasq-dns-589db6c89c-rrkdt"
Mar 11 09:15:39 crc kubenswrapper[4840]: I0311 09:15:39.439570 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e49aff8-5315-4fd2-931f-a6997548015b-config\") pod \"dnsmasq-dns-86bbd886cf-nc4gg\" (UID: \"8e49aff8-5315-4fd2-931f-a6997548015b\") " pod="openstack/dnsmasq-dns-86bbd886cf-nc4gg"
Mar 11 09:15:39 crc kubenswrapper[4840]: I0311 09:15:39.439744 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8e49aff8-5315-4fd2-931f-a6997548015b-dns-svc\") pod \"dnsmasq-dns-86bbd886cf-nc4gg\" (UID: \"8e49aff8-5315-4fd2-931f-a6997548015b\") " pod="openstack/dnsmasq-dns-86bbd886cf-nc4gg"
Mar 11 09:15:39 crc kubenswrapper[4840]: I0311 09:15:39.439786 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmc67\" (UniqueName: \"kubernetes.io/projected/8e49aff8-5315-4fd2-931f-a6997548015b-kube-api-access-cmc67\") pod \"dnsmasq-dns-86bbd886cf-nc4gg\" (UID: \"8e49aff8-5315-4fd2-931f-a6997548015b\") " pod="openstack/dnsmasq-dns-86bbd886cf-nc4gg"
Mar 11 09:15:39 crc kubenswrapper[4840]: I0311 09:15:39.439888 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxwq9\" (UniqueName: \"kubernetes.io/projected/bdbab8ba-45d2-49d1-878e-daf1bcaf3d2e-kube-api-access-nxwq9\") pod \"dnsmasq-dns-589db6c89c-rrkdt\" (UID: \"bdbab8ba-45d2-49d1-878e-daf1bcaf3d2e\") " pod="openstack/dnsmasq-dns-589db6c89c-rrkdt"
Mar 11 09:15:39 crc kubenswrapper[4840]: I0311 09:15:39.541846 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8e49aff8-5315-4fd2-931f-a6997548015b-dns-svc\") pod \"dnsmasq-dns-86bbd886cf-nc4gg\" (UID: \"8e49aff8-5315-4fd2-931f-a6997548015b\") " pod="openstack/dnsmasq-dns-86bbd886cf-nc4gg"
Mar 11 09:15:39 crc kubenswrapper[4840]: I0311 09:15:39.541895 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmc67\" (UniqueName: \"kubernetes.io/projected/8e49aff8-5315-4fd2-931f-a6997548015b-kube-api-access-cmc67\") pod \"dnsmasq-dns-86bbd886cf-nc4gg\" (UID: \"8e49aff8-5315-4fd2-931f-a6997548015b\") " pod="openstack/dnsmasq-dns-86bbd886cf-nc4gg"
Mar 11 09:15:39 crc kubenswrapper[4840]: I0311 09:15:39.541936 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxwq9\" (UniqueName: \"kubernetes.io/projected/bdbab8ba-45d2-49d1-878e-daf1bcaf3d2e-kube-api-access-nxwq9\") pod \"dnsmasq-dns-589db6c89c-rrkdt\" (UID: \"bdbab8ba-45d2-49d1-878e-daf1bcaf3d2e\") " pod="openstack/dnsmasq-dns-589db6c89c-rrkdt"
Mar 11 09:15:39 crc kubenswrapper[4840]: I0311 09:15:39.542004 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bdbab8ba-45d2-49d1-878e-daf1bcaf3d2e-config\") pod \"dnsmasq-dns-589db6c89c-rrkdt\" (UID: \"bdbab8ba-45d2-49d1-878e-daf1bcaf3d2e\") " pod="openstack/dnsmasq-dns-589db6c89c-rrkdt"
Mar 11 09:15:39 crc kubenswrapper[4840]: I0311 09:15:39.542047 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e49aff8-5315-4fd2-931f-a6997548015b-config\") pod \"dnsmasq-dns-86bbd886cf-nc4gg\" (UID: \"8e49aff8-5315-4fd2-931f-a6997548015b\") " pod="openstack/dnsmasq-dns-86bbd886cf-nc4gg"
Mar 11 09:15:39 crc kubenswrapper[4840]: I0311 09:15:39.543411 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8e49aff8-5315-4fd2-931f-a6997548015b-dns-svc\") pod \"dnsmasq-dns-86bbd886cf-nc4gg\" (UID: \"8e49aff8-5315-4fd2-931f-a6997548015b\") " pod="openstack/dnsmasq-dns-86bbd886cf-nc4gg"
Mar 11 09:15:39 crc kubenswrapper[4840]: I0311 09:15:39.543442 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bdbab8ba-45d2-49d1-878e-daf1bcaf3d2e-config\") pod \"dnsmasq-dns-589db6c89c-rrkdt\" (UID: \"bdbab8ba-45d2-49d1-878e-daf1bcaf3d2e\") " pod="openstack/dnsmasq-dns-589db6c89c-rrkdt"
Mar 11 09:15:39 crc kubenswrapper[4840]: I0311 09:15:39.543411 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e49aff8-5315-4fd2-931f-a6997548015b-config\") pod \"dnsmasq-dns-86bbd886cf-nc4gg\" (UID: \"8e49aff8-5315-4fd2-931f-a6997548015b\") " pod="openstack/dnsmasq-dns-86bbd886cf-nc4gg"
Mar 11 09:15:39 crc kubenswrapper[4840]: I0311 09:15:39.564072 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxwq9\" (UniqueName: \"kubernetes.io/projected/bdbab8ba-45d2-49d1-878e-daf1bcaf3d2e-kube-api-access-nxwq9\") pod \"dnsmasq-dns-589db6c89c-rrkdt\" (UID: \"bdbab8ba-45d2-49d1-878e-daf1bcaf3d2e\") " pod="openstack/dnsmasq-dns-589db6c89c-rrkdt"
Mar 11 09:15:39 crc kubenswrapper[4840]: I0311 09:15:39.566205 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmc67\" (UniqueName: \"kubernetes.io/projected/8e49aff8-5315-4fd2-931f-a6997548015b-kube-api-access-cmc67\") pod \"dnsmasq-dns-86bbd886cf-nc4gg\" (UID: \"8e49aff8-5315-4fd2-931f-a6997548015b\") " pod="openstack/dnsmasq-dns-86bbd886cf-nc4gg"
Mar 11 09:15:39 crc kubenswrapper[4840]: I0311 09:15:39.597937 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-589db6c89c-rrkdt"
Mar 11 09:15:39 crc kubenswrapper[4840]: I0311 09:15:39.661254 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86bbd886cf-nc4gg"
Mar 11 09:15:39 crc kubenswrapper[4840]: I0311 09:15:39.938597 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86bbd886cf-nc4gg"]
Mar 11 09:15:40 crc kubenswrapper[4840]: I0311 09:15:40.051159 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-589db6c89c-rrkdt"]
Mar 11 09:15:40 crc kubenswrapper[4840]: W0311 09:15:40.051851 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbdbab8ba_45d2_49d1_878e_daf1bcaf3d2e.slice/crio-bce5faee62858a1ab6721b7698b3558883afd80fcf4beab937d6cb303499ea49 WatchSource:0}: Error finding container bce5faee62858a1ab6721b7698b3558883afd80fcf4beab937d6cb303499ea49: Status 404 returned error can't find the container with id bce5faee62858a1ab6721b7698b3558883afd80fcf4beab937d6cb303499ea49
Mar 11 09:15:40 crc kubenswrapper[4840]: I0311 09:15:40.148414 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86bbd886cf-nc4gg" event={"ID":"8e49aff8-5315-4fd2-931f-a6997548015b","Type":"ContainerStarted","Data":"a23b025a8fa9303abfe7f8ec5b446f4ce3c5fae965882a7ab35341b41eb185d8"}
Mar 11 09:15:40 crc kubenswrapper[4840]: I0311 09:15:40.149550 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-589db6c89c-rrkdt" event={"ID":"bdbab8ba-45d2-49d1-878e-daf1bcaf3d2e","Type":"ContainerStarted","Data":"bce5faee62858a1ab6721b7698b3558883afd80fcf4beab937d6cb303499ea49"}
Mar 11 09:15:42 crc kubenswrapper[4840]: I0311 09:15:42.186898 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-589db6c89c-rrkdt"]
Mar 11 09:15:42 crc kubenswrapper[4840]: I0311 09:15:42.195983 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-79f9fc56ff-r2nss"]
Mar 11 09:15:42 crc kubenswrapper[4840]: I0311 09:15:42.197556 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79f9fc56ff-r2nss"
Mar 11 09:15:42 crc kubenswrapper[4840]: I0311 09:15:42.204001 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79f9fc56ff-r2nss"]
Mar 11 09:15:42 crc kubenswrapper[4840]: I0311 09:15:42.293547 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ad8faa32-f620-4ded-b606-0009b5a92cb8-dns-svc\") pod \"dnsmasq-dns-79f9fc56ff-r2nss\" (UID: \"ad8faa32-f620-4ded-b606-0009b5a92cb8\") " pod="openstack/dnsmasq-dns-79f9fc56ff-r2nss"
Mar 11 09:15:42 crc kubenswrapper[4840]: I0311 09:15:42.293645 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad8faa32-f620-4ded-b606-0009b5a92cb8-config\") pod \"dnsmasq-dns-79f9fc56ff-r2nss\" (UID: \"ad8faa32-f620-4ded-b606-0009b5a92cb8\") " pod="openstack/dnsmasq-dns-79f9fc56ff-r2nss"
Mar 11 09:15:42 crc kubenswrapper[4840]: I0311 09:15:42.293679 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2m8vv\" (UniqueName: \"kubernetes.io/projected/ad8faa32-f620-4ded-b606-0009b5a92cb8-kube-api-access-2m8vv\") pod \"dnsmasq-dns-79f9fc56ff-r2nss\" (UID: \"ad8faa32-f620-4ded-b606-0009b5a92cb8\") " pod="openstack/dnsmasq-dns-79f9fc56ff-r2nss"
Mar 11 09:15:42 crc kubenswrapper[4840]: I0311 09:15:42.395381 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad8faa32-f620-4ded-b606-0009b5a92cb8-config\") pod \"dnsmasq-dns-79f9fc56ff-r2nss\" (UID: \"ad8faa32-f620-4ded-b606-0009b5a92cb8\") " pod="openstack/dnsmasq-dns-79f9fc56ff-r2nss"
Mar 11 09:15:42 crc kubenswrapper[4840]: I0311 09:15:42.395483 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2m8vv\" (UniqueName: \"kubernetes.io/projected/ad8faa32-f620-4ded-b606-0009b5a92cb8-kube-api-access-2m8vv\") pod \"dnsmasq-dns-79f9fc56ff-r2nss\" (UID: \"ad8faa32-f620-4ded-b606-0009b5a92cb8\") " pod="openstack/dnsmasq-dns-79f9fc56ff-r2nss"
Mar 11 09:15:42 crc kubenswrapper[4840]: I0311 09:15:42.395607 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ad8faa32-f620-4ded-b606-0009b5a92cb8-dns-svc\") pod \"dnsmasq-dns-79f9fc56ff-r2nss\" (UID: \"ad8faa32-f620-4ded-b606-0009b5a92cb8\") " pod="openstack/dnsmasq-dns-79f9fc56ff-r2nss"
Mar 11 09:15:42 crc kubenswrapper[4840]: I0311 09:15:42.397246 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ad8faa32-f620-4ded-b606-0009b5a92cb8-dns-svc\") pod \"dnsmasq-dns-79f9fc56ff-r2nss\" (UID: \"ad8faa32-f620-4ded-b606-0009b5a92cb8\") " pod="openstack/dnsmasq-dns-79f9fc56ff-r2nss"
Mar 11 09:15:42 crc kubenswrapper[4840]: I0311 09:15:42.397257 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad8faa32-f620-4ded-b606-0009b5a92cb8-config\") pod \"dnsmasq-dns-79f9fc56ff-r2nss\" (UID: \"ad8faa32-f620-4ded-b606-0009b5a92cb8\") " pod="openstack/dnsmasq-dns-79f9fc56ff-r2nss"
Mar 11 09:15:42 crc kubenswrapper[4840]: I0311 09:15:42.432868 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2m8vv\" (UniqueName: \"kubernetes.io/projected/ad8faa32-f620-4ded-b606-0009b5a92cb8-kube-api-access-2m8vv\") pod \"dnsmasq-dns-79f9fc56ff-r2nss\" (UID: \"ad8faa32-f620-4ded-b606-0009b5a92cb8\") " pod="openstack/dnsmasq-dns-79f9fc56ff-r2nss"
Mar 11 09:15:42 crc kubenswrapper[4840]: I0311 09:15:42.475746 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86bbd886cf-nc4gg"]
Mar 11 09:15:42 crc kubenswrapper[4840]: I0311 09:15:42.503479 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7c47bcb9f9-krwxw"]
Mar 11 09:15:42 crc kubenswrapper[4840]: I0311 09:15:42.505146 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c47bcb9f9-krwxw"
Mar 11 09:15:42 crc kubenswrapper[4840]: I0311 09:15:42.534348 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c47bcb9f9-krwxw"]
Mar 11 09:15:42 crc kubenswrapper[4840]: I0311 09:15:42.536244 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79f9fc56ff-r2nss" Mar 11 09:15:42 crc kubenswrapper[4840]: I0311 09:15:42.599362 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff691b46-da15-4bfc-9b37-833a8cd599a9-config\") pod \"dnsmasq-dns-7c47bcb9f9-krwxw\" (UID: \"ff691b46-da15-4bfc-9b37-833a8cd599a9\") " pod="openstack/dnsmasq-dns-7c47bcb9f9-krwxw" Mar 11 09:15:42 crc kubenswrapper[4840]: I0311 09:15:42.599406 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ff691b46-da15-4bfc-9b37-833a8cd599a9-dns-svc\") pod \"dnsmasq-dns-7c47bcb9f9-krwxw\" (UID: \"ff691b46-da15-4bfc-9b37-833a8cd599a9\") " pod="openstack/dnsmasq-dns-7c47bcb9f9-krwxw" Mar 11 09:15:42 crc kubenswrapper[4840]: I0311 09:15:42.599468 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvssb\" (UniqueName: \"kubernetes.io/projected/ff691b46-da15-4bfc-9b37-833a8cd599a9-kube-api-access-vvssb\") pod \"dnsmasq-dns-7c47bcb9f9-krwxw\" (UID: \"ff691b46-da15-4bfc-9b37-833a8cd599a9\") " pod="openstack/dnsmasq-dns-7c47bcb9f9-krwxw" Mar 11 09:15:42 crc kubenswrapper[4840]: I0311 09:15:42.700441 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff691b46-da15-4bfc-9b37-833a8cd599a9-config\") pod \"dnsmasq-dns-7c47bcb9f9-krwxw\" (UID: \"ff691b46-da15-4bfc-9b37-833a8cd599a9\") " pod="openstack/dnsmasq-dns-7c47bcb9f9-krwxw" Mar 11 09:15:42 crc kubenswrapper[4840]: I0311 09:15:42.700512 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ff691b46-da15-4bfc-9b37-833a8cd599a9-dns-svc\") pod \"dnsmasq-dns-7c47bcb9f9-krwxw\" (UID: \"ff691b46-da15-4bfc-9b37-833a8cd599a9\") " 
pod="openstack/dnsmasq-dns-7c47bcb9f9-krwxw" Mar 11 09:15:42 crc kubenswrapper[4840]: I0311 09:15:42.700588 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvssb\" (UniqueName: \"kubernetes.io/projected/ff691b46-da15-4bfc-9b37-833a8cd599a9-kube-api-access-vvssb\") pod \"dnsmasq-dns-7c47bcb9f9-krwxw\" (UID: \"ff691b46-da15-4bfc-9b37-833a8cd599a9\") " pod="openstack/dnsmasq-dns-7c47bcb9f9-krwxw" Mar 11 09:15:42 crc kubenswrapper[4840]: I0311 09:15:42.702015 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff691b46-da15-4bfc-9b37-833a8cd599a9-config\") pod \"dnsmasq-dns-7c47bcb9f9-krwxw\" (UID: \"ff691b46-da15-4bfc-9b37-833a8cd599a9\") " pod="openstack/dnsmasq-dns-7c47bcb9f9-krwxw" Mar 11 09:15:42 crc kubenswrapper[4840]: I0311 09:15:42.702036 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ff691b46-da15-4bfc-9b37-833a8cd599a9-dns-svc\") pod \"dnsmasq-dns-7c47bcb9f9-krwxw\" (UID: \"ff691b46-da15-4bfc-9b37-833a8cd599a9\") " pod="openstack/dnsmasq-dns-7c47bcb9f9-krwxw" Mar 11 09:15:42 crc kubenswrapper[4840]: I0311 09:15:42.764147 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvssb\" (UniqueName: \"kubernetes.io/projected/ff691b46-da15-4bfc-9b37-833a8cd599a9-kube-api-access-vvssb\") pod \"dnsmasq-dns-7c47bcb9f9-krwxw\" (UID: \"ff691b46-da15-4bfc-9b37-833a8cd599a9\") " pod="openstack/dnsmasq-dns-7c47bcb9f9-krwxw" Mar 11 09:15:42 crc kubenswrapper[4840]: I0311 09:15:42.841613 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7c47bcb9f9-krwxw" Mar 11 09:15:43 crc kubenswrapper[4840]: I0311 09:15:43.266499 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c47bcb9f9-krwxw"] Mar 11 09:15:43 crc kubenswrapper[4840]: W0311 09:15:43.275618 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podff691b46_da15_4bfc_9b37_833a8cd599a9.slice/crio-5ba5048199e31865dbf953275305d91ff3219e702305451fcfa5b05d2d2e5306 WatchSource:0}: Error finding container 5ba5048199e31865dbf953275305d91ff3219e702305451fcfa5b05d2d2e5306: Status 404 returned error can't find the container with id 5ba5048199e31865dbf953275305d91ff3219e702305451fcfa5b05d2d2e5306 Mar 11 09:15:43 crc kubenswrapper[4840]: I0311 09:15:43.355783 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Mar 11 09:15:43 crc kubenswrapper[4840]: I0311 09:15:43.357171 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 11 09:15:43 crc kubenswrapper[4840]: I0311 09:15:43.360269 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Mar 11 09:15:43 crc kubenswrapper[4840]: I0311 09:15:43.360565 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Mar 11 09:15:43 crc kubenswrapper[4840]: I0311 09:15:43.361186 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-mh5xd" Mar 11 09:15:43 crc kubenswrapper[4840]: I0311 09:15:43.361318 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Mar 11 09:15:43 crc kubenswrapper[4840]: I0311 09:15:43.361432 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Mar 11 09:15:43 crc kubenswrapper[4840]: I0311 09:15:43.361436 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Mar 11 09:15:43 crc kubenswrapper[4840]: I0311 09:15:43.362152 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Mar 11 09:15:43 crc kubenswrapper[4840]: I0311 09:15:43.365660 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 11 09:15:43 crc kubenswrapper[4840]: I0311 09:15:43.389990 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79f9fc56ff-r2nss"] Mar 11 09:15:43 crc kubenswrapper[4840]: I0311 09:15:43.515998 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f31748d2-64a9-4839-ac55-691d9682ee8e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"f31748d2-64a9-4839-ac55-691d9682ee8e\") " pod="openstack/rabbitmq-server-0" Mar 11 09:15:43 crc kubenswrapper[4840]: I0311 
09:15:43.516073 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"f31748d2-64a9-4839-ac55-691d9682ee8e\") " pod="openstack/rabbitmq-server-0" Mar 11 09:15:43 crc kubenswrapper[4840]: I0311 09:15:43.516106 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f31748d2-64a9-4839-ac55-691d9682ee8e-config-data\") pod \"rabbitmq-server-0\" (UID: \"f31748d2-64a9-4839-ac55-691d9682ee8e\") " pod="openstack/rabbitmq-server-0" Mar 11 09:15:43 crc kubenswrapper[4840]: I0311 09:15:43.516516 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hs2p\" (UniqueName: \"kubernetes.io/projected/f31748d2-64a9-4839-ac55-691d9682ee8e-kube-api-access-7hs2p\") pod \"rabbitmq-server-0\" (UID: \"f31748d2-64a9-4839-ac55-691d9682ee8e\") " pod="openstack/rabbitmq-server-0" Mar 11 09:15:43 crc kubenswrapper[4840]: I0311 09:15:43.516691 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f31748d2-64a9-4839-ac55-691d9682ee8e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"f31748d2-64a9-4839-ac55-691d9682ee8e\") " pod="openstack/rabbitmq-server-0" Mar 11 09:15:43 crc kubenswrapper[4840]: I0311 09:15:43.516845 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f31748d2-64a9-4839-ac55-691d9682ee8e-pod-info\") pod \"rabbitmq-server-0\" (UID: \"f31748d2-64a9-4839-ac55-691d9682ee8e\") " pod="openstack/rabbitmq-server-0" Mar 11 09:15:43 crc kubenswrapper[4840]: I0311 09:15:43.517085 4840 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f31748d2-64a9-4839-ac55-691d9682ee8e-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"f31748d2-64a9-4839-ac55-691d9682ee8e\") " pod="openstack/rabbitmq-server-0" Mar 11 09:15:43 crc kubenswrapper[4840]: I0311 09:15:43.517192 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f31748d2-64a9-4839-ac55-691d9682ee8e-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"f31748d2-64a9-4839-ac55-691d9682ee8e\") " pod="openstack/rabbitmq-server-0" Mar 11 09:15:43 crc kubenswrapper[4840]: I0311 09:15:43.517217 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f31748d2-64a9-4839-ac55-691d9682ee8e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"f31748d2-64a9-4839-ac55-691d9682ee8e\") " pod="openstack/rabbitmq-server-0" Mar 11 09:15:43 crc kubenswrapper[4840]: I0311 09:15:43.517232 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f31748d2-64a9-4839-ac55-691d9682ee8e-server-conf\") pod \"rabbitmq-server-0\" (UID: \"f31748d2-64a9-4839-ac55-691d9682ee8e\") " pod="openstack/rabbitmq-server-0" Mar 11 09:15:43 crc kubenswrapper[4840]: I0311 09:15:43.517261 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f31748d2-64a9-4839-ac55-691d9682ee8e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"f31748d2-64a9-4839-ac55-691d9682ee8e\") " pod="openstack/rabbitmq-server-0" Mar 11 09:15:43 crc kubenswrapper[4840]: I0311 09:15:43.619461 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f31748d2-64a9-4839-ac55-691d9682ee8e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"f31748d2-64a9-4839-ac55-691d9682ee8e\") " pod="openstack/rabbitmq-server-0" Mar 11 09:15:43 crc kubenswrapper[4840]: I0311 09:15:43.621939 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"f31748d2-64a9-4839-ac55-691d9682ee8e\") " pod="openstack/rabbitmq-server-0" Mar 11 09:15:43 crc kubenswrapper[4840]: I0311 09:15:43.622788 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f31748d2-64a9-4839-ac55-691d9682ee8e-config-data\") pod \"rabbitmq-server-0\" (UID: \"f31748d2-64a9-4839-ac55-691d9682ee8e\") " pod="openstack/rabbitmq-server-0" Mar 11 09:15:43 crc kubenswrapper[4840]: I0311 09:15:43.622953 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hs2p\" (UniqueName: \"kubernetes.io/projected/f31748d2-64a9-4839-ac55-691d9682ee8e-kube-api-access-7hs2p\") pod \"rabbitmq-server-0\" (UID: \"f31748d2-64a9-4839-ac55-691d9682ee8e\") " pod="openstack/rabbitmq-server-0" Mar 11 09:15:43 crc kubenswrapper[4840]: I0311 09:15:43.623207 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f31748d2-64a9-4839-ac55-691d9682ee8e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"f31748d2-64a9-4839-ac55-691d9682ee8e\") " pod="openstack/rabbitmq-server-0" Mar 11 09:15:43 crc kubenswrapper[4840]: I0311 09:15:43.623376 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f31748d2-64a9-4839-ac55-691d9682ee8e-pod-info\") pod \"rabbitmq-server-0\" (UID: 
\"f31748d2-64a9-4839-ac55-691d9682ee8e\") " pod="openstack/rabbitmq-server-0" Mar 11 09:15:43 crc kubenswrapper[4840]: I0311 09:15:43.623913 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f31748d2-64a9-4839-ac55-691d9682ee8e-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"f31748d2-64a9-4839-ac55-691d9682ee8e\") " pod="openstack/rabbitmq-server-0" Mar 11 09:15:43 crc kubenswrapper[4840]: I0311 09:15:43.622555 4840 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"f31748d2-64a9-4839-ac55-691d9682ee8e\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/rabbitmq-server-0" Mar 11 09:15:43 crc kubenswrapper[4840]: I0311 09:15:43.624126 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f31748d2-64a9-4839-ac55-691d9682ee8e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"f31748d2-64a9-4839-ac55-691d9682ee8e\") " pod="openstack/rabbitmq-server-0" Mar 11 09:15:43 crc kubenswrapper[4840]: I0311 09:15:43.625067 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f31748d2-64a9-4839-ac55-691d9682ee8e-config-data\") pod \"rabbitmq-server-0\" (UID: \"f31748d2-64a9-4839-ac55-691d9682ee8e\") " pod="openstack/rabbitmq-server-0" Mar 11 09:15:43 crc kubenswrapper[4840]: I0311 09:15:43.625775 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f31748d2-64a9-4839-ac55-691d9682ee8e-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"f31748d2-64a9-4839-ac55-691d9682ee8e\") " pod="openstack/rabbitmq-server-0" Mar 11 09:15:43 crc kubenswrapper[4840]: I0311 09:15:43.626397 4840 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f31748d2-64a9-4839-ac55-691d9682ee8e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"f31748d2-64a9-4839-ac55-691d9682ee8e\") " pod="openstack/rabbitmq-server-0" Mar 11 09:15:43 crc kubenswrapper[4840]: I0311 09:15:43.626420 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f31748d2-64a9-4839-ac55-691d9682ee8e-server-conf\") pod \"rabbitmq-server-0\" (UID: \"f31748d2-64a9-4839-ac55-691d9682ee8e\") " pod="openstack/rabbitmq-server-0" Mar 11 09:15:43 crc kubenswrapper[4840]: I0311 09:15:43.626632 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f31748d2-64a9-4839-ac55-691d9682ee8e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"f31748d2-64a9-4839-ac55-691d9682ee8e\") " pod="openstack/rabbitmq-server-0" Mar 11 09:15:43 crc kubenswrapper[4840]: I0311 09:15:43.627086 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f31748d2-64a9-4839-ac55-691d9682ee8e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"f31748d2-64a9-4839-ac55-691d9682ee8e\") " pod="openstack/rabbitmq-server-0" Mar 11 09:15:43 crc kubenswrapper[4840]: I0311 09:15:43.627418 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f31748d2-64a9-4839-ac55-691d9682ee8e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"f31748d2-64a9-4839-ac55-691d9682ee8e\") " pod="openstack/rabbitmq-server-0" Mar 11 09:15:43 crc kubenswrapper[4840]: I0311 09:15:43.628989 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f31748d2-64a9-4839-ac55-691d9682ee8e-pod-info\") pod 
\"rabbitmq-server-0\" (UID: \"f31748d2-64a9-4839-ac55-691d9682ee8e\") " pod="openstack/rabbitmq-server-0" Mar 11 09:15:43 crc kubenswrapper[4840]: I0311 09:15:43.630262 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f31748d2-64a9-4839-ac55-691d9682ee8e-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"f31748d2-64a9-4839-ac55-691d9682ee8e\") " pod="openstack/rabbitmq-server-0" Mar 11 09:15:43 crc kubenswrapper[4840]: I0311 09:15:43.631563 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f31748d2-64a9-4839-ac55-691d9682ee8e-server-conf\") pod \"rabbitmq-server-0\" (UID: \"f31748d2-64a9-4839-ac55-691d9682ee8e\") " pod="openstack/rabbitmq-server-0" Mar 11 09:15:43 crc kubenswrapper[4840]: I0311 09:15:43.640374 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f31748d2-64a9-4839-ac55-691d9682ee8e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"f31748d2-64a9-4839-ac55-691d9682ee8e\") " pod="openstack/rabbitmq-server-0" Mar 11 09:15:43 crc kubenswrapper[4840]: I0311 09:15:43.642258 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f31748d2-64a9-4839-ac55-691d9682ee8e-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"f31748d2-64a9-4839-ac55-691d9682ee8e\") " pod="openstack/rabbitmq-server-0" Mar 11 09:15:43 crc kubenswrapper[4840]: I0311 09:15:43.643121 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hs2p\" (UniqueName: \"kubernetes.io/projected/f31748d2-64a9-4839-ac55-691d9682ee8e-kube-api-access-7hs2p\") pod \"rabbitmq-server-0\" (UID: \"f31748d2-64a9-4839-ac55-691d9682ee8e\") " pod="openstack/rabbitmq-server-0" Mar 11 09:15:43 crc kubenswrapper[4840]: I0311 09:15:43.643566 4840 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 11 09:15:43 crc kubenswrapper[4840]: I0311 09:15:43.645008 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 11 09:15:43 crc kubenswrapper[4840]: I0311 09:15:43.655178 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Mar 11 09:15:43 crc kubenswrapper[4840]: I0311 09:15:43.655454 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Mar 11 09:15:43 crc kubenswrapper[4840]: I0311 09:15:43.655846 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Mar 11 09:15:43 crc kubenswrapper[4840]: I0311 09:15:43.656637 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Mar 11 09:15:43 crc kubenswrapper[4840]: I0311 09:15:43.658826 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-t64hs" Mar 11 09:15:43 crc kubenswrapper[4840]: I0311 09:15:43.659175 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Mar 11 09:15:43 crc kubenswrapper[4840]: I0311 09:15:43.659395 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Mar 11 09:15:43 crc kubenswrapper[4840]: I0311 09:15:43.684028 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 11 09:15:43 crc kubenswrapper[4840]: I0311 09:15:43.719167 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"f31748d2-64a9-4839-ac55-691d9682ee8e\") " pod="openstack/rabbitmq-server-0" Mar 11 09:15:43 crc 
kubenswrapper[4840]: I0311 09:15:43.727906 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3a964c24-3c53-4a29-98fb-ceaca467c372-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"3a964c24-3c53-4a29-98fb-ceaca467c372\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 09:15:43 crc kubenswrapper[4840]: I0311 09:15:43.728055 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3a964c24-3c53-4a29-98fb-ceaca467c372-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"3a964c24-3c53-4a29-98fb-ceaca467c372\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 09:15:43 crc kubenswrapper[4840]: I0311 09:15:43.728143 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3a964c24-3c53-4a29-98fb-ceaca467c372-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"3a964c24-3c53-4a29-98fb-ceaca467c372\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 09:15:43 crc kubenswrapper[4840]: I0311 09:15:43.728225 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6q45\" (UniqueName: \"kubernetes.io/projected/3a964c24-3c53-4a29-98fb-ceaca467c372-kube-api-access-r6q45\") pod \"rabbitmq-cell1-server-0\" (UID: \"3a964c24-3c53-4a29-98fb-ceaca467c372\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 09:15:43 crc kubenswrapper[4840]: I0311 09:15:43.728291 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3a964c24-3c53-4a29-98fb-ceaca467c372-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"3a964c24-3c53-4a29-98fb-ceaca467c372\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 09:15:43 
crc kubenswrapper[4840]: I0311 09:15:43.728367 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3a964c24-3c53-4a29-98fb-ceaca467c372-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"3a964c24-3c53-4a29-98fb-ceaca467c372\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 09:15:43 crc kubenswrapper[4840]: I0311 09:15:43.728389 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3a964c24-3c53-4a29-98fb-ceaca467c372-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"3a964c24-3c53-4a29-98fb-ceaca467c372\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 09:15:43 crc kubenswrapper[4840]: I0311 09:15:43.728457 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"3a964c24-3c53-4a29-98fb-ceaca467c372\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 09:15:43 crc kubenswrapper[4840]: I0311 09:15:43.728568 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3a964c24-3c53-4a29-98fb-ceaca467c372-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"3a964c24-3c53-4a29-98fb-ceaca467c372\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 09:15:43 crc kubenswrapper[4840]: I0311 09:15:43.728617 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3a964c24-3c53-4a29-98fb-ceaca467c372-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"3a964c24-3c53-4a29-98fb-ceaca467c372\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 09:15:43 crc kubenswrapper[4840]: I0311 
09:15:43.728728 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3a964c24-3c53-4a29-98fb-ceaca467c372-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"3a964c24-3c53-4a29-98fb-ceaca467c372\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 09:15:43 crc kubenswrapper[4840]: I0311 09:15:43.830149 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3a964c24-3c53-4a29-98fb-ceaca467c372-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"3a964c24-3c53-4a29-98fb-ceaca467c372\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 09:15:43 crc kubenswrapper[4840]: I0311 09:15:43.830276 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3a964c24-3c53-4a29-98fb-ceaca467c372-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"3a964c24-3c53-4a29-98fb-ceaca467c372\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 09:15:43 crc kubenswrapper[4840]: I0311 09:15:43.830303 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3a964c24-3c53-4a29-98fb-ceaca467c372-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"3a964c24-3c53-4a29-98fb-ceaca467c372\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 09:15:43 crc kubenswrapper[4840]: I0311 09:15:43.830334 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3a964c24-3c53-4a29-98fb-ceaca467c372-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"3a964c24-3c53-4a29-98fb-ceaca467c372\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 09:15:43 crc kubenswrapper[4840]: I0311 09:15:43.830361 4840 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-r6q45\" (UniqueName: \"kubernetes.io/projected/3a964c24-3c53-4a29-98fb-ceaca467c372-kube-api-access-r6q45\") pod \"rabbitmq-cell1-server-0\" (UID: \"3a964c24-3c53-4a29-98fb-ceaca467c372\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 09:15:43 crc kubenswrapper[4840]: I0311 09:15:43.830381 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3a964c24-3c53-4a29-98fb-ceaca467c372-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"3a964c24-3c53-4a29-98fb-ceaca467c372\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 09:15:43 crc kubenswrapper[4840]: I0311 09:15:43.830404 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3a964c24-3c53-4a29-98fb-ceaca467c372-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"3a964c24-3c53-4a29-98fb-ceaca467c372\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 09:15:43 crc kubenswrapper[4840]: I0311 09:15:43.830423 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3a964c24-3c53-4a29-98fb-ceaca467c372-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"3a964c24-3c53-4a29-98fb-ceaca467c372\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 09:15:43 crc kubenswrapper[4840]: I0311 09:15:43.830448 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"3a964c24-3c53-4a29-98fb-ceaca467c372\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 09:15:43 crc kubenswrapper[4840]: I0311 09:15:43.830477 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/3a964c24-3c53-4a29-98fb-ceaca467c372-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"3a964c24-3c53-4a29-98fb-ceaca467c372\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 09:15:43 crc kubenswrapper[4840]: I0311 09:15:43.830512 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3a964c24-3c53-4a29-98fb-ceaca467c372-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"3a964c24-3c53-4a29-98fb-ceaca467c372\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 09:15:43 crc kubenswrapper[4840]: I0311 09:15:43.833047 4840 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"3a964c24-3c53-4a29-98fb-ceaca467c372\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/rabbitmq-cell1-server-0" Mar 11 09:15:43 crc kubenswrapper[4840]: I0311 09:15:43.833214 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3a964c24-3c53-4a29-98fb-ceaca467c372-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"3a964c24-3c53-4a29-98fb-ceaca467c372\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 09:15:43 crc kubenswrapper[4840]: I0311 09:15:43.833616 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3a964c24-3c53-4a29-98fb-ceaca467c372-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"3a964c24-3c53-4a29-98fb-ceaca467c372\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 09:15:43 crc kubenswrapper[4840]: I0311 09:15:43.834217 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3a964c24-3c53-4a29-98fb-ceaca467c372-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"3a964c24-3c53-4a29-98fb-ceaca467c372\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 09:15:43 crc kubenswrapper[4840]: I0311 09:15:43.834605 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3a964c24-3c53-4a29-98fb-ceaca467c372-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"3a964c24-3c53-4a29-98fb-ceaca467c372\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 09:15:43 crc kubenswrapper[4840]: I0311 09:15:43.835712 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3a964c24-3c53-4a29-98fb-ceaca467c372-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"3a964c24-3c53-4a29-98fb-ceaca467c372\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 09:15:43 crc kubenswrapper[4840]: I0311 09:15:43.838333 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3a964c24-3c53-4a29-98fb-ceaca467c372-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"3a964c24-3c53-4a29-98fb-ceaca467c372\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 09:15:43 crc kubenswrapper[4840]: I0311 09:15:43.842055 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3a964c24-3c53-4a29-98fb-ceaca467c372-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"3a964c24-3c53-4a29-98fb-ceaca467c372\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 09:15:43 crc kubenswrapper[4840]: I0311 09:15:43.846310 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3a964c24-3c53-4a29-98fb-ceaca467c372-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"3a964c24-3c53-4a29-98fb-ceaca467c372\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 09:15:43 crc kubenswrapper[4840]: 
I0311 09:15:43.858196 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3a964c24-3c53-4a29-98fb-ceaca467c372-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"3a964c24-3c53-4a29-98fb-ceaca467c372\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 09:15:43 crc kubenswrapper[4840]: I0311 09:15:43.860037 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6q45\" (UniqueName: \"kubernetes.io/projected/3a964c24-3c53-4a29-98fb-ceaca467c372-kube-api-access-r6q45\") pod \"rabbitmq-cell1-server-0\" (UID: \"3a964c24-3c53-4a29-98fb-ceaca467c372\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 09:15:43 crc kubenswrapper[4840]: I0311 09:15:43.865223 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"3a964c24-3c53-4a29-98fb-ceaca467c372\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 09:15:44 crc kubenswrapper[4840]: I0311 09:15:44.003720 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 11 09:15:44 crc kubenswrapper[4840]: I0311 09:15:44.087004 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 11 09:15:44 crc kubenswrapper[4840]: I0311 09:15:44.239887 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79f9fc56ff-r2nss" event={"ID":"ad8faa32-f620-4ded-b606-0009b5a92cb8","Type":"ContainerStarted","Data":"0c7272e641e20d83077264f1a97810bff882fc8c725a7410984c968d88e33c65"} Mar 11 09:15:44 crc kubenswrapper[4840]: I0311 09:15:44.247443 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c47bcb9f9-krwxw" event={"ID":"ff691b46-da15-4bfc-9b37-833a8cd599a9","Type":"ContainerStarted","Data":"5ba5048199e31865dbf953275305d91ff3219e702305451fcfa5b05d2d2e5306"} Mar 11 09:15:44 crc kubenswrapper[4840]: I0311 09:15:44.608405 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 11 09:15:44 crc kubenswrapper[4840]: I0311 09:15:44.667631 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 11 09:15:44 crc kubenswrapper[4840]: I0311 09:15:44.712516 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Mar 11 09:15:44 crc kubenswrapper[4840]: I0311 09:15:44.714073 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Mar 11 09:15:44 crc kubenswrapper[4840]: I0311 09:15:44.717994 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-29ncv" Mar 11 09:15:44 crc kubenswrapper[4840]: I0311 09:15:44.718933 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Mar 11 09:15:44 crc kubenswrapper[4840]: I0311 09:15:44.719472 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Mar 11 09:15:44 crc kubenswrapper[4840]: I0311 09:15:44.725798 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Mar 11 09:15:44 crc kubenswrapper[4840]: I0311 09:15:44.728372 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Mar 11 09:15:44 crc kubenswrapper[4840]: I0311 09:15:44.729677 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Mar 11 09:15:44 crc kubenswrapper[4840]: I0311 09:15:44.752010 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c6f4129d-4bc4-449b-be94-82fce07cf1f0-kolla-config\") pod \"openstack-galera-0\" (UID: \"c6f4129d-4bc4-449b-be94-82fce07cf1f0\") " pod="openstack/openstack-galera-0" Mar 11 09:15:44 crc kubenswrapper[4840]: I0311 09:15:44.752079 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c6f4129d-4bc4-449b-be94-82fce07cf1f0-config-data-default\") pod \"openstack-galera-0\" (UID: \"c6f4129d-4bc4-449b-be94-82fce07cf1f0\") " pod="openstack/openstack-galera-0" Mar 11 09:15:44 crc kubenswrapper[4840]: I0311 09:15:44.752116 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-galera-0\" (UID: \"c6f4129d-4bc4-449b-be94-82fce07cf1f0\") " pod="openstack/openstack-galera-0" Mar 11 09:15:44 crc kubenswrapper[4840]: I0311 09:15:44.752138 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6f4129d-4bc4-449b-be94-82fce07cf1f0-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"c6f4129d-4bc4-449b-be94-82fce07cf1f0\") " pod="openstack/openstack-galera-0" Mar 11 09:15:44 crc kubenswrapper[4840]: I0311 09:15:44.752183 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c6f4129d-4bc4-449b-be94-82fce07cf1f0-config-data-generated\") pod \"openstack-galera-0\" (UID: \"c6f4129d-4bc4-449b-be94-82fce07cf1f0\") " pod="openstack/openstack-galera-0" Mar 11 09:15:44 crc kubenswrapper[4840]: I0311 09:15:44.752222 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6csbr\" (UniqueName: \"kubernetes.io/projected/c6f4129d-4bc4-449b-be94-82fce07cf1f0-kube-api-access-6csbr\") pod \"openstack-galera-0\" (UID: \"c6f4129d-4bc4-449b-be94-82fce07cf1f0\") " pod="openstack/openstack-galera-0" Mar 11 09:15:44 crc kubenswrapper[4840]: I0311 09:15:44.752247 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6f4129d-4bc4-449b-be94-82fce07cf1f0-operator-scripts\") pod \"openstack-galera-0\" (UID: \"c6f4129d-4bc4-449b-be94-82fce07cf1f0\") " pod="openstack/openstack-galera-0" Mar 11 09:15:44 crc kubenswrapper[4840]: I0311 09:15:44.752264 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c6f4129d-4bc4-449b-be94-82fce07cf1f0-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"c6f4129d-4bc4-449b-be94-82fce07cf1f0\") " pod="openstack/openstack-galera-0" Mar 11 09:15:44 crc kubenswrapper[4840]: I0311 09:15:44.853454 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c6f4129d-4bc4-449b-be94-82fce07cf1f0-kolla-config\") pod \"openstack-galera-0\" (UID: \"c6f4129d-4bc4-449b-be94-82fce07cf1f0\") " pod="openstack/openstack-galera-0" Mar 11 09:15:44 crc kubenswrapper[4840]: I0311 09:15:44.853547 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c6f4129d-4bc4-449b-be94-82fce07cf1f0-config-data-default\") pod \"openstack-galera-0\" (UID: \"c6f4129d-4bc4-449b-be94-82fce07cf1f0\") " pod="openstack/openstack-galera-0" Mar 11 09:15:44 crc kubenswrapper[4840]: I0311 09:15:44.853594 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-galera-0\" (UID: \"c6f4129d-4bc4-449b-be94-82fce07cf1f0\") " pod="openstack/openstack-galera-0" Mar 11 09:15:44 crc kubenswrapper[4840]: I0311 09:15:44.853621 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6f4129d-4bc4-449b-be94-82fce07cf1f0-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"c6f4129d-4bc4-449b-be94-82fce07cf1f0\") " pod="openstack/openstack-galera-0" Mar 11 09:15:44 crc kubenswrapper[4840]: I0311 09:15:44.853674 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c6f4129d-4bc4-449b-be94-82fce07cf1f0-config-data-generated\") pod \"openstack-galera-0\" (UID: \"c6f4129d-4bc4-449b-be94-82fce07cf1f0\") 
" pod="openstack/openstack-galera-0" Mar 11 09:15:44 crc kubenswrapper[4840]: I0311 09:15:44.853707 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6csbr\" (UniqueName: \"kubernetes.io/projected/c6f4129d-4bc4-449b-be94-82fce07cf1f0-kube-api-access-6csbr\") pod \"openstack-galera-0\" (UID: \"c6f4129d-4bc4-449b-be94-82fce07cf1f0\") " pod="openstack/openstack-galera-0" Mar 11 09:15:44 crc kubenswrapper[4840]: I0311 09:15:44.853740 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6f4129d-4bc4-449b-be94-82fce07cf1f0-operator-scripts\") pod \"openstack-galera-0\" (UID: \"c6f4129d-4bc4-449b-be94-82fce07cf1f0\") " pod="openstack/openstack-galera-0" Mar 11 09:15:44 crc kubenswrapper[4840]: I0311 09:15:44.853765 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6f4129d-4bc4-449b-be94-82fce07cf1f0-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"c6f4129d-4bc4-449b-be94-82fce07cf1f0\") " pod="openstack/openstack-galera-0" Mar 11 09:15:44 crc kubenswrapper[4840]: I0311 09:15:44.856296 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c6f4129d-4bc4-449b-be94-82fce07cf1f0-config-data-generated\") pod \"openstack-galera-0\" (UID: \"c6f4129d-4bc4-449b-be94-82fce07cf1f0\") " pod="openstack/openstack-galera-0" Mar 11 09:15:44 crc kubenswrapper[4840]: I0311 09:15:44.857337 4840 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-galera-0\" (UID: \"c6f4129d-4bc4-449b-be94-82fce07cf1f0\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/openstack-galera-0" Mar 11 09:15:44 crc kubenswrapper[4840]: I0311 09:15:44.857387 4840 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c6f4129d-4bc4-449b-be94-82fce07cf1f0-kolla-config\") pod \"openstack-galera-0\" (UID: \"c6f4129d-4bc4-449b-be94-82fce07cf1f0\") " pod="openstack/openstack-galera-0" Mar 11 09:15:44 crc kubenswrapper[4840]: I0311 09:15:44.859521 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6f4129d-4bc4-449b-be94-82fce07cf1f0-operator-scripts\") pod \"openstack-galera-0\" (UID: \"c6f4129d-4bc4-449b-be94-82fce07cf1f0\") " pod="openstack/openstack-galera-0" Mar 11 09:15:44 crc kubenswrapper[4840]: I0311 09:15:44.862290 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c6f4129d-4bc4-449b-be94-82fce07cf1f0-config-data-default\") pod \"openstack-galera-0\" (UID: \"c6f4129d-4bc4-449b-be94-82fce07cf1f0\") " pod="openstack/openstack-galera-0" Mar 11 09:15:44 crc kubenswrapper[4840]: I0311 09:15:44.863678 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6f4129d-4bc4-449b-be94-82fce07cf1f0-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"c6f4129d-4bc4-449b-be94-82fce07cf1f0\") " pod="openstack/openstack-galera-0" Mar 11 09:15:44 crc kubenswrapper[4840]: I0311 09:15:44.870665 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6f4129d-4bc4-449b-be94-82fce07cf1f0-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"c6f4129d-4bc4-449b-be94-82fce07cf1f0\") " pod="openstack/openstack-galera-0" Mar 11 09:15:44 crc kubenswrapper[4840]: I0311 09:15:44.882933 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6csbr\" (UniqueName: 
\"kubernetes.io/projected/c6f4129d-4bc4-449b-be94-82fce07cf1f0-kube-api-access-6csbr\") pod \"openstack-galera-0\" (UID: \"c6f4129d-4bc4-449b-be94-82fce07cf1f0\") " pod="openstack/openstack-galera-0" Mar 11 09:15:44 crc kubenswrapper[4840]: I0311 09:15:44.890769 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-galera-0\" (UID: \"c6f4129d-4bc4-449b-be94-82fce07cf1f0\") " pod="openstack/openstack-galera-0" Mar 11 09:15:45 crc kubenswrapper[4840]: I0311 09:15:45.041855 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Mar 11 09:15:45 crc kubenswrapper[4840]: I0311 09:15:45.916338 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Mar 11 09:15:45 crc kubenswrapper[4840]: I0311 09:15:45.917954 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Mar 11 09:15:45 crc kubenswrapper[4840]: I0311 09:15:45.922432 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Mar 11 09:15:45 crc kubenswrapper[4840]: I0311 09:15:45.922739 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-ntlhs" Mar 11 09:15:45 crc kubenswrapper[4840]: I0311 09:15:45.923517 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Mar 11 09:15:45 crc kubenswrapper[4840]: I0311 09:15:45.923745 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Mar 11 09:15:45 crc kubenswrapper[4840]: I0311 09:15:45.953901 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Mar 11 09:15:46 crc kubenswrapper[4840]: I0311 09:15:46.084735 4840 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e93bc954-4d65-4b5f-8070-2b9800ff3db2-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"e93bc954-4d65-4b5f-8070-2b9800ff3db2\") " pod="openstack/openstack-cell1-galera-0" Mar 11 09:15:46 crc kubenswrapper[4840]: I0311 09:15:46.085113 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/e93bc954-4d65-4b5f-8070-2b9800ff3db2-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"e93bc954-4d65-4b5f-8070-2b9800ff3db2\") " pod="openstack/openstack-cell1-galera-0" Mar 11 09:15:46 crc kubenswrapper[4840]: I0311 09:15:46.085265 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e93bc954-4d65-4b5f-8070-2b9800ff3db2-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"e93bc954-4d65-4b5f-8070-2b9800ff3db2\") " pod="openstack/openstack-cell1-galera-0" Mar 11 09:15:46 crc kubenswrapper[4840]: I0311 09:15:46.085300 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lf8xv\" (UniqueName: \"kubernetes.io/projected/e93bc954-4d65-4b5f-8070-2b9800ff3db2-kube-api-access-lf8xv\") pod \"openstack-cell1-galera-0\" (UID: \"e93bc954-4d65-4b5f-8070-2b9800ff3db2\") " pod="openstack/openstack-cell1-galera-0" Mar 11 09:15:46 crc kubenswrapper[4840]: I0311 09:15:46.085336 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-cell1-galera-0\" (UID: \"e93bc954-4d65-4b5f-8070-2b9800ff3db2\") " pod="openstack/openstack-cell1-galera-0" Mar 11 09:15:46 crc kubenswrapper[4840]: I0311 09:15:46.085367 4840 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e93bc954-4d65-4b5f-8070-2b9800ff3db2-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"e93bc954-4d65-4b5f-8070-2b9800ff3db2\") " pod="openstack/openstack-cell1-galera-0" Mar 11 09:15:46 crc kubenswrapper[4840]: I0311 09:15:46.085415 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e93bc954-4d65-4b5f-8070-2b9800ff3db2-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"e93bc954-4d65-4b5f-8070-2b9800ff3db2\") " pod="openstack/openstack-cell1-galera-0" Mar 11 09:15:46 crc kubenswrapper[4840]: I0311 09:15:46.085438 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e93bc954-4d65-4b5f-8070-2b9800ff3db2-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"e93bc954-4d65-4b5f-8070-2b9800ff3db2\") " pod="openstack/openstack-cell1-galera-0" Mar 11 09:15:46 crc kubenswrapper[4840]: I0311 09:15:46.132057 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Mar 11 09:15:46 crc kubenswrapper[4840]: I0311 09:15:46.133133 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Mar 11 09:15:46 crc kubenswrapper[4840]: I0311 09:15:46.135864 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Mar 11 09:15:46 crc kubenswrapper[4840]: I0311 09:15:46.135917 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-ss59s" Mar 11 09:15:46 crc kubenswrapper[4840]: I0311 09:15:46.136238 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Mar 11 09:15:46 crc kubenswrapper[4840]: I0311 09:15:46.169937 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Mar 11 09:15:46 crc kubenswrapper[4840]: I0311 09:15:46.187376 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e93bc954-4d65-4b5f-8070-2b9800ff3db2-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"e93bc954-4d65-4b5f-8070-2b9800ff3db2\") " pod="openstack/openstack-cell1-galera-0" Mar 11 09:15:46 crc kubenswrapper[4840]: I0311 09:15:46.187453 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/e93bc954-4d65-4b5f-8070-2b9800ff3db2-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"e93bc954-4d65-4b5f-8070-2b9800ff3db2\") " pod="openstack/openstack-cell1-galera-0" Mar 11 09:15:46 crc kubenswrapper[4840]: I0311 09:15:46.187511 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e93bc954-4d65-4b5f-8070-2b9800ff3db2-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"e93bc954-4d65-4b5f-8070-2b9800ff3db2\") " pod="openstack/openstack-cell1-galera-0" Mar 11 09:15:46 crc kubenswrapper[4840]: I0311 09:15:46.187544 4840 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-lf8xv\" (UniqueName: \"kubernetes.io/projected/e93bc954-4d65-4b5f-8070-2b9800ff3db2-kube-api-access-lf8xv\") pod \"openstack-cell1-galera-0\" (UID: \"e93bc954-4d65-4b5f-8070-2b9800ff3db2\") " pod="openstack/openstack-cell1-galera-0" Mar 11 09:15:46 crc kubenswrapper[4840]: I0311 09:15:46.187615 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-cell1-galera-0\" (UID: \"e93bc954-4d65-4b5f-8070-2b9800ff3db2\") " pod="openstack/openstack-cell1-galera-0" Mar 11 09:15:46 crc kubenswrapper[4840]: I0311 09:15:46.187665 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e93bc954-4d65-4b5f-8070-2b9800ff3db2-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"e93bc954-4d65-4b5f-8070-2b9800ff3db2\") " pod="openstack/openstack-cell1-galera-0" Mar 11 09:15:46 crc kubenswrapper[4840]: I0311 09:15:46.187708 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e93bc954-4d65-4b5f-8070-2b9800ff3db2-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"e93bc954-4d65-4b5f-8070-2b9800ff3db2\") " pod="openstack/openstack-cell1-galera-0" Mar 11 09:15:46 crc kubenswrapper[4840]: I0311 09:15:46.187767 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e93bc954-4d65-4b5f-8070-2b9800ff3db2-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"e93bc954-4d65-4b5f-8070-2b9800ff3db2\") " pod="openstack/openstack-cell1-galera-0" Mar 11 09:15:46 crc kubenswrapper[4840]: I0311 09:15:46.189319 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: 
\"kubernetes.io/empty-dir/e93bc954-4d65-4b5f-8070-2b9800ff3db2-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"e93bc954-4d65-4b5f-8070-2b9800ff3db2\") " pod="openstack/openstack-cell1-galera-0" Mar 11 09:15:46 crc kubenswrapper[4840]: I0311 09:15:46.189745 4840 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-cell1-galera-0\" (UID: \"e93bc954-4d65-4b5f-8070-2b9800ff3db2\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/openstack-cell1-galera-0" Mar 11 09:15:46 crc kubenswrapper[4840]: I0311 09:15:46.191177 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e93bc954-4d65-4b5f-8070-2b9800ff3db2-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"e93bc954-4d65-4b5f-8070-2b9800ff3db2\") " pod="openstack/openstack-cell1-galera-0" Mar 11 09:15:46 crc kubenswrapper[4840]: I0311 09:15:46.191452 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e93bc954-4d65-4b5f-8070-2b9800ff3db2-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"e93bc954-4d65-4b5f-8070-2b9800ff3db2\") " pod="openstack/openstack-cell1-galera-0" Mar 11 09:15:46 crc kubenswrapper[4840]: I0311 09:15:46.191994 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e93bc954-4d65-4b5f-8070-2b9800ff3db2-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"e93bc954-4d65-4b5f-8070-2b9800ff3db2\") " pod="openstack/openstack-cell1-galera-0" Mar 11 09:15:46 crc kubenswrapper[4840]: I0311 09:15:46.206577 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/e93bc954-4d65-4b5f-8070-2b9800ff3db2-galera-tls-certs\") pod 
\"openstack-cell1-galera-0\" (UID: \"e93bc954-4d65-4b5f-8070-2b9800ff3db2\") " pod="openstack/openstack-cell1-galera-0" Mar 11 09:15:46 crc kubenswrapper[4840]: I0311 09:15:46.212055 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e93bc954-4d65-4b5f-8070-2b9800ff3db2-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"e93bc954-4d65-4b5f-8070-2b9800ff3db2\") " pod="openstack/openstack-cell1-galera-0" Mar 11 09:15:46 crc kubenswrapper[4840]: I0311 09:15:46.229263 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lf8xv\" (UniqueName: \"kubernetes.io/projected/e93bc954-4d65-4b5f-8070-2b9800ff3db2-kube-api-access-lf8xv\") pod \"openstack-cell1-galera-0\" (UID: \"e93bc954-4d65-4b5f-8070-2b9800ff3db2\") " pod="openstack/openstack-cell1-galera-0" Mar 11 09:15:46 crc kubenswrapper[4840]: I0311 09:15:46.233334 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-cell1-galera-0\" (UID: \"e93bc954-4d65-4b5f-8070-2b9800ff3db2\") " pod="openstack/openstack-cell1-galera-0" Mar 11 09:15:46 crc kubenswrapper[4840]: I0311 09:15:46.280704 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Mar 11 09:15:46 crc kubenswrapper[4840]: I0311 09:15:46.288870 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/c44e0641-be37-4447-9666-14bf00c08827-memcached-tls-certs\") pod \"memcached-0\" (UID: \"c44e0641-be37-4447-9666-14bf00c08827\") " pod="openstack/memcached-0" Mar 11 09:15:46 crc kubenswrapper[4840]: I0311 09:15:46.288950 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmxxk\" (UniqueName: \"kubernetes.io/projected/c44e0641-be37-4447-9666-14bf00c08827-kube-api-access-cmxxk\") pod \"memcached-0\" (UID: \"c44e0641-be37-4447-9666-14bf00c08827\") " pod="openstack/memcached-0" Mar 11 09:15:46 crc kubenswrapper[4840]: I0311 09:15:46.289031 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c44e0641-be37-4447-9666-14bf00c08827-config-data\") pod \"memcached-0\" (UID: \"c44e0641-be37-4447-9666-14bf00c08827\") " pod="openstack/memcached-0" Mar 11 09:15:46 crc kubenswrapper[4840]: I0311 09:15:46.289135 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c44e0641-be37-4447-9666-14bf00c08827-kolla-config\") pod \"memcached-0\" (UID: \"c44e0641-be37-4447-9666-14bf00c08827\") " pod="openstack/memcached-0" Mar 11 09:15:46 crc kubenswrapper[4840]: I0311 09:15:46.289180 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c44e0641-be37-4447-9666-14bf00c08827-combined-ca-bundle\") pod \"memcached-0\" (UID: \"c44e0641-be37-4447-9666-14bf00c08827\") " pod="openstack/memcached-0" Mar 11 09:15:46 crc kubenswrapper[4840]: I0311 
09:15:46.390689 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c44e0641-be37-4447-9666-14bf00c08827-config-data\") pod \"memcached-0\" (UID: \"c44e0641-be37-4447-9666-14bf00c08827\") " pod="openstack/memcached-0" Mar 11 09:15:46 crc kubenswrapper[4840]: I0311 09:15:46.390768 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c44e0641-be37-4447-9666-14bf00c08827-kolla-config\") pod \"memcached-0\" (UID: \"c44e0641-be37-4447-9666-14bf00c08827\") " pod="openstack/memcached-0" Mar 11 09:15:46 crc kubenswrapper[4840]: I0311 09:15:46.390793 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c44e0641-be37-4447-9666-14bf00c08827-combined-ca-bundle\") pod \"memcached-0\" (UID: \"c44e0641-be37-4447-9666-14bf00c08827\") " pod="openstack/memcached-0" Mar 11 09:15:46 crc kubenswrapper[4840]: I0311 09:15:46.390938 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/c44e0641-be37-4447-9666-14bf00c08827-memcached-tls-certs\") pod \"memcached-0\" (UID: \"c44e0641-be37-4447-9666-14bf00c08827\") " pod="openstack/memcached-0" Mar 11 09:15:46 crc kubenswrapper[4840]: I0311 09:15:46.390978 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmxxk\" (UniqueName: \"kubernetes.io/projected/c44e0641-be37-4447-9666-14bf00c08827-kube-api-access-cmxxk\") pod \"memcached-0\" (UID: \"c44e0641-be37-4447-9666-14bf00c08827\") " pod="openstack/memcached-0" Mar 11 09:15:46 crc kubenswrapper[4840]: I0311 09:15:46.391674 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c44e0641-be37-4447-9666-14bf00c08827-config-data\") pod 
\"memcached-0\" (UID: \"c44e0641-be37-4447-9666-14bf00c08827\") " pod="openstack/memcached-0" Mar 11 09:15:46 crc kubenswrapper[4840]: I0311 09:15:46.391875 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c44e0641-be37-4447-9666-14bf00c08827-kolla-config\") pod \"memcached-0\" (UID: \"c44e0641-be37-4447-9666-14bf00c08827\") " pod="openstack/memcached-0" Mar 11 09:15:46 crc kubenswrapper[4840]: I0311 09:15:46.402800 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c44e0641-be37-4447-9666-14bf00c08827-combined-ca-bundle\") pod \"memcached-0\" (UID: \"c44e0641-be37-4447-9666-14bf00c08827\") " pod="openstack/memcached-0" Mar 11 09:15:46 crc kubenswrapper[4840]: I0311 09:15:46.402840 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/c44e0641-be37-4447-9666-14bf00c08827-memcached-tls-certs\") pod \"memcached-0\" (UID: \"c44e0641-be37-4447-9666-14bf00c08827\") " pod="openstack/memcached-0" Mar 11 09:15:46 crc kubenswrapper[4840]: I0311 09:15:46.410878 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmxxk\" (UniqueName: \"kubernetes.io/projected/c44e0641-be37-4447-9666-14bf00c08827-kube-api-access-cmxxk\") pod \"memcached-0\" (UID: \"c44e0641-be37-4447-9666-14bf00c08827\") " pod="openstack/memcached-0" Mar 11 09:15:46 crc kubenswrapper[4840]: I0311 09:15:46.467786 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Mar 11 09:15:48 crc kubenswrapper[4840]: I0311 09:15:48.469687 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Mar 11 09:15:48 crc kubenswrapper[4840]: I0311 09:15:48.471103 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 11 09:15:48 crc kubenswrapper[4840]: I0311 09:15:48.473446 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-65gcb" Mar 11 09:15:48 crc kubenswrapper[4840]: I0311 09:15:48.485071 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 11 09:15:48 crc kubenswrapper[4840]: I0311 09:15:48.640135 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrt7c\" (UniqueName: \"kubernetes.io/projected/8346e561-a5cc-4bf9-807e-8837c8e13007-kube-api-access-vrt7c\") pod \"kube-state-metrics-0\" (UID: \"8346e561-a5cc-4bf9-807e-8837c8e13007\") " pod="openstack/kube-state-metrics-0" Mar 11 09:15:48 crc kubenswrapper[4840]: I0311 09:15:48.742317 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrt7c\" (UniqueName: \"kubernetes.io/projected/8346e561-a5cc-4bf9-807e-8837c8e13007-kube-api-access-vrt7c\") pod \"kube-state-metrics-0\" (UID: \"8346e561-a5cc-4bf9-807e-8837c8e13007\") " pod="openstack/kube-state-metrics-0" Mar 11 09:15:48 crc kubenswrapper[4840]: I0311 09:15:48.765972 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrt7c\" (UniqueName: \"kubernetes.io/projected/8346e561-a5cc-4bf9-807e-8837c8e13007-kube-api-access-vrt7c\") pod \"kube-state-metrics-0\" (UID: \"8346e561-a5cc-4bf9-807e-8837c8e13007\") " pod="openstack/kube-state-metrics-0" Mar 11 09:15:48 crc kubenswrapper[4840]: I0311 09:15:48.792050 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 11 09:15:50 crc kubenswrapper[4840]: I0311 09:15:50.339116 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f31748d2-64a9-4839-ac55-691d9682ee8e","Type":"ContainerStarted","Data":"162383aaff69e599f2ef0b4c20753017d692dc71c1fda750efcad8d28187360c"} Mar 11 09:15:50 crc kubenswrapper[4840]: W0311 09:15:50.470617 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3a964c24_3c53_4a29_98fb_ceaca467c372.slice/crio-49e11ae21c1d92f20c46632e1af3f03b92def2a83af164cd1422ae7fc9f596dd WatchSource:0}: Error finding container 49e11ae21c1d92f20c46632e1af3f03b92def2a83af164cd1422ae7fc9f596dd: Status 404 returned error can't find the container with id 49e11ae21c1d92f20c46632e1af3f03b92def2a83af164cd1422ae7fc9f596dd Mar 11 09:15:51 crc kubenswrapper[4840]: I0311 09:15:51.304319 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-g2p7c"] Mar 11 09:15:51 crc kubenswrapper[4840]: I0311 09:15:51.305833 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-g2p7c" Mar 11 09:15:51 crc kubenswrapper[4840]: I0311 09:15:51.309550 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Mar 11 09:15:51 crc kubenswrapper[4840]: I0311 09:15:51.309855 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Mar 11 09:15:51 crc kubenswrapper[4840]: I0311 09:15:51.310004 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-jp886" Mar 11 09:15:51 crc kubenswrapper[4840]: I0311 09:15:51.322371 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-qtcdv"] Mar 11 09:15:51 crc kubenswrapper[4840]: I0311 09:15:51.324602 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-qtcdv" Mar 11 09:15:51 crc kubenswrapper[4840]: I0311 09:15:51.338547 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-g2p7c"] Mar 11 09:15:51 crc kubenswrapper[4840]: I0311 09:15:51.351209 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"3a964c24-3c53-4a29-98fb-ceaca467c372","Type":"ContainerStarted","Data":"49e11ae21c1d92f20c46632e1af3f03b92def2a83af164cd1422ae7fc9f596dd"} Mar 11 09:15:51 crc kubenswrapper[4840]: I0311 09:15:51.366154 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-qtcdv"] Mar 11 09:15:51 crc kubenswrapper[4840]: I0311 09:15:51.392737 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/056df7c0-d577-4908-91a8-b5dfb95e0316-var-run-ovn\") pod \"ovn-controller-g2p7c\" (UID: \"056df7c0-d577-4908-91a8-b5dfb95e0316\") " pod="openstack/ovn-controller-g2p7c" Mar 11 09:15:51 crc kubenswrapper[4840]: I0311 
09:15:51.393212 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/056df7c0-d577-4908-91a8-b5dfb95e0316-combined-ca-bundle\") pod \"ovn-controller-g2p7c\" (UID: \"056df7c0-d577-4908-91a8-b5dfb95e0316\") " pod="openstack/ovn-controller-g2p7c" Mar 11 09:15:51 crc kubenswrapper[4840]: I0311 09:15:51.393432 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/b58b63e6-0eb4-444e-be2e-dca6bf37030e-etc-ovs\") pod \"ovn-controller-ovs-qtcdv\" (UID: \"b58b63e6-0eb4-444e-be2e-dca6bf37030e\") " pod="openstack/ovn-controller-ovs-qtcdv" Mar 11 09:15:51 crc kubenswrapper[4840]: I0311 09:15:51.393887 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9g5q4\" (UniqueName: \"kubernetes.io/projected/b58b63e6-0eb4-444e-be2e-dca6bf37030e-kube-api-access-9g5q4\") pod \"ovn-controller-ovs-qtcdv\" (UID: \"b58b63e6-0eb4-444e-be2e-dca6bf37030e\") " pod="openstack/ovn-controller-ovs-qtcdv" Mar 11 09:15:51 crc kubenswrapper[4840]: I0311 09:15:51.394129 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/b58b63e6-0eb4-444e-be2e-dca6bf37030e-var-lib\") pod \"ovn-controller-ovs-qtcdv\" (UID: \"b58b63e6-0eb4-444e-be2e-dca6bf37030e\") " pod="openstack/ovn-controller-ovs-qtcdv" Mar 11 09:15:51 crc kubenswrapper[4840]: I0311 09:15:51.394375 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crjs9\" (UniqueName: \"kubernetes.io/projected/056df7c0-d577-4908-91a8-b5dfb95e0316-kube-api-access-crjs9\") pod \"ovn-controller-g2p7c\" (UID: \"056df7c0-d577-4908-91a8-b5dfb95e0316\") " pod="openstack/ovn-controller-g2p7c" Mar 11 09:15:51 crc kubenswrapper[4840]: I0311 
09:15:51.394618 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/b58b63e6-0eb4-444e-be2e-dca6bf37030e-var-log\") pod \"ovn-controller-ovs-qtcdv\" (UID: \"b58b63e6-0eb4-444e-be2e-dca6bf37030e\") " pod="openstack/ovn-controller-ovs-qtcdv" Mar 11 09:15:51 crc kubenswrapper[4840]: I0311 09:15:51.394784 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/056df7c0-d577-4908-91a8-b5dfb95e0316-var-log-ovn\") pod \"ovn-controller-g2p7c\" (UID: \"056df7c0-d577-4908-91a8-b5dfb95e0316\") " pod="openstack/ovn-controller-g2p7c" Mar 11 09:15:51 crc kubenswrapper[4840]: I0311 09:15:51.394942 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/056df7c0-d577-4908-91a8-b5dfb95e0316-ovn-controller-tls-certs\") pod \"ovn-controller-g2p7c\" (UID: \"056df7c0-d577-4908-91a8-b5dfb95e0316\") " pod="openstack/ovn-controller-g2p7c" Mar 11 09:15:51 crc kubenswrapper[4840]: I0311 09:15:51.395088 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/056df7c0-d577-4908-91a8-b5dfb95e0316-var-run\") pod \"ovn-controller-g2p7c\" (UID: \"056df7c0-d577-4908-91a8-b5dfb95e0316\") " pod="openstack/ovn-controller-g2p7c" Mar 11 09:15:51 crc kubenswrapper[4840]: I0311 09:15:51.395301 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/056df7c0-d577-4908-91a8-b5dfb95e0316-scripts\") pod \"ovn-controller-g2p7c\" (UID: \"056df7c0-d577-4908-91a8-b5dfb95e0316\") " pod="openstack/ovn-controller-g2p7c" Mar 11 09:15:51 crc kubenswrapper[4840]: I0311 09:15:51.395632 4840 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b58b63e6-0eb4-444e-be2e-dca6bf37030e-scripts\") pod \"ovn-controller-ovs-qtcdv\" (UID: \"b58b63e6-0eb4-444e-be2e-dca6bf37030e\") " pod="openstack/ovn-controller-ovs-qtcdv" Mar 11 09:15:51 crc kubenswrapper[4840]: I0311 09:15:51.395758 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b58b63e6-0eb4-444e-be2e-dca6bf37030e-var-run\") pod \"ovn-controller-ovs-qtcdv\" (UID: \"b58b63e6-0eb4-444e-be2e-dca6bf37030e\") " pod="openstack/ovn-controller-ovs-qtcdv" Mar 11 09:15:51 crc kubenswrapper[4840]: I0311 09:15:51.497219 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b58b63e6-0eb4-444e-be2e-dca6bf37030e-scripts\") pod \"ovn-controller-ovs-qtcdv\" (UID: \"b58b63e6-0eb4-444e-be2e-dca6bf37030e\") " pod="openstack/ovn-controller-ovs-qtcdv" Mar 11 09:15:51 crc kubenswrapper[4840]: I0311 09:15:51.497267 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b58b63e6-0eb4-444e-be2e-dca6bf37030e-var-run\") pod \"ovn-controller-ovs-qtcdv\" (UID: \"b58b63e6-0eb4-444e-be2e-dca6bf37030e\") " pod="openstack/ovn-controller-ovs-qtcdv" Mar 11 09:15:51 crc kubenswrapper[4840]: I0311 09:15:51.497303 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/056df7c0-d577-4908-91a8-b5dfb95e0316-var-run-ovn\") pod \"ovn-controller-g2p7c\" (UID: \"056df7c0-d577-4908-91a8-b5dfb95e0316\") " pod="openstack/ovn-controller-g2p7c" Mar 11 09:15:51 crc kubenswrapper[4840]: I0311 09:15:51.497332 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/056df7c0-d577-4908-91a8-b5dfb95e0316-combined-ca-bundle\") pod \"ovn-controller-g2p7c\" (UID: \"056df7c0-d577-4908-91a8-b5dfb95e0316\") " pod="openstack/ovn-controller-g2p7c" Mar 11 09:15:51 crc kubenswrapper[4840]: I0311 09:15:51.497360 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/b58b63e6-0eb4-444e-be2e-dca6bf37030e-etc-ovs\") pod \"ovn-controller-ovs-qtcdv\" (UID: \"b58b63e6-0eb4-444e-be2e-dca6bf37030e\") " pod="openstack/ovn-controller-ovs-qtcdv" Mar 11 09:15:51 crc kubenswrapper[4840]: I0311 09:15:51.497427 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9g5q4\" (UniqueName: \"kubernetes.io/projected/b58b63e6-0eb4-444e-be2e-dca6bf37030e-kube-api-access-9g5q4\") pod \"ovn-controller-ovs-qtcdv\" (UID: \"b58b63e6-0eb4-444e-be2e-dca6bf37030e\") " pod="openstack/ovn-controller-ovs-qtcdv" Mar 11 09:15:51 crc kubenswrapper[4840]: I0311 09:15:51.497456 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/b58b63e6-0eb4-444e-be2e-dca6bf37030e-var-lib\") pod \"ovn-controller-ovs-qtcdv\" (UID: \"b58b63e6-0eb4-444e-be2e-dca6bf37030e\") " pod="openstack/ovn-controller-ovs-qtcdv" Mar 11 09:15:51 crc kubenswrapper[4840]: I0311 09:15:51.497500 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crjs9\" (UniqueName: \"kubernetes.io/projected/056df7c0-d577-4908-91a8-b5dfb95e0316-kube-api-access-crjs9\") pod \"ovn-controller-g2p7c\" (UID: \"056df7c0-d577-4908-91a8-b5dfb95e0316\") " pod="openstack/ovn-controller-g2p7c" Mar 11 09:15:51 crc kubenswrapper[4840]: I0311 09:15:51.497519 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/b58b63e6-0eb4-444e-be2e-dca6bf37030e-var-log\") pod \"ovn-controller-ovs-qtcdv\" (UID: 
\"b58b63e6-0eb4-444e-be2e-dca6bf37030e\") " pod="openstack/ovn-controller-ovs-qtcdv" Mar 11 09:15:51 crc kubenswrapper[4840]: I0311 09:15:51.497542 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/056df7c0-d577-4908-91a8-b5dfb95e0316-var-log-ovn\") pod \"ovn-controller-g2p7c\" (UID: \"056df7c0-d577-4908-91a8-b5dfb95e0316\") " pod="openstack/ovn-controller-g2p7c" Mar 11 09:15:51 crc kubenswrapper[4840]: I0311 09:15:51.497561 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/056df7c0-d577-4908-91a8-b5dfb95e0316-ovn-controller-tls-certs\") pod \"ovn-controller-g2p7c\" (UID: \"056df7c0-d577-4908-91a8-b5dfb95e0316\") " pod="openstack/ovn-controller-g2p7c" Mar 11 09:15:51 crc kubenswrapper[4840]: I0311 09:15:51.497587 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/056df7c0-d577-4908-91a8-b5dfb95e0316-var-run\") pod \"ovn-controller-g2p7c\" (UID: \"056df7c0-d577-4908-91a8-b5dfb95e0316\") " pod="openstack/ovn-controller-g2p7c" Mar 11 09:15:51 crc kubenswrapper[4840]: I0311 09:15:51.497616 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/056df7c0-d577-4908-91a8-b5dfb95e0316-scripts\") pod \"ovn-controller-g2p7c\" (UID: \"056df7c0-d577-4908-91a8-b5dfb95e0316\") " pod="openstack/ovn-controller-g2p7c" Mar 11 09:15:51 crc kubenswrapper[4840]: I0311 09:15:51.498320 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/b58b63e6-0eb4-444e-be2e-dca6bf37030e-var-lib\") pod \"ovn-controller-ovs-qtcdv\" (UID: \"b58b63e6-0eb4-444e-be2e-dca6bf37030e\") " pod="openstack/ovn-controller-ovs-qtcdv" Mar 11 09:15:51 crc kubenswrapper[4840]: I0311 09:15:51.498528 4840 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b58b63e6-0eb4-444e-be2e-dca6bf37030e-var-run\") pod \"ovn-controller-ovs-qtcdv\" (UID: \"b58b63e6-0eb4-444e-be2e-dca6bf37030e\") " pod="openstack/ovn-controller-ovs-qtcdv" Mar 11 09:15:51 crc kubenswrapper[4840]: I0311 09:15:51.498548 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/056df7c0-d577-4908-91a8-b5dfb95e0316-var-run-ovn\") pod \"ovn-controller-g2p7c\" (UID: \"056df7c0-d577-4908-91a8-b5dfb95e0316\") " pod="openstack/ovn-controller-g2p7c" Mar 11 09:15:51 crc kubenswrapper[4840]: I0311 09:15:51.498866 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/056df7c0-d577-4908-91a8-b5dfb95e0316-var-log-ovn\") pod \"ovn-controller-g2p7c\" (UID: \"056df7c0-d577-4908-91a8-b5dfb95e0316\") " pod="openstack/ovn-controller-g2p7c" Mar 11 09:15:51 crc kubenswrapper[4840]: I0311 09:15:51.499172 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/b58b63e6-0eb4-444e-be2e-dca6bf37030e-var-log\") pod \"ovn-controller-ovs-qtcdv\" (UID: \"b58b63e6-0eb4-444e-be2e-dca6bf37030e\") " pod="openstack/ovn-controller-ovs-qtcdv" Mar 11 09:15:51 crc kubenswrapper[4840]: I0311 09:15:51.499362 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/b58b63e6-0eb4-444e-be2e-dca6bf37030e-etc-ovs\") pod \"ovn-controller-ovs-qtcdv\" (UID: \"b58b63e6-0eb4-444e-be2e-dca6bf37030e\") " pod="openstack/ovn-controller-ovs-qtcdv" Mar 11 09:15:51 crc kubenswrapper[4840]: I0311 09:15:51.499411 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/056df7c0-d577-4908-91a8-b5dfb95e0316-var-run\") pod \"ovn-controller-g2p7c\" (UID: 
\"056df7c0-d577-4908-91a8-b5dfb95e0316\") " pod="openstack/ovn-controller-g2p7c" Mar 11 09:15:51 crc kubenswrapper[4840]: I0311 09:15:51.500026 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b58b63e6-0eb4-444e-be2e-dca6bf37030e-scripts\") pod \"ovn-controller-ovs-qtcdv\" (UID: \"b58b63e6-0eb4-444e-be2e-dca6bf37030e\") " pod="openstack/ovn-controller-ovs-qtcdv" Mar 11 09:15:51 crc kubenswrapper[4840]: I0311 09:15:51.500558 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/056df7c0-d577-4908-91a8-b5dfb95e0316-scripts\") pod \"ovn-controller-g2p7c\" (UID: \"056df7c0-d577-4908-91a8-b5dfb95e0316\") " pod="openstack/ovn-controller-g2p7c" Mar 11 09:15:51 crc kubenswrapper[4840]: I0311 09:15:51.512553 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/056df7c0-d577-4908-91a8-b5dfb95e0316-combined-ca-bundle\") pod \"ovn-controller-g2p7c\" (UID: \"056df7c0-d577-4908-91a8-b5dfb95e0316\") " pod="openstack/ovn-controller-g2p7c" Mar 11 09:15:51 crc kubenswrapper[4840]: I0311 09:15:51.512709 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/056df7c0-d577-4908-91a8-b5dfb95e0316-ovn-controller-tls-certs\") pod \"ovn-controller-g2p7c\" (UID: \"056df7c0-d577-4908-91a8-b5dfb95e0316\") " pod="openstack/ovn-controller-g2p7c" Mar 11 09:15:51 crc kubenswrapper[4840]: I0311 09:15:51.525721 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9g5q4\" (UniqueName: \"kubernetes.io/projected/b58b63e6-0eb4-444e-be2e-dca6bf37030e-kube-api-access-9g5q4\") pod \"ovn-controller-ovs-qtcdv\" (UID: \"b58b63e6-0eb4-444e-be2e-dca6bf37030e\") " pod="openstack/ovn-controller-ovs-qtcdv" Mar 11 09:15:51 crc kubenswrapper[4840]: I0311 09:15:51.526109 4840 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crjs9\" (UniqueName: \"kubernetes.io/projected/056df7c0-d577-4908-91a8-b5dfb95e0316-kube-api-access-crjs9\") pod \"ovn-controller-g2p7c\" (UID: \"056df7c0-d577-4908-91a8-b5dfb95e0316\") " pod="openstack/ovn-controller-g2p7c" Mar 11 09:15:51 crc kubenswrapper[4840]: I0311 09:15:51.645491 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-g2p7c" Mar 11 09:15:51 crc kubenswrapper[4840]: I0311 09:15:51.670012 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-qtcdv" Mar 11 09:15:51 crc kubenswrapper[4840]: I0311 09:15:51.836101 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Mar 11 09:15:51 crc kubenswrapper[4840]: I0311 09:15:51.837443 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Mar 11 09:15:51 crc kubenswrapper[4840]: I0311 09:15:51.842650 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Mar 11 09:15:51 crc kubenswrapper[4840]: I0311 09:15:51.842754 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Mar 11 09:15:51 crc kubenswrapper[4840]: I0311 09:15:51.842846 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Mar 11 09:15:51 crc kubenswrapper[4840]: I0311 09:15:51.843024 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Mar 11 09:15:51 crc kubenswrapper[4840]: I0311 09:15:51.843041 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-b6ggw" Mar 11 09:15:51 crc kubenswrapper[4840]: I0311 09:15:51.867094 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/ovsdbserver-nb-0"] Mar 11 09:15:51 crc kubenswrapper[4840]: I0311 09:15:51.902983 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d076df9-9280-425c-9b61-bf84751f11c1-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"0d076df9-9280-425c-9b61-bf84751f11c1\") " pod="openstack/ovsdbserver-nb-0" Mar 11 09:15:51 crc kubenswrapper[4840]: I0311 09:15:51.903038 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0d076df9-9280-425c-9b61-bf84751f11c1-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"0d076df9-9280-425c-9b61-bf84751f11c1\") " pod="openstack/ovsdbserver-nb-0" Mar 11 09:15:51 crc kubenswrapper[4840]: I0311 09:15:51.903149 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7r7w9\" (UniqueName: \"kubernetes.io/projected/0d076df9-9280-425c-9b61-bf84751f11c1-kube-api-access-7r7w9\") pod \"ovsdbserver-nb-0\" (UID: \"0d076df9-9280-425c-9b61-bf84751f11c1\") " pod="openstack/ovsdbserver-nb-0" Mar 11 09:15:51 crc kubenswrapper[4840]: I0311 09:15:51.903198 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-nb-0\" (UID: \"0d076df9-9280-425c-9b61-bf84751f11c1\") " pod="openstack/ovsdbserver-nb-0" Mar 11 09:15:51 crc kubenswrapper[4840]: I0311 09:15:51.903246 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d076df9-9280-425c-9b61-bf84751f11c1-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"0d076df9-9280-425c-9b61-bf84751f11c1\") " pod="openstack/ovsdbserver-nb-0" Mar 11 09:15:51 crc 
kubenswrapper[4840]: I0311 09:15:51.903341 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0d076df9-9280-425c-9b61-bf84751f11c1-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"0d076df9-9280-425c-9b61-bf84751f11c1\") " pod="openstack/ovsdbserver-nb-0" Mar 11 09:15:51 crc kubenswrapper[4840]: I0311 09:15:51.903420 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d076df9-9280-425c-9b61-bf84751f11c1-config\") pod \"ovsdbserver-nb-0\" (UID: \"0d076df9-9280-425c-9b61-bf84751f11c1\") " pod="openstack/ovsdbserver-nb-0" Mar 11 09:15:51 crc kubenswrapper[4840]: I0311 09:15:51.903582 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d076df9-9280-425c-9b61-bf84751f11c1-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"0d076df9-9280-425c-9b61-bf84751f11c1\") " pod="openstack/ovsdbserver-nb-0" Mar 11 09:15:52 crc kubenswrapper[4840]: I0311 09:15:52.006440 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d076df9-9280-425c-9b61-bf84751f11c1-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"0d076df9-9280-425c-9b61-bf84751f11c1\") " pod="openstack/ovsdbserver-nb-0" Mar 11 09:15:52 crc kubenswrapper[4840]: I0311 09:15:52.008014 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d076df9-9280-425c-9b61-bf84751f11c1-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"0d076df9-9280-425c-9b61-bf84751f11c1\") " pod="openstack/ovsdbserver-nb-0" Mar 11 09:15:52 crc kubenswrapper[4840]: I0311 09:15:52.008089 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0d076df9-9280-425c-9b61-bf84751f11c1-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"0d076df9-9280-425c-9b61-bf84751f11c1\") " pod="openstack/ovsdbserver-nb-0" Mar 11 09:15:52 crc kubenswrapper[4840]: I0311 09:15:52.008142 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7r7w9\" (UniqueName: \"kubernetes.io/projected/0d076df9-9280-425c-9b61-bf84751f11c1-kube-api-access-7r7w9\") pod \"ovsdbserver-nb-0\" (UID: \"0d076df9-9280-425c-9b61-bf84751f11c1\") " pod="openstack/ovsdbserver-nb-0" Mar 11 09:15:52 crc kubenswrapper[4840]: I0311 09:15:52.008169 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-nb-0\" (UID: \"0d076df9-9280-425c-9b61-bf84751f11c1\") " pod="openstack/ovsdbserver-nb-0" Mar 11 09:15:52 crc kubenswrapper[4840]: I0311 09:15:52.008378 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d076df9-9280-425c-9b61-bf84751f11c1-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"0d076df9-9280-425c-9b61-bf84751f11c1\") " pod="openstack/ovsdbserver-nb-0" Mar 11 09:15:52 crc kubenswrapper[4840]: I0311 09:15:52.008659 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0d076df9-9280-425c-9b61-bf84751f11c1-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"0d076df9-9280-425c-9b61-bf84751f11c1\") " pod="openstack/ovsdbserver-nb-0" Mar 11 09:15:52 crc kubenswrapper[4840]: I0311 09:15:52.014361 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d076df9-9280-425c-9b61-bf84751f11c1-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: 
\"0d076df9-9280-425c-9b61-bf84751f11c1\") " pod="openstack/ovsdbserver-nb-0" Mar 11 09:15:52 crc kubenswrapper[4840]: I0311 09:15:52.008819 4840 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-nb-0\" (UID: \"0d076df9-9280-425c-9b61-bf84751f11c1\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/ovsdbserver-nb-0" Mar 11 09:15:52 crc kubenswrapper[4840]: I0311 09:15:52.014308 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d076df9-9280-425c-9b61-bf84751f11c1-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"0d076df9-9280-425c-9b61-bf84751f11c1\") " pod="openstack/ovsdbserver-nb-0" Mar 11 09:15:52 crc kubenswrapper[4840]: I0311 09:15:52.008805 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0d076df9-9280-425c-9b61-bf84751f11c1-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"0d076df9-9280-425c-9b61-bf84751f11c1\") " pod="openstack/ovsdbserver-nb-0" Mar 11 09:15:52 crc kubenswrapper[4840]: I0311 09:15:52.014658 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d076df9-9280-425c-9b61-bf84751f11c1-config\") pod \"ovsdbserver-nb-0\" (UID: \"0d076df9-9280-425c-9b61-bf84751f11c1\") " pod="openstack/ovsdbserver-nb-0" Mar 11 09:15:52 crc kubenswrapper[4840]: I0311 09:15:52.015399 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d076df9-9280-425c-9b61-bf84751f11c1-config\") pod \"ovsdbserver-nb-0\" (UID: \"0d076df9-9280-425c-9b61-bf84751f11c1\") " pod="openstack/ovsdbserver-nb-0" Mar 11 09:15:52 crc kubenswrapper[4840]: I0311 09:15:52.016002 4840 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0d076df9-9280-425c-9b61-bf84751f11c1-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"0d076df9-9280-425c-9b61-bf84751f11c1\") " pod="openstack/ovsdbserver-nb-0" Mar 11 09:15:52 crc kubenswrapper[4840]: I0311 09:15:52.019326 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d076df9-9280-425c-9b61-bf84751f11c1-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"0d076df9-9280-425c-9b61-bf84751f11c1\") " pod="openstack/ovsdbserver-nb-0" Mar 11 09:15:52 crc kubenswrapper[4840]: I0311 09:15:52.027592 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7r7w9\" (UniqueName: \"kubernetes.io/projected/0d076df9-9280-425c-9b61-bf84751f11c1-kube-api-access-7r7w9\") pod \"ovsdbserver-nb-0\" (UID: \"0d076df9-9280-425c-9b61-bf84751f11c1\") " pod="openstack/ovsdbserver-nb-0" Mar 11 09:15:52 crc kubenswrapper[4840]: I0311 09:15:52.053043 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-nb-0\" (UID: \"0d076df9-9280-425c-9b61-bf84751f11c1\") " pod="openstack/ovsdbserver-nb-0" Mar 11 09:15:52 crc kubenswrapper[4840]: I0311 09:15:52.175339 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Mar 11 09:15:54 crc kubenswrapper[4840]: I0311 09:15:54.437744 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Mar 11 09:15:55 crc kubenswrapper[4840]: I0311 09:15:55.891751 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Mar 11 09:15:55 crc kubenswrapper[4840]: I0311 09:15:55.893526 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Mar 11 09:15:55 crc kubenswrapper[4840]: I0311 09:15:55.908276 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-85ltp" Mar 11 09:15:55 crc kubenswrapper[4840]: I0311 09:15:55.908710 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Mar 11 09:15:55 crc kubenswrapper[4840]: I0311 09:15:55.908894 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Mar 11 09:15:55 crc kubenswrapper[4840]: I0311 09:15:55.909062 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Mar 11 09:15:55 crc kubenswrapper[4840]: I0311 09:15:55.939188 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Mar 11 09:15:55 crc kubenswrapper[4840]: I0311 09:15:55.979445 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b48aab55-15cb-42c4-a97b-692dbadf3353-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"b48aab55-15cb-42c4-a97b-692dbadf3353\") " pod="openstack/ovsdbserver-sb-0" Mar 11 09:15:55 crc kubenswrapper[4840]: I0311 09:15:55.979510 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b48aab55-15cb-42c4-a97b-692dbadf3353-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"b48aab55-15cb-42c4-a97b-692dbadf3353\") " pod="openstack/ovsdbserver-sb-0" Mar 11 09:15:55 crc kubenswrapper[4840]: I0311 09:15:55.979602 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b48aab55-15cb-42c4-a97b-692dbadf3353-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" 
(UID: \"b48aab55-15cb-42c4-a97b-692dbadf3353\") " pod="openstack/ovsdbserver-sb-0" Mar 11 09:15:55 crc kubenswrapper[4840]: I0311 09:15:55.979745 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b48aab55-15cb-42c4-a97b-692dbadf3353-config\") pod \"ovsdbserver-sb-0\" (UID: \"b48aab55-15cb-42c4-a97b-692dbadf3353\") " pod="openstack/ovsdbserver-sb-0" Mar 11 09:15:55 crc kubenswrapper[4840]: I0311 09:15:55.979902 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9kq9\" (UniqueName: \"kubernetes.io/projected/b48aab55-15cb-42c4-a97b-692dbadf3353-kube-api-access-f9kq9\") pod \"ovsdbserver-sb-0\" (UID: \"b48aab55-15cb-42c4-a97b-692dbadf3353\") " pod="openstack/ovsdbserver-sb-0" Mar 11 09:15:55 crc kubenswrapper[4840]: I0311 09:15:55.979977 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b48aab55-15cb-42c4-a97b-692dbadf3353-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"b48aab55-15cb-42c4-a97b-692dbadf3353\") " pod="openstack/ovsdbserver-sb-0" Mar 11 09:15:55 crc kubenswrapper[4840]: I0311 09:15:55.980053 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b48aab55-15cb-42c4-a97b-692dbadf3353-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"b48aab55-15cb-42c4-a97b-692dbadf3353\") " pod="openstack/ovsdbserver-sb-0" Mar 11 09:15:55 crc kubenswrapper[4840]: I0311 09:15:55.980109 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-sb-0\" (UID: \"b48aab55-15cb-42c4-a97b-692dbadf3353\") " pod="openstack/ovsdbserver-sb-0" Mar 11 09:15:56 crc 
kubenswrapper[4840]: I0311 09:15:56.081367 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b48aab55-15cb-42c4-a97b-692dbadf3353-config\") pod \"ovsdbserver-sb-0\" (UID: \"b48aab55-15cb-42c4-a97b-692dbadf3353\") " pod="openstack/ovsdbserver-sb-0" Mar 11 09:15:56 crc kubenswrapper[4840]: I0311 09:15:56.081444 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9kq9\" (UniqueName: \"kubernetes.io/projected/b48aab55-15cb-42c4-a97b-692dbadf3353-kube-api-access-f9kq9\") pod \"ovsdbserver-sb-0\" (UID: \"b48aab55-15cb-42c4-a97b-692dbadf3353\") " pod="openstack/ovsdbserver-sb-0" Mar 11 09:15:56 crc kubenswrapper[4840]: I0311 09:15:56.081489 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b48aab55-15cb-42c4-a97b-692dbadf3353-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"b48aab55-15cb-42c4-a97b-692dbadf3353\") " pod="openstack/ovsdbserver-sb-0" Mar 11 09:15:56 crc kubenswrapper[4840]: I0311 09:15:56.081520 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b48aab55-15cb-42c4-a97b-692dbadf3353-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"b48aab55-15cb-42c4-a97b-692dbadf3353\") " pod="openstack/ovsdbserver-sb-0" Mar 11 09:15:56 crc kubenswrapper[4840]: I0311 09:15:56.081541 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-sb-0\" (UID: \"b48aab55-15cb-42c4-a97b-692dbadf3353\") " pod="openstack/ovsdbserver-sb-0" Mar 11 09:15:56 crc kubenswrapper[4840]: I0311 09:15:56.081577 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: 
\"kubernetes.io/empty-dir/b48aab55-15cb-42c4-a97b-692dbadf3353-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"b48aab55-15cb-42c4-a97b-692dbadf3353\") " pod="openstack/ovsdbserver-sb-0" Mar 11 09:15:56 crc kubenswrapper[4840]: I0311 09:15:56.081598 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b48aab55-15cb-42c4-a97b-692dbadf3353-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"b48aab55-15cb-42c4-a97b-692dbadf3353\") " pod="openstack/ovsdbserver-sb-0" Mar 11 09:15:56 crc kubenswrapper[4840]: I0311 09:15:56.081621 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b48aab55-15cb-42c4-a97b-692dbadf3353-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"b48aab55-15cb-42c4-a97b-692dbadf3353\") " pod="openstack/ovsdbserver-sb-0" Mar 11 09:15:56 crc kubenswrapper[4840]: I0311 09:15:56.083002 4840 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-sb-0\" (UID: \"b48aab55-15cb-42c4-a97b-692dbadf3353\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/ovsdbserver-sb-0" Mar 11 09:15:56 crc kubenswrapper[4840]: I0311 09:15:56.083325 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b48aab55-15cb-42c4-a97b-692dbadf3353-config\") pod \"ovsdbserver-sb-0\" (UID: \"b48aab55-15cb-42c4-a97b-692dbadf3353\") " pod="openstack/ovsdbserver-sb-0" Mar 11 09:15:56 crc kubenswrapper[4840]: I0311 09:15:56.084125 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b48aab55-15cb-42c4-a97b-692dbadf3353-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"b48aab55-15cb-42c4-a97b-692dbadf3353\") " 
pod="openstack/ovsdbserver-sb-0" Mar 11 09:15:56 crc kubenswrapper[4840]: I0311 09:15:56.084703 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b48aab55-15cb-42c4-a97b-692dbadf3353-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"b48aab55-15cb-42c4-a97b-692dbadf3353\") " pod="openstack/ovsdbserver-sb-0" Mar 11 09:15:56 crc kubenswrapper[4840]: I0311 09:15:56.089333 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b48aab55-15cb-42c4-a97b-692dbadf3353-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"b48aab55-15cb-42c4-a97b-692dbadf3353\") " pod="openstack/ovsdbserver-sb-0" Mar 11 09:15:56 crc kubenswrapper[4840]: I0311 09:15:56.097979 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b48aab55-15cb-42c4-a97b-692dbadf3353-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"b48aab55-15cb-42c4-a97b-692dbadf3353\") " pod="openstack/ovsdbserver-sb-0" Mar 11 09:15:56 crc kubenswrapper[4840]: I0311 09:15:56.098903 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b48aab55-15cb-42c4-a97b-692dbadf3353-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"b48aab55-15cb-42c4-a97b-692dbadf3353\") " pod="openstack/ovsdbserver-sb-0" Mar 11 09:15:56 crc kubenswrapper[4840]: I0311 09:15:56.102171 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9kq9\" (UniqueName: \"kubernetes.io/projected/b48aab55-15cb-42c4-a97b-692dbadf3353-kube-api-access-f9kq9\") pod \"ovsdbserver-sb-0\" (UID: \"b48aab55-15cb-42c4-a97b-692dbadf3353\") " pod="openstack/ovsdbserver-sb-0" Mar 11 09:15:56 crc kubenswrapper[4840]: I0311 09:15:56.119755 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-sb-0\" (UID: \"b48aab55-15cb-42c4-a97b-692dbadf3353\") " pod="openstack/ovsdbserver-sb-0" Mar 11 09:15:56 crc kubenswrapper[4840]: I0311 09:15:56.234864 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Mar 11 09:15:57 crc kubenswrapper[4840]: I0311 09:15:57.445954 4840 patch_prober.go:28] interesting pod/machine-config-daemon-brtht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 11 09:15:57 crc kubenswrapper[4840]: I0311 09:15:57.446339 4840 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 11 09:16:00 crc kubenswrapper[4840]: I0311 09:16:00.149877 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29553676-kjd2f"] Mar 11 09:16:00 crc kubenswrapper[4840]: I0311 09:16:00.152022 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553676-kjd2f" Mar 11 09:16:00 crc kubenswrapper[4840]: I0311 09:16:00.154514 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 11 09:16:00 crc kubenswrapper[4840]: I0311 09:16:00.154551 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 11 09:16:00 crc kubenswrapper[4840]: I0311 09:16:00.155575 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-q6lwc" Mar 11 09:16:00 crc kubenswrapper[4840]: I0311 09:16:00.158618 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553676-kjd2f"] Mar 11 09:16:00 crc kubenswrapper[4840]: I0311 09:16:00.253064 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwhvt\" (UniqueName: \"kubernetes.io/projected/a085068d-adb7-44fc-9d8a-7b413ceeee17-kube-api-access-xwhvt\") pod \"auto-csr-approver-29553676-kjd2f\" (UID: \"a085068d-adb7-44fc-9d8a-7b413ceeee17\") " pod="openshift-infra/auto-csr-approver-29553676-kjd2f" Mar 11 09:16:00 crc kubenswrapper[4840]: I0311 09:16:00.354848 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwhvt\" (UniqueName: \"kubernetes.io/projected/a085068d-adb7-44fc-9d8a-7b413ceeee17-kube-api-access-xwhvt\") pod \"auto-csr-approver-29553676-kjd2f\" (UID: \"a085068d-adb7-44fc-9d8a-7b413ceeee17\") " pod="openshift-infra/auto-csr-approver-29553676-kjd2f" Mar 11 09:16:00 crc kubenswrapper[4840]: I0311 09:16:00.383206 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwhvt\" (UniqueName: \"kubernetes.io/projected/a085068d-adb7-44fc-9d8a-7b413ceeee17-kube-api-access-xwhvt\") pod \"auto-csr-approver-29553676-kjd2f\" (UID: \"a085068d-adb7-44fc-9d8a-7b413ceeee17\") " 
pod="openshift-infra/auto-csr-approver-29553676-kjd2f" Mar 11 09:16:00 crc kubenswrapper[4840]: I0311 09:16:00.475527 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553676-kjd2f" Mar 11 09:16:01 crc kubenswrapper[4840]: W0311 09:16:01.379678 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode93bc954_4d65_4b5f_8070_2b9800ff3db2.slice/crio-4bd9e179773f0fb3c0b3a85a8322d1e11a73ef520f1e644906ceba1e874a3aa3 WatchSource:0}: Error finding container 4bd9e179773f0fb3c0b3a85a8322d1e11a73ef520f1e644906ceba1e874a3aa3: Status 404 returned error can't find the container with id 4bd9e179773f0fb3c0b3a85a8322d1e11a73ef520f1e644906ceba1e874a3aa3 Mar 11 09:16:01 crc kubenswrapper[4840]: I0311 09:16:01.459561 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"e93bc954-4d65-4b5f-8070-2b9800ff3db2","Type":"ContainerStarted","Data":"4bd9e179773f0fb3c0b3a85a8322d1e11a73ef520f1e644906ceba1e874a3aa3"} Mar 11 09:16:01 crc kubenswrapper[4840]: I0311 09:16:01.911971 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 11 09:16:03 crc kubenswrapper[4840]: W0311 09:16:03.883082 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8346e561_a5cc_4bf9_807e_8837c8e13007.slice/crio-cdc46dd9c231a1062de2700ff8635c027f1dc99cef3385a9b1a62e344fa6b284 WatchSource:0}: Error finding container cdc46dd9c231a1062de2700ff8635c027f1dc99cef3385a9b1a62e344fa6b284: Status 404 returned error can't find the container with id cdc46dd9c231a1062de2700ff8635c027f1dc99cef3385a9b1a62e344fa6b284 Mar 11 09:16:03 crc kubenswrapper[4840]: E0311 09:16:03.903192 4840 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:944833b50342d462c10637342bc85197a8cf099a3650df12e23854dde99af514" Mar 11 09:16:03 crc kubenswrapper[4840]: E0311 09:16:03.903423 4840 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:944833b50342d462c10637342bc85197a8cf099a3650df12e23854dde99af514,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cmc67,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*
true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-86bbd886cf-nc4gg_openstack(8e49aff8-5315-4fd2-931f-a6997548015b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 11 09:16:03 crc kubenswrapper[4840]: E0311 09:16:03.904714 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-86bbd886cf-nc4gg" podUID="8e49aff8-5315-4fd2-931f-a6997548015b" Mar 11 09:16:03 crc kubenswrapper[4840]: E0311 09:16:03.922169 4840 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:944833b50342d462c10637342bc85197a8cf099a3650df12e23854dde99af514" Mar 11 09:16:03 crc kubenswrapper[4840]: E0311 09:16:03.922478 4840 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:944833b50342d462c10637342bc85197a8cf099a3650df12e23854dde99af514,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2m8vv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-79f9fc56ff-r2nss_openstack(ad8faa32-f620-4ded-b606-0009b5a92cb8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 11 09:16:03 crc kubenswrapper[4840]: E0311 09:16:03.927974 4840 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-79f9fc56ff-r2nss" podUID="ad8faa32-f620-4ded-b606-0009b5a92cb8" Mar 11 09:16:03 crc kubenswrapper[4840]: E0311 09:16:03.952319 4840 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:944833b50342d462c10637342bc85197a8cf099a3650df12e23854dde99af514" Mar 11 09:16:03 crc kubenswrapper[4840]: E0311 09:16:03.952617 4840 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:944833b50342d462c10637342bc85197a8cf099a3650df12e23854dde99af514,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vvssb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-7c47bcb9f9-krwxw_openstack(ff691b46-da15-4bfc-9b37-833a8cd599a9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 11 09:16:03 crc kubenswrapper[4840]: E0311 09:16:03.953918 4840 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-7c47bcb9f9-krwxw" podUID="ff691b46-da15-4bfc-9b37-833a8cd599a9" Mar 11 09:16:03 crc kubenswrapper[4840]: E0311 09:16:03.992157 4840 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:944833b50342d462c10637342bc85197a8cf099a3650df12e23854dde99af514" Mar 11 09:16:03 crc kubenswrapper[4840]: E0311 09:16:03.992328 4840 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:944833b50342d462c10637342bc85197a8cf099a3650df12e23854dde99af514,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nxwq9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-589db6c89c-rrkdt_openstack(bdbab8ba-45d2-49d1-878e-daf1bcaf3d2e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 11 09:16:03 crc kubenswrapper[4840]: E0311 09:16:03.993989 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/dnsmasq-dns-589db6c89c-rrkdt" podUID="bdbab8ba-45d2-49d1-878e-daf1bcaf3d2e" Mar 11 09:16:04 crc kubenswrapper[4840]: I0311 09:16:04.478813 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"8346e561-a5cc-4bf9-807e-8837c8e13007","Type":"ContainerStarted","Data":"cdc46dd9c231a1062de2700ff8635c027f1dc99cef3385a9b1a62e344fa6b284"} Mar 11 09:16:04 crc kubenswrapper[4840]: E0311 09:16:04.485842 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:944833b50342d462c10637342bc85197a8cf099a3650df12e23854dde99af514\\\"\"" pod="openstack/dnsmasq-dns-7c47bcb9f9-krwxw" podUID="ff691b46-da15-4bfc-9b37-833a8cd599a9" Mar 11 09:16:04 crc kubenswrapper[4840]: E0311 09:16:04.487357 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:944833b50342d462c10637342bc85197a8cf099a3650df12e23854dde99af514\\\"\"" pod="openstack/dnsmasq-dns-79f9fc56ff-r2nss" podUID="ad8faa32-f620-4ded-b606-0009b5a92cb8" Mar 11 09:16:04 crc kubenswrapper[4840]: I0311 09:16:04.548818 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553676-kjd2f"] Mar 11 09:16:04 crc kubenswrapper[4840]: I0311 09:16:04.600540 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Mar 11 09:16:04 crc kubenswrapper[4840]: I0311 09:16:04.609720 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-g2p7c"] Mar 11 09:16:04 crc kubenswrapper[4840]: I0311 09:16:04.624583 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Mar 11 09:16:04 crc kubenswrapper[4840]: I0311 09:16:04.709130 4840 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-qtcdv"] Mar 11 09:16:04 crc kubenswrapper[4840]: W0311 09:16:04.804105 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod056df7c0_d577_4908_91a8_b5dfb95e0316.slice/crio-8aa14a3a147610e283868c4198f18e1a26ac30e3a62f89d6c61fecb20acc6f29 WatchSource:0}: Error finding container 8aa14a3a147610e283868c4198f18e1a26ac30e3a62f89d6c61fecb20acc6f29: Status 404 returned error can't find the container with id 8aa14a3a147610e283868c4198f18e1a26ac30e3a62f89d6c61fecb20acc6f29 Mar 11 09:16:04 crc kubenswrapper[4840]: W0311 09:16:04.807951 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc44e0641_be37_4447_9666_14bf00c08827.slice/crio-a7b84fe23abccdb2ad5bbb8949deb90f7c70551a93447acfcb47c9d0acdc6e49 WatchSource:0}: Error finding container a7b84fe23abccdb2ad5bbb8949deb90f7c70551a93447acfcb47c9d0acdc6e49: Status 404 returned error can't find the container with id a7b84fe23abccdb2ad5bbb8949deb90f7c70551a93447acfcb47c9d0acdc6e49 Mar 11 09:16:04 crc kubenswrapper[4840]: W0311 09:16:04.811281 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb58b63e6_0eb4_444e_be2e_dca6bf37030e.slice/crio-c393d658b7a32c1283a2917efb5d0ed19bb7ecbc6adcecce5dc7b2e7a77eec1c WatchSource:0}: Error finding container c393d658b7a32c1283a2917efb5d0ed19bb7ecbc6adcecce5dc7b2e7a77eec1c: Status 404 returned error can't find the container with id c393d658b7a32c1283a2917efb5d0ed19bb7ecbc6adcecce5dc7b2e7a77eec1c Mar 11 09:16:04 crc kubenswrapper[4840]: W0311 09:16:04.817691 4840 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda085068d_adb7_44fc_9d8a_7b413ceeee17.slice/crio-aa74c7829420e392dce45d5aaaff91ec20793f06c8ce4affaefc411baf22bf52 WatchSource:0}: Error finding container aa74c7829420e392dce45d5aaaff91ec20793f06c8ce4affaefc411baf22bf52: Status 404 returned error can't find the container with id aa74c7829420e392dce45d5aaaff91ec20793f06c8ce4affaefc411baf22bf52 Mar 11 09:16:04 crc kubenswrapper[4840]: I0311 09:16:04.953811 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Mar 11 09:16:05 crc kubenswrapper[4840]: I0311 09:16:05.208685 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86bbd886cf-nc4gg" Mar 11 09:16:05 crc kubenswrapper[4840]: I0311 09:16:05.212769 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-589db6c89c-rrkdt" Mar 11 09:16:05 crc kubenswrapper[4840]: I0311 09:16:05.231549 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Mar 11 09:16:05 crc kubenswrapper[4840]: I0311 09:16:05.380454 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8e49aff8-5315-4fd2-931f-a6997548015b-dns-svc\") pod \"8e49aff8-5315-4fd2-931f-a6997548015b\" (UID: \"8e49aff8-5315-4fd2-931f-a6997548015b\") " Mar 11 09:16:05 crc kubenswrapper[4840]: I0311 09:16:05.380552 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bdbab8ba-45d2-49d1-878e-daf1bcaf3d2e-config\") pod \"bdbab8ba-45d2-49d1-878e-daf1bcaf3d2e\" (UID: \"bdbab8ba-45d2-49d1-878e-daf1bcaf3d2e\") " Mar 11 09:16:05 crc kubenswrapper[4840]: I0311 09:16:05.380667 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nxwq9\" (UniqueName: 
\"kubernetes.io/projected/bdbab8ba-45d2-49d1-878e-daf1bcaf3d2e-kube-api-access-nxwq9\") pod \"bdbab8ba-45d2-49d1-878e-daf1bcaf3d2e\" (UID: \"bdbab8ba-45d2-49d1-878e-daf1bcaf3d2e\") " Mar 11 09:16:05 crc kubenswrapper[4840]: I0311 09:16:05.381621 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e49aff8-5315-4fd2-931f-a6997548015b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8e49aff8-5315-4fd2-931f-a6997548015b" (UID: "8e49aff8-5315-4fd2-931f-a6997548015b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:16:05 crc kubenswrapper[4840]: I0311 09:16:05.381755 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cmc67\" (UniqueName: \"kubernetes.io/projected/8e49aff8-5315-4fd2-931f-a6997548015b-kube-api-access-cmc67\") pod \"8e49aff8-5315-4fd2-931f-a6997548015b\" (UID: \"8e49aff8-5315-4fd2-931f-a6997548015b\") " Mar 11 09:16:05 crc kubenswrapper[4840]: I0311 09:16:05.381814 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e49aff8-5315-4fd2-931f-a6997548015b-config\") pod \"8e49aff8-5315-4fd2-931f-a6997548015b\" (UID: \"8e49aff8-5315-4fd2-931f-a6997548015b\") " Mar 11 09:16:05 crc kubenswrapper[4840]: I0311 09:16:05.381977 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bdbab8ba-45d2-49d1-878e-daf1bcaf3d2e-config" (OuterVolumeSpecName: "config") pod "bdbab8ba-45d2-49d1-878e-daf1bcaf3d2e" (UID: "bdbab8ba-45d2-49d1-878e-daf1bcaf3d2e"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:16:05 crc kubenswrapper[4840]: I0311 09:16:05.382426 4840 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bdbab8ba-45d2-49d1-878e-daf1bcaf3d2e-config\") on node \"crc\" DevicePath \"\"" Mar 11 09:16:05 crc kubenswrapper[4840]: I0311 09:16:05.382442 4840 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8e49aff8-5315-4fd2-931f-a6997548015b-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 11 09:16:05 crc kubenswrapper[4840]: I0311 09:16:05.382724 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e49aff8-5315-4fd2-931f-a6997548015b-config" (OuterVolumeSpecName: "config") pod "8e49aff8-5315-4fd2-931f-a6997548015b" (UID: "8e49aff8-5315-4fd2-931f-a6997548015b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:16:05 crc kubenswrapper[4840]: I0311 09:16:05.389792 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e49aff8-5315-4fd2-931f-a6997548015b-kube-api-access-cmc67" (OuterVolumeSpecName: "kube-api-access-cmc67") pod "8e49aff8-5315-4fd2-931f-a6997548015b" (UID: "8e49aff8-5315-4fd2-931f-a6997548015b"). InnerVolumeSpecName "kube-api-access-cmc67". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:16:05 crc kubenswrapper[4840]: I0311 09:16:05.406986 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bdbab8ba-45d2-49d1-878e-daf1bcaf3d2e-kube-api-access-nxwq9" (OuterVolumeSpecName: "kube-api-access-nxwq9") pod "bdbab8ba-45d2-49d1-878e-daf1bcaf3d2e" (UID: "bdbab8ba-45d2-49d1-878e-daf1bcaf3d2e"). InnerVolumeSpecName "kube-api-access-nxwq9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:16:05 crc kubenswrapper[4840]: I0311 09:16:05.486427 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nxwq9\" (UniqueName: \"kubernetes.io/projected/bdbab8ba-45d2-49d1-878e-daf1bcaf3d2e-kube-api-access-nxwq9\") on node \"crc\" DevicePath \"\"" Mar 11 09:16:05 crc kubenswrapper[4840]: I0311 09:16:05.487009 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cmc67\" (UniqueName: \"kubernetes.io/projected/8e49aff8-5315-4fd2-931f-a6997548015b-kube-api-access-cmc67\") on node \"crc\" DevicePath \"\"" Mar 11 09:16:05 crc kubenswrapper[4840]: I0311 09:16:05.487266 4840 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e49aff8-5315-4fd2-931f-a6997548015b-config\") on node \"crc\" DevicePath \"\"" Mar 11 09:16:05 crc kubenswrapper[4840]: I0311 09:16:05.487589 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"c6f4129d-4bc4-449b-be94-82fce07cf1f0","Type":"ContainerStarted","Data":"329e9bbbb5bfce62409b1d20efebb63436a5ad3a317150adfc3b37b992b688c7"} Mar 11 09:16:05 crc kubenswrapper[4840]: I0311 09:16:05.488517 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"b48aab55-15cb-42c4-a97b-692dbadf3353","Type":"ContainerStarted","Data":"1e8db8b381dcff5fcd0c36a76fecd838f5ed132d06427072768d78d9b7dd8517"} Mar 11 09:16:05 crc kubenswrapper[4840]: I0311 09:16:05.489385 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86bbd886cf-nc4gg" event={"ID":"8e49aff8-5315-4fd2-931f-a6997548015b","Type":"ContainerDied","Data":"a23b025a8fa9303abfe7f8ec5b446f4ce3c5fae965882a7ab35341b41eb185d8"} Mar 11 09:16:05 crc kubenswrapper[4840]: I0311 09:16:05.489458 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86bbd886cf-nc4gg" Mar 11 09:16:05 crc kubenswrapper[4840]: I0311 09:16:05.493425 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"0d076df9-9280-425c-9b61-bf84751f11c1","Type":"ContainerStarted","Data":"80a97e0a6c473d74e6991b8d2dda489c250850997d57966126038a715c457f2e"} Mar 11 09:16:05 crc kubenswrapper[4840]: I0311 09:16:05.494745 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-qtcdv" event={"ID":"b58b63e6-0eb4-444e-be2e-dca6bf37030e","Type":"ContainerStarted","Data":"c393d658b7a32c1283a2917efb5d0ed19bb7ecbc6adcecce5dc7b2e7a77eec1c"} Mar 11 09:16:05 crc kubenswrapper[4840]: I0311 09:16:05.497230 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f31748d2-64a9-4839-ac55-691d9682ee8e","Type":"ContainerStarted","Data":"ee9ff21cca22ce33a9fecf80bd4c008df2c67c2ba0d6988e89b3285d5e560733"} Mar 11 09:16:05 crc kubenswrapper[4840]: I0311 09:16:05.499282 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-g2p7c" event={"ID":"056df7c0-d577-4908-91a8-b5dfb95e0316","Type":"ContainerStarted","Data":"8aa14a3a147610e283868c4198f18e1a26ac30e3a62f89d6c61fecb20acc6f29"} Mar 11 09:16:05 crc kubenswrapper[4840]: I0311 09:16:05.502082 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"3a964c24-3c53-4a29-98fb-ceaca467c372","Type":"ContainerStarted","Data":"43da3ef6c28eef2051a13159a394be8ff41df563fd5e85c35c8ba4cdcc59aec7"} Mar 11 09:16:05 crc kubenswrapper[4840]: I0311 09:16:05.503668 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-589db6c89c-rrkdt" event={"ID":"bdbab8ba-45d2-49d1-878e-daf1bcaf3d2e","Type":"ContainerDied","Data":"bce5faee62858a1ab6721b7698b3558883afd80fcf4beab937d6cb303499ea49"} Mar 11 09:16:05 crc kubenswrapper[4840]: I0311 09:16:05.503682 4840 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-589db6c89c-rrkdt" Mar 11 09:16:05 crc kubenswrapper[4840]: I0311 09:16:05.504521 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"c44e0641-be37-4447-9666-14bf00c08827","Type":"ContainerStarted","Data":"a7b84fe23abccdb2ad5bbb8949deb90f7c70551a93447acfcb47c9d0acdc6e49"} Mar 11 09:16:05 crc kubenswrapper[4840]: I0311 09:16:05.505732 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553676-kjd2f" event={"ID":"a085068d-adb7-44fc-9d8a-7b413ceeee17","Type":"ContainerStarted","Data":"aa74c7829420e392dce45d5aaaff91ec20793f06c8ce4affaefc411baf22bf52"} Mar 11 09:16:05 crc kubenswrapper[4840]: I0311 09:16:05.629424 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86bbd886cf-nc4gg"] Mar 11 09:16:05 crc kubenswrapper[4840]: I0311 09:16:05.663157 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86bbd886cf-nc4gg"] Mar 11 09:16:05 crc kubenswrapper[4840]: I0311 09:16:05.685632 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-589db6c89c-rrkdt"] Mar 11 09:16:05 crc kubenswrapper[4840]: I0311 09:16:05.690730 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-589db6c89c-rrkdt"] Mar 11 09:16:06 crc kubenswrapper[4840]: I0311 09:16:06.083288 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e49aff8-5315-4fd2-931f-a6997548015b" path="/var/lib/kubelet/pods/8e49aff8-5315-4fd2-931f-a6997548015b/volumes" Mar 11 09:16:06 crc kubenswrapper[4840]: I0311 09:16:06.083744 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bdbab8ba-45d2-49d1-878e-daf1bcaf3d2e" path="/var/lib/kubelet/pods/bdbab8ba-45d2-49d1-878e-daf1bcaf3d2e/volumes" Mar 11 09:16:16 crc kubenswrapper[4840]: I0311 09:16:16.594856 4840 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/openstack-galera-0" event={"ID":"c6f4129d-4bc4-449b-be94-82fce07cf1f0","Type":"ContainerStarted","Data":"864e18a4e761096f65b64998d9b212c7afd08411bbd197099d71b94005a7016a"} Mar 11 09:16:16 crc kubenswrapper[4840]: I0311 09:16:16.604386 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"8346e561-a5cc-4bf9-807e-8837c8e13007","Type":"ContainerStarted","Data":"d8e46432f83ff44762566602f5f078b7458c3b3c787cfeb72abc5d6c99520ac0"} Mar 11 09:16:16 crc kubenswrapper[4840]: I0311 09:16:16.604624 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Mar 11 09:16:16 crc kubenswrapper[4840]: I0311 09:16:16.624301 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=16.348141748 podStartE2EDuration="28.624279036s" podCreationTimestamp="2026-03-11 09:15:48 +0000 UTC" firstStartedPulling="2026-03-11 09:16:03.903188178 +0000 UTC m=+1162.568857993" lastFinishedPulling="2026-03-11 09:16:16.179325456 +0000 UTC m=+1174.844995281" observedRunningTime="2026-03-11 09:16:16.620500361 +0000 UTC m=+1175.286170176" watchObservedRunningTime="2026-03-11 09:16:16.624279036 +0000 UTC m=+1175.289948841" Mar 11 09:16:17 crc kubenswrapper[4840]: I0311 09:16:17.625074 4840 generic.go:334] "Generic (PLEG): container finished" podID="b58b63e6-0eb4-444e-be2e-dca6bf37030e" containerID="640a0d5f04bb44c120c2003ee9a5bf08967fe8a0fe3f28005a3c81fb1b406954" exitCode=0 Mar 11 09:16:17 crc kubenswrapper[4840]: I0311 09:16:17.626038 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-qtcdv" event={"ID":"b58b63e6-0eb4-444e-be2e-dca6bf37030e","Type":"ContainerDied","Data":"640a0d5f04bb44c120c2003ee9a5bf08967fe8a0fe3f28005a3c81fb1b406954"} Mar 11 09:16:17 crc kubenswrapper[4840]: I0311 09:16:17.644363 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovsdbserver-sb-0" event={"ID":"b48aab55-15cb-42c4-a97b-692dbadf3353","Type":"ContainerStarted","Data":"c7605e78bc0c06e2ec26eebd5becb94c457edb41d3f28ee05f105bd2a7e5ac3f"} Mar 11 09:16:17 crc kubenswrapper[4840]: I0311 09:16:17.651091 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-g2p7c" event={"ID":"056df7c0-d577-4908-91a8-b5dfb95e0316","Type":"ContainerStarted","Data":"416b963acfae5e9237930d8b9e78836de7c9db73300426cdf2fdaf4f51eeb7fe"} Mar 11 09:16:17 crc kubenswrapper[4840]: I0311 09:16:17.651247 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-g2p7c" Mar 11 09:16:17 crc kubenswrapper[4840]: I0311 09:16:17.655372 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"e93bc954-4d65-4b5f-8070-2b9800ff3db2","Type":"ContainerStarted","Data":"90bda731ece0d646b8d0e358a1c93f0fa416106233b5751b9b744fa4d0a5ddc0"} Mar 11 09:16:17 crc kubenswrapper[4840]: I0311 09:16:17.657913 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"c44e0641-be37-4447-9666-14bf00c08827","Type":"ContainerStarted","Data":"dba9b2b61d92b87502d4ac01bbbc8b61a58c6a11a4e92608207ef818b5d336cd"} Mar 11 09:16:17 crc kubenswrapper[4840]: I0311 09:16:17.658091 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Mar 11 09:16:17 crc kubenswrapper[4840]: I0311 09:16:17.660125 4840 generic.go:334] "Generic (PLEG): container finished" podID="a085068d-adb7-44fc-9d8a-7b413ceeee17" containerID="bf4e4f39e745fee6f9adb212657d2a4ae726f6d09bf565361a186edbb38fcdec" exitCode=0 Mar 11 09:16:17 crc kubenswrapper[4840]: I0311 09:16:17.660284 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553676-kjd2f" 
event={"ID":"a085068d-adb7-44fc-9d8a-7b413ceeee17","Type":"ContainerDied","Data":"bf4e4f39e745fee6f9adb212657d2a4ae726f6d09bf565361a186edbb38fcdec"} Mar 11 09:16:17 crc kubenswrapper[4840]: I0311 09:16:17.663223 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"0d076df9-9280-425c-9b61-bf84751f11c1","Type":"ContainerStarted","Data":"3959aad58a8318dc9102553ae85a912f06348c456f581330b23a50a7f00d5ade"} Mar 11 09:16:17 crc kubenswrapper[4840]: I0311 09:16:17.678134 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-g2p7c" podStartSLOduration=15.307091991 podStartE2EDuration="26.67810683s" podCreationTimestamp="2026-03-11 09:15:51 +0000 UTC" firstStartedPulling="2026-03-11 09:16:04.811701992 +0000 UTC m=+1163.477371807" lastFinishedPulling="2026-03-11 09:16:16.182716831 +0000 UTC m=+1174.848386646" observedRunningTime="2026-03-11 09:16:17.676884889 +0000 UTC m=+1176.342554704" watchObservedRunningTime="2026-03-11 09:16:17.67810683 +0000 UTC m=+1176.343776645" Mar 11 09:16:17 crc kubenswrapper[4840]: I0311 09:16:17.724081 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=20.419672103 podStartE2EDuration="31.724058666s" podCreationTimestamp="2026-03-11 09:15:46 +0000 UTC" firstStartedPulling="2026-03-11 09:16:04.821400386 +0000 UTC m=+1163.487070201" lastFinishedPulling="2026-03-11 09:16:16.125786949 +0000 UTC m=+1174.791456764" observedRunningTime="2026-03-11 09:16:17.719734697 +0000 UTC m=+1176.385404512" watchObservedRunningTime="2026-03-11 09:16:17.724058666 +0000 UTC m=+1176.389728481" Mar 11 09:16:18 crc kubenswrapper[4840]: I0311 09:16:18.675581 4840 generic.go:334] "Generic (PLEG): container finished" podID="ff691b46-da15-4bfc-9b37-833a8cd599a9" containerID="acdaf4cfeaa1d7ace882f8a7e17f269cbd9c506764911549157e115d7a04a9c5" exitCode=0 Mar 11 09:16:18 crc kubenswrapper[4840]: I0311 09:16:18.676295 
4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c47bcb9f9-krwxw" event={"ID":"ff691b46-da15-4bfc-9b37-833a8cd599a9","Type":"ContainerDied","Data":"acdaf4cfeaa1d7ace882f8a7e17f269cbd9c506764911549157e115d7a04a9c5"} Mar 11 09:16:18 crc kubenswrapper[4840]: I0311 09:16:18.685185 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-qtcdv" event={"ID":"b58b63e6-0eb4-444e-be2e-dca6bf37030e","Type":"ContainerStarted","Data":"dbb5712b558407a1f6f6d13982b8b5ac8e7a4f375b85b99c3ab24d6067d8124d"} Mar 11 09:16:18 crc kubenswrapper[4840]: I0311 09:16:18.685251 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-qtcdv" event={"ID":"b58b63e6-0eb4-444e-be2e-dca6bf37030e","Type":"ContainerStarted","Data":"ab8019a8eaec3a085cd72641d3e46b29afc065b94fc74ebd4f5ad09f0611dd09"} Mar 11 09:16:18 crc kubenswrapper[4840]: I0311 09:16:18.725835 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-qtcdv" podStartSLOduration=16.357718704 podStartE2EDuration="27.725814579s" podCreationTimestamp="2026-03-11 09:15:51 +0000 UTC" firstStartedPulling="2026-03-11 09:16:04.817729194 +0000 UTC m=+1163.483399009" lastFinishedPulling="2026-03-11 09:16:16.185825069 +0000 UTC m=+1174.851494884" observedRunningTime="2026-03-11 09:16:18.722783912 +0000 UTC m=+1177.388453727" watchObservedRunningTime="2026-03-11 09:16:18.725814579 +0000 UTC m=+1177.391484394" Mar 11 09:16:19 crc kubenswrapper[4840]: I0311 09:16:19.704335 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-qtcdv" Mar 11 09:16:19 crc kubenswrapper[4840]: I0311 09:16:19.704859 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-qtcdv" Mar 11 09:16:20 crc kubenswrapper[4840]: I0311 09:16:20.713648 4840 generic.go:334] "Generic (PLEG): container finished" 
podID="c6f4129d-4bc4-449b-be94-82fce07cf1f0" containerID="864e18a4e761096f65b64998d9b212c7afd08411bbd197099d71b94005a7016a" exitCode=0 Mar 11 09:16:20 crc kubenswrapper[4840]: I0311 09:16:20.713732 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"c6f4129d-4bc4-449b-be94-82fce07cf1f0","Type":"ContainerDied","Data":"864e18a4e761096f65b64998d9b212c7afd08411bbd197099d71b94005a7016a"} Mar 11 09:16:20 crc kubenswrapper[4840]: I0311 09:16:20.719458 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c47bcb9f9-krwxw" event={"ID":"ff691b46-da15-4bfc-9b37-833a8cd599a9","Type":"ContainerStarted","Data":"362d5311a385a7f648a74f2db17c9845d9dea89170a1166f167a02a07029567e"} Mar 11 09:16:20 crc kubenswrapper[4840]: I0311 09:16:20.720274 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7c47bcb9f9-krwxw" Mar 11 09:16:20 crc kubenswrapper[4840]: I0311 09:16:20.723373 4840 generic.go:334] "Generic (PLEG): container finished" podID="e93bc954-4d65-4b5f-8070-2b9800ff3db2" containerID="90bda731ece0d646b8d0e358a1c93f0fa416106233b5751b9b744fa4d0a5ddc0" exitCode=0 Mar 11 09:16:20 crc kubenswrapper[4840]: I0311 09:16:20.723904 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"e93bc954-4d65-4b5f-8070-2b9800ff3db2","Type":"ContainerDied","Data":"90bda731ece0d646b8d0e358a1c93f0fa416106233b5751b9b744fa4d0a5ddc0"} Mar 11 09:16:20 crc kubenswrapper[4840]: I0311 09:16:20.753993 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7c47bcb9f9-krwxw" podStartSLOduration=4.541360943 podStartE2EDuration="38.753972957s" podCreationTimestamp="2026-03-11 09:15:42 +0000 UTC" firstStartedPulling="2026-03-11 09:15:43.278195485 +0000 UTC m=+1141.943865300" lastFinishedPulling="2026-03-11 09:16:17.490807499 +0000 UTC m=+1176.156477314" observedRunningTime="2026-03-11 
09:16:20.752557041 +0000 UTC m=+1179.418226856" watchObservedRunningTime="2026-03-11 09:16:20.753972957 +0000 UTC m=+1179.419642772" Mar 11 09:16:20 crc kubenswrapper[4840]: I0311 09:16:20.878856 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553676-kjd2f" Mar 11 09:16:21 crc kubenswrapper[4840]: I0311 09:16:21.000659 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xwhvt\" (UniqueName: \"kubernetes.io/projected/a085068d-adb7-44fc-9d8a-7b413ceeee17-kube-api-access-xwhvt\") pod \"a085068d-adb7-44fc-9d8a-7b413ceeee17\" (UID: \"a085068d-adb7-44fc-9d8a-7b413ceeee17\") " Mar 11 09:16:21 crc kubenswrapper[4840]: I0311 09:16:21.005345 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a085068d-adb7-44fc-9d8a-7b413ceeee17-kube-api-access-xwhvt" (OuterVolumeSpecName: "kube-api-access-xwhvt") pod "a085068d-adb7-44fc-9d8a-7b413ceeee17" (UID: "a085068d-adb7-44fc-9d8a-7b413ceeee17"). InnerVolumeSpecName "kube-api-access-xwhvt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:16:21 crc kubenswrapper[4840]: I0311 09:16:21.102937 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xwhvt\" (UniqueName: \"kubernetes.io/projected/a085068d-adb7-44fc-9d8a-7b413ceeee17-kube-api-access-xwhvt\") on node \"crc\" DevicePath \"\"" Mar 11 09:16:21 crc kubenswrapper[4840]: I0311 09:16:21.469803 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Mar 11 09:16:21 crc kubenswrapper[4840]: I0311 09:16:21.734166 4840 generic.go:334] "Generic (PLEG): container finished" podID="ad8faa32-f620-4ded-b606-0009b5a92cb8" containerID="b61f5c7d9297ef7beab0682709f86a5ccfcb5752c1261b06e5b562656791f789" exitCode=0 Mar 11 09:16:21 crc kubenswrapper[4840]: I0311 09:16:21.734232 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79f9fc56ff-r2nss" event={"ID":"ad8faa32-f620-4ded-b606-0009b5a92cb8","Type":"ContainerDied","Data":"b61f5c7d9297ef7beab0682709f86a5ccfcb5752c1261b06e5b562656791f789"} Mar 11 09:16:21 crc kubenswrapper[4840]: I0311 09:16:21.737939 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553676-kjd2f" Mar 11 09:16:21 crc kubenswrapper[4840]: I0311 09:16:21.737939 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553676-kjd2f" event={"ID":"a085068d-adb7-44fc-9d8a-7b413ceeee17","Type":"ContainerDied","Data":"aa74c7829420e392dce45d5aaaff91ec20793f06c8ce4affaefc411baf22bf52"} Mar 11 09:16:21 crc kubenswrapper[4840]: I0311 09:16:21.738092 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aa74c7829420e392dce45d5aaaff91ec20793f06c8ce4affaefc411baf22bf52" Mar 11 09:16:21 crc kubenswrapper[4840]: I0311 09:16:21.741885 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"0d076df9-9280-425c-9b61-bf84751f11c1","Type":"ContainerStarted","Data":"5df6036e4383bcc32cc40a5083113e13895062a09a380000d0d1fb0f5b2f30bb"} Mar 11 09:16:21 crc kubenswrapper[4840]: I0311 09:16:21.745204 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"c6f4129d-4bc4-449b-be94-82fce07cf1f0","Type":"ContainerStarted","Data":"25ef9fd444056788b6ce2758460268d8703cbbd6d97d9234bfee447123075141"} Mar 11 09:16:21 crc kubenswrapper[4840]: I0311 09:16:21.750184 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"b48aab55-15cb-42c4-a97b-692dbadf3353","Type":"ContainerStarted","Data":"10e0544be31ad9957ce1bc9464d8fd6efda2ed36676a4427c1dd1344bc4e1a58"} Mar 11 09:16:21 crc kubenswrapper[4840]: I0311 09:16:21.758437 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"e93bc954-4d65-4b5f-8070-2b9800ff3db2","Type":"ContainerStarted","Data":"b052e1bd7a58acb7f7eff7c22cb37c2b87847e0c994e60dd660321fe2b51b6c8"} Mar 11 09:16:21 crc kubenswrapper[4840]: I0311 09:16:21.779889 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/openstack-galera-0" podStartSLOduration=27.471438102 podStartE2EDuration="38.779863888s" podCreationTimestamp="2026-03-11 09:15:43 +0000 UTC" firstStartedPulling="2026-03-11 09:16:04.829630832 +0000 UTC m=+1163.495300647" lastFinishedPulling="2026-03-11 09:16:16.138056618 +0000 UTC m=+1174.803726433" observedRunningTime="2026-03-11 09:16:21.777288683 +0000 UTC m=+1180.442958498" watchObservedRunningTime="2026-03-11 09:16:21.779863888 +0000 UTC m=+1180.445533703" Mar 11 09:16:21 crc kubenswrapper[4840]: I0311 09:16:21.805627 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=15.83281565 podStartE2EDuration="31.805604785s" podCreationTimestamp="2026-03-11 09:15:50 +0000 UTC" firstStartedPulling="2026-03-11 09:16:04.975980061 +0000 UTC m=+1163.641649876" lastFinishedPulling="2026-03-11 09:16:20.948769186 +0000 UTC m=+1179.614439011" observedRunningTime="2026-03-11 09:16:21.795977243 +0000 UTC m=+1180.461647068" watchObservedRunningTime="2026-03-11 09:16:21.805604785 +0000 UTC m=+1180.471274600" Mar 11 09:16:21 crc kubenswrapper[4840]: I0311 09:16:21.826960 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=12.243284672 podStartE2EDuration="27.826935301s" podCreationTimestamp="2026-03-11 09:15:54 +0000 UTC" firstStartedPulling="2026-03-11 09:16:05.349369361 +0000 UTC m=+1164.015039176" lastFinishedPulling="2026-03-11 09:16:20.93301999 +0000 UTC m=+1179.598689805" observedRunningTime="2026-03-11 09:16:21.818003747 +0000 UTC m=+1180.483673562" watchObservedRunningTime="2026-03-11 09:16:21.826935301 +0000 UTC m=+1180.492605136" Mar 11 09:16:21 crc kubenswrapper[4840]: I0311 09:16:21.853712 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=23.053696323 podStartE2EDuration="37.853690123s" podCreationTimestamp="2026-03-11 09:15:44 
+0000 UTC" firstStartedPulling="2026-03-11 09:16:01.382834624 +0000 UTC m=+1160.048504449" lastFinishedPulling="2026-03-11 09:16:16.182828434 +0000 UTC m=+1174.848498249" observedRunningTime="2026-03-11 09:16:21.85194653 +0000 UTC m=+1180.517616345" watchObservedRunningTime="2026-03-11 09:16:21.853690123 +0000 UTC m=+1180.519359938" Mar 11 09:16:21 crc kubenswrapper[4840]: I0311 09:16:21.951896 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29553670-cd8gv"] Mar 11 09:16:21 crc kubenswrapper[4840]: I0311 09:16:21.959727 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29553670-cd8gv"] Mar 11 09:16:22 crc kubenswrapper[4840]: I0311 09:16:22.072180 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="037b8006-981a-4018-a446-b5c00007db43" path="/var/lib/kubelet/pods/037b8006-981a-4018-a446-b5c00007db43/volumes" Mar 11 09:16:22 crc kubenswrapper[4840]: I0311 09:16:22.176623 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Mar 11 09:16:22 crc kubenswrapper[4840]: I0311 09:16:22.176683 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Mar 11 09:16:22 crc kubenswrapper[4840]: I0311 09:16:22.242878 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Mar 11 09:16:22 crc kubenswrapper[4840]: I0311 09:16:22.768400 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79f9fc56ff-r2nss" event={"ID":"ad8faa32-f620-4ded-b606-0009b5a92cb8","Type":"ContainerStarted","Data":"27e5bfc0177c23c62203733ad850446a3cd74d8a6222283cbd77398aa132ab2f"} Mar 11 09:16:22 crc kubenswrapper[4840]: I0311 09:16:22.795644 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-79f9fc56ff-r2nss" podStartSLOduration=-9223371996.059181 
podStartE2EDuration="40.795594772s" podCreationTimestamp="2026-03-11 09:15:42 +0000 UTC" firstStartedPulling="2026-03-11 09:15:43.425518909 +0000 UTC m=+1142.091188724" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:16:22.786596306 +0000 UTC m=+1181.452266121" watchObservedRunningTime="2026-03-11 09:16:22.795594772 +0000 UTC m=+1181.461264607" Mar 11 09:16:22 crc kubenswrapper[4840]: I0311 09:16:22.814437 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Mar 11 09:16:23 crc kubenswrapper[4840]: I0311 09:16:23.118714 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79f9fc56ff-r2nss"] Mar 11 09:16:23 crc kubenswrapper[4840]: I0311 09:16:23.158720 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6444958b7f-kjg47"] Mar 11 09:16:23 crc kubenswrapper[4840]: E0311 09:16:23.159224 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a085068d-adb7-44fc-9d8a-7b413ceeee17" containerName="oc" Mar 11 09:16:23 crc kubenswrapper[4840]: I0311 09:16:23.159249 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="a085068d-adb7-44fc-9d8a-7b413ceeee17" containerName="oc" Mar 11 09:16:23 crc kubenswrapper[4840]: I0311 09:16:23.159497 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="a085068d-adb7-44fc-9d8a-7b413ceeee17" containerName="oc" Mar 11 09:16:23 crc kubenswrapper[4840]: I0311 09:16:23.160555 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6444958b7f-kjg47" Mar 11 09:16:23 crc kubenswrapper[4840]: I0311 09:16:23.162421 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Mar 11 09:16:23 crc kubenswrapper[4840]: I0311 09:16:23.172611 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6444958b7f-kjg47"] Mar 11 09:16:23 crc kubenswrapper[4840]: I0311 09:16:23.221497 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-6x72v"] Mar 11 09:16:23 crc kubenswrapper[4840]: I0311 09:16:23.222809 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-6x72v" Mar 11 09:16:23 crc kubenswrapper[4840]: I0311 09:16:23.229788 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Mar 11 09:16:23 crc kubenswrapper[4840]: I0311 09:16:23.232226 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-6x72v"] Mar 11 09:16:23 crc kubenswrapper[4840]: I0311 09:16:23.235634 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Mar 11 09:16:23 crc kubenswrapper[4840]: I0311 09:16:23.247297 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e8fbab86-0dd7-4671-9258-a4df9068699c-ovsdbserver-nb\") pod \"dnsmasq-dns-6444958b7f-kjg47\" (UID: \"e8fbab86-0dd7-4671-9258-a4df9068699c\") " pod="openstack/dnsmasq-dns-6444958b7f-kjg47" Mar 11 09:16:23 crc kubenswrapper[4840]: I0311 09:16:23.247379 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8fbab86-0dd7-4671-9258-a4df9068699c-config\") pod \"dnsmasq-dns-6444958b7f-kjg47\" (UID: 
\"e8fbab86-0dd7-4671-9258-a4df9068699c\") " pod="openstack/dnsmasq-dns-6444958b7f-kjg47" Mar 11 09:16:23 crc kubenswrapper[4840]: I0311 09:16:23.247409 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwr6j\" (UniqueName: \"kubernetes.io/projected/e8fbab86-0dd7-4671-9258-a4df9068699c-kube-api-access-rwr6j\") pod \"dnsmasq-dns-6444958b7f-kjg47\" (UID: \"e8fbab86-0dd7-4671-9258-a4df9068699c\") " pod="openstack/dnsmasq-dns-6444958b7f-kjg47" Mar 11 09:16:23 crc kubenswrapper[4840]: I0311 09:16:23.247437 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e8fbab86-0dd7-4671-9258-a4df9068699c-dns-svc\") pod \"dnsmasq-dns-6444958b7f-kjg47\" (UID: \"e8fbab86-0dd7-4671-9258-a4df9068699c\") " pod="openstack/dnsmasq-dns-6444958b7f-kjg47" Mar 11 09:16:23 crc kubenswrapper[4840]: I0311 09:16:23.295773 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Mar 11 09:16:23 crc kubenswrapper[4840]: I0311 09:16:23.349561 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/f0b554eb-06b2-4670-99df-9b4fcfc6a42f-ovs-rundir\") pod \"ovn-controller-metrics-6x72v\" (UID: \"f0b554eb-06b2-4670-99df-9b4fcfc6a42f\") " pod="openstack/ovn-controller-metrics-6x72v" Mar 11 09:16:23 crc kubenswrapper[4840]: I0311 09:16:23.349609 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e8fbab86-0dd7-4671-9258-a4df9068699c-dns-svc\") pod \"dnsmasq-dns-6444958b7f-kjg47\" (UID: \"e8fbab86-0dd7-4671-9258-a4df9068699c\") " pod="openstack/dnsmasq-dns-6444958b7f-kjg47" Mar 11 09:16:23 crc kubenswrapper[4840]: I0311 09:16:23.349632 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0b554eb-06b2-4670-99df-9b4fcfc6a42f-combined-ca-bundle\") pod \"ovn-controller-metrics-6x72v\" (UID: \"f0b554eb-06b2-4670-99df-9b4fcfc6a42f\") " pod="openstack/ovn-controller-metrics-6x72v" Mar 11 09:16:23 crc kubenswrapper[4840]: I0311 09:16:23.349759 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dc89\" (UniqueName: \"kubernetes.io/projected/f0b554eb-06b2-4670-99df-9b4fcfc6a42f-kube-api-access-4dc89\") pod \"ovn-controller-metrics-6x72v\" (UID: \"f0b554eb-06b2-4670-99df-9b4fcfc6a42f\") " pod="openstack/ovn-controller-metrics-6x72v" Mar 11 09:16:23 crc kubenswrapper[4840]: I0311 09:16:23.349910 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/f0b554eb-06b2-4670-99df-9b4fcfc6a42f-ovn-rundir\") pod \"ovn-controller-metrics-6x72v\" (UID: \"f0b554eb-06b2-4670-99df-9b4fcfc6a42f\") " pod="openstack/ovn-controller-metrics-6x72v" Mar 11 09:16:23 crc kubenswrapper[4840]: I0311 09:16:23.349966 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0b554eb-06b2-4670-99df-9b4fcfc6a42f-config\") pod \"ovn-controller-metrics-6x72v\" (UID: \"f0b554eb-06b2-4670-99df-9b4fcfc6a42f\") " pod="openstack/ovn-controller-metrics-6x72v" Mar 11 09:16:23 crc kubenswrapper[4840]: I0311 09:16:23.350042 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e8fbab86-0dd7-4671-9258-a4df9068699c-ovsdbserver-nb\") pod \"dnsmasq-dns-6444958b7f-kjg47\" (UID: \"e8fbab86-0dd7-4671-9258-a4df9068699c\") " pod="openstack/dnsmasq-dns-6444958b7f-kjg47" Mar 11 09:16:23 crc kubenswrapper[4840]: I0311 09:16:23.350157 4840 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8fbab86-0dd7-4671-9258-a4df9068699c-config\") pod \"dnsmasq-dns-6444958b7f-kjg47\" (UID: \"e8fbab86-0dd7-4671-9258-a4df9068699c\") " pod="openstack/dnsmasq-dns-6444958b7f-kjg47" Mar 11 09:16:23 crc kubenswrapper[4840]: I0311 09:16:23.350189 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwr6j\" (UniqueName: \"kubernetes.io/projected/e8fbab86-0dd7-4671-9258-a4df9068699c-kube-api-access-rwr6j\") pod \"dnsmasq-dns-6444958b7f-kjg47\" (UID: \"e8fbab86-0dd7-4671-9258-a4df9068699c\") " pod="openstack/dnsmasq-dns-6444958b7f-kjg47" Mar 11 09:16:23 crc kubenswrapper[4840]: I0311 09:16:23.350212 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0b554eb-06b2-4670-99df-9b4fcfc6a42f-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-6x72v\" (UID: \"f0b554eb-06b2-4670-99df-9b4fcfc6a42f\") " pod="openstack/ovn-controller-metrics-6x72v" Mar 11 09:16:23 crc kubenswrapper[4840]: I0311 09:16:23.350600 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e8fbab86-0dd7-4671-9258-a4df9068699c-dns-svc\") pod \"dnsmasq-dns-6444958b7f-kjg47\" (UID: \"e8fbab86-0dd7-4671-9258-a4df9068699c\") " pod="openstack/dnsmasq-dns-6444958b7f-kjg47" Mar 11 09:16:23 crc kubenswrapper[4840]: I0311 09:16:23.351124 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e8fbab86-0dd7-4671-9258-a4df9068699c-ovsdbserver-nb\") pod \"dnsmasq-dns-6444958b7f-kjg47\" (UID: \"e8fbab86-0dd7-4671-9258-a4df9068699c\") " pod="openstack/dnsmasq-dns-6444958b7f-kjg47" Mar 11 09:16:23 crc kubenswrapper[4840]: I0311 09:16:23.351198 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/e8fbab86-0dd7-4671-9258-a4df9068699c-config\") pod \"dnsmasq-dns-6444958b7f-kjg47\" (UID: \"e8fbab86-0dd7-4671-9258-a4df9068699c\") " pod="openstack/dnsmasq-dns-6444958b7f-kjg47" Mar 11 09:16:23 crc kubenswrapper[4840]: I0311 09:16:23.374690 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwr6j\" (UniqueName: \"kubernetes.io/projected/e8fbab86-0dd7-4671-9258-a4df9068699c-kube-api-access-rwr6j\") pod \"dnsmasq-dns-6444958b7f-kjg47\" (UID: \"e8fbab86-0dd7-4671-9258-a4df9068699c\") " pod="openstack/dnsmasq-dns-6444958b7f-kjg47" Mar 11 09:16:23 crc kubenswrapper[4840]: I0311 09:16:23.451428 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0b554eb-06b2-4670-99df-9b4fcfc6a42f-config\") pod \"ovn-controller-metrics-6x72v\" (UID: \"f0b554eb-06b2-4670-99df-9b4fcfc6a42f\") " pod="openstack/ovn-controller-metrics-6x72v" Mar 11 09:16:23 crc kubenswrapper[4840]: I0311 09:16:23.451598 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0b554eb-06b2-4670-99df-9b4fcfc6a42f-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-6x72v\" (UID: \"f0b554eb-06b2-4670-99df-9b4fcfc6a42f\") " pod="openstack/ovn-controller-metrics-6x72v" Mar 11 09:16:23 crc kubenswrapper[4840]: I0311 09:16:23.451667 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/f0b554eb-06b2-4670-99df-9b4fcfc6a42f-ovs-rundir\") pod \"ovn-controller-metrics-6x72v\" (UID: \"f0b554eb-06b2-4670-99df-9b4fcfc6a42f\") " pod="openstack/ovn-controller-metrics-6x72v" Mar 11 09:16:23 crc kubenswrapper[4840]: I0311 09:16:23.451692 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f0b554eb-06b2-4670-99df-9b4fcfc6a42f-combined-ca-bundle\") pod \"ovn-controller-metrics-6x72v\" (UID: \"f0b554eb-06b2-4670-99df-9b4fcfc6a42f\") " pod="openstack/ovn-controller-metrics-6x72v" Mar 11 09:16:23 crc kubenswrapper[4840]: I0311 09:16:23.451719 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dc89\" (UniqueName: \"kubernetes.io/projected/f0b554eb-06b2-4670-99df-9b4fcfc6a42f-kube-api-access-4dc89\") pod \"ovn-controller-metrics-6x72v\" (UID: \"f0b554eb-06b2-4670-99df-9b4fcfc6a42f\") " pod="openstack/ovn-controller-metrics-6x72v" Mar 11 09:16:23 crc kubenswrapper[4840]: I0311 09:16:23.451766 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/f0b554eb-06b2-4670-99df-9b4fcfc6a42f-ovn-rundir\") pod \"ovn-controller-metrics-6x72v\" (UID: \"f0b554eb-06b2-4670-99df-9b4fcfc6a42f\") " pod="openstack/ovn-controller-metrics-6x72v" Mar 11 09:16:23 crc kubenswrapper[4840]: I0311 09:16:23.452074 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/f0b554eb-06b2-4670-99df-9b4fcfc6a42f-ovn-rundir\") pod \"ovn-controller-metrics-6x72v\" (UID: \"f0b554eb-06b2-4670-99df-9b4fcfc6a42f\") " pod="openstack/ovn-controller-metrics-6x72v" Mar 11 09:16:23 crc kubenswrapper[4840]: I0311 09:16:23.452075 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/f0b554eb-06b2-4670-99df-9b4fcfc6a42f-ovs-rundir\") pod \"ovn-controller-metrics-6x72v\" (UID: \"f0b554eb-06b2-4670-99df-9b4fcfc6a42f\") " pod="openstack/ovn-controller-metrics-6x72v" Mar 11 09:16:23 crc kubenswrapper[4840]: I0311 09:16:23.452306 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0b554eb-06b2-4670-99df-9b4fcfc6a42f-config\") pod 
\"ovn-controller-metrics-6x72v\" (UID: \"f0b554eb-06b2-4670-99df-9b4fcfc6a42f\") " pod="openstack/ovn-controller-metrics-6x72v" Mar 11 09:16:23 crc kubenswrapper[4840]: I0311 09:16:23.455828 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0b554eb-06b2-4670-99df-9b4fcfc6a42f-combined-ca-bundle\") pod \"ovn-controller-metrics-6x72v\" (UID: \"f0b554eb-06b2-4670-99df-9b4fcfc6a42f\") " pod="openstack/ovn-controller-metrics-6x72v" Mar 11 09:16:23 crc kubenswrapper[4840]: I0311 09:16:23.470440 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0b554eb-06b2-4670-99df-9b4fcfc6a42f-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-6x72v\" (UID: \"f0b554eb-06b2-4670-99df-9b4fcfc6a42f\") " pod="openstack/ovn-controller-metrics-6x72v" Mar 11 09:16:23 crc kubenswrapper[4840]: I0311 09:16:23.477128 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dc89\" (UniqueName: \"kubernetes.io/projected/f0b554eb-06b2-4670-99df-9b4fcfc6a42f-kube-api-access-4dc89\") pod \"ovn-controller-metrics-6x72v\" (UID: \"f0b554eb-06b2-4670-99df-9b4fcfc6a42f\") " pod="openstack/ovn-controller-metrics-6x72v" Mar 11 09:16:23 crc kubenswrapper[4840]: I0311 09:16:23.485952 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6444958b7f-kjg47" Mar 11 09:16:23 crc kubenswrapper[4840]: I0311 09:16:23.519106 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c47bcb9f9-krwxw"] Mar 11 09:16:23 crc kubenswrapper[4840]: I0311 09:16:23.519365 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7c47bcb9f9-krwxw" podUID="ff691b46-da15-4bfc-9b37-833a8cd599a9" containerName="dnsmasq-dns" containerID="cri-o://362d5311a385a7f648a74f2db17c9845d9dea89170a1166f167a02a07029567e" gracePeriod=10 Mar 11 09:16:23 crc kubenswrapper[4840]: I0311 09:16:23.544505 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-6x72v" Mar 11 09:16:23 crc kubenswrapper[4840]: I0311 09:16:23.567695 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7b57d9888c-xvlx4"] Mar 11 09:16:23 crc kubenswrapper[4840]: I0311 09:16:23.575964 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7b57d9888c-xvlx4" Mar 11 09:16:23 crc kubenswrapper[4840]: I0311 09:16:23.578889 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Mar 11 09:16:23 crc kubenswrapper[4840]: I0311 09:16:23.587195 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7b57d9888c-xvlx4"] Mar 11 09:16:23 crc kubenswrapper[4840]: I0311 09:16:23.658131 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/51dd350a-e07a-49db-babe-419a6e23e271-dns-svc\") pod \"dnsmasq-dns-7b57d9888c-xvlx4\" (UID: \"51dd350a-e07a-49db-babe-419a6e23e271\") " pod="openstack/dnsmasq-dns-7b57d9888c-xvlx4" Mar 11 09:16:23 crc kubenswrapper[4840]: I0311 09:16:23.658202 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/51dd350a-e07a-49db-babe-419a6e23e271-ovsdbserver-nb\") pod \"dnsmasq-dns-7b57d9888c-xvlx4\" (UID: \"51dd350a-e07a-49db-babe-419a6e23e271\") " pod="openstack/dnsmasq-dns-7b57d9888c-xvlx4" Mar 11 09:16:23 crc kubenswrapper[4840]: I0311 09:16:23.658269 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/51dd350a-e07a-49db-babe-419a6e23e271-ovsdbserver-sb\") pod \"dnsmasq-dns-7b57d9888c-xvlx4\" (UID: \"51dd350a-e07a-49db-babe-419a6e23e271\") " pod="openstack/dnsmasq-dns-7b57d9888c-xvlx4" Mar 11 09:16:23 crc kubenswrapper[4840]: I0311 09:16:23.658303 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtswv\" (UniqueName: \"kubernetes.io/projected/51dd350a-e07a-49db-babe-419a6e23e271-kube-api-access-rtswv\") pod \"dnsmasq-dns-7b57d9888c-xvlx4\" (UID: \"51dd350a-e07a-49db-babe-419a6e23e271\") " 
pod="openstack/dnsmasq-dns-7b57d9888c-xvlx4" Mar 11 09:16:23 crc kubenswrapper[4840]: I0311 09:16:23.658335 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51dd350a-e07a-49db-babe-419a6e23e271-config\") pod \"dnsmasq-dns-7b57d9888c-xvlx4\" (UID: \"51dd350a-e07a-49db-babe-419a6e23e271\") " pod="openstack/dnsmasq-dns-7b57d9888c-xvlx4" Mar 11 09:16:23 crc kubenswrapper[4840]: I0311 09:16:23.760723 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51dd350a-e07a-49db-babe-419a6e23e271-config\") pod \"dnsmasq-dns-7b57d9888c-xvlx4\" (UID: \"51dd350a-e07a-49db-babe-419a6e23e271\") " pod="openstack/dnsmasq-dns-7b57d9888c-xvlx4" Mar 11 09:16:23 crc kubenswrapper[4840]: I0311 09:16:23.760839 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/51dd350a-e07a-49db-babe-419a6e23e271-dns-svc\") pod \"dnsmasq-dns-7b57d9888c-xvlx4\" (UID: \"51dd350a-e07a-49db-babe-419a6e23e271\") " pod="openstack/dnsmasq-dns-7b57d9888c-xvlx4" Mar 11 09:16:23 crc kubenswrapper[4840]: I0311 09:16:23.760888 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/51dd350a-e07a-49db-babe-419a6e23e271-ovsdbserver-nb\") pod \"dnsmasq-dns-7b57d9888c-xvlx4\" (UID: \"51dd350a-e07a-49db-babe-419a6e23e271\") " pod="openstack/dnsmasq-dns-7b57d9888c-xvlx4" Mar 11 09:16:23 crc kubenswrapper[4840]: I0311 09:16:23.760918 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/51dd350a-e07a-49db-babe-419a6e23e271-ovsdbserver-sb\") pod \"dnsmasq-dns-7b57d9888c-xvlx4\" (UID: \"51dd350a-e07a-49db-babe-419a6e23e271\") " pod="openstack/dnsmasq-dns-7b57d9888c-xvlx4" Mar 11 09:16:23 crc 
kubenswrapper[4840]: I0311 09:16:23.760955 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rtswv\" (UniqueName: \"kubernetes.io/projected/51dd350a-e07a-49db-babe-419a6e23e271-kube-api-access-rtswv\") pod \"dnsmasq-dns-7b57d9888c-xvlx4\" (UID: \"51dd350a-e07a-49db-babe-419a6e23e271\") " pod="openstack/dnsmasq-dns-7b57d9888c-xvlx4" Mar 11 09:16:23 crc kubenswrapper[4840]: I0311 09:16:23.762465 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51dd350a-e07a-49db-babe-419a6e23e271-config\") pod \"dnsmasq-dns-7b57d9888c-xvlx4\" (UID: \"51dd350a-e07a-49db-babe-419a6e23e271\") " pod="openstack/dnsmasq-dns-7b57d9888c-xvlx4" Mar 11 09:16:23 crc kubenswrapper[4840]: I0311 09:16:23.763066 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/51dd350a-e07a-49db-babe-419a6e23e271-ovsdbserver-nb\") pod \"dnsmasq-dns-7b57d9888c-xvlx4\" (UID: \"51dd350a-e07a-49db-babe-419a6e23e271\") " pod="openstack/dnsmasq-dns-7b57d9888c-xvlx4" Mar 11 09:16:23 crc kubenswrapper[4840]: I0311 09:16:23.766784 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/51dd350a-e07a-49db-babe-419a6e23e271-ovsdbserver-sb\") pod \"dnsmasq-dns-7b57d9888c-xvlx4\" (UID: \"51dd350a-e07a-49db-babe-419a6e23e271\") " pod="openstack/dnsmasq-dns-7b57d9888c-xvlx4" Mar 11 09:16:23 crc kubenswrapper[4840]: I0311 09:16:23.768506 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/51dd350a-e07a-49db-babe-419a6e23e271-dns-svc\") pod \"dnsmasq-dns-7b57d9888c-xvlx4\" (UID: \"51dd350a-e07a-49db-babe-419a6e23e271\") " pod="openstack/dnsmasq-dns-7b57d9888c-xvlx4" Mar 11 09:16:23 crc kubenswrapper[4840]: I0311 09:16:23.784825 4840 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-rtswv\" (UniqueName: \"kubernetes.io/projected/51dd350a-e07a-49db-babe-419a6e23e271-kube-api-access-rtswv\") pod \"dnsmasq-dns-7b57d9888c-xvlx4\" (UID: \"51dd350a-e07a-49db-babe-419a6e23e271\") " pod="openstack/dnsmasq-dns-7b57d9888c-xvlx4" Mar 11 09:16:23 crc kubenswrapper[4840]: I0311 09:16:23.787247 4840 generic.go:334] "Generic (PLEG): container finished" podID="ff691b46-da15-4bfc-9b37-833a8cd599a9" containerID="362d5311a385a7f648a74f2db17c9845d9dea89170a1166f167a02a07029567e" exitCode=0 Mar 11 09:16:23 crc kubenswrapper[4840]: I0311 09:16:23.788266 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c47bcb9f9-krwxw" event={"ID":"ff691b46-da15-4bfc-9b37-833a8cd599a9","Type":"ContainerDied","Data":"362d5311a385a7f648a74f2db17c9845d9dea89170a1166f167a02a07029567e"} Mar 11 09:16:23 crc kubenswrapper[4840]: I0311 09:16:23.789142 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-79f9fc56ff-r2nss" podUID="ad8faa32-f620-4ded-b606-0009b5a92cb8" containerName="dnsmasq-dns" containerID="cri-o://27e5bfc0177c23c62203733ad850446a3cd74d8a6222283cbd77398aa132ab2f" gracePeriod=10 Mar 11 09:16:23 crc kubenswrapper[4840]: I0311 09:16:23.789311 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Mar 11 09:16:23 crc kubenswrapper[4840]: I0311 09:16:23.789351 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-79f9fc56ff-r2nss" Mar 11 09:16:23 crc kubenswrapper[4840]: I0311 09:16:23.850840 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Mar 11 09:16:23 crc kubenswrapper[4840]: I0311 09:16:23.977259 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7b57d9888c-xvlx4" Mar 11 09:16:24 crc kubenswrapper[4840]: I0311 09:16:24.040165 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Mar 11 09:16:24 crc kubenswrapper[4840]: I0311 09:16:24.047854 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Mar 11 09:16:24 crc kubenswrapper[4840]: I0311 09:16:24.054301 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Mar 11 09:16:24 crc kubenswrapper[4840]: I0311 09:16:24.054584 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Mar 11 09:16:24 crc kubenswrapper[4840]: I0311 09:16:24.054710 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-5z5pd" Mar 11 09:16:24 crc kubenswrapper[4840]: I0311 09:16:24.054811 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Mar 11 09:16:24 crc kubenswrapper[4840]: I0311 09:16:24.079247 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Mar 11 09:16:24 crc kubenswrapper[4840]: I0311 09:16:24.102946 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6444958b7f-kjg47"] Mar 11 09:16:24 crc kubenswrapper[4840]: W0311 09:16:24.124068 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode8fbab86_0dd7_4671_9258_a4df9068699c.slice/crio-49f8fe6c2098f845c1d08dd6d482aa2afc13aeeb0022b358ff6e0afa5837efe9 WatchSource:0}: Error finding container 49f8fe6c2098f845c1d08dd6d482aa2afc13aeeb0022b358ff6e0afa5837efe9: Status 404 returned error can't find the container with id 49f8fe6c2098f845c1d08dd6d482aa2afc13aeeb0022b358ff6e0afa5837efe9 Mar 11 09:16:24 crc kubenswrapper[4840]: I0311 09:16:24.140041 4840 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c47bcb9f9-krwxw" Mar 11 09:16:24 crc kubenswrapper[4840]: I0311 09:16:24.170797 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1b0d4c8e-fe4e-4ca7-8cfe-1d4fb97c3ac6-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"1b0d4c8e-fe4e-4ca7-8cfe-1d4fb97c3ac6\") " pod="openstack/ovn-northd-0" Mar 11 09:16:24 crc kubenswrapper[4840]: I0311 09:16:24.170860 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cc597\" (UniqueName: \"kubernetes.io/projected/1b0d4c8e-fe4e-4ca7-8cfe-1d4fb97c3ac6-kube-api-access-cc597\") pod \"ovn-northd-0\" (UID: \"1b0d4c8e-fe4e-4ca7-8cfe-1d4fb97c3ac6\") " pod="openstack/ovn-northd-0" Mar 11 09:16:24 crc kubenswrapper[4840]: I0311 09:16:24.170894 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b0d4c8e-fe4e-4ca7-8cfe-1d4fb97c3ac6-config\") pod \"ovn-northd-0\" (UID: \"1b0d4c8e-fe4e-4ca7-8cfe-1d4fb97c3ac6\") " pod="openstack/ovn-northd-0" Mar 11 09:16:24 crc kubenswrapper[4840]: I0311 09:16:24.170945 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b0d4c8e-fe4e-4ca7-8cfe-1d4fb97c3ac6-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"1b0d4c8e-fe4e-4ca7-8cfe-1d4fb97c3ac6\") " pod="openstack/ovn-northd-0" Mar 11 09:16:24 crc kubenswrapper[4840]: I0311 09:16:24.171012 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b0d4c8e-fe4e-4ca7-8cfe-1d4fb97c3ac6-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"1b0d4c8e-fe4e-4ca7-8cfe-1d4fb97c3ac6\") " pod="openstack/ovn-northd-0" Mar 
11 09:16:24 crc kubenswrapper[4840]: I0311 09:16:24.171104 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b0d4c8e-fe4e-4ca7-8cfe-1d4fb97c3ac6-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"1b0d4c8e-fe4e-4ca7-8cfe-1d4fb97c3ac6\") " pod="openstack/ovn-northd-0" Mar 11 09:16:24 crc kubenswrapper[4840]: I0311 09:16:24.171133 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1b0d4c8e-fe4e-4ca7-8cfe-1d4fb97c3ac6-scripts\") pod \"ovn-northd-0\" (UID: \"1b0d4c8e-fe4e-4ca7-8cfe-1d4fb97c3ac6\") " pod="openstack/ovn-northd-0" Mar 11 09:16:24 crc kubenswrapper[4840]: I0311 09:16:24.222991 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-6x72v"] Mar 11 09:16:24 crc kubenswrapper[4840]: I0311 09:16:24.272209 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vvssb\" (UniqueName: \"kubernetes.io/projected/ff691b46-da15-4bfc-9b37-833a8cd599a9-kube-api-access-vvssb\") pod \"ff691b46-da15-4bfc-9b37-833a8cd599a9\" (UID: \"ff691b46-da15-4bfc-9b37-833a8cd599a9\") " Mar 11 09:16:24 crc kubenswrapper[4840]: I0311 09:16:24.272332 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ff691b46-da15-4bfc-9b37-833a8cd599a9-dns-svc\") pod \"ff691b46-da15-4bfc-9b37-833a8cd599a9\" (UID: \"ff691b46-da15-4bfc-9b37-833a8cd599a9\") " Mar 11 09:16:24 crc kubenswrapper[4840]: I0311 09:16:24.272390 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff691b46-da15-4bfc-9b37-833a8cd599a9-config\") pod \"ff691b46-da15-4bfc-9b37-833a8cd599a9\" (UID: \"ff691b46-da15-4bfc-9b37-833a8cd599a9\") " Mar 11 09:16:24 crc kubenswrapper[4840]: I0311 
09:16:24.272774 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cc597\" (UniqueName: \"kubernetes.io/projected/1b0d4c8e-fe4e-4ca7-8cfe-1d4fb97c3ac6-kube-api-access-cc597\") pod \"ovn-northd-0\" (UID: \"1b0d4c8e-fe4e-4ca7-8cfe-1d4fb97c3ac6\") " pod="openstack/ovn-northd-0" Mar 11 09:16:24 crc kubenswrapper[4840]: I0311 09:16:24.272821 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b0d4c8e-fe4e-4ca7-8cfe-1d4fb97c3ac6-config\") pod \"ovn-northd-0\" (UID: \"1b0d4c8e-fe4e-4ca7-8cfe-1d4fb97c3ac6\") " pod="openstack/ovn-northd-0" Mar 11 09:16:24 crc kubenswrapper[4840]: I0311 09:16:24.272864 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b0d4c8e-fe4e-4ca7-8cfe-1d4fb97c3ac6-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"1b0d4c8e-fe4e-4ca7-8cfe-1d4fb97c3ac6\") " pod="openstack/ovn-northd-0" Mar 11 09:16:24 crc kubenswrapper[4840]: I0311 09:16:24.272908 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b0d4c8e-fe4e-4ca7-8cfe-1d4fb97c3ac6-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"1b0d4c8e-fe4e-4ca7-8cfe-1d4fb97c3ac6\") " pod="openstack/ovn-northd-0" Mar 11 09:16:24 crc kubenswrapper[4840]: I0311 09:16:24.272984 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b0d4c8e-fe4e-4ca7-8cfe-1d4fb97c3ac6-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"1b0d4c8e-fe4e-4ca7-8cfe-1d4fb97c3ac6\") " pod="openstack/ovn-northd-0" Mar 11 09:16:24 crc kubenswrapper[4840]: I0311 09:16:24.273016 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/1b0d4c8e-fe4e-4ca7-8cfe-1d4fb97c3ac6-scripts\") pod \"ovn-northd-0\" (UID: \"1b0d4c8e-fe4e-4ca7-8cfe-1d4fb97c3ac6\") " pod="openstack/ovn-northd-0" Mar 11 09:16:24 crc kubenswrapper[4840]: I0311 09:16:24.273069 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1b0d4c8e-fe4e-4ca7-8cfe-1d4fb97c3ac6-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"1b0d4c8e-fe4e-4ca7-8cfe-1d4fb97c3ac6\") " pod="openstack/ovn-northd-0" Mar 11 09:16:24 crc kubenswrapper[4840]: I0311 09:16:24.273700 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1b0d4c8e-fe4e-4ca7-8cfe-1d4fb97c3ac6-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"1b0d4c8e-fe4e-4ca7-8cfe-1d4fb97c3ac6\") " pod="openstack/ovn-northd-0" Mar 11 09:16:24 crc kubenswrapper[4840]: I0311 09:16:24.276102 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1b0d4c8e-fe4e-4ca7-8cfe-1d4fb97c3ac6-scripts\") pod \"ovn-northd-0\" (UID: \"1b0d4c8e-fe4e-4ca7-8cfe-1d4fb97c3ac6\") " pod="openstack/ovn-northd-0" Mar 11 09:16:24 crc kubenswrapper[4840]: I0311 09:16:24.278571 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b0d4c8e-fe4e-4ca7-8cfe-1d4fb97c3ac6-config\") pod \"ovn-northd-0\" (UID: \"1b0d4c8e-fe4e-4ca7-8cfe-1d4fb97c3ac6\") " pod="openstack/ovn-northd-0" Mar 11 09:16:24 crc kubenswrapper[4840]: I0311 09:16:24.280115 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b0d4c8e-fe4e-4ca7-8cfe-1d4fb97c3ac6-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"1b0d4c8e-fe4e-4ca7-8cfe-1d4fb97c3ac6\") " pod="openstack/ovn-northd-0" Mar 11 09:16:24 crc kubenswrapper[4840]: I0311 09:16:24.292097 4840 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff691b46-da15-4bfc-9b37-833a8cd599a9-kube-api-access-vvssb" (OuterVolumeSpecName: "kube-api-access-vvssb") pod "ff691b46-da15-4bfc-9b37-833a8cd599a9" (UID: "ff691b46-da15-4bfc-9b37-833a8cd599a9"). InnerVolumeSpecName "kube-api-access-vvssb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:16:24 crc kubenswrapper[4840]: I0311 09:16:24.292247 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b0d4c8e-fe4e-4ca7-8cfe-1d4fb97c3ac6-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"1b0d4c8e-fe4e-4ca7-8cfe-1d4fb97c3ac6\") " pod="openstack/ovn-northd-0" Mar 11 09:16:24 crc kubenswrapper[4840]: I0311 09:16:24.293088 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b0d4c8e-fe4e-4ca7-8cfe-1d4fb97c3ac6-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"1b0d4c8e-fe4e-4ca7-8cfe-1d4fb97c3ac6\") " pod="openstack/ovn-northd-0" Mar 11 09:16:24 crc kubenswrapper[4840]: I0311 09:16:24.300656 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cc597\" (UniqueName: \"kubernetes.io/projected/1b0d4c8e-fe4e-4ca7-8cfe-1d4fb97c3ac6-kube-api-access-cc597\") pod \"ovn-northd-0\" (UID: \"1b0d4c8e-fe4e-4ca7-8cfe-1d4fb97c3ac6\") " pod="openstack/ovn-northd-0" Mar 11 09:16:24 crc kubenswrapper[4840]: I0311 09:16:24.374861 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vvssb\" (UniqueName: \"kubernetes.io/projected/ff691b46-da15-4bfc-9b37-833a8cd599a9-kube-api-access-vvssb\") on node \"crc\" DevicePath \"\"" Mar 11 09:16:24 crc kubenswrapper[4840]: I0311 09:16:24.403902 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff691b46-da15-4bfc-9b37-833a8cd599a9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod 
"ff691b46-da15-4bfc-9b37-833a8cd599a9" (UID: "ff691b46-da15-4bfc-9b37-833a8cd599a9"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:16:24 crc kubenswrapper[4840]: I0311 09:16:24.405914 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Mar 11 09:16:24 crc kubenswrapper[4840]: I0311 09:16:24.419512 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff691b46-da15-4bfc-9b37-833a8cd599a9-config" (OuterVolumeSpecName: "config") pod "ff691b46-da15-4bfc-9b37-833a8cd599a9" (UID: "ff691b46-da15-4bfc-9b37-833a8cd599a9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:16:24 crc kubenswrapper[4840]: I0311 09:16:24.477737 4840 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ff691b46-da15-4bfc-9b37-833a8cd599a9-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 11 09:16:24 crc kubenswrapper[4840]: I0311 09:16:24.478223 4840 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff691b46-da15-4bfc-9b37-833a8cd599a9-config\") on node \"crc\" DevicePath \"\"" Mar 11 09:16:24 crc kubenswrapper[4840]: I0311 09:16:24.589551 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79f9fc56ff-r2nss" Mar 11 09:16:24 crc kubenswrapper[4840]: I0311 09:16:24.619559 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7b57d9888c-xvlx4"] Mar 11 09:16:24 crc kubenswrapper[4840]: W0311 09:16:24.659427 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod51dd350a_e07a_49db_babe_419a6e23e271.slice/crio-6ea339deaf35390394fdfc07d305f5dc33ad856271801abaa248605d08460331 WatchSource:0}: Error finding container 6ea339deaf35390394fdfc07d305f5dc33ad856271801abaa248605d08460331: Status 404 returned error can't find the container with id 6ea339deaf35390394fdfc07d305f5dc33ad856271801abaa248605d08460331 Mar 11 09:16:24 crc kubenswrapper[4840]: I0311 09:16:24.689969 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ad8faa32-f620-4ded-b606-0009b5a92cb8-dns-svc\") pod \"ad8faa32-f620-4ded-b606-0009b5a92cb8\" (UID: \"ad8faa32-f620-4ded-b606-0009b5a92cb8\") " Mar 11 09:16:24 crc kubenswrapper[4840]: I0311 09:16:24.690025 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad8faa32-f620-4ded-b606-0009b5a92cb8-config\") pod \"ad8faa32-f620-4ded-b606-0009b5a92cb8\" (UID: \"ad8faa32-f620-4ded-b606-0009b5a92cb8\") " Mar 11 09:16:24 crc kubenswrapper[4840]: I0311 09:16:24.690147 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2m8vv\" (UniqueName: \"kubernetes.io/projected/ad8faa32-f620-4ded-b606-0009b5a92cb8-kube-api-access-2m8vv\") pod \"ad8faa32-f620-4ded-b606-0009b5a92cb8\" (UID: \"ad8faa32-f620-4ded-b606-0009b5a92cb8\") " Mar 11 09:16:24 crc kubenswrapper[4840]: I0311 09:16:24.711404 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/ad8faa32-f620-4ded-b606-0009b5a92cb8-kube-api-access-2m8vv" (OuterVolumeSpecName: "kube-api-access-2m8vv") pod "ad8faa32-f620-4ded-b606-0009b5a92cb8" (UID: "ad8faa32-f620-4ded-b606-0009b5a92cb8"). InnerVolumeSpecName "kube-api-access-2m8vv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:16:24 crc kubenswrapper[4840]: I0311 09:16:24.754453 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad8faa32-f620-4ded-b606-0009b5a92cb8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ad8faa32-f620-4ded-b606-0009b5a92cb8" (UID: "ad8faa32-f620-4ded-b606-0009b5a92cb8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:16:24 crc kubenswrapper[4840]: I0311 09:16:24.757878 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad8faa32-f620-4ded-b606-0009b5a92cb8-config" (OuterVolumeSpecName: "config") pod "ad8faa32-f620-4ded-b606-0009b5a92cb8" (UID: "ad8faa32-f620-4ded-b606-0009b5a92cb8"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:16:24 crc kubenswrapper[4840]: I0311 09:16:24.791605 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2m8vv\" (UniqueName: \"kubernetes.io/projected/ad8faa32-f620-4ded-b606-0009b5a92cb8-kube-api-access-2m8vv\") on node \"crc\" DevicePath \"\"" Mar 11 09:16:24 crc kubenswrapper[4840]: I0311 09:16:24.791629 4840 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ad8faa32-f620-4ded-b606-0009b5a92cb8-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 11 09:16:24 crc kubenswrapper[4840]: I0311 09:16:24.791639 4840 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad8faa32-f620-4ded-b606-0009b5a92cb8-config\") on node \"crc\" DevicePath \"\"" Mar 11 09:16:24 crc kubenswrapper[4840]: I0311 09:16:24.802304 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-6x72v" event={"ID":"f0b554eb-06b2-4670-99df-9b4fcfc6a42f","Type":"ContainerStarted","Data":"663786306d4268a4dc766ed8fb9c44b26423f4bc146d98039618216faacd06ee"} Mar 11 09:16:24 crc kubenswrapper[4840]: I0311 09:16:24.802361 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-6x72v" event={"ID":"f0b554eb-06b2-4670-99df-9b4fcfc6a42f","Type":"ContainerStarted","Data":"0c663ab073f92da3912363efc0522d911b62b6a9415959263b09dbb8ca419de0"} Mar 11 09:16:24 crc kubenswrapper[4840]: I0311 09:16:24.806452 4840 generic.go:334] "Generic (PLEG): container finished" podID="ad8faa32-f620-4ded-b606-0009b5a92cb8" containerID="27e5bfc0177c23c62203733ad850446a3cd74d8a6222283cbd77398aa132ab2f" exitCode=0 Mar 11 09:16:24 crc kubenswrapper[4840]: I0311 09:16:24.806700 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79f9fc56ff-r2nss" 
event={"ID":"ad8faa32-f620-4ded-b606-0009b5a92cb8","Type":"ContainerDied","Data":"27e5bfc0177c23c62203733ad850446a3cd74d8a6222283cbd77398aa132ab2f"} Mar 11 09:16:24 crc kubenswrapper[4840]: I0311 09:16:24.806744 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79f9fc56ff-r2nss" event={"ID":"ad8faa32-f620-4ded-b606-0009b5a92cb8","Type":"ContainerDied","Data":"0c7272e641e20d83077264f1a97810bff882fc8c725a7410984c968d88e33c65"} Mar 11 09:16:24 crc kubenswrapper[4840]: I0311 09:16:24.806767 4840 scope.go:117] "RemoveContainer" containerID="27e5bfc0177c23c62203733ad850446a3cd74d8a6222283cbd77398aa132ab2f" Mar 11 09:16:24 crc kubenswrapper[4840]: I0311 09:16:24.806950 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79f9fc56ff-r2nss" Mar 11 09:16:24 crc kubenswrapper[4840]: I0311 09:16:24.808739 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b57d9888c-xvlx4" event={"ID":"51dd350a-e07a-49db-babe-419a6e23e271","Type":"ContainerStarted","Data":"6ea339deaf35390394fdfc07d305f5dc33ad856271801abaa248605d08460331"} Mar 11 09:16:24 crc kubenswrapper[4840]: I0311 09:16:24.816122 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c47bcb9f9-krwxw" event={"ID":"ff691b46-da15-4bfc-9b37-833a8cd599a9","Type":"ContainerDied","Data":"5ba5048199e31865dbf953275305d91ff3219e702305451fcfa5b05d2d2e5306"} Mar 11 09:16:24 crc kubenswrapper[4840]: I0311 09:16:24.816220 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7c47bcb9f9-krwxw" Mar 11 09:16:24 crc kubenswrapper[4840]: I0311 09:16:24.823847 4840 generic.go:334] "Generic (PLEG): container finished" podID="e8fbab86-0dd7-4671-9258-a4df9068699c" containerID="6fa05a2c0cb379aa6773fc67393d5927a4f3becfa84a4ed2d0595ef2559ab872" exitCode=0 Mar 11 09:16:24 crc kubenswrapper[4840]: I0311 09:16:24.826124 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6444958b7f-kjg47" event={"ID":"e8fbab86-0dd7-4671-9258-a4df9068699c","Type":"ContainerDied","Data":"6fa05a2c0cb379aa6773fc67393d5927a4f3becfa84a4ed2d0595ef2559ab872"} Mar 11 09:16:24 crc kubenswrapper[4840]: I0311 09:16:24.826192 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6444958b7f-kjg47" event={"ID":"e8fbab86-0dd7-4671-9258-a4df9068699c","Type":"ContainerStarted","Data":"49f8fe6c2098f845c1d08dd6d482aa2afc13aeeb0022b358ff6e0afa5837efe9"} Mar 11 09:16:24 crc kubenswrapper[4840]: I0311 09:16:24.865404 4840 scope.go:117] "RemoveContainer" containerID="b61f5c7d9297ef7beab0682709f86a5ccfcb5752c1261b06e5b562656791f789" Mar 11 09:16:24 crc kubenswrapper[4840]: I0311 09:16:24.890151 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-6x72v" podStartSLOduration=1.890128019 podStartE2EDuration="1.890128019s" podCreationTimestamp="2026-03-11 09:16:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:16:24.822532429 +0000 UTC m=+1183.488202234" watchObservedRunningTime="2026-03-11 09:16:24.890128019 +0000 UTC m=+1183.555797834" Mar 11 09:16:24 crc kubenswrapper[4840]: I0311 09:16:24.890406 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79f9fc56ff-r2nss"] Mar 11 09:16:24 crc kubenswrapper[4840]: I0311 09:16:24.913380 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/dnsmasq-dns-79f9fc56ff-r2nss"] Mar 11 09:16:24 crc kubenswrapper[4840]: I0311 09:16:24.929765 4840 scope.go:117] "RemoveContainer" containerID="27e5bfc0177c23c62203733ad850446a3cd74d8a6222283cbd77398aa132ab2f" Mar 11 09:16:24 crc kubenswrapper[4840]: E0311 09:16:24.934260 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"27e5bfc0177c23c62203733ad850446a3cd74d8a6222283cbd77398aa132ab2f\": container with ID starting with 27e5bfc0177c23c62203733ad850446a3cd74d8a6222283cbd77398aa132ab2f not found: ID does not exist" containerID="27e5bfc0177c23c62203733ad850446a3cd74d8a6222283cbd77398aa132ab2f" Mar 11 09:16:24 crc kubenswrapper[4840]: I0311 09:16:24.934500 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27e5bfc0177c23c62203733ad850446a3cd74d8a6222283cbd77398aa132ab2f"} err="failed to get container status \"27e5bfc0177c23c62203733ad850446a3cd74d8a6222283cbd77398aa132ab2f\": rpc error: code = NotFound desc = could not find container \"27e5bfc0177c23c62203733ad850446a3cd74d8a6222283cbd77398aa132ab2f\": container with ID starting with 27e5bfc0177c23c62203733ad850446a3cd74d8a6222283cbd77398aa132ab2f not found: ID does not exist" Mar 11 09:16:24 crc kubenswrapper[4840]: I0311 09:16:24.934589 4840 scope.go:117] "RemoveContainer" containerID="b61f5c7d9297ef7beab0682709f86a5ccfcb5752c1261b06e5b562656791f789" Mar 11 09:16:24 crc kubenswrapper[4840]: E0311 09:16:24.934919 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b61f5c7d9297ef7beab0682709f86a5ccfcb5752c1261b06e5b562656791f789\": container with ID starting with b61f5c7d9297ef7beab0682709f86a5ccfcb5752c1261b06e5b562656791f789 not found: ID does not exist" containerID="b61f5c7d9297ef7beab0682709f86a5ccfcb5752c1261b06e5b562656791f789" Mar 11 09:16:24 crc kubenswrapper[4840]: I0311 09:16:24.934945 4840 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b61f5c7d9297ef7beab0682709f86a5ccfcb5752c1261b06e5b562656791f789"} err="failed to get container status \"b61f5c7d9297ef7beab0682709f86a5ccfcb5752c1261b06e5b562656791f789\": rpc error: code = NotFound desc = could not find container \"b61f5c7d9297ef7beab0682709f86a5ccfcb5752c1261b06e5b562656791f789\": container with ID starting with b61f5c7d9297ef7beab0682709f86a5ccfcb5752c1261b06e5b562656791f789 not found: ID does not exist" Mar 11 09:16:24 crc kubenswrapper[4840]: I0311 09:16:24.934962 4840 scope.go:117] "RemoveContainer" containerID="362d5311a385a7f648a74f2db17c9845d9dea89170a1166f167a02a07029567e" Mar 11 09:16:24 crc kubenswrapper[4840]: I0311 09:16:24.957416 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c47bcb9f9-krwxw"] Mar 11 09:16:24 crc kubenswrapper[4840]: I0311 09:16:24.965626 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7c47bcb9f9-krwxw"] Mar 11 09:16:24 crc kubenswrapper[4840]: I0311 09:16:24.975402 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Mar 11 09:16:25 crc kubenswrapper[4840]: W0311 09:16:25.007745 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1b0d4c8e_fe4e_4ca7_8cfe_1d4fb97c3ac6.slice/crio-5cfb05a3cd6aac51ef732971ab79ab60f75029cd71dda360f49ad1b9a2d0edc2 WatchSource:0}: Error finding container 5cfb05a3cd6aac51ef732971ab79ab60f75029cd71dda360f49ad1b9a2d0edc2: Status 404 returned error can't find the container with id 5cfb05a3cd6aac51ef732971ab79ab60f75029cd71dda360f49ad1b9a2d0edc2 Mar 11 09:16:25 crc kubenswrapper[4840]: I0311 09:16:25.021222 4840 scope.go:117] "RemoveContainer" containerID="acdaf4cfeaa1d7ace882f8a7e17f269cbd9c506764911549157e115d7a04a9c5" Mar 11 09:16:25 crc kubenswrapper[4840]: I0311 09:16:25.042908 4840 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Mar 11 09:16:25 crc kubenswrapper[4840]: I0311 09:16:25.042952 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Mar 11 09:16:25 crc kubenswrapper[4840]: I0311 09:16:25.835696 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6444958b7f-kjg47" event={"ID":"e8fbab86-0dd7-4671-9258-a4df9068699c","Type":"ContainerStarted","Data":"93b47c2b09da48e6911be72dd763be7bfbccb1391132bbc89b3d810ec644ad4b"} Mar 11 09:16:25 crc kubenswrapper[4840]: I0311 09:16:25.836175 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6444958b7f-kjg47" Mar 11 09:16:25 crc kubenswrapper[4840]: I0311 09:16:25.848149 4840 generic.go:334] "Generic (PLEG): container finished" podID="51dd350a-e07a-49db-babe-419a6e23e271" containerID="0756c34c4bbf430129f8132fddfe578fb4d973c5a56e788f369e2843e4b7176a" exitCode=0 Mar 11 09:16:25 crc kubenswrapper[4840]: I0311 09:16:25.848236 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b57d9888c-xvlx4" event={"ID":"51dd350a-e07a-49db-babe-419a6e23e271","Type":"ContainerDied","Data":"0756c34c4bbf430129f8132fddfe578fb4d973c5a56e788f369e2843e4b7176a"} Mar 11 09:16:25 crc kubenswrapper[4840]: I0311 09:16:25.849389 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"1b0d4c8e-fe4e-4ca7-8cfe-1d4fb97c3ac6","Type":"ContainerStarted","Data":"5cfb05a3cd6aac51ef732971ab79ab60f75029cd71dda360f49ad1b9a2d0edc2"} Mar 11 09:16:25 crc kubenswrapper[4840]: I0311 09:16:25.865668 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6444958b7f-kjg47" podStartSLOduration=2.865625002 podStartE2EDuration="2.865625002s" podCreationTimestamp="2026-03-11 09:16:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2026-03-11 09:16:25.854763359 +0000 UTC m=+1184.520433174" watchObservedRunningTime="2026-03-11 09:16:25.865625002 +0000 UTC m=+1184.531294817" Mar 11 09:16:26 crc kubenswrapper[4840]: I0311 09:16:26.071172 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad8faa32-f620-4ded-b606-0009b5a92cb8" path="/var/lib/kubelet/pods/ad8faa32-f620-4ded-b606-0009b5a92cb8/volumes" Mar 11 09:16:26 crc kubenswrapper[4840]: I0311 09:16:26.071909 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff691b46-da15-4bfc-9b37-833a8cd599a9" path="/var/lib/kubelet/pods/ff691b46-da15-4bfc-9b37-833a8cd599a9/volumes" Mar 11 09:16:26 crc kubenswrapper[4840]: I0311 09:16:26.281551 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Mar 11 09:16:26 crc kubenswrapper[4840]: I0311 09:16:26.282015 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Mar 11 09:16:26 crc kubenswrapper[4840]: I0311 09:16:26.863027 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b57d9888c-xvlx4" event={"ID":"51dd350a-e07a-49db-babe-419a6e23e271","Type":"ContainerStarted","Data":"2bebde5ea6ba31adfbf9a34327c68ff63b9f596992027cfe9c2c5849d6a2d624"} Mar 11 09:16:26 crc kubenswrapper[4840]: I0311 09:16:26.863501 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7b57d9888c-xvlx4" Mar 11 09:16:26 crc kubenswrapper[4840]: I0311 09:16:26.865511 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"1b0d4c8e-fe4e-4ca7-8cfe-1d4fb97c3ac6","Type":"ContainerStarted","Data":"a22560a02062c00ff0e4a3b9e873bd27ab71b2325c47df15e235ac0992d9117c"} Mar 11 09:16:26 crc kubenswrapper[4840]: I0311 09:16:26.869531 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" 
event={"ID":"1b0d4c8e-fe4e-4ca7-8cfe-1d4fb97c3ac6","Type":"ContainerStarted","Data":"7adedc06633a9d0c8638d09c8c1e960d90f82138b050dce1fb05836daea4ad71"} Mar 11 09:16:26 crc kubenswrapper[4840]: I0311 09:16:26.882313 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7b57d9888c-xvlx4" podStartSLOduration=3.882296031 podStartE2EDuration="3.882296031s" podCreationTimestamp="2026-03-11 09:16:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:16:26.881662655 +0000 UTC m=+1185.547332470" watchObservedRunningTime="2026-03-11 09:16:26.882296031 +0000 UTC m=+1185.547965846" Mar 11 09:16:26 crc kubenswrapper[4840]: I0311 09:16:26.902909 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=1.7020709200000002 podStartE2EDuration="2.902893209s" podCreationTimestamp="2026-03-11 09:16:24 +0000 UTC" firstStartedPulling="2026-03-11 09:16:25.021286378 +0000 UTC m=+1183.686956183" lastFinishedPulling="2026-03-11 09:16:26.222108667 +0000 UTC m=+1184.887778472" observedRunningTime="2026-03-11 09:16:26.900314234 +0000 UTC m=+1185.565984049" watchObservedRunningTime="2026-03-11 09:16:26.902893209 +0000 UTC m=+1185.568563024" Mar 11 09:16:27 crc kubenswrapper[4840]: E0311 09:16:27.204383 4840 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.30:48536->38.102.83.30:45639: write tcp 38.102.83.30:48536->38.102.83.30:45639: write: broken pipe Mar 11 09:16:27 crc kubenswrapper[4840]: I0311 09:16:27.446197 4840 patch_prober.go:28] interesting pod/machine-config-daemon-brtht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 11 09:16:27 crc kubenswrapper[4840]: I0311 
09:16:27.446259 4840 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 11 09:16:27 crc kubenswrapper[4840]: I0311 09:16:27.802924 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Mar 11 09:16:27 crc kubenswrapper[4840]: I0311 09:16:27.872254 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Mar 11 09:16:27 crc kubenswrapper[4840]: I0311 09:16:27.893490 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Mar 11 09:16:28 crc kubenswrapper[4840]: I0311 09:16:28.590635 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Mar 11 09:16:28 crc kubenswrapper[4840]: I0311 09:16:28.677521 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Mar 11 09:16:28 crc kubenswrapper[4840]: I0311 09:16:28.811293 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Mar 11 09:16:28 crc kubenswrapper[4840]: I0311 09:16:28.948269 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6444958b7f-kjg47"] Mar 11 09:16:28 crc kubenswrapper[4840]: I0311 09:16:28.949412 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6444958b7f-kjg47" podUID="e8fbab86-0dd7-4671-9258-a4df9068699c" containerName="dnsmasq-dns" containerID="cri-o://93b47c2b09da48e6911be72dd763be7bfbccb1391132bbc89b3d810ec644ad4b" gracePeriod=10 Mar 11 09:16:28 crc kubenswrapper[4840]: I0311 09:16:28.993861 4840 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openstack/dnsmasq-dns-675f7dd995-5qwn2"] Mar 11 09:16:28 crc kubenswrapper[4840]: E0311 09:16:28.994527 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff691b46-da15-4bfc-9b37-833a8cd599a9" containerName="dnsmasq-dns" Mar 11 09:16:28 crc kubenswrapper[4840]: I0311 09:16:28.994612 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff691b46-da15-4bfc-9b37-833a8cd599a9" containerName="dnsmasq-dns" Mar 11 09:16:28 crc kubenswrapper[4840]: E0311 09:16:28.994707 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff691b46-da15-4bfc-9b37-833a8cd599a9" containerName="init" Mar 11 09:16:28 crc kubenswrapper[4840]: I0311 09:16:28.994765 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff691b46-da15-4bfc-9b37-833a8cd599a9" containerName="init" Mar 11 09:16:28 crc kubenswrapper[4840]: E0311 09:16:28.994830 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad8faa32-f620-4ded-b606-0009b5a92cb8" containerName="init" Mar 11 09:16:28 crc kubenswrapper[4840]: I0311 09:16:28.994924 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad8faa32-f620-4ded-b606-0009b5a92cb8" containerName="init" Mar 11 09:16:28 crc kubenswrapper[4840]: E0311 09:16:28.994992 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad8faa32-f620-4ded-b606-0009b5a92cb8" containerName="dnsmasq-dns" Mar 11 09:16:28 crc kubenswrapper[4840]: I0311 09:16:28.995053 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad8faa32-f620-4ded-b606-0009b5a92cb8" containerName="dnsmasq-dns" Mar 11 09:16:28 crc kubenswrapper[4840]: I0311 09:16:28.995276 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff691b46-da15-4bfc-9b37-833a8cd599a9" containerName="dnsmasq-dns" Mar 11 09:16:28 crc kubenswrapper[4840]: I0311 09:16:28.995350 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad8faa32-f620-4ded-b606-0009b5a92cb8" containerName="dnsmasq-dns" Mar 11 
09:16:28 crc kubenswrapper[4840]: I0311 09:16:28.996722 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f7dd995-5qwn2" Mar 11 09:16:29 crc kubenswrapper[4840]: I0311 09:16:29.027039 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f7dd995-5qwn2"] Mar 11 09:16:29 crc kubenswrapper[4840]: I0311 09:16:29.072679 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8b85\" (UniqueName: \"kubernetes.io/projected/ed273fa9-3890-4c96-ac5f-ac17d2b2dad9-kube-api-access-x8b85\") pod \"dnsmasq-dns-675f7dd995-5qwn2\" (UID: \"ed273fa9-3890-4c96-ac5f-ac17d2b2dad9\") " pod="openstack/dnsmasq-dns-675f7dd995-5qwn2" Mar 11 09:16:29 crc kubenswrapper[4840]: I0311 09:16:29.072748 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ed273fa9-3890-4c96-ac5f-ac17d2b2dad9-ovsdbserver-sb\") pod \"dnsmasq-dns-675f7dd995-5qwn2\" (UID: \"ed273fa9-3890-4c96-ac5f-ac17d2b2dad9\") " pod="openstack/dnsmasq-dns-675f7dd995-5qwn2" Mar 11 09:16:29 crc kubenswrapper[4840]: I0311 09:16:29.073006 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ed273fa9-3890-4c96-ac5f-ac17d2b2dad9-ovsdbserver-nb\") pod \"dnsmasq-dns-675f7dd995-5qwn2\" (UID: \"ed273fa9-3890-4c96-ac5f-ac17d2b2dad9\") " pod="openstack/dnsmasq-dns-675f7dd995-5qwn2" Mar 11 09:16:29 crc kubenswrapper[4840]: I0311 09:16:29.073209 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ed273fa9-3890-4c96-ac5f-ac17d2b2dad9-dns-svc\") pod \"dnsmasq-dns-675f7dd995-5qwn2\" (UID: \"ed273fa9-3890-4c96-ac5f-ac17d2b2dad9\") " pod="openstack/dnsmasq-dns-675f7dd995-5qwn2" Mar 11 09:16:29 crc 
kubenswrapper[4840]: I0311 09:16:29.073264 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed273fa9-3890-4c96-ac5f-ac17d2b2dad9-config\") pod \"dnsmasq-dns-675f7dd995-5qwn2\" (UID: \"ed273fa9-3890-4c96-ac5f-ac17d2b2dad9\") " pod="openstack/dnsmasq-dns-675f7dd995-5qwn2" Mar 11 09:16:29 crc kubenswrapper[4840]: I0311 09:16:29.174571 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ed273fa9-3890-4c96-ac5f-ac17d2b2dad9-ovsdbserver-nb\") pod \"dnsmasq-dns-675f7dd995-5qwn2\" (UID: \"ed273fa9-3890-4c96-ac5f-ac17d2b2dad9\") " pod="openstack/dnsmasq-dns-675f7dd995-5qwn2" Mar 11 09:16:29 crc kubenswrapper[4840]: I0311 09:16:29.174666 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ed273fa9-3890-4c96-ac5f-ac17d2b2dad9-dns-svc\") pod \"dnsmasq-dns-675f7dd995-5qwn2\" (UID: \"ed273fa9-3890-4c96-ac5f-ac17d2b2dad9\") " pod="openstack/dnsmasq-dns-675f7dd995-5qwn2" Mar 11 09:16:29 crc kubenswrapper[4840]: I0311 09:16:29.174694 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed273fa9-3890-4c96-ac5f-ac17d2b2dad9-config\") pod \"dnsmasq-dns-675f7dd995-5qwn2\" (UID: \"ed273fa9-3890-4c96-ac5f-ac17d2b2dad9\") " pod="openstack/dnsmasq-dns-675f7dd995-5qwn2" Mar 11 09:16:29 crc kubenswrapper[4840]: I0311 09:16:29.174766 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8b85\" (UniqueName: \"kubernetes.io/projected/ed273fa9-3890-4c96-ac5f-ac17d2b2dad9-kube-api-access-x8b85\") pod \"dnsmasq-dns-675f7dd995-5qwn2\" (UID: \"ed273fa9-3890-4c96-ac5f-ac17d2b2dad9\") " pod="openstack/dnsmasq-dns-675f7dd995-5qwn2" Mar 11 09:16:29 crc kubenswrapper[4840]: I0311 09:16:29.174810 4840 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ed273fa9-3890-4c96-ac5f-ac17d2b2dad9-ovsdbserver-sb\") pod \"dnsmasq-dns-675f7dd995-5qwn2\" (UID: \"ed273fa9-3890-4c96-ac5f-ac17d2b2dad9\") " pod="openstack/dnsmasq-dns-675f7dd995-5qwn2" Mar 11 09:16:29 crc kubenswrapper[4840]: I0311 09:16:29.175793 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ed273fa9-3890-4c96-ac5f-ac17d2b2dad9-ovsdbserver-nb\") pod \"dnsmasq-dns-675f7dd995-5qwn2\" (UID: \"ed273fa9-3890-4c96-ac5f-ac17d2b2dad9\") " pod="openstack/dnsmasq-dns-675f7dd995-5qwn2" Mar 11 09:16:29 crc kubenswrapper[4840]: I0311 09:16:29.175891 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ed273fa9-3890-4c96-ac5f-ac17d2b2dad9-dns-svc\") pod \"dnsmasq-dns-675f7dd995-5qwn2\" (UID: \"ed273fa9-3890-4c96-ac5f-ac17d2b2dad9\") " pod="openstack/dnsmasq-dns-675f7dd995-5qwn2" Mar 11 09:16:29 crc kubenswrapper[4840]: I0311 09:16:29.178348 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed273fa9-3890-4c96-ac5f-ac17d2b2dad9-config\") pod \"dnsmasq-dns-675f7dd995-5qwn2\" (UID: \"ed273fa9-3890-4c96-ac5f-ac17d2b2dad9\") " pod="openstack/dnsmasq-dns-675f7dd995-5qwn2" Mar 11 09:16:29 crc kubenswrapper[4840]: I0311 09:16:29.178564 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ed273fa9-3890-4c96-ac5f-ac17d2b2dad9-ovsdbserver-sb\") pod \"dnsmasq-dns-675f7dd995-5qwn2\" (UID: \"ed273fa9-3890-4c96-ac5f-ac17d2b2dad9\") " pod="openstack/dnsmasq-dns-675f7dd995-5qwn2" Mar 11 09:16:29 crc kubenswrapper[4840]: I0311 09:16:29.197955 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8b85\" (UniqueName: 
\"kubernetes.io/projected/ed273fa9-3890-4c96-ac5f-ac17d2b2dad9-kube-api-access-x8b85\") pod \"dnsmasq-dns-675f7dd995-5qwn2\" (UID: \"ed273fa9-3890-4c96-ac5f-ac17d2b2dad9\") " pod="openstack/dnsmasq-dns-675f7dd995-5qwn2" Mar 11 09:16:29 crc kubenswrapper[4840]: I0311 09:16:29.320032 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f7dd995-5qwn2" Mar 11 09:16:29 crc kubenswrapper[4840]: I0311 09:16:29.473036 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6444958b7f-kjg47" Mar 11 09:16:29 crc kubenswrapper[4840]: I0311 09:16:29.581051 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e8fbab86-0dd7-4671-9258-a4df9068699c-ovsdbserver-nb\") pod \"e8fbab86-0dd7-4671-9258-a4df9068699c\" (UID: \"e8fbab86-0dd7-4671-9258-a4df9068699c\") " Mar 11 09:16:29 crc kubenswrapper[4840]: I0311 09:16:29.581180 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rwr6j\" (UniqueName: \"kubernetes.io/projected/e8fbab86-0dd7-4671-9258-a4df9068699c-kube-api-access-rwr6j\") pod \"e8fbab86-0dd7-4671-9258-a4df9068699c\" (UID: \"e8fbab86-0dd7-4671-9258-a4df9068699c\") " Mar 11 09:16:29 crc kubenswrapper[4840]: I0311 09:16:29.581229 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e8fbab86-0dd7-4671-9258-a4df9068699c-dns-svc\") pod \"e8fbab86-0dd7-4671-9258-a4df9068699c\" (UID: \"e8fbab86-0dd7-4671-9258-a4df9068699c\") " Mar 11 09:16:29 crc kubenswrapper[4840]: I0311 09:16:29.581510 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8fbab86-0dd7-4671-9258-a4df9068699c-config\") pod \"e8fbab86-0dd7-4671-9258-a4df9068699c\" (UID: \"e8fbab86-0dd7-4671-9258-a4df9068699c\") " Mar 
11 09:16:29 crc kubenswrapper[4840]: I0311 09:16:29.586759 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8fbab86-0dd7-4671-9258-a4df9068699c-kube-api-access-rwr6j" (OuterVolumeSpecName: "kube-api-access-rwr6j") pod "e8fbab86-0dd7-4671-9258-a4df9068699c" (UID: "e8fbab86-0dd7-4671-9258-a4df9068699c"). InnerVolumeSpecName "kube-api-access-rwr6j". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:16:29 crc kubenswrapper[4840]: I0311 09:16:29.631837 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8fbab86-0dd7-4671-9258-a4df9068699c-config" (OuterVolumeSpecName: "config") pod "e8fbab86-0dd7-4671-9258-a4df9068699c" (UID: "e8fbab86-0dd7-4671-9258-a4df9068699c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:16:29 crc kubenswrapper[4840]: I0311 09:16:29.633166 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8fbab86-0dd7-4671-9258-a4df9068699c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e8fbab86-0dd7-4671-9258-a4df9068699c" (UID: "e8fbab86-0dd7-4671-9258-a4df9068699c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:16:29 crc kubenswrapper[4840]: I0311 09:16:29.642342 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8fbab86-0dd7-4671-9258-a4df9068699c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e8fbab86-0dd7-4671-9258-a4df9068699c" (UID: "e8fbab86-0dd7-4671-9258-a4df9068699c"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:16:29 crc kubenswrapper[4840]: I0311 09:16:29.684197 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rwr6j\" (UniqueName: \"kubernetes.io/projected/e8fbab86-0dd7-4671-9258-a4df9068699c-kube-api-access-rwr6j\") on node \"crc\" DevicePath \"\"" Mar 11 09:16:29 crc kubenswrapper[4840]: I0311 09:16:29.684244 4840 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e8fbab86-0dd7-4671-9258-a4df9068699c-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 11 09:16:29 crc kubenswrapper[4840]: I0311 09:16:29.684257 4840 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8fbab86-0dd7-4671-9258-a4df9068699c-config\") on node \"crc\" DevicePath \"\"" Mar 11 09:16:29 crc kubenswrapper[4840]: I0311 09:16:29.684270 4840 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e8fbab86-0dd7-4671-9258-a4df9068699c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 11 09:16:29 crc kubenswrapper[4840]: I0311 09:16:29.812533 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f7dd995-5qwn2"] Mar 11 09:16:29 crc kubenswrapper[4840]: I0311 09:16:29.893632 4840 generic.go:334] "Generic (PLEG): container finished" podID="e8fbab86-0dd7-4671-9258-a4df9068699c" containerID="93b47c2b09da48e6911be72dd763be7bfbccb1391132bbc89b3d810ec644ad4b" exitCode=0 Mar 11 09:16:29 crc kubenswrapper[4840]: I0311 09:16:29.893729 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6444958b7f-kjg47" event={"ID":"e8fbab86-0dd7-4671-9258-a4df9068699c","Type":"ContainerDied","Data":"93b47c2b09da48e6911be72dd763be7bfbccb1391132bbc89b3d810ec644ad4b"} Mar 11 09:16:29 crc kubenswrapper[4840]: I0311 09:16:29.893765 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-6444958b7f-kjg47" event={"ID":"e8fbab86-0dd7-4671-9258-a4df9068699c","Type":"ContainerDied","Data":"49f8fe6c2098f845c1d08dd6d482aa2afc13aeeb0022b358ff6e0afa5837efe9"} Mar 11 09:16:29 crc kubenswrapper[4840]: I0311 09:16:29.893783 4840 scope.go:117] "RemoveContainer" containerID="93b47c2b09da48e6911be72dd763be7bfbccb1391132bbc89b3d810ec644ad4b" Mar 11 09:16:29 crc kubenswrapper[4840]: I0311 09:16:29.893890 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6444958b7f-kjg47" Mar 11 09:16:29 crc kubenswrapper[4840]: I0311 09:16:29.899406 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f7dd995-5qwn2" event={"ID":"ed273fa9-3890-4c96-ac5f-ac17d2b2dad9","Type":"ContainerStarted","Data":"5abe8789a9e548cafba146d220fac104fb52ff361797f05980a09326015575b1"} Mar 11 09:16:29 crc kubenswrapper[4840]: I0311 09:16:29.927452 4840 scope.go:117] "RemoveContainer" containerID="6fa05a2c0cb379aa6773fc67393d5927a4f3becfa84a4ed2d0595ef2559ab872" Mar 11 09:16:29 crc kubenswrapper[4840]: I0311 09:16:29.939602 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6444958b7f-kjg47"] Mar 11 09:16:29 crc kubenswrapper[4840]: I0311 09:16:29.947187 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6444958b7f-kjg47"] Mar 11 09:16:29 crc kubenswrapper[4840]: I0311 09:16:29.954361 4840 scope.go:117] "RemoveContainer" containerID="93b47c2b09da48e6911be72dd763be7bfbccb1391132bbc89b3d810ec644ad4b" Mar 11 09:16:29 crc kubenswrapper[4840]: E0311 09:16:29.957054 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93b47c2b09da48e6911be72dd763be7bfbccb1391132bbc89b3d810ec644ad4b\": container with ID starting with 93b47c2b09da48e6911be72dd763be7bfbccb1391132bbc89b3d810ec644ad4b not found: ID does not exist" 
containerID="93b47c2b09da48e6911be72dd763be7bfbccb1391132bbc89b3d810ec644ad4b" Mar 11 09:16:29 crc kubenswrapper[4840]: I0311 09:16:29.957108 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93b47c2b09da48e6911be72dd763be7bfbccb1391132bbc89b3d810ec644ad4b"} err="failed to get container status \"93b47c2b09da48e6911be72dd763be7bfbccb1391132bbc89b3d810ec644ad4b\": rpc error: code = NotFound desc = could not find container \"93b47c2b09da48e6911be72dd763be7bfbccb1391132bbc89b3d810ec644ad4b\": container with ID starting with 93b47c2b09da48e6911be72dd763be7bfbccb1391132bbc89b3d810ec644ad4b not found: ID does not exist" Mar 11 09:16:29 crc kubenswrapper[4840]: I0311 09:16:29.957138 4840 scope.go:117] "RemoveContainer" containerID="6fa05a2c0cb379aa6773fc67393d5927a4f3becfa84a4ed2d0595ef2559ab872" Mar 11 09:16:29 crc kubenswrapper[4840]: E0311 09:16:29.957484 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6fa05a2c0cb379aa6773fc67393d5927a4f3becfa84a4ed2d0595ef2559ab872\": container with ID starting with 6fa05a2c0cb379aa6773fc67393d5927a4f3becfa84a4ed2d0595ef2559ab872 not found: ID does not exist" containerID="6fa05a2c0cb379aa6773fc67393d5927a4f3becfa84a4ed2d0595ef2559ab872" Mar 11 09:16:29 crc kubenswrapper[4840]: I0311 09:16:29.957511 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6fa05a2c0cb379aa6773fc67393d5927a4f3becfa84a4ed2d0595ef2559ab872"} err="failed to get container status \"6fa05a2c0cb379aa6773fc67393d5927a4f3becfa84a4ed2d0595ef2559ab872\": rpc error: code = NotFound desc = could not find container \"6fa05a2c0cb379aa6773fc67393d5927a4f3becfa84a4ed2d0595ef2559ab872\": container with ID starting with 6fa05a2c0cb379aa6773fc67393d5927a4f3becfa84a4ed2d0595ef2559ab872 not found: ID does not exist" Mar 11 09:16:30 crc kubenswrapper[4840]: I0311 09:16:30.077126 4840 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8fbab86-0dd7-4671-9258-a4df9068699c" path="/var/lib/kubelet/pods/e8fbab86-0dd7-4671-9258-a4df9068699c/volumes" Mar 11 09:16:30 crc kubenswrapper[4840]: I0311 09:16:30.145013 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Mar 11 09:16:30 crc kubenswrapper[4840]: E0311 09:16:30.145416 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8fbab86-0dd7-4671-9258-a4df9068699c" containerName="dnsmasq-dns" Mar 11 09:16:30 crc kubenswrapper[4840]: I0311 09:16:30.145437 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8fbab86-0dd7-4671-9258-a4df9068699c" containerName="dnsmasq-dns" Mar 11 09:16:30 crc kubenswrapper[4840]: E0311 09:16:30.145484 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8fbab86-0dd7-4671-9258-a4df9068699c" containerName="init" Mar 11 09:16:30 crc kubenswrapper[4840]: I0311 09:16:30.145492 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8fbab86-0dd7-4671-9258-a4df9068699c" containerName="init" Mar 11 09:16:30 crc kubenswrapper[4840]: I0311 09:16:30.145817 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8fbab86-0dd7-4671-9258-a4df9068699c" containerName="dnsmasq-dns" Mar 11 09:16:30 crc kubenswrapper[4840]: I0311 09:16:30.153133 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Mar 11 09:16:30 crc kubenswrapper[4840]: I0311 09:16:30.155677 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-5z9g7" Mar 11 09:16:30 crc kubenswrapper[4840]: I0311 09:16:30.156110 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Mar 11 09:16:30 crc kubenswrapper[4840]: I0311 09:16:30.156533 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Mar 11 09:16:30 crc kubenswrapper[4840]: I0311 09:16:30.156722 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Mar 11 09:16:30 crc kubenswrapper[4840]: I0311 09:16:30.169421 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Mar 11 09:16:30 crc kubenswrapper[4840]: I0311 09:16:30.294449 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"swift-storage-0\" (UID: \"f1c7d7f4-dc60-4703-b6c3-6cd626db11af\") " pod="openstack/swift-storage-0" Mar 11 09:16:30 crc kubenswrapper[4840]: I0311 09:16:30.294787 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/f1c7d7f4-dc60-4703-b6c3-6cd626db11af-cache\") pod \"swift-storage-0\" (UID: \"f1c7d7f4-dc60-4703-b6c3-6cd626db11af\") " pod="openstack/swift-storage-0" Mar 11 09:16:30 crc kubenswrapper[4840]: I0311 09:16:30.294902 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/f1c7d7f4-dc60-4703-b6c3-6cd626db11af-lock\") pod \"swift-storage-0\" (UID: \"f1c7d7f4-dc60-4703-b6c3-6cd626db11af\") " pod="openstack/swift-storage-0" Mar 11 09:16:30 crc kubenswrapper[4840]: 
I0311 09:16:30.294990 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxhkf\" (UniqueName: \"kubernetes.io/projected/f1c7d7f4-dc60-4703-b6c3-6cd626db11af-kube-api-access-sxhkf\") pod \"swift-storage-0\" (UID: \"f1c7d7f4-dc60-4703-b6c3-6cd626db11af\") " pod="openstack/swift-storage-0" Mar 11 09:16:30 crc kubenswrapper[4840]: I0311 09:16:30.295091 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f1c7d7f4-dc60-4703-b6c3-6cd626db11af-etc-swift\") pod \"swift-storage-0\" (UID: \"f1c7d7f4-dc60-4703-b6c3-6cd626db11af\") " pod="openstack/swift-storage-0" Mar 11 09:16:30 crc kubenswrapper[4840]: I0311 09:16:30.295178 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1c7d7f4-dc60-4703-b6c3-6cd626db11af-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"f1c7d7f4-dc60-4703-b6c3-6cd626db11af\") " pod="openstack/swift-storage-0" Mar 11 09:16:30 crc kubenswrapper[4840]: I0311 09:16:30.396958 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/f1c7d7f4-dc60-4703-b6c3-6cd626db11af-cache\") pod \"swift-storage-0\" (UID: \"f1c7d7f4-dc60-4703-b6c3-6cd626db11af\") " pod="openstack/swift-storage-0" Mar 11 09:16:30 crc kubenswrapper[4840]: I0311 09:16:30.397610 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/f1c7d7f4-dc60-4703-b6c3-6cd626db11af-cache\") pod \"swift-storage-0\" (UID: \"f1c7d7f4-dc60-4703-b6c3-6cd626db11af\") " pod="openstack/swift-storage-0" Mar 11 09:16:30 crc kubenswrapper[4840]: I0311 09:16:30.397629 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: 
\"kubernetes.io/empty-dir/f1c7d7f4-dc60-4703-b6c3-6cd626db11af-lock\") pod \"swift-storage-0\" (UID: \"f1c7d7f4-dc60-4703-b6c3-6cd626db11af\") " pod="openstack/swift-storage-0" Mar 11 09:16:30 crc kubenswrapper[4840]: I0311 09:16:30.397832 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxhkf\" (UniqueName: \"kubernetes.io/projected/f1c7d7f4-dc60-4703-b6c3-6cd626db11af-kube-api-access-sxhkf\") pod \"swift-storage-0\" (UID: \"f1c7d7f4-dc60-4703-b6c3-6cd626db11af\") " pod="openstack/swift-storage-0" Mar 11 09:16:30 crc kubenswrapper[4840]: I0311 09:16:30.397925 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f1c7d7f4-dc60-4703-b6c3-6cd626db11af-etc-swift\") pod \"swift-storage-0\" (UID: \"f1c7d7f4-dc60-4703-b6c3-6cd626db11af\") " pod="openstack/swift-storage-0" Mar 11 09:16:30 crc kubenswrapper[4840]: I0311 09:16:30.398020 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1c7d7f4-dc60-4703-b6c3-6cd626db11af-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"f1c7d7f4-dc60-4703-b6c3-6cd626db11af\") " pod="openstack/swift-storage-0" Mar 11 09:16:30 crc kubenswrapper[4840]: I0311 09:16:30.398120 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"swift-storage-0\" (UID: \"f1c7d7f4-dc60-4703-b6c3-6cd626db11af\") " pod="openstack/swift-storage-0" Mar 11 09:16:30 crc kubenswrapper[4840]: E0311 09:16:30.398186 4840 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 11 09:16:30 crc kubenswrapper[4840]: E0311 09:16:30.398232 4840 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not 
found Mar 11 09:16:30 crc kubenswrapper[4840]: I0311 09:16:30.398290 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/f1c7d7f4-dc60-4703-b6c3-6cd626db11af-lock\") pod \"swift-storage-0\" (UID: \"f1c7d7f4-dc60-4703-b6c3-6cd626db11af\") " pod="openstack/swift-storage-0" Mar 11 09:16:30 crc kubenswrapper[4840]: E0311 09:16:30.398310 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f1c7d7f4-dc60-4703-b6c3-6cd626db11af-etc-swift podName:f1c7d7f4-dc60-4703-b6c3-6cd626db11af nodeName:}" failed. No retries permitted until 2026-03-11 09:16:30.898283487 +0000 UTC m=+1189.563953352 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/f1c7d7f4-dc60-4703-b6c3-6cd626db11af-etc-swift") pod "swift-storage-0" (UID: "f1c7d7f4-dc60-4703-b6c3-6cd626db11af") : configmap "swift-ring-files" not found Mar 11 09:16:30 crc kubenswrapper[4840]: I0311 09:16:30.398576 4840 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"swift-storage-0\" (UID: \"f1c7d7f4-dc60-4703-b6c3-6cd626db11af\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/swift-storage-0" Mar 11 09:16:30 crc kubenswrapper[4840]: I0311 09:16:30.406392 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1c7d7f4-dc60-4703-b6c3-6cd626db11af-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"f1c7d7f4-dc60-4703-b6c3-6cd626db11af\") " pod="openstack/swift-storage-0" Mar 11 09:16:30 crc kubenswrapper[4840]: I0311 09:16:30.420623 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxhkf\" (UniqueName: \"kubernetes.io/projected/f1c7d7f4-dc60-4703-b6c3-6cd626db11af-kube-api-access-sxhkf\") pod \"swift-storage-0\" (UID: 
\"f1c7d7f4-dc60-4703-b6c3-6cd626db11af\") " pod="openstack/swift-storage-0" Mar 11 09:16:30 crc kubenswrapper[4840]: I0311 09:16:30.426373 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"swift-storage-0\" (UID: \"f1c7d7f4-dc60-4703-b6c3-6cd626db11af\") " pod="openstack/swift-storage-0" Mar 11 09:16:30 crc kubenswrapper[4840]: I0311 09:16:30.790346 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-r77jc"] Mar 11 09:16:30 crc kubenswrapper[4840]: I0311 09:16:30.791614 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-r77jc" Mar 11 09:16:30 crc kubenswrapper[4840]: I0311 09:16:30.795222 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Mar 11 09:16:30 crc kubenswrapper[4840]: I0311 09:16:30.797080 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Mar 11 09:16:30 crc kubenswrapper[4840]: I0311 09:16:30.806023 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-r77jc"] Mar 11 09:16:30 crc kubenswrapper[4840]: I0311 09:16:30.806589 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Mar 11 09:16:30 crc kubenswrapper[4840]: I0311 09:16:30.814415 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-r77jc"] Mar 11 09:16:30 crc kubenswrapper[4840]: E0311 09:16:30.857181 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle dispersionconf etc-swift kube-api-access-crlp6 ring-data-devices scripts swiftconf], unattached volumes=[], failed to process volumes=[combined-ca-bundle dispersionconf etc-swift kube-api-access-crlp6 ring-data-devices scripts swiftconf]: context canceled" 
pod="openstack/swift-ring-rebalance-r77jc" podUID="bb7e4e45-4796-4f6c-909c-f65a1f68f689" Mar 11 09:16:30 crc kubenswrapper[4840]: I0311 09:16:30.900798 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-px5zj"] Mar 11 09:16:30 crc kubenswrapper[4840]: I0311 09:16:30.902124 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-px5zj" Mar 11 09:16:30 crc kubenswrapper[4840]: I0311 09:16:30.907192 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-px5zj"] Mar 11 09:16:30 crc kubenswrapper[4840]: I0311 09:16:30.909103 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/bb7e4e45-4796-4f6c-909c-f65a1f68f689-dispersionconf\") pod \"swift-ring-rebalance-r77jc\" (UID: \"bb7e4e45-4796-4f6c-909c-f65a1f68f689\") " pod="openstack/swift-ring-rebalance-r77jc" Mar 11 09:16:30 crc kubenswrapper[4840]: I0311 09:16:30.909137 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crlp6\" (UniqueName: \"kubernetes.io/projected/bb7e4e45-4796-4f6c-909c-f65a1f68f689-kube-api-access-crlp6\") pod \"swift-ring-rebalance-r77jc\" (UID: \"bb7e4e45-4796-4f6c-909c-f65a1f68f689\") " pod="openstack/swift-ring-rebalance-r77jc" Mar 11 09:16:30 crc kubenswrapper[4840]: I0311 09:16:30.909173 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f1c7d7f4-dc60-4703-b6c3-6cd626db11af-etc-swift\") pod \"swift-storage-0\" (UID: \"f1c7d7f4-dc60-4703-b6c3-6cd626db11af\") " pod="openstack/swift-storage-0" Mar 11 09:16:30 crc kubenswrapper[4840]: I0311 09:16:30.909212 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: 
\"kubernetes.io/configmap/bb7e4e45-4796-4f6c-909c-f65a1f68f689-ring-data-devices\") pod \"swift-ring-rebalance-r77jc\" (UID: \"bb7e4e45-4796-4f6c-909c-f65a1f68f689\") " pod="openstack/swift-ring-rebalance-r77jc" Mar 11 09:16:30 crc kubenswrapper[4840]: I0311 09:16:30.909246 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bb7e4e45-4796-4f6c-909c-f65a1f68f689-scripts\") pod \"swift-ring-rebalance-r77jc\" (UID: \"bb7e4e45-4796-4f6c-909c-f65a1f68f689\") " pod="openstack/swift-ring-rebalance-r77jc" Mar 11 09:16:30 crc kubenswrapper[4840]: I0311 09:16:30.909311 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb7e4e45-4796-4f6c-909c-f65a1f68f689-combined-ca-bundle\") pod \"swift-ring-rebalance-r77jc\" (UID: \"bb7e4e45-4796-4f6c-909c-f65a1f68f689\") " pod="openstack/swift-ring-rebalance-r77jc" Mar 11 09:16:30 crc kubenswrapper[4840]: I0311 09:16:30.909343 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/bb7e4e45-4796-4f6c-909c-f65a1f68f689-etc-swift\") pod \"swift-ring-rebalance-r77jc\" (UID: \"bb7e4e45-4796-4f6c-909c-f65a1f68f689\") " pod="openstack/swift-ring-rebalance-r77jc" Mar 11 09:16:30 crc kubenswrapper[4840]: I0311 09:16:30.909358 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/bb7e4e45-4796-4f6c-909c-f65a1f68f689-swiftconf\") pod \"swift-ring-rebalance-r77jc\" (UID: \"bb7e4e45-4796-4f6c-909c-f65a1f68f689\") " pod="openstack/swift-ring-rebalance-r77jc" Mar 11 09:16:30 crc kubenswrapper[4840]: E0311 09:16:30.909531 4840 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 11 09:16:30 crc 
kubenswrapper[4840]: E0311 09:16:30.909548 4840 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 11 09:16:30 crc kubenswrapper[4840]: E0311 09:16:30.909590 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f1c7d7f4-dc60-4703-b6c3-6cd626db11af-etc-swift podName:f1c7d7f4-dc60-4703-b6c3-6cd626db11af nodeName:}" failed. No retries permitted until 2026-03-11 09:16:31.909572946 +0000 UTC m=+1190.575242761 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/f1c7d7f4-dc60-4703-b6c3-6cd626db11af-etc-swift") pod "swift-storage-0" (UID: "f1c7d7f4-dc60-4703-b6c3-6cd626db11af") : configmap "swift-ring-files" not found Mar 11 09:16:30 crc kubenswrapper[4840]: I0311 09:16:30.921667 4840 generic.go:334] "Generic (PLEG): container finished" podID="ed273fa9-3890-4c96-ac5f-ac17d2b2dad9" containerID="bcf97b20ebe322efcd63d649c6350dbc476441008719578f91fb01bcd6286035" exitCode=0 Mar 11 09:16:30 crc kubenswrapper[4840]: I0311 09:16:30.921771 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-r77jc" Mar 11 09:16:30 crc kubenswrapper[4840]: I0311 09:16:30.921814 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f7dd995-5qwn2" event={"ID":"ed273fa9-3890-4c96-ac5f-ac17d2b2dad9","Type":"ContainerDied","Data":"bcf97b20ebe322efcd63d649c6350dbc476441008719578f91fb01bcd6286035"} Mar 11 09:16:30 crc kubenswrapper[4840]: I0311 09:16:30.934130 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-r77jc" Mar 11 09:16:31 crc kubenswrapper[4840]: I0311 09:16:31.012891 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/757b3a3d-0655-4517-a752-b944899642c9-etc-swift\") pod \"swift-ring-rebalance-px5zj\" (UID: \"757b3a3d-0655-4517-a752-b944899642c9\") " pod="openstack/swift-ring-rebalance-px5zj" Mar 11 09:16:31 crc kubenswrapper[4840]: I0311 09:16:31.012978 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/bb7e4e45-4796-4f6c-909c-f65a1f68f689-ring-data-devices\") pod \"swift-ring-rebalance-r77jc\" (UID: \"bb7e4e45-4796-4f6c-909c-f65a1f68f689\") " pod="openstack/swift-ring-rebalance-r77jc" Mar 11 09:16:31 crc kubenswrapper[4840]: I0311 09:16:31.013020 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/757b3a3d-0655-4517-a752-b944899642c9-scripts\") pod \"swift-ring-rebalance-px5zj\" (UID: \"757b3a3d-0655-4517-a752-b944899642c9\") " pod="openstack/swift-ring-rebalance-px5zj" Mar 11 09:16:31 crc kubenswrapper[4840]: I0311 09:16:31.013049 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bb7e4e45-4796-4f6c-909c-f65a1f68f689-scripts\") pod \"swift-ring-rebalance-r77jc\" (UID: \"bb7e4e45-4796-4f6c-909c-f65a1f68f689\") " pod="openstack/swift-ring-rebalance-r77jc" Mar 11 09:16:31 crc kubenswrapper[4840]: I0311 09:16:31.013153 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/757b3a3d-0655-4517-a752-b944899642c9-dispersionconf\") pod \"swift-ring-rebalance-px5zj\" (UID: \"757b3a3d-0655-4517-a752-b944899642c9\") " 
pod="openstack/swift-ring-rebalance-px5zj" Mar 11 09:16:31 crc kubenswrapper[4840]: I0311 09:16:31.013173 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb7e4e45-4796-4f6c-909c-f65a1f68f689-combined-ca-bundle\") pod \"swift-ring-rebalance-r77jc\" (UID: \"bb7e4e45-4796-4f6c-909c-f65a1f68f689\") " pod="openstack/swift-ring-rebalance-r77jc" Mar 11 09:16:31 crc kubenswrapper[4840]: I0311 09:16:31.013191 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/757b3a3d-0655-4517-a752-b944899642c9-ring-data-devices\") pod \"swift-ring-rebalance-px5zj\" (UID: \"757b3a3d-0655-4517-a752-b944899642c9\") " pod="openstack/swift-ring-rebalance-px5zj" Mar 11 09:16:31 crc kubenswrapper[4840]: I0311 09:16:31.013243 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/757b3a3d-0655-4517-a752-b944899642c9-combined-ca-bundle\") pod \"swift-ring-rebalance-px5zj\" (UID: \"757b3a3d-0655-4517-a752-b944899642c9\") " pod="openstack/swift-ring-rebalance-px5zj" Mar 11 09:16:31 crc kubenswrapper[4840]: I0311 09:16:31.013269 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/757b3a3d-0655-4517-a752-b944899642c9-swiftconf\") pod \"swift-ring-rebalance-px5zj\" (UID: \"757b3a3d-0655-4517-a752-b944899642c9\") " pod="openstack/swift-ring-rebalance-px5zj" Mar 11 09:16:31 crc kubenswrapper[4840]: I0311 09:16:31.013288 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/bb7e4e45-4796-4f6c-909c-f65a1f68f689-etc-swift\") pod \"swift-ring-rebalance-r77jc\" (UID: \"bb7e4e45-4796-4f6c-909c-f65a1f68f689\") " 
pod="openstack/swift-ring-rebalance-r77jc" Mar 11 09:16:31 crc kubenswrapper[4840]: I0311 09:16:31.013308 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/bb7e4e45-4796-4f6c-909c-f65a1f68f689-swiftconf\") pod \"swift-ring-rebalance-r77jc\" (UID: \"bb7e4e45-4796-4f6c-909c-f65a1f68f689\") " pod="openstack/swift-ring-rebalance-r77jc" Mar 11 09:16:31 crc kubenswrapper[4840]: I0311 09:16:31.013327 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjnm5\" (UniqueName: \"kubernetes.io/projected/757b3a3d-0655-4517-a752-b944899642c9-kube-api-access-jjnm5\") pod \"swift-ring-rebalance-px5zj\" (UID: \"757b3a3d-0655-4517-a752-b944899642c9\") " pod="openstack/swift-ring-rebalance-px5zj" Mar 11 09:16:31 crc kubenswrapper[4840]: I0311 09:16:31.013348 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/bb7e4e45-4796-4f6c-909c-f65a1f68f689-dispersionconf\") pod \"swift-ring-rebalance-r77jc\" (UID: \"bb7e4e45-4796-4f6c-909c-f65a1f68f689\") " pod="openstack/swift-ring-rebalance-r77jc" Mar 11 09:16:31 crc kubenswrapper[4840]: I0311 09:16:31.013367 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crlp6\" (UniqueName: \"kubernetes.io/projected/bb7e4e45-4796-4f6c-909c-f65a1f68f689-kube-api-access-crlp6\") pod \"swift-ring-rebalance-r77jc\" (UID: \"bb7e4e45-4796-4f6c-909c-f65a1f68f689\") " pod="openstack/swift-ring-rebalance-r77jc" Mar 11 09:16:31 crc kubenswrapper[4840]: I0311 09:16:31.016078 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/bb7e4e45-4796-4f6c-909c-f65a1f68f689-ring-data-devices\") pod \"swift-ring-rebalance-r77jc\" (UID: \"bb7e4e45-4796-4f6c-909c-f65a1f68f689\") " pod="openstack/swift-ring-rebalance-r77jc" 
Mar 11 09:16:31 crc kubenswrapper[4840]: I0311 09:16:31.016362 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/bb7e4e45-4796-4f6c-909c-f65a1f68f689-etc-swift\") pod \"swift-ring-rebalance-r77jc\" (UID: \"bb7e4e45-4796-4f6c-909c-f65a1f68f689\") " pod="openstack/swift-ring-rebalance-r77jc" Mar 11 09:16:31 crc kubenswrapper[4840]: I0311 09:16:31.016805 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bb7e4e45-4796-4f6c-909c-f65a1f68f689-scripts\") pod \"swift-ring-rebalance-r77jc\" (UID: \"bb7e4e45-4796-4f6c-909c-f65a1f68f689\") " pod="openstack/swift-ring-rebalance-r77jc" Mar 11 09:16:31 crc kubenswrapper[4840]: I0311 09:16:31.021386 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/bb7e4e45-4796-4f6c-909c-f65a1f68f689-swiftconf\") pod \"swift-ring-rebalance-r77jc\" (UID: \"bb7e4e45-4796-4f6c-909c-f65a1f68f689\") " pod="openstack/swift-ring-rebalance-r77jc" Mar 11 09:16:31 crc kubenswrapper[4840]: I0311 09:16:31.021643 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb7e4e45-4796-4f6c-909c-f65a1f68f689-combined-ca-bundle\") pod \"swift-ring-rebalance-r77jc\" (UID: \"bb7e4e45-4796-4f6c-909c-f65a1f68f689\") " pod="openstack/swift-ring-rebalance-r77jc" Mar 11 09:16:31 crc kubenswrapper[4840]: I0311 09:16:31.022428 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/bb7e4e45-4796-4f6c-909c-f65a1f68f689-dispersionconf\") pod \"swift-ring-rebalance-r77jc\" (UID: \"bb7e4e45-4796-4f6c-909c-f65a1f68f689\") " pod="openstack/swift-ring-rebalance-r77jc" Mar 11 09:16:31 crc kubenswrapper[4840]: I0311 09:16:31.037060 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-crlp6\" (UniqueName: \"kubernetes.io/projected/bb7e4e45-4796-4f6c-909c-f65a1f68f689-kube-api-access-crlp6\") pod \"swift-ring-rebalance-r77jc\" (UID: \"bb7e4e45-4796-4f6c-909c-f65a1f68f689\") " pod="openstack/swift-ring-rebalance-r77jc" Mar 11 09:16:31 crc kubenswrapper[4840]: I0311 09:16:31.116994 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb7e4e45-4796-4f6c-909c-f65a1f68f689-combined-ca-bundle\") pod \"bb7e4e45-4796-4f6c-909c-f65a1f68f689\" (UID: \"bb7e4e45-4796-4f6c-909c-f65a1f68f689\") " Mar 11 09:16:31 crc kubenswrapper[4840]: I0311 09:16:31.117061 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/bb7e4e45-4796-4f6c-909c-f65a1f68f689-ring-data-devices\") pod \"bb7e4e45-4796-4f6c-909c-f65a1f68f689\" (UID: \"bb7e4e45-4796-4f6c-909c-f65a1f68f689\") " Mar 11 09:16:31 crc kubenswrapper[4840]: I0311 09:16:31.117175 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/bb7e4e45-4796-4f6c-909c-f65a1f68f689-dispersionconf\") pod \"bb7e4e45-4796-4f6c-909c-f65a1f68f689\" (UID: \"bb7e4e45-4796-4f6c-909c-f65a1f68f689\") " Mar 11 09:16:31 crc kubenswrapper[4840]: I0311 09:16:31.117200 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-crlp6\" (UniqueName: \"kubernetes.io/projected/bb7e4e45-4796-4f6c-909c-f65a1f68f689-kube-api-access-crlp6\") pod \"bb7e4e45-4796-4f6c-909c-f65a1f68f689\" (UID: \"bb7e4e45-4796-4f6c-909c-f65a1f68f689\") " Mar 11 09:16:31 crc kubenswrapper[4840]: I0311 09:16:31.117272 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/bb7e4e45-4796-4f6c-909c-f65a1f68f689-etc-swift\") pod \"bb7e4e45-4796-4f6c-909c-f65a1f68f689\" (UID: 
\"bb7e4e45-4796-4f6c-909c-f65a1f68f689\") " Mar 11 09:16:31 crc kubenswrapper[4840]: I0311 09:16:31.117293 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bb7e4e45-4796-4f6c-909c-f65a1f68f689-scripts\") pod \"bb7e4e45-4796-4f6c-909c-f65a1f68f689\" (UID: \"bb7e4e45-4796-4f6c-909c-f65a1f68f689\") " Mar 11 09:16:31 crc kubenswrapper[4840]: I0311 09:16:31.117353 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/bb7e4e45-4796-4f6c-909c-f65a1f68f689-swiftconf\") pod \"bb7e4e45-4796-4f6c-909c-f65a1f68f689\" (UID: \"bb7e4e45-4796-4f6c-909c-f65a1f68f689\") " Mar 11 09:16:31 crc kubenswrapper[4840]: I0311 09:16:31.117558 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/757b3a3d-0655-4517-a752-b944899642c9-combined-ca-bundle\") pod \"swift-ring-rebalance-px5zj\" (UID: \"757b3a3d-0655-4517-a752-b944899642c9\") " pod="openstack/swift-ring-rebalance-px5zj" Mar 11 09:16:31 crc kubenswrapper[4840]: I0311 09:16:31.117592 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/757b3a3d-0655-4517-a752-b944899642c9-swiftconf\") pod \"swift-ring-rebalance-px5zj\" (UID: \"757b3a3d-0655-4517-a752-b944899642c9\") " pod="openstack/swift-ring-rebalance-px5zj" Mar 11 09:16:31 crc kubenswrapper[4840]: I0311 09:16:31.117622 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjnm5\" (UniqueName: \"kubernetes.io/projected/757b3a3d-0655-4517-a752-b944899642c9-kube-api-access-jjnm5\") pod \"swift-ring-rebalance-px5zj\" (UID: \"757b3a3d-0655-4517-a752-b944899642c9\") " pod="openstack/swift-ring-rebalance-px5zj" Mar 11 09:16:31 crc kubenswrapper[4840]: I0311 09:16:31.117703 4840 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/757b3a3d-0655-4517-a752-b944899642c9-etc-swift\") pod \"swift-ring-rebalance-px5zj\" (UID: \"757b3a3d-0655-4517-a752-b944899642c9\") " pod="openstack/swift-ring-rebalance-px5zj" Mar 11 09:16:31 crc kubenswrapper[4840]: I0311 09:16:31.117755 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/757b3a3d-0655-4517-a752-b944899642c9-scripts\") pod \"swift-ring-rebalance-px5zj\" (UID: \"757b3a3d-0655-4517-a752-b944899642c9\") " pod="openstack/swift-ring-rebalance-px5zj" Mar 11 09:16:31 crc kubenswrapper[4840]: I0311 09:16:31.117876 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/757b3a3d-0655-4517-a752-b944899642c9-dispersionconf\") pod \"swift-ring-rebalance-px5zj\" (UID: \"757b3a3d-0655-4517-a752-b944899642c9\") " pod="openstack/swift-ring-rebalance-px5zj" Mar 11 09:16:31 crc kubenswrapper[4840]: I0311 09:16:31.117928 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/757b3a3d-0655-4517-a752-b944899642c9-ring-data-devices\") pod \"swift-ring-rebalance-px5zj\" (UID: \"757b3a3d-0655-4517-a752-b944899642c9\") " pod="openstack/swift-ring-rebalance-px5zj" Mar 11 09:16:31 crc kubenswrapper[4840]: I0311 09:16:31.118661 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/757b3a3d-0655-4517-a752-b944899642c9-etc-swift\") pod \"swift-ring-rebalance-px5zj\" (UID: \"757b3a3d-0655-4517-a752-b944899642c9\") " pod="openstack/swift-ring-rebalance-px5zj" Mar 11 09:16:31 crc kubenswrapper[4840]: I0311 09:16:31.118932 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: 
\"kubernetes.io/configmap/757b3a3d-0655-4517-a752-b944899642c9-ring-data-devices\") pod \"swift-ring-rebalance-px5zj\" (UID: \"757b3a3d-0655-4517-a752-b944899642c9\") " pod="openstack/swift-ring-rebalance-px5zj" Mar 11 09:16:31 crc kubenswrapper[4840]: I0311 09:16:31.119356 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb7e4e45-4796-4f6c-909c-f65a1f68f689-scripts" (OuterVolumeSpecName: "scripts") pod "bb7e4e45-4796-4f6c-909c-f65a1f68f689" (UID: "bb7e4e45-4796-4f6c-909c-f65a1f68f689"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:16:31 crc kubenswrapper[4840]: I0311 09:16:31.119828 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb7e4e45-4796-4f6c-909c-f65a1f68f689-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "bb7e4e45-4796-4f6c-909c-f65a1f68f689" (UID: "bb7e4e45-4796-4f6c-909c-f65a1f68f689"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:16:31 crc kubenswrapper[4840]: I0311 09:16:31.119960 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/757b3a3d-0655-4517-a752-b944899642c9-scripts\") pod \"swift-ring-rebalance-px5zj\" (UID: \"757b3a3d-0655-4517-a752-b944899642c9\") " pod="openstack/swift-ring-rebalance-px5zj" Mar 11 09:16:31 crc kubenswrapper[4840]: I0311 09:16:31.120124 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb7e4e45-4796-4f6c-909c-f65a1f68f689-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "bb7e4e45-4796-4f6c-909c-f65a1f68f689" (UID: "bb7e4e45-4796-4f6c-909c-f65a1f68f689"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:16:31 crc kubenswrapper[4840]: I0311 09:16:31.121695 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb7e4e45-4796-4f6c-909c-f65a1f68f689-kube-api-access-crlp6" (OuterVolumeSpecName: "kube-api-access-crlp6") pod "bb7e4e45-4796-4f6c-909c-f65a1f68f689" (UID: "bb7e4e45-4796-4f6c-909c-f65a1f68f689"). InnerVolumeSpecName "kube-api-access-crlp6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:16:31 crc kubenswrapper[4840]: I0311 09:16:31.123806 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/757b3a3d-0655-4517-a752-b944899642c9-dispersionconf\") pod \"swift-ring-rebalance-px5zj\" (UID: \"757b3a3d-0655-4517-a752-b944899642c9\") " pod="openstack/swift-ring-rebalance-px5zj" Mar 11 09:16:31 crc kubenswrapper[4840]: I0311 09:16:31.125006 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/757b3a3d-0655-4517-a752-b944899642c9-swiftconf\") pod \"swift-ring-rebalance-px5zj\" (UID: \"757b3a3d-0655-4517-a752-b944899642c9\") " pod="openstack/swift-ring-rebalance-px5zj" Mar 11 09:16:31 crc kubenswrapper[4840]: I0311 09:16:31.132783 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb7e4e45-4796-4f6c-909c-f65a1f68f689-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "bb7e4e45-4796-4f6c-909c-f65a1f68f689" (UID: "bb7e4e45-4796-4f6c-909c-f65a1f68f689"). InnerVolumeSpecName "dispersionconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:16:31 crc kubenswrapper[4840]: I0311 09:16:31.132894 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb7e4e45-4796-4f6c-909c-f65a1f68f689-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "bb7e4e45-4796-4f6c-909c-f65a1f68f689" (UID: "bb7e4e45-4796-4f6c-909c-f65a1f68f689"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:16:31 crc kubenswrapper[4840]: I0311 09:16:31.137678 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/757b3a3d-0655-4517-a752-b944899642c9-combined-ca-bundle\") pod \"swift-ring-rebalance-px5zj\" (UID: \"757b3a3d-0655-4517-a752-b944899642c9\") " pod="openstack/swift-ring-rebalance-px5zj" Mar 11 09:16:31 crc kubenswrapper[4840]: I0311 09:16:31.141981 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb7e4e45-4796-4f6c-909c-f65a1f68f689-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bb7e4e45-4796-4f6c-909c-f65a1f68f689" (UID: "bb7e4e45-4796-4f6c-909c-f65a1f68f689"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:16:31 crc kubenswrapper[4840]: I0311 09:16:31.142178 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjnm5\" (UniqueName: \"kubernetes.io/projected/757b3a3d-0655-4517-a752-b944899642c9-kube-api-access-jjnm5\") pod \"swift-ring-rebalance-px5zj\" (UID: \"757b3a3d-0655-4517-a752-b944899642c9\") " pod="openstack/swift-ring-rebalance-px5zj" Mar 11 09:16:31 crc kubenswrapper[4840]: I0311 09:16:31.220388 4840 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/bb7e4e45-4796-4f6c-909c-f65a1f68f689-dispersionconf\") on node \"crc\" DevicePath \"\"" Mar 11 09:16:31 crc kubenswrapper[4840]: I0311 09:16:31.220893 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-crlp6\" (UniqueName: \"kubernetes.io/projected/bb7e4e45-4796-4f6c-909c-f65a1f68f689-kube-api-access-crlp6\") on node \"crc\" DevicePath \"\"" Mar 11 09:16:31 crc kubenswrapper[4840]: I0311 09:16:31.220906 4840 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/bb7e4e45-4796-4f6c-909c-f65a1f68f689-etc-swift\") on node \"crc\" DevicePath \"\"" Mar 11 09:16:31 crc kubenswrapper[4840]: I0311 09:16:31.220915 4840 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bb7e4e45-4796-4f6c-909c-f65a1f68f689-scripts\") on node \"crc\" DevicePath \"\"" Mar 11 09:16:31 crc kubenswrapper[4840]: I0311 09:16:31.220925 4840 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/bb7e4e45-4796-4f6c-909c-f65a1f68f689-swiftconf\") on node \"crc\" DevicePath \"\"" Mar 11 09:16:31 crc kubenswrapper[4840]: I0311 09:16:31.220935 4840 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb7e4e45-4796-4f6c-909c-f65a1f68f689-combined-ca-bundle\") 
on node \"crc\" DevicePath \"\"" Mar 11 09:16:31 crc kubenswrapper[4840]: I0311 09:16:31.220945 4840 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/bb7e4e45-4796-4f6c-909c-f65a1f68f689-ring-data-devices\") on node \"crc\" DevicePath \"\"" Mar 11 09:16:31 crc kubenswrapper[4840]: I0311 09:16:31.221120 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-px5zj" Mar 11 09:16:31 crc kubenswrapper[4840]: I0311 09:16:31.718752 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-3c72-account-create-update-dbmd9"] Mar 11 09:16:31 crc kubenswrapper[4840]: I0311 09:16:31.720772 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-3c72-account-create-update-dbmd9" Mar 11 09:16:31 crc kubenswrapper[4840]: I0311 09:16:31.731786 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Mar 11 09:16:31 crc kubenswrapper[4840]: I0311 09:16:31.742517 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-3c72-account-create-update-dbmd9"] Mar 11 09:16:31 crc kubenswrapper[4840]: I0311 09:16:31.750781 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-tv2rh"] Mar 11 09:16:31 crc kubenswrapper[4840]: I0311 09:16:31.751990 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-tv2rh" Mar 11 09:16:31 crc kubenswrapper[4840]: I0311 09:16:31.795805 4840 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 11 09:16:31 crc kubenswrapper[4840]: I0311 09:16:31.807532 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-tv2rh"] Mar 11 09:16:31 crc kubenswrapper[4840]: I0311 09:16:31.822443 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-px5zj"] Mar 11 09:16:31 crc kubenswrapper[4840]: I0311 09:16:31.832503 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6mvz\" (UniqueName: \"kubernetes.io/projected/cebd38d4-d574-424d-8472-b60da948f8d1-kube-api-access-m6mvz\") pod \"glance-db-create-tv2rh\" (UID: \"cebd38d4-d574-424d-8472-b60da948f8d1\") " pod="openstack/glance-db-create-tv2rh" Mar 11 09:16:31 crc kubenswrapper[4840]: I0311 09:16:31.832628 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cebd38d4-d574-424d-8472-b60da948f8d1-operator-scripts\") pod \"glance-db-create-tv2rh\" (UID: \"cebd38d4-d574-424d-8472-b60da948f8d1\") " pod="openstack/glance-db-create-tv2rh" Mar 11 09:16:31 crc kubenswrapper[4840]: I0311 09:16:31.832699 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c155188a-ed86-457a-88cb-42db8a510bf7-operator-scripts\") pod \"glance-3c72-account-create-update-dbmd9\" (UID: \"c155188a-ed86-457a-88cb-42db8a510bf7\") " pod="openstack/glance-3c72-account-create-update-dbmd9" Mar 11 09:16:31 crc kubenswrapper[4840]: I0311 09:16:31.832787 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wllkj\" (UniqueName: 
\"kubernetes.io/projected/c155188a-ed86-457a-88cb-42db8a510bf7-kube-api-access-wllkj\") pod \"glance-3c72-account-create-update-dbmd9\" (UID: \"c155188a-ed86-457a-88cb-42db8a510bf7\") " pod="openstack/glance-3c72-account-create-update-dbmd9" Mar 11 09:16:31 crc kubenswrapper[4840]: I0311 09:16:31.931039 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f7dd995-5qwn2" event={"ID":"ed273fa9-3890-4c96-ac5f-ac17d2b2dad9","Type":"ContainerStarted","Data":"03ec2c771246b57617ea48edda4ac7074f5ae66716e89aabd682da671e7f2772"} Mar 11 09:16:31 crc kubenswrapper[4840]: I0311 09:16:31.931807 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-675f7dd995-5qwn2" Mar 11 09:16:31 crc kubenswrapper[4840]: I0311 09:16:31.933024 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-px5zj" event={"ID":"757b3a3d-0655-4517-a752-b944899642c9","Type":"ContainerStarted","Data":"301e3877ef8b749903e8b9831a8c5796721697e838c804905d1abd0642a708a1"} Mar 11 09:16:31 crc kubenswrapper[4840]: I0311 09:16:31.933046 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-r77jc" Mar 11 09:16:31 crc kubenswrapper[4840]: I0311 09:16:31.933754 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6mvz\" (UniqueName: \"kubernetes.io/projected/cebd38d4-d574-424d-8472-b60da948f8d1-kube-api-access-m6mvz\") pod \"glance-db-create-tv2rh\" (UID: \"cebd38d4-d574-424d-8472-b60da948f8d1\") " pod="openstack/glance-db-create-tv2rh" Mar 11 09:16:31 crc kubenswrapper[4840]: I0311 09:16:31.933832 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cebd38d4-d574-424d-8472-b60da948f8d1-operator-scripts\") pod \"glance-db-create-tv2rh\" (UID: \"cebd38d4-d574-424d-8472-b60da948f8d1\") " pod="openstack/glance-db-create-tv2rh" Mar 11 09:16:31 crc kubenswrapper[4840]: I0311 09:16:31.933862 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f1c7d7f4-dc60-4703-b6c3-6cd626db11af-etc-swift\") pod \"swift-storage-0\" (UID: \"f1c7d7f4-dc60-4703-b6c3-6cd626db11af\") " pod="openstack/swift-storage-0" Mar 11 09:16:31 crc kubenswrapper[4840]: I0311 09:16:31.933905 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c155188a-ed86-457a-88cb-42db8a510bf7-operator-scripts\") pod \"glance-3c72-account-create-update-dbmd9\" (UID: \"c155188a-ed86-457a-88cb-42db8a510bf7\") " pod="openstack/glance-3c72-account-create-update-dbmd9" Mar 11 09:16:31 crc kubenswrapper[4840]: I0311 09:16:31.933948 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wllkj\" (UniqueName: \"kubernetes.io/projected/c155188a-ed86-457a-88cb-42db8a510bf7-kube-api-access-wllkj\") pod \"glance-3c72-account-create-update-dbmd9\" (UID: \"c155188a-ed86-457a-88cb-42db8a510bf7\") " 
pod="openstack/glance-3c72-account-create-update-dbmd9" Mar 11 09:16:31 crc kubenswrapper[4840]: E0311 09:16:31.934020 4840 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 11 09:16:31 crc kubenswrapper[4840]: E0311 09:16:31.934039 4840 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 11 09:16:31 crc kubenswrapper[4840]: E0311 09:16:31.934081 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f1c7d7f4-dc60-4703-b6c3-6cd626db11af-etc-swift podName:f1c7d7f4-dc60-4703-b6c3-6cd626db11af nodeName:}" failed. No retries permitted until 2026-03-11 09:16:33.934064282 +0000 UTC m=+1192.599734097 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/f1c7d7f4-dc60-4703-b6c3-6cd626db11af-etc-swift") pod "swift-storage-0" (UID: "f1c7d7f4-dc60-4703-b6c3-6cd626db11af") : configmap "swift-ring-files" not found Mar 11 09:16:31 crc kubenswrapper[4840]: I0311 09:16:31.934679 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cebd38d4-d574-424d-8472-b60da948f8d1-operator-scripts\") pod \"glance-db-create-tv2rh\" (UID: \"cebd38d4-d574-424d-8472-b60da948f8d1\") " pod="openstack/glance-db-create-tv2rh" Mar 11 09:16:31 crc kubenswrapper[4840]: I0311 09:16:31.934723 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c155188a-ed86-457a-88cb-42db8a510bf7-operator-scripts\") pod \"glance-3c72-account-create-update-dbmd9\" (UID: \"c155188a-ed86-457a-88cb-42db8a510bf7\") " pod="openstack/glance-3c72-account-create-update-dbmd9" Mar 11 09:16:31 crc kubenswrapper[4840]: I0311 09:16:31.953900 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-m6mvz\" (UniqueName: \"kubernetes.io/projected/cebd38d4-d574-424d-8472-b60da948f8d1-kube-api-access-m6mvz\") pod \"glance-db-create-tv2rh\" (UID: \"cebd38d4-d574-424d-8472-b60da948f8d1\") " pod="openstack/glance-db-create-tv2rh" Mar 11 09:16:31 crc kubenswrapper[4840]: I0311 09:16:31.956152 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wllkj\" (UniqueName: \"kubernetes.io/projected/c155188a-ed86-457a-88cb-42db8a510bf7-kube-api-access-wllkj\") pod \"glance-3c72-account-create-update-dbmd9\" (UID: \"c155188a-ed86-457a-88cb-42db8a510bf7\") " pod="openstack/glance-3c72-account-create-update-dbmd9" Mar 11 09:16:31 crc kubenswrapper[4840]: I0311 09:16:31.959148 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-675f7dd995-5qwn2" podStartSLOduration=3.959134712 podStartE2EDuration="3.959134712s" podCreationTimestamp="2026-03-11 09:16:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:16:31.94912702 +0000 UTC m=+1190.614796825" watchObservedRunningTime="2026-03-11 09:16:31.959134712 +0000 UTC m=+1190.624804527" Mar 11 09:16:32 crc kubenswrapper[4840]: I0311 09:16:32.038326 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-r77jc"] Mar 11 09:16:32 crc kubenswrapper[4840]: I0311 09:16:32.040537 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-3c72-account-create-update-dbmd9" Mar 11 09:16:32 crc kubenswrapper[4840]: I0311 09:16:32.044370 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-r77jc"] Mar 11 09:16:32 crc kubenswrapper[4840]: I0311 09:16:32.068516 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-tv2rh" Mar 11 09:16:32 crc kubenswrapper[4840]: I0311 09:16:32.071960 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb7e4e45-4796-4f6c-909c-f65a1f68f689" path="/var/lib/kubelet/pods/bb7e4e45-4796-4f6c-909c-f65a1f68f689/volumes" Mar 11 09:16:32 crc kubenswrapper[4840]: I0311 09:16:32.511214 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-3c72-account-create-update-dbmd9"] Mar 11 09:16:32 crc kubenswrapper[4840]: W0311 09:16:32.515002 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc155188a_ed86_457a_88cb_42db8a510bf7.slice/crio-8d741d61d69b8dfc10043440bee4785fbd01b90c0f9b9f2bab6c3b33c2cdfcd8 WatchSource:0}: Error finding container 8d741d61d69b8dfc10043440bee4785fbd01b90c0f9b9f2bab6c3b33c2cdfcd8: Status 404 returned error can't find the container with id 8d741d61d69b8dfc10043440bee4785fbd01b90c0f9b9f2bab6c3b33c2cdfcd8 Mar 11 09:16:32 crc kubenswrapper[4840]: I0311 09:16:32.590118 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-tv2rh"] Mar 11 09:16:32 crc kubenswrapper[4840]: W0311 09:16:32.607462 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcebd38d4_d574_424d_8472_b60da948f8d1.slice/crio-8d2d7a228e6b71e6be27d664a2830dc2966b675be42e64d684075192b692f812 WatchSource:0}: Error finding container 8d2d7a228e6b71e6be27d664a2830dc2966b675be42e64d684075192b692f812: Status 404 returned error can't find the container with id 8d2d7a228e6b71e6be27d664a2830dc2966b675be42e64d684075192b692f812 Mar 11 09:16:32 crc kubenswrapper[4840]: I0311 09:16:32.948294 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-3c72-account-create-update-dbmd9" 
event={"ID":"c155188a-ed86-457a-88cb-42db8a510bf7","Type":"ContainerStarted","Data":"d1b5ffec586cd47bb674a9326320739259785cd510dbc8ce17b400241ea7be05"} Mar 11 09:16:32 crc kubenswrapper[4840]: I0311 09:16:32.948924 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-3c72-account-create-update-dbmd9" event={"ID":"c155188a-ed86-457a-88cb-42db8a510bf7","Type":"ContainerStarted","Data":"8d741d61d69b8dfc10043440bee4785fbd01b90c0f9b9f2bab6c3b33c2cdfcd8"} Mar 11 09:16:32 crc kubenswrapper[4840]: I0311 09:16:32.953202 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-tv2rh" event={"ID":"cebd38d4-d574-424d-8472-b60da948f8d1","Type":"ContainerStarted","Data":"b38323b6b7e8b7295410c590c4ccbf9d6e88a12f4db5e89ab486bcac669132fd"} Mar 11 09:16:32 crc kubenswrapper[4840]: I0311 09:16:32.953310 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-tv2rh" event={"ID":"cebd38d4-d574-424d-8472-b60da948f8d1","Type":"ContainerStarted","Data":"8d2d7a228e6b71e6be27d664a2830dc2966b675be42e64d684075192b692f812"} Mar 11 09:16:32 crc kubenswrapper[4840]: I0311 09:16:32.988323 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-3c72-account-create-update-dbmd9" podStartSLOduration=1.988298244 podStartE2EDuration="1.988298244s" podCreationTimestamp="2026-03-11 09:16:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:16:32.965894921 +0000 UTC m=+1191.631564736" watchObservedRunningTime="2026-03-11 09:16:32.988298244 +0000 UTC m=+1191.653968059" Mar 11 09:16:32 crc kubenswrapper[4840]: I0311 09:16:32.989068 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-tv2rh" podStartSLOduration=1.9890624639999999 podStartE2EDuration="1.989062464s" podCreationTimestamp="2026-03-11 09:16:31 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:16:32.98533697 +0000 UTC m=+1191.651006805" watchObservedRunningTime="2026-03-11 09:16:32.989062464 +0000 UTC m=+1191.654732269" Mar 11 09:16:33 crc kubenswrapper[4840]: I0311 09:16:33.644579 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-bclgj"] Mar 11 09:16:33 crc kubenswrapper[4840]: I0311 09:16:33.645757 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-bclgj" Mar 11 09:16:33 crc kubenswrapper[4840]: I0311 09:16:33.649565 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Mar 11 09:16:33 crc kubenswrapper[4840]: I0311 09:16:33.653003 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-bclgj"] Mar 11 09:16:33 crc kubenswrapper[4840]: I0311 09:16:33.767263 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/577fd829-89f7-4673-99d8-e9b4da8b9f84-operator-scripts\") pod \"root-account-create-update-bclgj\" (UID: \"577fd829-89f7-4673-99d8-e9b4da8b9f84\") " pod="openstack/root-account-create-update-bclgj" Mar 11 09:16:33 crc kubenswrapper[4840]: I0311 09:16:33.767788 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtvhj\" (UniqueName: \"kubernetes.io/projected/577fd829-89f7-4673-99d8-e9b4da8b9f84-kube-api-access-dtvhj\") pod \"root-account-create-update-bclgj\" (UID: \"577fd829-89f7-4673-99d8-e9b4da8b9f84\") " pod="openstack/root-account-create-update-bclgj" Mar 11 09:16:33 crc kubenswrapper[4840]: I0311 09:16:33.869837 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtvhj\" (UniqueName: 
\"kubernetes.io/projected/577fd829-89f7-4673-99d8-e9b4da8b9f84-kube-api-access-dtvhj\") pod \"root-account-create-update-bclgj\" (UID: \"577fd829-89f7-4673-99d8-e9b4da8b9f84\") " pod="openstack/root-account-create-update-bclgj" Mar 11 09:16:33 crc kubenswrapper[4840]: I0311 09:16:33.870057 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/577fd829-89f7-4673-99d8-e9b4da8b9f84-operator-scripts\") pod \"root-account-create-update-bclgj\" (UID: \"577fd829-89f7-4673-99d8-e9b4da8b9f84\") " pod="openstack/root-account-create-update-bclgj" Mar 11 09:16:33 crc kubenswrapper[4840]: I0311 09:16:33.871580 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/577fd829-89f7-4673-99d8-e9b4da8b9f84-operator-scripts\") pod \"root-account-create-update-bclgj\" (UID: \"577fd829-89f7-4673-99d8-e9b4da8b9f84\") " pod="openstack/root-account-create-update-bclgj" Mar 11 09:16:33 crc kubenswrapper[4840]: I0311 09:16:33.905204 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtvhj\" (UniqueName: \"kubernetes.io/projected/577fd829-89f7-4673-99d8-e9b4da8b9f84-kube-api-access-dtvhj\") pod \"root-account-create-update-bclgj\" (UID: \"577fd829-89f7-4673-99d8-e9b4da8b9f84\") " pod="openstack/root-account-create-update-bclgj" Mar 11 09:16:33 crc kubenswrapper[4840]: I0311 09:16:33.966053 4840 generic.go:334] "Generic (PLEG): container finished" podID="cebd38d4-d574-424d-8472-b60da948f8d1" containerID="b38323b6b7e8b7295410c590c4ccbf9d6e88a12f4db5e89ab486bcac669132fd" exitCode=0 Mar 11 09:16:33 crc kubenswrapper[4840]: I0311 09:16:33.966214 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-tv2rh" event={"ID":"cebd38d4-d574-424d-8472-b60da948f8d1","Type":"ContainerDied","Data":"b38323b6b7e8b7295410c590c4ccbf9d6e88a12f4db5e89ab486bcac669132fd"} Mar 11 
09:16:33 crc kubenswrapper[4840]: I0311 09:16:33.972576 4840 generic.go:334] "Generic (PLEG): container finished" podID="c155188a-ed86-457a-88cb-42db8a510bf7" containerID="d1b5ffec586cd47bb674a9326320739259785cd510dbc8ce17b400241ea7be05" exitCode=0 Mar 11 09:16:33 crc kubenswrapper[4840]: I0311 09:16:33.973453 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-3c72-account-create-update-dbmd9" event={"ID":"c155188a-ed86-457a-88cb-42db8a510bf7","Type":"ContainerDied","Data":"d1b5ffec586cd47bb674a9326320739259785cd510dbc8ce17b400241ea7be05"} Mar 11 09:16:33 crc kubenswrapper[4840]: I0311 09:16:33.973595 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f1c7d7f4-dc60-4703-b6c3-6cd626db11af-etc-swift\") pod \"swift-storage-0\" (UID: \"f1c7d7f4-dc60-4703-b6c3-6cd626db11af\") " pod="openstack/swift-storage-0" Mar 11 09:16:33 crc kubenswrapper[4840]: E0311 09:16:33.973890 4840 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 11 09:16:33 crc kubenswrapper[4840]: E0311 09:16:33.973916 4840 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 11 09:16:33 crc kubenswrapper[4840]: E0311 09:16:33.973975 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f1c7d7f4-dc60-4703-b6c3-6cd626db11af-etc-swift podName:f1c7d7f4-dc60-4703-b6c3-6cd626db11af nodeName:}" failed. No retries permitted until 2026-03-11 09:16:37.973955033 +0000 UTC m=+1196.639624848 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/f1c7d7f4-dc60-4703-b6c3-6cd626db11af-etc-swift") pod "swift-storage-0" (UID: "f1c7d7f4-dc60-4703-b6c3-6cd626db11af") : configmap "swift-ring-files" not found Mar 11 09:16:33 crc kubenswrapper[4840]: I0311 09:16:33.976027 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-bclgj" Mar 11 09:16:33 crc kubenswrapper[4840]: I0311 09:16:33.987306 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7b57d9888c-xvlx4" Mar 11 09:16:37 crc kubenswrapper[4840]: I0311 09:16:37.607137 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-vghhr"] Mar 11 09:16:37 crc kubenswrapper[4840]: I0311 09:16:37.609331 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-vghhr" Mar 11 09:16:37 crc kubenswrapper[4840]: I0311 09:16:37.618413 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-vghhr"] Mar 11 09:16:37 crc kubenswrapper[4840]: I0311 09:16:37.642552 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-3c72-account-create-update-dbmd9" Mar 11 09:16:37 crc kubenswrapper[4840]: I0311 09:16:37.721281 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-42de-account-create-update-bjxph"] Mar 11 09:16:37 crc kubenswrapper[4840]: E0311 09:16:37.721761 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c155188a-ed86-457a-88cb-42db8a510bf7" containerName="mariadb-account-create-update" Mar 11 09:16:37 crc kubenswrapper[4840]: I0311 09:16:37.721784 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="c155188a-ed86-457a-88cb-42db8a510bf7" containerName="mariadb-account-create-update" Mar 11 09:16:37 crc kubenswrapper[4840]: I0311 09:16:37.722005 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="c155188a-ed86-457a-88cb-42db8a510bf7" containerName="mariadb-account-create-update" Mar 11 09:16:37 crc kubenswrapper[4840]: I0311 09:16:37.722726 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-42de-account-create-update-bjxph"
Mar 11 09:16:37 crc kubenswrapper[4840]: I0311 09:16:37.727357 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret"
Mar 11 09:16:37 crc kubenswrapper[4840]: I0311 09:16:37.729162 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-42de-account-create-update-bjxph"]
Mar 11 09:16:37 crc kubenswrapper[4840]: I0311 09:16:37.755453 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c155188a-ed86-457a-88cb-42db8a510bf7-operator-scripts\") pod \"c155188a-ed86-457a-88cb-42db8a510bf7\" (UID: \"c155188a-ed86-457a-88cb-42db8a510bf7\") "
Mar 11 09:16:37 crc kubenswrapper[4840]: I0311 09:16:37.755670 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wllkj\" (UniqueName: \"kubernetes.io/projected/c155188a-ed86-457a-88cb-42db8a510bf7-kube-api-access-wllkj\") pod \"c155188a-ed86-457a-88cb-42db8a510bf7\" (UID: \"c155188a-ed86-457a-88cb-42db8a510bf7\") "
Mar 11 09:16:37 crc kubenswrapper[4840]: I0311 09:16:37.756337 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c155188a-ed86-457a-88cb-42db8a510bf7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c155188a-ed86-457a-88cb-42db8a510bf7" (UID: "c155188a-ed86-457a-88cb-42db8a510bf7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 11 09:16:37 crc kubenswrapper[4840]: I0311 09:16:37.756926 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f06548f2-3dd2-48ae-ad54-333466a3e6ac-operator-scripts\") pod \"keystone-db-create-vghhr\" (UID: \"f06548f2-3dd2-48ae-ad54-333466a3e6ac\") " pod="openstack/keystone-db-create-vghhr"
Mar 11 09:16:37 crc kubenswrapper[4840]: I0311 09:16:37.757090 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbxm4\" (UniqueName: \"kubernetes.io/projected/f06548f2-3dd2-48ae-ad54-333466a3e6ac-kube-api-access-sbxm4\") pod \"keystone-db-create-vghhr\" (UID: \"f06548f2-3dd2-48ae-ad54-333466a3e6ac\") " pod="openstack/keystone-db-create-vghhr"
Mar 11 09:16:37 crc kubenswrapper[4840]: I0311 09:16:37.757193 4840 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c155188a-ed86-457a-88cb-42db8a510bf7-operator-scripts\") on node \"crc\" DevicePath \"\""
Mar 11 09:16:37 crc kubenswrapper[4840]: I0311 09:16:37.762404 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c155188a-ed86-457a-88cb-42db8a510bf7-kube-api-access-wllkj" (OuterVolumeSpecName: "kube-api-access-wllkj") pod "c155188a-ed86-457a-88cb-42db8a510bf7" (UID: "c155188a-ed86-457a-88cb-42db8a510bf7"). InnerVolumeSpecName "kube-api-access-wllkj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 11 09:16:37 crc kubenswrapper[4840]: I0311 09:16:37.817908 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-p8m6d"]
Mar 11 09:16:37 crc kubenswrapper[4840]: I0311 09:16:37.819020 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-p8m6d"
Mar 11 09:16:37 crc kubenswrapper[4840]: I0311 09:16:37.826870 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-p8m6d"]
Mar 11 09:16:37 crc kubenswrapper[4840]: I0311 09:16:37.859576 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbxm4\" (UniqueName: \"kubernetes.io/projected/f06548f2-3dd2-48ae-ad54-333466a3e6ac-kube-api-access-sbxm4\") pod \"keystone-db-create-vghhr\" (UID: \"f06548f2-3dd2-48ae-ad54-333466a3e6ac\") " pod="openstack/keystone-db-create-vghhr"
Mar 11 09:16:37 crc kubenswrapper[4840]: I0311 09:16:37.859715 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vblk2\" (UniqueName: \"kubernetes.io/projected/e3dc3dc8-7561-4964-9b52-5bfacca166da-kube-api-access-vblk2\") pod \"keystone-42de-account-create-update-bjxph\" (UID: \"e3dc3dc8-7561-4964-9b52-5bfacca166da\") " pod="openstack/keystone-42de-account-create-update-bjxph"
Mar 11 09:16:37 crc kubenswrapper[4840]: I0311 09:16:37.859858 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e3dc3dc8-7561-4964-9b52-5bfacca166da-operator-scripts\") pod \"keystone-42de-account-create-update-bjxph\" (UID: \"e3dc3dc8-7561-4964-9b52-5bfacca166da\") " pod="openstack/keystone-42de-account-create-update-bjxph"
Mar 11 09:16:37 crc kubenswrapper[4840]: I0311 09:16:37.859932 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f06548f2-3dd2-48ae-ad54-333466a3e6ac-operator-scripts\") pod \"keystone-db-create-vghhr\" (UID: \"f06548f2-3dd2-48ae-ad54-333466a3e6ac\") " pod="openstack/keystone-db-create-vghhr"
Mar 11 09:16:37 crc kubenswrapper[4840]: I0311 09:16:37.860429 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wllkj\" (UniqueName: \"kubernetes.io/projected/c155188a-ed86-457a-88cb-42db8a510bf7-kube-api-access-wllkj\") on node \"crc\" DevicePath \"\""
Mar 11 09:16:37 crc kubenswrapper[4840]: I0311 09:16:37.861009 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f06548f2-3dd2-48ae-ad54-333466a3e6ac-operator-scripts\") pod \"keystone-db-create-vghhr\" (UID: \"f06548f2-3dd2-48ae-ad54-333466a3e6ac\") " pod="openstack/keystone-db-create-vghhr"
Mar 11 09:16:37 crc kubenswrapper[4840]: I0311 09:16:37.891376 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbxm4\" (UniqueName: \"kubernetes.io/projected/f06548f2-3dd2-48ae-ad54-333466a3e6ac-kube-api-access-sbxm4\") pod \"keystone-db-create-vghhr\" (UID: \"f06548f2-3dd2-48ae-ad54-333466a3e6ac\") " pod="openstack/keystone-db-create-vghhr"
Mar 11 09:16:37 crc kubenswrapper[4840]: I0311 09:16:37.914593 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-5280-account-create-update-8r5vm"]
Mar 11 09:16:37 crc kubenswrapper[4840]: I0311 09:16:37.915811 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5280-account-create-update-8r5vm"
Mar 11 09:16:37 crc kubenswrapper[4840]: I0311 09:16:37.917879 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret"
Mar 11 09:16:37 crc kubenswrapper[4840]: I0311 09:16:37.923501 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5280-account-create-update-8r5vm"]
Mar 11 09:16:37 crc kubenswrapper[4840]: I0311 09:16:37.943020 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-vghhr"
Mar 11 09:16:37 crc kubenswrapper[4840]: I0311 09:16:37.961613 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsxgk\" (UniqueName: \"kubernetes.io/projected/85f1dc85-62df-44f4-a549-390aeb2709be-kube-api-access-gsxgk\") pod \"placement-db-create-p8m6d\" (UID: \"85f1dc85-62df-44f4-a549-390aeb2709be\") " pod="openstack/placement-db-create-p8m6d"
Mar 11 09:16:37 crc kubenswrapper[4840]: I0311 09:16:37.961670 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vblk2\" (UniqueName: \"kubernetes.io/projected/e3dc3dc8-7561-4964-9b52-5bfacca166da-kube-api-access-vblk2\") pod \"keystone-42de-account-create-update-bjxph\" (UID: \"e3dc3dc8-7561-4964-9b52-5bfacca166da\") " pod="openstack/keystone-42de-account-create-update-bjxph"
Mar 11 09:16:37 crc kubenswrapper[4840]: I0311 09:16:37.961718 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e3dc3dc8-7561-4964-9b52-5bfacca166da-operator-scripts\") pod \"keystone-42de-account-create-update-bjxph\" (UID: \"e3dc3dc8-7561-4964-9b52-5bfacca166da\") " pod="openstack/keystone-42de-account-create-update-bjxph"
Mar 11 09:16:37 crc kubenswrapper[4840]: I0311 09:16:37.961822 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85f1dc85-62df-44f4-a549-390aeb2709be-operator-scripts\") pod \"placement-db-create-p8m6d\" (UID: \"85f1dc85-62df-44f4-a549-390aeb2709be\") " pod="openstack/placement-db-create-p8m6d"
Mar 11 09:16:37 crc kubenswrapper[4840]: I0311 09:16:37.964008 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e3dc3dc8-7561-4964-9b52-5bfacca166da-operator-scripts\") pod \"keystone-42de-account-create-update-bjxph\" (UID: \"e3dc3dc8-7561-4964-9b52-5bfacca166da\") " pod="openstack/keystone-42de-account-create-update-bjxph"
Mar 11 09:16:37 crc kubenswrapper[4840]: I0311 09:16:37.981069 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vblk2\" (UniqueName: \"kubernetes.io/projected/e3dc3dc8-7561-4964-9b52-5bfacca166da-kube-api-access-vblk2\") pod \"keystone-42de-account-create-update-bjxph\" (UID: \"e3dc3dc8-7561-4964-9b52-5bfacca166da\") " pod="openstack/keystone-42de-account-create-update-bjxph"
Mar 11 09:16:38 crc kubenswrapper[4840]: I0311 09:16:38.010495 4840 generic.go:334] "Generic (PLEG): container finished" podID="3a964c24-3c53-4a29-98fb-ceaca467c372" containerID="43da3ef6c28eef2051a13159a394be8ff41df563fd5e85c35c8ba4cdcc59aec7" exitCode=0
Mar 11 09:16:38 crc kubenswrapper[4840]: I0311 09:16:38.010577 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"3a964c24-3c53-4a29-98fb-ceaca467c372","Type":"ContainerDied","Data":"43da3ef6c28eef2051a13159a394be8ff41df563fd5e85c35c8ba4cdcc59aec7"}
Mar 11 09:16:38 crc kubenswrapper[4840]: I0311 09:16:38.014435 4840 generic.go:334] "Generic (PLEG): container finished" podID="f31748d2-64a9-4839-ac55-691d9682ee8e" containerID="ee9ff21cca22ce33a9fecf80bd4c008df2c67c2ba0d6988e89b3285d5e560733" exitCode=0
Mar 11 09:16:38 crc kubenswrapper[4840]: I0311 09:16:38.014497 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f31748d2-64a9-4839-ac55-691d9682ee8e","Type":"ContainerDied","Data":"ee9ff21cca22ce33a9fecf80bd4c008df2c67c2ba0d6988e89b3285d5e560733"}
Mar 11 09:16:38 crc kubenswrapper[4840]: I0311 09:16:38.016488 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-3c72-account-create-update-dbmd9" event={"ID":"c155188a-ed86-457a-88cb-42db8a510bf7","Type":"ContainerDied","Data":"8d741d61d69b8dfc10043440bee4785fbd01b90c0f9b9f2bab6c3b33c2cdfcd8"}
Mar 11 09:16:38 crc kubenswrapper[4840]: I0311 09:16:38.016518 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d741d61d69b8dfc10043440bee4785fbd01b90c0f9b9f2bab6c3b33c2cdfcd8"
Mar 11 09:16:38 crc kubenswrapper[4840]: I0311 09:16:38.016565 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-3c72-account-create-update-dbmd9"
Mar 11 09:16:38 crc kubenswrapper[4840]: I0311 09:16:38.048238 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-42de-account-create-update-bjxph"
Mar 11 09:16:38 crc kubenswrapper[4840]: I0311 09:16:38.062883 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f1c7d7f4-dc60-4703-b6c3-6cd626db11af-etc-swift\") pod \"swift-storage-0\" (UID: \"f1c7d7f4-dc60-4703-b6c3-6cd626db11af\") " pod="openstack/swift-storage-0"
Mar 11 09:16:38 crc kubenswrapper[4840]: I0311 09:16:38.062961 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85f1dc85-62df-44f4-a549-390aeb2709be-operator-scripts\") pod \"placement-db-create-p8m6d\" (UID: \"85f1dc85-62df-44f4-a549-390aeb2709be\") " pod="openstack/placement-db-create-p8m6d"
Mar 11 09:16:38 crc kubenswrapper[4840]: I0311 09:16:38.063018 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fd038b3a-9412-48d6-8aa9-f455897bcfb2-operator-scripts\") pod \"placement-5280-account-create-update-8r5vm\" (UID: \"fd038b3a-9412-48d6-8aa9-f455897bcfb2\") " pod="openstack/placement-5280-account-create-update-8r5vm"
Mar 11 09:16:38 crc kubenswrapper[4840]: I0311 09:16:38.063082 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gsxgk\" (UniqueName: \"kubernetes.io/projected/85f1dc85-62df-44f4-a549-390aeb2709be-kube-api-access-gsxgk\") pod \"placement-db-create-p8m6d\" (UID: \"85f1dc85-62df-44f4-a549-390aeb2709be\") " pod="openstack/placement-db-create-p8m6d"
Mar 11 09:16:38 crc kubenswrapper[4840]: I0311 09:16:38.063169 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62scg\" (UniqueName: \"kubernetes.io/projected/fd038b3a-9412-48d6-8aa9-f455897bcfb2-kube-api-access-62scg\") pod \"placement-5280-account-create-update-8r5vm\" (UID: \"fd038b3a-9412-48d6-8aa9-f455897bcfb2\") " pod="openstack/placement-5280-account-create-update-8r5vm"
Mar 11 09:16:38 crc kubenswrapper[4840]: E0311 09:16:38.063362 4840 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Mar 11 09:16:38 crc kubenswrapper[4840]: E0311 09:16:38.063377 4840 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Mar 11 09:16:38 crc kubenswrapper[4840]: E0311 09:16:38.063420 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f1c7d7f4-dc60-4703-b6c3-6cd626db11af-etc-swift podName:f1c7d7f4-dc60-4703-b6c3-6cd626db11af nodeName:}" failed. No retries permitted until 2026-03-11 09:16:46.063402081 +0000 UTC m=+1204.729071896 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/f1c7d7f4-dc60-4703-b6c3-6cd626db11af-etc-swift") pod "swift-storage-0" (UID: "f1c7d7f4-dc60-4703-b6c3-6cd626db11af") : configmap "swift-ring-files" not found
Mar 11 09:16:38 crc kubenswrapper[4840]: I0311 09:16:38.064591 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85f1dc85-62df-44f4-a549-390aeb2709be-operator-scripts\") pod \"placement-db-create-p8m6d\" (UID: \"85f1dc85-62df-44f4-a549-390aeb2709be\") " pod="openstack/placement-db-create-p8m6d"
Mar 11 09:16:38 crc kubenswrapper[4840]: I0311 09:16:38.120785 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gsxgk\" (UniqueName: \"kubernetes.io/projected/85f1dc85-62df-44f4-a549-390aeb2709be-kube-api-access-gsxgk\") pod \"placement-db-create-p8m6d\" (UID: \"85f1dc85-62df-44f4-a549-390aeb2709be\") " pod="openstack/placement-db-create-p8m6d"
Mar 11 09:16:38 crc kubenswrapper[4840]: I0311 09:16:38.146861 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-p8m6d"
Mar 11 09:16:38 crc kubenswrapper[4840]: I0311 09:16:38.165133 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fd038b3a-9412-48d6-8aa9-f455897bcfb2-operator-scripts\") pod \"placement-5280-account-create-update-8r5vm\" (UID: \"fd038b3a-9412-48d6-8aa9-f455897bcfb2\") " pod="openstack/placement-5280-account-create-update-8r5vm"
Mar 11 09:16:38 crc kubenswrapper[4840]: I0311 09:16:38.165353 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62scg\" (UniqueName: \"kubernetes.io/projected/fd038b3a-9412-48d6-8aa9-f455897bcfb2-kube-api-access-62scg\") pod \"placement-5280-account-create-update-8r5vm\" (UID: \"fd038b3a-9412-48d6-8aa9-f455897bcfb2\") " pod="openstack/placement-5280-account-create-update-8r5vm"
Mar 11 09:16:38 crc kubenswrapper[4840]: I0311 09:16:38.166732 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fd038b3a-9412-48d6-8aa9-f455897bcfb2-operator-scripts\") pod \"placement-5280-account-create-update-8r5vm\" (UID: \"fd038b3a-9412-48d6-8aa9-f455897bcfb2\") " pod="openstack/placement-5280-account-create-update-8r5vm"
Mar 11 09:16:38 crc kubenswrapper[4840]: I0311 09:16:38.189631 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62scg\" (UniqueName: \"kubernetes.io/projected/fd038b3a-9412-48d6-8aa9-f455897bcfb2-kube-api-access-62scg\") pod \"placement-5280-account-create-update-8r5vm\" (UID: \"fd038b3a-9412-48d6-8aa9-f455897bcfb2\") " pod="openstack/placement-5280-account-create-update-8r5vm"
Mar 11 09:16:38 crc kubenswrapper[4840]: I0311 09:16:38.241419 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5280-account-create-update-8r5vm"
Mar 11 09:16:39 crc kubenswrapper[4840]: I0311 09:16:39.034436 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-tv2rh" event={"ID":"cebd38d4-d574-424d-8472-b60da948f8d1","Type":"ContainerDied","Data":"8d2d7a228e6b71e6be27d664a2830dc2966b675be42e64d684075192b692f812"}
Mar 11 09:16:39 crc kubenswrapper[4840]: I0311 09:16:39.035187 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d2d7a228e6b71e6be27d664a2830dc2966b675be42e64d684075192b692f812"
Mar 11 09:16:39 crc kubenswrapper[4840]: I0311 09:16:39.145861 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-tv2rh"
Mar 11 09:16:39 crc kubenswrapper[4840]: I0311 09:16:39.285054 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cebd38d4-d574-424d-8472-b60da948f8d1-operator-scripts\") pod \"cebd38d4-d574-424d-8472-b60da948f8d1\" (UID: \"cebd38d4-d574-424d-8472-b60da948f8d1\") "
Mar 11 09:16:39 crc kubenswrapper[4840]: I0311 09:16:39.285510 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m6mvz\" (UniqueName: \"kubernetes.io/projected/cebd38d4-d574-424d-8472-b60da948f8d1-kube-api-access-m6mvz\") pod \"cebd38d4-d574-424d-8472-b60da948f8d1\" (UID: \"cebd38d4-d574-424d-8472-b60da948f8d1\") "
Mar 11 09:16:39 crc kubenswrapper[4840]: I0311 09:16:39.286319 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cebd38d4-d574-424d-8472-b60da948f8d1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cebd38d4-d574-424d-8472-b60da948f8d1" (UID: "cebd38d4-d574-424d-8472-b60da948f8d1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 11 09:16:39 crc kubenswrapper[4840]: I0311 09:16:39.289488 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cebd38d4-d574-424d-8472-b60da948f8d1-kube-api-access-m6mvz" (OuterVolumeSpecName: "kube-api-access-m6mvz") pod "cebd38d4-d574-424d-8472-b60da948f8d1" (UID: "cebd38d4-d574-424d-8472-b60da948f8d1"). InnerVolumeSpecName "kube-api-access-m6mvz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 11 09:16:39 crc kubenswrapper[4840]: I0311 09:16:39.323646 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-675f7dd995-5qwn2"
Mar 11 09:16:39 crc kubenswrapper[4840]: I0311 09:16:39.379094 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7b57d9888c-xvlx4"]
Mar 11 09:16:39 crc kubenswrapper[4840]: I0311 09:16:39.379337 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7b57d9888c-xvlx4" podUID="51dd350a-e07a-49db-babe-419a6e23e271" containerName="dnsmasq-dns" containerID="cri-o://2bebde5ea6ba31adfbf9a34327c68ff63b9f596992027cfe9c2c5849d6a2d624" gracePeriod=10
Mar 11 09:16:39 crc kubenswrapper[4840]: I0311 09:16:39.393299 4840 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cebd38d4-d574-424d-8472-b60da948f8d1-operator-scripts\") on node \"crc\" DevicePath \"\""
Mar 11 09:16:39 crc kubenswrapper[4840]: I0311 09:16:39.393332 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m6mvz\" (UniqueName: \"kubernetes.io/projected/cebd38d4-d574-424d-8472-b60da948f8d1-kube-api-access-m6mvz\") on node \"crc\" DevicePath \"\""
Mar 11 09:16:39 crc kubenswrapper[4840]: I0311 09:16:39.486461 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-bclgj"]
Mar 11 09:16:39 crc kubenswrapper[4840]: I0311 09:16:39.497087 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-vghhr"]
Mar 11 09:16:39 crc kubenswrapper[4840]: I0311 09:16:39.759502 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5280-account-create-update-8r5vm"]
Mar 11 09:16:39 crc kubenswrapper[4840]: W0311 09:16:39.778659 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfd038b3a_9412_48d6_8aa9_f455897bcfb2.slice/crio-e10ab3e3a2a945655fd5ee3348440fdd2787dea5cd4e2024715142bbb7ab42f7 WatchSource:0}: Error finding container e10ab3e3a2a945655fd5ee3348440fdd2787dea5cd4e2024715142bbb7ab42f7: Status 404 returned error can't find the container with id e10ab3e3a2a945655fd5ee3348440fdd2787dea5cd4e2024715142bbb7ab42f7
Mar 11 09:16:39 crc kubenswrapper[4840]: I0311 09:16:39.781247 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-p8m6d"]
Mar 11 09:16:39 crc kubenswrapper[4840]: I0311 09:16:39.815262 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-42de-account-create-update-bjxph"]
Mar 11 09:16:40 crc kubenswrapper[4840]: I0311 09:16:40.010638 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b57d9888c-xvlx4"
Mar 11 09:16:40 crc kubenswrapper[4840]: I0311 09:16:40.048833 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5280-account-create-update-8r5vm" event={"ID":"fd038b3a-9412-48d6-8aa9-f455897bcfb2","Type":"ContainerStarted","Data":"e10ab3e3a2a945655fd5ee3348440fdd2787dea5cd4e2024715142bbb7ab42f7"}
Mar 11 09:16:40 crc kubenswrapper[4840]: I0311 09:16:40.052005 4840 generic.go:334] "Generic (PLEG): container finished" podID="51dd350a-e07a-49db-babe-419a6e23e271" containerID="2bebde5ea6ba31adfbf9a34327c68ff63b9f596992027cfe9c2c5849d6a2d624" exitCode=0
Mar 11 09:16:40 crc kubenswrapper[4840]: I0311 09:16:40.052089 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b57d9888c-xvlx4" event={"ID":"51dd350a-e07a-49db-babe-419a6e23e271","Type":"ContainerDied","Data":"2bebde5ea6ba31adfbf9a34327c68ff63b9f596992027cfe9c2c5849d6a2d624"}
Mar 11 09:16:40 crc kubenswrapper[4840]: I0311 09:16:40.052126 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b57d9888c-xvlx4" event={"ID":"51dd350a-e07a-49db-babe-419a6e23e271","Type":"ContainerDied","Data":"6ea339deaf35390394fdfc07d305f5dc33ad856271801abaa248605d08460331"}
Mar 11 09:16:40 crc kubenswrapper[4840]: I0311 09:16:40.052150 4840 scope.go:117] "RemoveContainer" containerID="2bebde5ea6ba31adfbf9a34327c68ff63b9f596992027cfe9c2c5849d6a2d624"
Mar 11 09:16:40 crc kubenswrapper[4840]: I0311 09:16:40.052282 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b57d9888c-xvlx4"
Mar 11 09:16:40 crc kubenswrapper[4840]: I0311 09:16:40.058701 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-42de-account-create-update-bjxph" event={"ID":"e3dc3dc8-7561-4964-9b52-5bfacca166da","Type":"ContainerStarted","Data":"30e8ff4a821ff94e16f2303d00b5c01a555a3233f831ea87b1efb63f3c44aae7"}
Mar 11 09:16:40 crc kubenswrapper[4840]: I0311 09:16:40.084345 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-bclgj" event={"ID":"577fd829-89f7-4673-99d8-e9b4da8b9f84","Type":"ContainerStarted","Data":"2996d2155c329176eff9a6a52124bee55a98d549119d59487efbbb39299564e2"}
Mar 11 09:16:40 crc kubenswrapper[4840]: I0311 09:16:40.084382 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-bclgj" event={"ID":"577fd829-89f7-4673-99d8-e9b4da8b9f84","Type":"ContainerStarted","Data":"afd328e4a8288a2b682a780e2e6902b50b7944372410e58b9f2fd4143d45bc96"}
Mar 11 09:16:40 crc kubenswrapper[4840]: I0311 09:16:40.090741 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"3a964c24-3c53-4a29-98fb-ceaca467c372","Type":"ContainerStarted","Data":"a744999686b40927e99b68155086c1bd24cdc20d1ac6a31619c39f4421dcb680"}
Mar 11 09:16:40 crc kubenswrapper[4840]: I0311 09:16:40.091727 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0"
Mar 11 09:16:40 crc kubenswrapper[4840]: I0311 09:16:40.099143 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-bclgj" podStartSLOduration=7.099127179 podStartE2EDuration="7.099127179s" podCreationTimestamp="2026-03-11 09:16:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:16:40.097013386 +0000 UTC m=+1198.762683201" watchObservedRunningTime="2026-03-11 09:16:40.099127179 +0000 UTC m=+1198.764796994"
Mar 11 09:16:40 crc kubenswrapper[4840]: I0311 09:16:40.108192 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-px5zj" event={"ID":"757b3a3d-0655-4517-a752-b944899642c9","Type":"ContainerStarted","Data":"5acef0cbaf6b8eacac1e6d33812371d5045212d01c390a7c87175c13608077b8"}
Mar 11 09:16:40 crc kubenswrapper[4840]: I0311 09:16:40.114116 4840 scope.go:117] "RemoveContainer" containerID="0756c34c4bbf430129f8132fddfe578fb4d973c5a56e788f369e2843e4b7176a"
Mar 11 09:16:40 crc kubenswrapper[4840]: I0311 09:16:40.129319 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/51dd350a-e07a-49db-babe-419a6e23e271-ovsdbserver-nb\") pod \"51dd350a-e07a-49db-babe-419a6e23e271\" (UID: \"51dd350a-e07a-49db-babe-419a6e23e271\") "
Mar 11 09:16:40 crc kubenswrapper[4840]: I0311 09:16:40.129727 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rtswv\" (UniqueName: \"kubernetes.io/projected/51dd350a-e07a-49db-babe-419a6e23e271-kube-api-access-rtswv\") pod \"51dd350a-e07a-49db-babe-419a6e23e271\" (UID: \"51dd350a-e07a-49db-babe-419a6e23e271\") "
Mar 11 09:16:40 crc kubenswrapper[4840]: I0311 09:16:40.129970 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/51dd350a-e07a-49db-babe-419a6e23e271-ovsdbserver-sb\") pod \"51dd350a-e07a-49db-babe-419a6e23e271\" (UID: \"51dd350a-e07a-49db-babe-419a6e23e271\") "
Mar 11 09:16:40 crc kubenswrapper[4840]: I0311 09:16:40.130167 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/51dd350a-e07a-49db-babe-419a6e23e271-dns-svc\") pod \"51dd350a-e07a-49db-babe-419a6e23e271\" (UID: \"51dd350a-e07a-49db-babe-419a6e23e271\") "
Mar 11 09:16:40 crc kubenswrapper[4840]: I0311 09:16:40.130320 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51dd350a-e07a-49db-babe-419a6e23e271-config\") pod \"51dd350a-e07a-49db-babe-419a6e23e271\" (UID: \"51dd350a-e07a-49db-babe-419a6e23e271\") "
Mar 11 09:16:40 crc kubenswrapper[4840]: I0311 09:16:40.133117 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-p8m6d" event={"ID":"85f1dc85-62df-44f4-a549-390aeb2709be","Type":"ContainerStarted","Data":"bad5fc019396e439013abc9f716a1f2810f81bf618741b13c84245d03d2d40b2"}
Mar 11 09:16:40 crc kubenswrapper[4840]: I0311 09:16:40.134124 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=44.598152393 podStartE2EDuration="58.134103028s" podCreationTimestamp="2026-03-11 09:15:42 +0000 UTC" firstStartedPulling="2026-03-11 09:15:50.472402128 +0000 UTC m=+1149.138071943" lastFinishedPulling="2026-03-11 09:16:04.008352763 +0000 UTC m=+1162.674022578" observedRunningTime="2026-03-11 09:16:40.121299166 +0000 UTC m=+1198.786968991" watchObservedRunningTime="2026-03-11 09:16:40.134103028 +0000 UTC m=+1198.799772833"
Mar 11 09:16:40 crc kubenswrapper[4840]: I0311 09:16:40.139164 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51dd350a-e07a-49db-babe-419a6e23e271-kube-api-access-rtswv" (OuterVolumeSpecName: "kube-api-access-rtswv") pod "51dd350a-e07a-49db-babe-419a6e23e271" (UID: "51dd350a-e07a-49db-babe-419a6e23e271"). InnerVolumeSpecName "kube-api-access-rtswv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 11 09:16:40 crc kubenswrapper[4840]: I0311 09:16:40.150319 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-px5zj" podStartSLOduration=3.013231162 podStartE2EDuration="10.150299206s" podCreationTimestamp="2026-03-11 09:16:30 +0000 UTC" firstStartedPulling="2026-03-11 09:16:31.795577829 +0000 UTC m=+1190.461247644" lastFinishedPulling="2026-03-11 09:16:38.932645873 +0000 UTC m=+1197.598315688" observedRunningTime="2026-03-11 09:16:40.144670084 +0000 UTC m=+1198.810339889" watchObservedRunningTime="2026-03-11 09:16:40.150299206 +0000 UTC m=+1198.815969021"
Mar 11 09:16:40 crc kubenswrapper[4840]: I0311 09:16:40.151881 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f31748d2-64a9-4839-ac55-691d9682ee8e","Type":"ContainerStarted","Data":"0e215c93773db23bc670a8e1e1523ba89c86e46f9708b9592117120dd8a29424"}
Mar 11 09:16:40 crc kubenswrapper[4840]: I0311 09:16:40.152183 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0"
Mar 11 09:16:40 crc kubenswrapper[4840]: I0311 09:16:40.156155 4840 scope.go:117] "RemoveContainer" containerID="2bebde5ea6ba31adfbf9a34327c68ff63b9f596992027cfe9c2c5849d6a2d624"
Mar 11 09:16:40 crc kubenswrapper[4840]: E0311 09:16:40.156511 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2bebde5ea6ba31adfbf9a34327c68ff63b9f596992027cfe9c2c5849d6a2d624\": container with ID starting with 2bebde5ea6ba31adfbf9a34327c68ff63b9f596992027cfe9c2c5849d6a2d624 not found: ID does not exist" containerID="2bebde5ea6ba31adfbf9a34327c68ff63b9f596992027cfe9c2c5849d6a2d624"
Mar 11 09:16:40 crc kubenswrapper[4840]: I0311 09:16:40.156548 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2bebde5ea6ba31adfbf9a34327c68ff63b9f596992027cfe9c2c5849d6a2d624"} err="failed to get container status \"2bebde5ea6ba31adfbf9a34327c68ff63b9f596992027cfe9c2c5849d6a2d624\": rpc error: code = NotFound desc = could not find container \"2bebde5ea6ba31adfbf9a34327c68ff63b9f596992027cfe9c2c5849d6a2d624\": container with ID starting with 2bebde5ea6ba31adfbf9a34327c68ff63b9f596992027cfe9c2c5849d6a2d624 not found: ID does not exist"
Mar 11 09:16:40 crc kubenswrapper[4840]: I0311 09:16:40.156568 4840 scope.go:117] "RemoveContainer" containerID="0756c34c4bbf430129f8132fddfe578fb4d973c5a56e788f369e2843e4b7176a"
Mar 11 09:16:40 crc kubenswrapper[4840]: E0311 09:16:40.157658 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0756c34c4bbf430129f8132fddfe578fb4d973c5a56e788f369e2843e4b7176a\": container with ID starting with 0756c34c4bbf430129f8132fddfe578fb4d973c5a56e788f369e2843e4b7176a not found: ID does not exist" containerID="0756c34c4bbf430129f8132fddfe578fb4d973c5a56e788f369e2843e4b7176a"
Mar 11 09:16:40 crc kubenswrapper[4840]: I0311 09:16:40.157684 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0756c34c4bbf430129f8132fddfe578fb4d973c5a56e788f369e2843e4b7176a"} err="failed to get container status \"0756c34c4bbf430129f8132fddfe578fb4d973c5a56e788f369e2843e4b7176a\": rpc error: code = NotFound desc = could not find container \"0756c34c4bbf430129f8132fddfe578fb4d973c5a56e788f369e2843e4b7176a\": container with ID starting with 0756c34c4bbf430129f8132fddfe578fb4d973c5a56e788f369e2843e4b7176a not found: ID does not exist"
Mar 11 09:16:40 crc kubenswrapper[4840]: I0311 09:16:40.161657 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-tv2rh"
Mar 11 09:16:40 crc kubenswrapper[4840]: I0311 09:16:40.162198 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-vghhr" event={"ID":"f06548f2-3dd2-48ae-ad54-333466a3e6ac","Type":"ContainerStarted","Data":"11fb75cf0ca0a87baab391a652f351b5b6942820369225966a95d8485b62934e"}
Mar 11 09:16:40 crc kubenswrapper[4840]: I0311 09:16:40.162233 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-vghhr" event={"ID":"f06548f2-3dd2-48ae-ad54-333466a3e6ac","Type":"ContainerStarted","Data":"7d7c237a2a88f6a70d2f207c71cd82560237e20e1346246aa8b484ed7c58ad72"}
Mar 11 09:16:40 crc kubenswrapper[4840]: I0311 09:16:40.206357 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51dd350a-e07a-49db-babe-419a6e23e271-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "51dd350a-e07a-49db-babe-419a6e23e271" (UID: "51dd350a-e07a-49db-babe-419a6e23e271"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 11 09:16:40 crc kubenswrapper[4840]: I0311 09:16:40.214920 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51dd350a-e07a-49db-babe-419a6e23e271-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "51dd350a-e07a-49db-babe-419a6e23e271" (UID: "51dd350a-e07a-49db-babe-419a6e23e271"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 11 09:16:40 crc kubenswrapper[4840]: I0311 09:16:40.225038 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=44.104383891 podStartE2EDuration="58.224991314s" podCreationTimestamp="2026-03-11 09:15:42 +0000 UTC" firstStartedPulling="2026-03-11 09:15:49.857255356 +0000 UTC m=+1148.522925171" lastFinishedPulling="2026-03-11 09:16:03.977862779 +0000 UTC m=+1162.643532594" observedRunningTime="2026-03-11 09:16:40.185783198 +0000 UTC m=+1198.851453013" watchObservedRunningTime="2026-03-11 09:16:40.224991314 +0000 UTC m=+1198.890661129"
Mar 11 09:16:40 crc kubenswrapper[4840]: I0311 09:16:40.233750 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-vghhr" podStartSLOduration=3.233729794 podStartE2EDuration="3.233729794s" podCreationTimestamp="2026-03-11 09:16:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:16:40.213104695 +0000 UTC m=+1198.878774510" watchObservedRunningTime="2026-03-11 09:16:40.233729794 +0000 UTC m=+1198.899399609"
Mar 11 09:16:40 crc kubenswrapper[4840]: I0311 09:16:40.236012 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rtswv\" (UniqueName: \"kubernetes.io/projected/51dd350a-e07a-49db-babe-419a6e23e271-kube-api-access-rtswv\") on node \"crc\" DevicePath \"\""
Mar 11 09:16:40 crc kubenswrapper[4840]: I0311 09:16:40.236035 4840 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/51dd350a-e07a-49db-babe-419a6e23e271-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Mar 11 09:16:40 crc kubenswrapper[4840]: I0311 09:16:40.236045 4840 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/51dd350a-e07a-49db-babe-419a6e23e271-dns-svc\") on node \"crc\" DevicePath \"\""
Mar 11 09:16:40 crc kubenswrapper[4840]: I0311 09:16:40.241349 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51dd350a-e07a-49db-babe-419a6e23e271-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "51dd350a-e07a-49db-babe-419a6e23e271" (UID: "51dd350a-e07a-49db-babe-419a6e23e271"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 11 09:16:40 crc kubenswrapper[4840]: I0311 09:16:40.262645 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51dd350a-e07a-49db-babe-419a6e23e271-config" (OuterVolumeSpecName: "config") pod "51dd350a-e07a-49db-babe-419a6e23e271" (UID: "51dd350a-e07a-49db-babe-419a6e23e271"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 11 09:16:40 crc kubenswrapper[4840]: I0311 09:16:40.341771 4840 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/51dd350a-e07a-49db-babe-419a6e23e271-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Mar 11 09:16:40 crc kubenswrapper[4840]: I0311 09:16:40.341816 4840 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51dd350a-e07a-49db-babe-419a6e23e271-config\") on node \"crc\" DevicePath \"\""
Mar 11 09:16:40 crc kubenswrapper[4840]: I0311 09:16:40.548442 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7b57d9888c-xvlx4"]
Mar 11 09:16:40 crc kubenswrapper[4840]: I0311 09:16:40.559506 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7b57d9888c-xvlx4"]
Mar 11 09:16:41 crc kubenswrapper[4840]: I0311 09:16:41.171952 4840 generic.go:334] "Generic (PLEG): container finished" podID="f06548f2-3dd2-48ae-ad54-333466a3e6ac" containerID="11fb75cf0ca0a87baab391a652f351b5b6942820369225966a95d8485b62934e" exitCode=0
Mar 11 09:16:41 crc kubenswrapper[4840]: I0311 09:16:41.172023 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-vghhr" event={"ID":"f06548f2-3dd2-48ae-ad54-333466a3e6ac","Type":"ContainerDied","Data":"11fb75cf0ca0a87baab391a652f351b5b6942820369225966a95d8485b62934e"}
Mar 11 09:16:41 crc kubenswrapper[4840]: I0311 09:16:41.173737 4840 generic.go:334] "Generic (PLEG): container finished" podID="577fd829-89f7-4673-99d8-e9b4da8b9f84" containerID="2996d2155c329176eff9a6a52124bee55a98d549119d59487efbbb39299564e2" exitCode=0
Mar 11 09:16:41 crc kubenswrapper[4840]: I0311 09:16:41.173822 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-bclgj" event={"ID":"577fd829-89f7-4673-99d8-e9b4da8b9f84","Type":"ContainerDied","Data":"2996d2155c329176eff9a6a52124bee55a98d549119d59487efbbb39299564e2"}
Mar 11 09:16:41 crc kubenswrapper[4840]: I0311 09:16:41.176089 4840 generic.go:334] "Generic (PLEG): container finished" podID="e3dc3dc8-7561-4964-9b52-5bfacca166da" containerID="4012476edfb45bc66ade0425e6e15cb295f6d01f29410cdf38ce8e24f4f925b0" exitCode=0
Mar 11 09:16:41 crc kubenswrapper[4840]: I0311 09:16:41.176189 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-42de-account-create-update-bjxph" event={"ID":"e3dc3dc8-7561-4964-9b52-5bfacca166da","Type":"ContainerDied","Data":"4012476edfb45bc66ade0425e6e15cb295f6d01f29410cdf38ce8e24f4f925b0"}
Mar 11 09:16:41 crc kubenswrapper[4840]: I0311 09:16:41.177676 4840 generic.go:334] "Generic (PLEG): container finished" podID="85f1dc85-62df-44f4-a549-390aeb2709be" containerID="da45ef6825f99f5918718a576904526c5a0a326e16f3e54f2f9f1f1632566812" exitCode=0
Mar 11 09:16:41 crc kubenswrapper[4840]: I0311 09:16:41.177755 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-p8m6d"
event={"ID":"85f1dc85-62df-44f4-a549-390aeb2709be","Type":"ContainerDied","Data":"da45ef6825f99f5918718a576904526c5a0a326e16f3e54f2f9f1f1632566812"} Mar 11 09:16:41 crc kubenswrapper[4840]: I0311 09:16:41.179212 4840 generic.go:334] "Generic (PLEG): container finished" podID="fd038b3a-9412-48d6-8aa9-f455897bcfb2" containerID="10d38f83e8c1cf32b2d2f74c18bdafacb8d637d49dd6f29cc3410a5f98c1889a" exitCode=0 Mar 11 09:16:41 crc kubenswrapper[4840]: I0311 09:16:41.179291 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5280-account-create-update-8r5vm" event={"ID":"fd038b3a-9412-48d6-8aa9-f455897bcfb2","Type":"ContainerDied","Data":"10d38f83e8c1cf32b2d2f74c18bdafacb8d637d49dd6f29cc3410a5f98c1889a"} Mar 11 09:16:41 crc kubenswrapper[4840]: I0311 09:16:41.976408 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-d6pcw"] Mar 11 09:16:41 crc kubenswrapper[4840]: E0311 09:16:41.977363 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51dd350a-e07a-49db-babe-419a6e23e271" containerName="dnsmasq-dns" Mar 11 09:16:41 crc kubenswrapper[4840]: I0311 09:16:41.977388 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="51dd350a-e07a-49db-babe-419a6e23e271" containerName="dnsmasq-dns" Mar 11 09:16:41 crc kubenswrapper[4840]: E0311 09:16:41.977407 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cebd38d4-d574-424d-8472-b60da948f8d1" containerName="mariadb-database-create" Mar 11 09:16:41 crc kubenswrapper[4840]: I0311 09:16:41.977414 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="cebd38d4-d574-424d-8472-b60da948f8d1" containerName="mariadb-database-create" Mar 11 09:16:41 crc kubenswrapper[4840]: E0311 09:16:41.977430 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51dd350a-e07a-49db-babe-419a6e23e271" containerName="init" Mar 11 09:16:41 crc kubenswrapper[4840]: I0311 09:16:41.977439 4840 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="51dd350a-e07a-49db-babe-419a6e23e271" containerName="init" Mar 11 09:16:41 crc kubenswrapper[4840]: I0311 09:16:41.977652 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="51dd350a-e07a-49db-babe-419a6e23e271" containerName="dnsmasq-dns" Mar 11 09:16:41 crc kubenswrapper[4840]: I0311 09:16:41.977679 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="cebd38d4-d574-424d-8472-b60da948f8d1" containerName="mariadb-database-create" Mar 11 09:16:41 crc kubenswrapper[4840]: I0311 09:16:41.978457 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-d6pcw" Mar 11 09:16:41 crc kubenswrapper[4840]: I0311 09:16:41.981246 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Mar 11 09:16:41 crc kubenswrapper[4840]: I0311 09:16:41.981857 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-8l29s" Mar 11 09:16:41 crc kubenswrapper[4840]: I0311 09:16:41.988432 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-d6pcw"] Mar 11 09:16:42 crc kubenswrapper[4840]: I0311 09:16:42.074793 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51dd350a-e07a-49db-babe-419a6e23e271" path="/var/lib/kubelet/pods/51dd350a-e07a-49db-babe-419a6e23e271/volumes" Mar 11 09:16:42 crc kubenswrapper[4840]: I0311 09:16:42.075694 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51c6e6d6-9fb9-4042-a8c7-d14ad4067afb-combined-ca-bundle\") pod \"glance-db-sync-d6pcw\" (UID: \"51c6e6d6-9fb9-4042-a8c7-d14ad4067afb\") " pod="openstack/glance-db-sync-d6pcw" Mar 11 09:16:42 crc kubenswrapper[4840]: I0311 09:16:42.075768 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/51c6e6d6-9fb9-4042-a8c7-d14ad4067afb-db-sync-config-data\") pod \"glance-db-sync-d6pcw\" (UID: \"51c6e6d6-9fb9-4042-a8c7-d14ad4067afb\") " pod="openstack/glance-db-sync-d6pcw" Mar 11 09:16:42 crc kubenswrapper[4840]: I0311 09:16:42.075845 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51c6e6d6-9fb9-4042-a8c7-d14ad4067afb-config-data\") pod \"glance-db-sync-d6pcw\" (UID: \"51c6e6d6-9fb9-4042-a8c7-d14ad4067afb\") " pod="openstack/glance-db-sync-d6pcw" Mar 11 09:16:42 crc kubenswrapper[4840]: I0311 09:16:42.075895 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrstq\" (UniqueName: \"kubernetes.io/projected/51c6e6d6-9fb9-4042-a8c7-d14ad4067afb-kube-api-access-nrstq\") pod \"glance-db-sync-d6pcw\" (UID: \"51c6e6d6-9fb9-4042-a8c7-d14ad4067afb\") " pod="openstack/glance-db-sync-d6pcw" Mar 11 09:16:42 crc kubenswrapper[4840]: I0311 09:16:42.177414 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51c6e6d6-9fb9-4042-a8c7-d14ad4067afb-combined-ca-bundle\") pod \"glance-db-sync-d6pcw\" (UID: \"51c6e6d6-9fb9-4042-a8c7-d14ad4067afb\") " pod="openstack/glance-db-sync-d6pcw" Mar 11 09:16:42 crc kubenswrapper[4840]: I0311 09:16:42.177502 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/51c6e6d6-9fb9-4042-a8c7-d14ad4067afb-db-sync-config-data\") pod \"glance-db-sync-d6pcw\" (UID: \"51c6e6d6-9fb9-4042-a8c7-d14ad4067afb\") " pod="openstack/glance-db-sync-d6pcw" Mar 11 09:16:42 crc kubenswrapper[4840]: I0311 09:16:42.177578 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51c6e6d6-9fb9-4042-a8c7-d14ad4067afb-config-data\") pod 
\"glance-db-sync-d6pcw\" (UID: \"51c6e6d6-9fb9-4042-a8c7-d14ad4067afb\") " pod="openstack/glance-db-sync-d6pcw" Mar 11 09:16:42 crc kubenswrapper[4840]: I0311 09:16:42.177612 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrstq\" (UniqueName: \"kubernetes.io/projected/51c6e6d6-9fb9-4042-a8c7-d14ad4067afb-kube-api-access-nrstq\") pod \"glance-db-sync-d6pcw\" (UID: \"51c6e6d6-9fb9-4042-a8c7-d14ad4067afb\") " pod="openstack/glance-db-sync-d6pcw" Mar 11 09:16:42 crc kubenswrapper[4840]: I0311 09:16:42.185133 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51c6e6d6-9fb9-4042-a8c7-d14ad4067afb-config-data\") pod \"glance-db-sync-d6pcw\" (UID: \"51c6e6d6-9fb9-4042-a8c7-d14ad4067afb\") " pod="openstack/glance-db-sync-d6pcw" Mar 11 09:16:42 crc kubenswrapper[4840]: I0311 09:16:42.187318 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51c6e6d6-9fb9-4042-a8c7-d14ad4067afb-combined-ca-bundle\") pod \"glance-db-sync-d6pcw\" (UID: \"51c6e6d6-9fb9-4042-a8c7-d14ad4067afb\") " pod="openstack/glance-db-sync-d6pcw" Mar 11 09:16:42 crc kubenswrapper[4840]: I0311 09:16:42.205127 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/51c6e6d6-9fb9-4042-a8c7-d14ad4067afb-db-sync-config-data\") pod \"glance-db-sync-d6pcw\" (UID: \"51c6e6d6-9fb9-4042-a8c7-d14ad4067afb\") " pod="openstack/glance-db-sync-d6pcw" Mar 11 09:16:42 crc kubenswrapper[4840]: I0311 09:16:42.205796 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrstq\" (UniqueName: \"kubernetes.io/projected/51c6e6d6-9fb9-4042-a8c7-d14ad4067afb-kube-api-access-nrstq\") pod \"glance-db-sync-d6pcw\" (UID: \"51c6e6d6-9fb9-4042-a8c7-d14ad4067afb\") " pod="openstack/glance-db-sync-d6pcw" Mar 11 09:16:42 crc 
kubenswrapper[4840]: I0311 09:16:42.304439 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-d6pcw" Mar 11 09:16:42 crc kubenswrapper[4840]: I0311 09:16:42.648101 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5280-account-create-update-8r5vm" Mar 11 09:16:42 crc kubenswrapper[4840]: I0311 09:16:42.715821 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-62scg\" (UniqueName: \"kubernetes.io/projected/fd038b3a-9412-48d6-8aa9-f455897bcfb2-kube-api-access-62scg\") pod \"fd038b3a-9412-48d6-8aa9-f455897bcfb2\" (UID: \"fd038b3a-9412-48d6-8aa9-f455897bcfb2\") " Mar 11 09:16:42 crc kubenswrapper[4840]: I0311 09:16:42.715914 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fd038b3a-9412-48d6-8aa9-f455897bcfb2-operator-scripts\") pod \"fd038b3a-9412-48d6-8aa9-f455897bcfb2\" (UID: \"fd038b3a-9412-48d6-8aa9-f455897bcfb2\") " Mar 11 09:16:42 crc kubenswrapper[4840]: I0311 09:16:42.717775 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd038b3a-9412-48d6-8aa9-f455897bcfb2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fd038b3a-9412-48d6-8aa9-f455897bcfb2" (UID: "fd038b3a-9412-48d6-8aa9-f455897bcfb2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:16:42 crc kubenswrapper[4840]: I0311 09:16:42.726118 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd038b3a-9412-48d6-8aa9-f455897bcfb2-kube-api-access-62scg" (OuterVolumeSpecName: "kube-api-access-62scg") pod "fd038b3a-9412-48d6-8aa9-f455897bcfb2" (UID: "fd038b3a-9412-48d6-8aa9-f455897bcfb2"). InnerVolumeSpecName "kube-api-access-62scg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:16:42 crc kubenswrapper[4840]: I0311 09:16:42.818096 4840 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fd038b3a-9412-48d6-8aa9-f455897bcfb2-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 11 09:16:42 crc kubenswrapper[4840]: I0311 09:16:42.818559 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-62scg\" (UniqueName: \"kubernetes.io/projected/fd038b3a-9412-48d6-8aa9-f455897bcfb2-kube-api-access-62scg\") on node \"crc\" DevicePath \"\"" Mar 11 09:16:42 crc kubenswrapper[4840]: I0311 09:16:42.834453 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-42de-account-create-update-bjxph" Mar 11 09:16:42 crc kubenswrapper[4840]: I0311 09:16:42.909560 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-bclgj" Mar 11 09:16:42 crc kubenswrapper[4840]: I0311 09:16:42.911662 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-p8m6d" Mar 11 09:16:42 crc kubenswrapper[4840]: I0311 09:16:42.919987 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e3dc3dc8-7561-4964-9b52-5bfacca166da-operator-scripts\") pod \"e3dc3dc8-7561-4964-9b52-5bfacca166da\" (UID: \"e3dc3dc8-7561-4964-9b52-5bfacca166da\") " Mar 11 09:16:42 crc kubenswrapper[4840]: I0311 09:16:42.920185 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vblk2\" (UniqueName: \"kubernetes.io/projected/e3dc3dc8-7561-4964-9b52-5bfacca166da-kube-api-access-vblk2\") pod \"e3dc3dc8-7561-4964-9b52-5bfacca166da\" (UID: \"e3dc3dc8-7561-4964-9b52-5bfacca166da\") " Mar 11 09:16:42 crc kubenswrapper[4840]: I0311 09:16:42.920502 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3dc3dc8-7561-4964-9b52-5bfacca166da-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e3dc3dc8-7561-4964-9b52-5bfacca166da" (UID: "e3dc3dc8-7561-4964-9b52-5bfacca166da"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:16:42 crc kubenswrapper[4840]: I0311 09:16:42.921104 4840 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e3dc3dc8-7561-4964-9b52-5bfacca166da-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 11 09:16:42 crc kubenswrapper[4840]: I0311 09:16:42.924444 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3dc3dc8-7561-4964-9b52-5bfacca166da-kube-api-access-vblk2" (OuterVolumeSpecName: "kube-api-access-vblk2") pod "e3dc3dc8-7561-4964-9b52-5bfacca166da" (UID: "e3dc3dc8-7561-4964-9b52-5bfacca166da"). InnerVolumeSpecName "kube-api-access-vblk2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:16:42 crc kubenswrapper[4840]: I0311 09:16:42.930814 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-vghhr" Mar 11 09:16:43 crc kubenswrapper[4840]: I0311 09:16:43.022245 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gsxgk\" (UniqueName: \"kubernetes.io/projected/85f1dc85-62df-44f4-a549-390aeb2709be-kube-api-access-gsxgk\") pod \"85f1dc85-62df-44f4-a549-390aeb2709be\" (UID: \"85f1dc85-62df-44f4-a549-390aeb2709be\") " Mar 11 09:16:43 crc kubenswrapper[4840]: I0311 09:16:43.022377 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dtvhj\" (UniqueName: \"kubernetes.io/projected/577fd829-89f7-4673-99d8-e9b4da8b9f84-kube-api-access-dtvhj\") pod \"577fd829-89f7-4673-99d8-e9b4da8b9f84\" (UID: \"577fd829-89f7-4673-99d8-e9b4da8b9f84\") " Mar 11 09:16:43 crc kubenswrapper[4840]: I0311 09:16:43.022404 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/577fd829-89f7-4673-99d8-e9b4da8b9f84-operator-scripts\") pod \"577fd829-89f7-4673-99d8-e9b4da8b9f84\" (UID: \"577fd829-89f7-4673-99d8-e9b4da8b9f84\") " Mar 11 09:16:43 crc kubenswrapper[4840]: I0311 09:16:43.022487 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85f1dc85-62df-44f4-a549-390aeb2709be-operator-scripts\") pod \"85f1dc85-62df-44f4-a549-390aeb2709be\" (UID: \"85f1dc85-62df-44f4-a549-390aeb2709be\") " Mar 11 09:16:43 crc kubenswrapper[4840]: I0311 09:16:43.023150 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/577fd829-89f7-4673-99d8-e9b4da8b9f84-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "577fd829-89f7-4673-99d8-e9b4da8b9f84" 
(UID: "577fd829-89f7-4673-99d8-e9b4da8b9f84"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:16:43 crc kubenswrapper[4840]: I0311 09:16:43.023311 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85f1dc85-62df-44f4-a549-390aeb2709be-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "85f1dc85-62df-44f4-a549-390aeb2709be" (UID: "85f1dc85-62df-44f4-a549-390aeb2709be"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:16:43 crc kubenswrapper[4840]: I0311 09:16:43.024163 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f06548f2-3dd2-48ae-ad54-333466a3e6ac-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f06548f2-3dd2-48ae-ad54-333466a3e6ac" (UID: "f06548f2-3dd2-48ae-ad54-333466a3e6ac"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:16:43 crc kubenswrapper[4840]: I0311 09:16:43.024205 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f06548f2-3dd2-48ae-ad54-333466a3e6ac-operator-scripts\") pod \"f06548f2-3dd2-48ae-ad54-333466a3e6ac\" (UID: \"f06548f2-3dd2-48ae-ad54-333466a3e6ac\") " Mar 11 09:16:43 crc kubenswrapper[4840]: I0311 09:16:43.024283 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbxm4\" (UniqueName: \"kubernetes.io/projected/f06548f2-3dd2-48ae-ad54-333466a3e6ac-kube-api-access-sbxm4\") pod \"f06548f2-3dd2-48ae-ad54-333466a3e6ac\" (UID: \"f06548f2-3dd2-48ae-ad54-333466a3e6ac\") " Mar 11 09:16:43 crc kubenswrapper[4840]: I0311 09:16:43.025161 4840 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/f06548f2-3dd2-48ae-ad54-333466a3e6ac-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 11 09:16:43 crc kubenswrapper[4840]: I0311 09:16:43.025183 4840 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/577fd829-89f7-4673-99d8-e9b4da8b9f84-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 11 09:16:43 crc kubenswrapper[4840]: I0311 09:16:43.025193 4840 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85f1dc85-62df-44f4-a549-390aeb2709be-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 11 09:16:43 crc kubenswrapper[4840]: I0311 09:16:43.025205 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vblk2\" (UniqueName: \"kubernetes.io/projected/e3dc3dc8-7561-4964-9b52-5bfacca166da-kube-api-access-vblk2\") on node \"crc\" DevicePath \"\"" Mar 11 09:16:43 crc kubenswrapper[4840]: I0311 09:16:43.026960 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/577fd829-89f7-4673-99d8-e9b4da8b9f84-kube-api-access-dtvhj" (OuterVolumeSpecName: "kube-api-access-dtvhj") pod "577fd829-89f7-4673-99d8-e9b4da8b9f84" (UID: "577fd829-89f7-4673-99d8-e9b4da8b9f84"). InnerVolumeSpecName "kube-api-access-dtvhj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:16:43 crc kubenswrapper[4840]: I0311 09:16:43.027361 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f06548f2-3dd2-48ae-ad54-333466a3e6ac-kube-api-access-sbxm4" (OuterVolumeSpecName: "kube-api-access-sbxm4") pod "f06548f2-3dd2-48ae-ad54-333466a3e6ac" (UID: "f06548f2-3dd2-48ae-ad54-333466a3e6ac"). InnerVolumeSpecName "kube-api-access-sbxm4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:16:43 crc kubenswrapper[4840]: I0311 09:16:43.027978 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85f1dc85-62df-44f4-a549-390aeb2709be-kube-api-access-gsxgk" (OuterVolumeSpecName: "kube-api-access-gsxgk") pod "85f1dc85-62df-44f4-a549-390aeb2709be" (UID: "85f1dc85-62df-44f4-a549-390aeb2709be"). InnerVolumeSpecName "kube-api-access-gsxgk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:16:43 crc kubenswrapper[4840]: I0311 09:16:43.127107 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sbxm4\" (UniqueName: \"kubernetes.io/projected/f06548f2-3dd2-48ae-ad54-333466a3e6ac-kube-api-access-sbxm4\") on node \"crc\" DevicePath \"\"" Mar 11 09:16:43 crc kubenswrapper[4840]: I0311 09:16:43.127147 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gsxgk\" (UniqueName: \"kubernetes.io/projected/85f1dc85-62df-44f4-a549-390aeb2709be-kube-api-access-gsxgk\") on node \"crc\" DevicePath \"\"" Mar 11 09:16:43 crc kubenswrapper[4840]: I0311 09:16:43.127158 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dtvhj\" (UniqueName: \"kubernetes.io/projected/577fd829-89f7-4673-99d8-e9b4da8b9f84-kube-api-access-dtvhj\") on node \"crc\" DevicePath \"\"" Mar 11 09:16:43 crc kubenswrapper[4840]: I0311 09:16:43.190075 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-d6pcw"] Mar 11 09:16:43 crc kubenswrapper[4840]: I0311 09:16:43.206653 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-d6pcw" event={"ID":"51c6e6d6-9fb9-4042-a8c7-d14ad4067afb","Type":"ContainerStarted","Data":"5c4b641fc01e6d28be1daf3106a41ad8f623151d9075310e1fc70d1d270217c0"} Mar 11 09:16:43 crc kubenswrapper[4840]: I0311 09:16:43.208888 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-p8m6d" 
event={"ID":"85f1dc85-62df-44f4-a549-390aeb2709be","Type":"ContainerDied","Data":"bad5fc019396e439013abc9f716a1f2810f81bf618741b13c84245d03d2d40b2"} Mar 11 09:16:43 crc kubenswrapper[4840]: I0311 09:16:43.209043 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bad5fc019396e439013abc9f716a1f2810f81bf618741b13c84245d03d2d40b2" Mar 11 09:16:43 crc kubenswrapper[4840]: I0311 09:16:43.208927 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-p8m6d" Mar 11 09:16:43 crc kubenswrapper[4840]: I0311 09:16:43.213225 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5280-account-create-update-8r5vm" Mar 11 09:16:43 crc kubenswrapper[4840]: I0311 09:16:43.213237 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5280-account-create-update-8r5vm" event={"ID":"fd038b3a-9412-48d6-8aa9-f455897bcfb2","Type":"ContainerDied","Data":"e10ab3e3a2a945655fd5ee3348440fdd2787dea5cd4e2024715142bbb7ab42f7"} Mar 11 09:16:43 crc kubenswrapper[4840]: I0311 09:16:43.213283 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e10ab3e3a2a945655fd5ee3348440fdd2787dea5cd4e2024715142bbb7ab42f7" Mar 11 09:16:43 crc kubenswrapper[4840]: I0311 09:16:43.216012 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-vghhr" event={"ID":"f06548f2-3dd2-48ae-ad54-333466a3e6ac","Type":"ContainerDied","Data":"7d7c237a2a88f6a70d2f207c71cd82560237e20e1346246aa8b484ed7c58ad72"} Mar 11 09:16:43 crc kubenswrapper[4840]: I0311 09:16:43.216049 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d7c237a2a88f6a70d2f207c71cd82560237e20e1346246aa8b484ed7c58ad72" Mar 11 09:16:43 crc kubenswrapper[4840]: I0311 09:16:43.216130 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-vghhr" Mar 11 09:16:43 crc kubenswrapper[4840]: I0311 09:16:43.231389 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-42de-account-create-update-bjxph" event={"ID":"e3dc3dc8-7561-4964-9b52-5bfacca166da","Type":"ContainerDied","Data":"30e8ff4a821ff94e16f2303d00b5c01a555a3233f831ea87b1efb63f3c44aae7"} Mar 11 09:16:43 crc kubenswrapper[4840]: I0311 09:16:43.231439 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="30e8ff4a821ff94e16f2303d00b5c01a555a3233f831ea87b1efb63f3c44aae7" Mar 11 09:16:43 crc kubenswrapper[4840]: I0311 09:16:43.231412 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-42de-account-create-update-bjxph" Mar 11 09:16:43 crc kubenswrapper[4840]: I0311 09:16:43.233507 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-bclgj" event={"ID":"577fd829-89f7-4673-99d8-e9b4da8b9f84","Type":"ContainerDied","Data":"afd328e4a8288a2b682a780e2e6902b50b7944372410e58b9f2fd4143d45bc96"} Mar 11 09:16:43 crc kubenswrapper[4840]: I0311 09:16:43.233531 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="afd328e4a8288a2b682a780e2e6902b50b7944372410e58b9f2fd4143d45bc96" Mar 11 09:16:43 crc kubenswrapper[4840]: I0311 09:16:43.233584 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-bclgj" Mar 11 09:16:44 crc kubenswrapper[4840]: I0311 09:16:44.480158 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Mar 11 09:16:45 crc kubenswrapper[4840]: I0311 09:16:45.049583 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-bclgj"] Mar 11 09:16:45 crc kubenswrapper[4840]: I0311 09:16:45.062094 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-bclgj"] Mar 11 09:16:46 crc kubenswrapper[4840]: I0311 09:16:46.073032 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="577fd829-89f7-4673-99d8-e9b4da8b9f84" path="/var/lib/kubelet/pods/577fd829-89f7-4673-99d8-e9b4da8b9f84/volumes" Mar 11 09:16:46 crc kubenswrapper[4840]: I0311 09:16:46.082974 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f1c7d7f4-dc60-4703-b6c3-6cd626db11af-etc-swift\") pod \"swift-storage-0\" (UID: \"f1c7d7f4-dc60-4703-b6c3-6cd626db11af\") " pod="openstack/swift-storage-0" Mar 11 09:16:46 crc kubenswrapper[4840]: E0311 09:16:46.083224 4840 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 11 09:16:46 crc kubenswrapper[4840]: E0311 09:16:46.083265 4840 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 11 09:16:46 crc kubenswrapper[4840]: E0311 09:16:46.083333 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f1c7d7f4-dc60-4703-b6c3-6cd626db11af-etc-swift podName:f1c7d7f4-dc60-4703-b6c3-6cd626db11af nodeName:}" failed. No retries permitted until 2026-03-11 09:17:02.083307499 +0000 UTC m=+1220.748977314 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/f1c7d7f4-dc60-4703-b6c3-6cd626db11af-etc-swift") pod "swift-storage-0" (UID: "f1c7d7f4-dc60-4703-b6c3-6cd626db11af") : configmap "swift-ring-files" not found Mar 11 09:16:46 crc kubenswrapper[4840]: I0311 09:16:46.687979 4840 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-g2p7c" podUID="056df7c0-d577-4908-91a8-b5dfb95e0316" containerName="ovn-controller" probeResult="failure" output=< Mar 11 09:16:46 crc kubenswrapper[4840]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Mar 11 09:16:46 crc kubenswrapper[4840]: > Mar 11 09:16:47 crc kubenswrapper[4840]: I0311 09:16:47.286095 4840 generic.go:334] "Generic (PLEG): container finished" podID="757b3a3d-0655-4517-a752-b944899642c9" containerID="5acef0cbaf6b8eacac1e6d33812371d5045212d01c390a7c87175c13608077b8" exitCode=0 Mar 11 09:16:47 crc kubenswrapper[4840]: I0311 09:16:47.286139 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-px5zj" event={"ID":"757b3a3d-0655-4517-a752-b944899642c9","Type":"ContainerDied","Data":"5acef0cbaf6b8eacac1e6d33812371d5045212d01c390a7c87175c13608077b8"} Mar 11 09:16:48 crc kubenswrapper[4840]: I0311 09:16:48.676263 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-jzpgq"] Mar 11 09:16:48 crc kubenswrapper[4840]: E0311 09:16:48.677071 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85f1dc85-62df-44f4-a549-390aeb2709be" containerName="mariadb-database-create" Mar 11 09:16:48 crc kubenswrapper[4840]: I0311 09:16:48.677091 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="85f1dc85-62df-44f4-a549-390aeb2709be" containerName="mariadb-database-create" Mar 11 09:16:48 crc kubenswrapper[4840]: E0311 09:16:48.677111 4840 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="e3dc3dc8-7561-4964-9b52-5bfacca166da" containerName="mariadb-account-create-update" Mar 11 09:16:48 crc kubenswrapper[4840]: I0311 09:16:48.677121 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3dc3dc8-7561-4964-9b52-5bfacca166da" containerName="mariadb-account-create-update" Mar 11 09:16:48 crc kubenswrapper[4840]: E0311 09:16:48.677137 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd038b3a-9412-48d6-8aa9-f455897bcfb2" containerName="mariadb-account-create-update" Mar 11 09:16:48 crc kubenswrapper[4840]: I0311 09:16:48.677145 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd038b3a-9412-48d6-8aa9-f455897bcfb2" containerName="mariadb-account-create-update" Mar 11 09:16:48 crc kubenswrapper[4840]: E0311 09:16:48.677165 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="577fd829-89f7-4673-99d8-e9b4da8b9f84" containerName="mariadb-account-create-update" Mar 11 09:16:48 crc kubenswrapper[4840]: I0311 09:16:48.677171 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="577fd829-89f7-4673-99d8-e9b4da8b9f84" containerName="mariadb-account-create-update" Mar 11 09:16:48 crc kubenswrapper[4840]: E0311 09:16:48.677181 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f06548f2-3dd2-48ae-ad54-333466a3e6ac" containerName="mariadb-database-create" Mar 11 09:16:48 crc kubenswrapper[4840]: I0311 09:16:48.677187 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="f06548f2-3dd2-48ae-ad54-333466a3e6ac" containerName="mariadb-database-create" Mar 11 09:16:48 crc kubenswrapper[4840]: I0311 09:16:48.677376 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="f06548f2-3dd2-48ae-ad54-333466a3e6ac" containerName="mariadb-database-create" Mar 11 09:16:48 crc kubenswrapper[4840]: I0311 09:16:48.677389 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd038b3a-9412-48d6-8aa9-f455897bcfb2" containerName="mariadb-account-create-update" Mar 11 
09:16:48 crc kubenswrapper[4840]: I0311 09:16:48.677404 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="85f1dc85-62df-44f4-a549-390aeb2709be" containerName="mariadb-database-create" Mar 11 09:16:48 crc kubenswrapper[4840]: I0311 09:16:48.677419 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="577fd829-89f7-4673-99d8-e9b4da8b9f84" containerName="mariadb-account-create-update" Mar 11 09:16:48 crc kubenswrapper[4840]: I0311 09:16:48.677429 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3dc3dc8-7561-4964-9b52-5bfacca166da" containerName="mariadb-account-create-update" Mar 11 09:16:48 crc kubenswrapper[4840]: I0311 09:16:48.678047 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-jzpgq" Mar 11 09:16:48 crc kubenswrapper[4840]: I0311 09:16:48.681673 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Mar 11 09:16:48 crc kubenswrapper[4840]: I0311 09:16:48.687355 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-jzpgq"] Mar 11 09:16:48 crc kubenswrapper[4840]: I0311 09:16:48.843902 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6w665\" (UniqueName: \"kubernetes.io/projected/44b4b840-62df-4871-b50d-30b3a8123389-kube-api-access-6w665\") pod \"root-account-create-update-jzpgq\" (UID: \"44b4b840-62df-4871-b50d-30b3a8123389\") " pod="openstack/root-account-create-update-jzpgq" Mar 11 09:16:48 crc kubenswrapper[4840]: I0311 09:16:48.845869 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/44b4b840-62df-4871-b50d-30b3a8123389-operator-scripts\") pod \"root-account-create-update-jzpgq\" (UID: \"44b4b840-62df-4871-b50d-30b3a8123389\") " 
pod="openstack/root-account-create-update-jzpgq" Mar 11 09:16:48 crc kubenswrapper[4840]: I0311 09:16:48.947009 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/44b4b840-62df-4871-b50d-30b3a8123389-operator-scripts\") pod \"root-account-create-update-jzpgq\" (UID: \"44b4b840-62df-4871-b50d-30b3a8123389\") " pod="openstack/root-account-create-update-jzpgq" Mar 11 09:16:48 crc kubenswrapper[4840]: I0311 09:16:48.947112 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6w665\" (UniqueName: \"kubernetes.io/projected/44b4b840-62df-4871-b50d-30b3a8123389-kube-api-access-6w665\") pod \"root-account-create-update-jzpgq\" (UID: \"44b4b840-62df-4871-b50d-30b3a8123389\") " pod="openstack/root-account-create-update-jzpgq" Mar 11 09:16:48 crc kubenswrapper[4840]: I0311 09:16:48.948562 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/44b4b840-62df-4871-b50d-30b3a8123389-operator-scripts\") pod \"root-account-create-update-jzpgq\" (UID: \"44b4b840-62df-4871-b50d-30b3a8123389\") " pod="openstack/root-account-create-update-jzpgq" Mar 11 09:16:48 crc kubenswrapper[4840]: I0311 09:16:48.969264 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6w665\" (UniqueName: \"kubernetes.io/projected/44b4b840-62df-4871-b50d-30b3a8123389-kube-api-access-6w665\") pod \"root-account-create-update-jzpgq\" (UID: \"44b4b840-62df-4871-b50d-30b3a8123389\") " pod="openstack/root-account-create-update-jzpgq" Mar 11 09:16:49 crc kubenswrapper[4840]: I0311 09:16:49.054083 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-jzpgq" Mar 11 09:16:51 crc kubenswrapper[4840]: I0311 09:16:51.694946 4840 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-g2p7c" podUID="056df7c0-d577-4908-91a8-b5dfb95e0316" containerName="ovn-controller" probeResult="failure" output=< Mar 11 09:16:51 crc kubenswrapper[4840]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Mar 11 09:16:51 crc kubenswrapper[4840]: > Mar 11 09:16:51 crc kubenswrapper[4840]: I0311 09:16:51.711235 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-qtcdv" Mar 11 09:16:51 crc kubenswrapper[4840]: I0311 09:16:51.716424 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-qtcdv" Mar 11 09:16:51 crc kubenswrapper[4840]: I0311 09:16:51.937483 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-g2p7c-config-k6wz4"] Mar 11 09:16:51 crc kubenswrapper[4840]: I0311 09:16:51.938828 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-g2p7c-config-k6wz4" Mar 11 09:16:51 crc kubenswrapper[4840]: I0311 09:16:51.943933 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Mar 11 09:16:51 crc kubenswrapper[4840]: I0311 09:16:51.966229 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-g2p7c-config-k6wz4"] Mar 11 09:16:52 crc kubenswrapper[4840]: I0311 09:16:52.004426 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0ae5d630-389b-46e9-b937-11cf0448b6b3-var-log-ovn\") pod \"ovn-controller-g2p7c-config-k6wz4\" (UID: \"0ae5d630-389b-46e9-b937-11cf0448b6b3\") " pod="openstack/ovn-controller-g2p7c-config-k6wz4" Mar 11 09:16:52 crc kubenswrapper[4840]: I0311 09:16:52.004549 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0ae5d630-389b-46e9-b937-11cf0448b6b3-var-run\") pod \"ovn-controller-g2p7c-config-k6wz4\" (UID: \"0ae5d630-389b-46e9-b937-11cf0448b6b3\") " pod="openstack/ovn-controller-g2p7c-config-k6wz4" Mar 11 09:16:52 crc kubenswrapper[4840]: I0311 09:16:52.004627 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0ae5d630-389b-46e9-b937-11cf0448b6b3-scripts\") pod \"ovn-controller-g2p7c-config-k6wz4\" (UID: \"0ae5d630-389b-46e9-b937-11cf0448b6b3\") " pod="openstack/ovn-controller-g2p7c-config-k6wz4" Mar 11 09:16:52 crc kubenswrapper[4840]: I0311 09:16:52.004650 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0ae5d630-389b-46e9-b937-11cf0448b6b3-var-run-ovn\") pod \"ovn-controller-g2p7c-config-k6wz4\" (UID: \"0ae5d630-389b-46e9-b937-11cf0448b6b3\") 
" pod="openstack/ovn-controller-g2p7c-config-k6wz4" Mar 11 09:16:52 crc kubenswrapper[4840]: I0311 09:16:52.004719 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/0ae5d630-389b-46e9-b937-11cf0448b6b3-additional-scripts\") pod \"ovn-controller-g2p7c-config-k6wz4\" (UID: \"0ae5d630-389b-46e9-b937-11cf0448b6b3\") " pod="openstack/ovn-controller-g2p7c-config-k6wz4" Mar 11 09:16:52 crc kubenswrapper[4840]: I0311 09:16:52.004747 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5r4c\" (UniqueName: \"kubernetes.io/projected/0ae5d630-389b-46e9-b937-11cf0448b6b3-kube-api-access-b5r4c\") pod \"ovn-controller-g2p7c-config-k6wz4\" (UID: \"0ae5d630-389b-46e9-b937-11cf0448b6b3\") " pod="openstack/ovn-controller-g2p7c-config-k6wz4" Mar 11 09:16:52 crc kubenswrapper[4840]: I0311 09:16:52.107207 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0ae5d630-389b-46e9-b937-11cf0448b6b3-var-run\") pod \"ovn-controller-g2p7c-config-k6wz4\" (UID: \"0ae5d630-389b-46e9-b937-11cf0448b6b3\") " pod="openstack/ovn-controller-g2p7c-config-k6wz4" Mar 11 09:16:52 crc kubenswrapper[4840]: I0311 09:16:52.107323 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0ae5d630-389b-46e9-b937-11cf0448b6b3-scripts\") pod \"ovn-controller-g2p7c-config-k6wz4\" (UID: \"0ae5d630-389b-46e9-b937-11cf0448b6b3\") " pod="openstack/ovn-controller-g2p7c-config-k6wz4" Mar 11 09:16:52 crc kubenswrapper[4840]: I0311 09:16:52.107370 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0ae5d630-389b-46e9-b937-11cf0448b6b3-var-run-ovn\") pod \"ovn-controller-g2p7c-config-k6wz4\" (UID: 
\"0ae5d630-389b-46e9-b937-11cf0448b6b3\") " pod="openstack/ovn-controller-g2p7c-config-k6wz4" Mar 11 09:16:52 crc kubenswrapper[4840]: I0311 09:16:52.107496 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/0ae5d630-389b-46e9-b937-11cf0448b6b3-additional-scripts\") pod \"ovn-controller-g2p7c-config-k6wz4\" (UID: \"0ae5d630-389b-46e9-b937-11cf0448b6b3\") " pod="openstack/ovn-controller-g2p7c-config-k6wz4" Mar 11 09:16:52 crc kubenswrapper[4840]: I0311 09:16:52.107527 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5r4c\" (UniqueName: \"kubernetes.io/projected/0ae5d630-389b-46e9-b937-11cf0448b6b3-kube-api-access-b5r4c\") pod \"ovn-controller-g2p7c-config-k6wz4\" (UID: \"0ae5d630-389b-46e9-b937-11cf0448b6b3\") " pod="openstack/ovn-controller-g2p7c-config-k6wz4" Mar 11 09:16:52 crc kubenswrapper[4840]: I0311 09:16:52.107571 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0ae5d630-389b-46e9-b937-11cf0448b6b3-var-run\") pod \"ovn-controller-g2p7c-config-k6wz4\" (UID: \"0ae5d630-389b-46e9-b937-11cf0448b6b3\") " pod="openstack/ovn-controller-g2p7c-config-k6wz4" Mar 11 09:16:52 crc kubenswrapper[4840]: I0311 09:16:52.107648 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0ae5d630-389b-46e9-b937-11cf0448b6b3-var-log-ovn\") pod \"ovn-controller-g2p7c-config-k6wz4\" (UID: \"0ae5d630-389b-46e9-b937-11cf0448b6b3\") " pod="openstack/ovn-controller-g2p7c-config-k6wz4" Mar 11 09:16:52 crc kubenswrapper[4840]: I0311 09:16:52.107583 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0ae5d630-389b-46e9-b937-11cf0448b6b3-var-run-ovn\") pod \"ovn-controller-g2p7c-config-k6wz4\" (UID: 
\"0ae5d630-389b-46e9-b937-11cf0448b6b3\") " pod="openstack/ovn-controller-g2p7c-config-k6wz4" Mar 11 09:16:52 crc kubenswrapper[4840]: I0311 09:16:52.107832 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0ae5d630-389b-46e9-b937-11cf0448b6b3-var-log-ovn\") pod \"ovn-controller-g2p7c-config-k6wz4\" (UID: \"0ae5d630-389b-46e9-b937-11cf0448b6b3\") " pod="openstack/ovn-controller-g2p7c-config-k6wz4" Mar 11 09:16:52 crc kubenswrapper[4840]: I0311 09:16:52.108444 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/0ae5d630-389b-46e9-b937-11cf0448b6b3-additional-scripts\") pod \"ovn-controller-g2p7c-config-k6wz4\" (UID: \"0ae5d630-389b-46e9-b937-11cf0448b6b3\") " pod="openstack/ovn-controller-g2p7c-config-k6wz4" Mar 11 09:16:52 crc kubenswrapper[4840]: I0311 09:16:52.110032 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0ae5d630-389b-46e9-b937-11cf0448b6b3-scripts\") pod \"ovn-controller-g2p7c-config-k6wz4\" (UID: \"0ae5d630-389b-46e9-b937-11cf0448b6b3\") " pod="openstack/ovn-controller-g2p7c-config-k6wz4" Mar 11 09:16:52 crc kubenswrapper[4840]: I0311 09:16:52.128169 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5r4c\" (UniqueName: \"kubernetes.io/projected/0ae5d630-389b-46e9-b937-11cf0448b6b3-kube-api-access-b5r4c\") pod \"ovn-controller-g2p7c-config-k6wz4\" (UID: \"0ae5d630-389b-46e9-b937-11cf0448b6b3\") " pod="openstack/ovn-controller-g2p7c-config-k6wz4" Mar 11 09:16:52 crc kubenswrapper[4840]: I0311 09:16:52.275025 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-g2p7c-config-k6wz4" Mar 11 09:16:54 crc kubenswrapper[4840]: I0311 09:16:54.007713 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Mar 11 09:16:54 crc kubenswrapper[4840]: I0311 09:16:54.091988 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Mar 11 09:16:54 crc kubenswrapper[4840]: I0311 09:16:54.357458 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-j24h9"] Mar 11 09:16:54 crc kubenswrapper[4840]: I0311 09:16:54.367829 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-j24h9" Mar 11 09:16:54 crc kubenswrapper[4840]: I0311 09:16:54.397374 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-j24h9"] Mar 11 09:16:54 crc kubenswrapper[4840]: I0311 09:16:54.458707 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-c443-account-create-update-r8t8k"] Mar 11 09:16:54 crc kubenswrapper[4840]: I0311 09:16:54.459932 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-c443-account-create-update-r8t8k" Mar 11 09:16:54 crc kubenswrapper[4840]: I0311 09:16:54.463099 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Mar 11 09:16:54 crc kubenswrapper[4840]: I0311 09:16:54.465770 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f3b6268-b59e-449e-a7d4-ecb9e26e1f39-operator-scripts\") pod \"cinder-db-create-j24h9\" (UID: \"1f3b6268-b59e-449e-a7d4-ecb9e26e1f39\") " pod="openstack/cinder-db-create-j24h9" Mar 11 09:16:54 crc kubenswrapper[4840]: I0311 09:16:54.465962 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfgpj\" (UniqueName: \"kubernetes.io/projected/1f3b6268-b59e-449e-a7d4-ecb9e26e1f39-kube-api-access-bfgpj\") pod \"cinder-db-create-j24h9\" (UID: \"1f3b6268-b59e-449e-a7d4-ecb9e26e1f39\") " pod="openstack/cinder-db-create-j24h9" Mar 11 09:16:54 crc kubenswrapper[4840]: I0311 09:16:54.486159 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-c443-account-create-update-r8t8k"] Mar 11 09:16:54 crc kubenswrapper[4840]: I0311 09:16:54.567978 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f3b6268-b59e-449e-a7d4-ecb9e26e1f39-operator-scripts\") pod \"cinder-db-create-j24h9\" (UID: \"1f3b6268-b59e-449e-a7d4-ecb9e26e1f39\") " pod="openstack/cinder-db-create-j24h9" Mar 11 09:16:54 crc kubenswrapper[4840]: I0311 09:16:54.568067 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bfgpj\" (UniqueName: \"kubernetes.io/projected/1f3b6268-b59e-449e-a7d4-ecb9e26e1f39-kube-api-access-bfgpj\") pod \"cinder-db-create-j24h9\" (UID: \"1f3b6268-b59e-449e-a7d4-ecb9e26e1f39\") " pod="openstack/cinder-db-create-j24h9" 
Mar 11 09:16:54 crc kubenswrapper[4840]: I0311 09:16:54.568111 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vv5j2\" (UniqueName: \"kubernetes.io/projected/16f6c965-932a-45fa-9ad7-608779e0bf25-kube-api-access-vv5j2\") pod \"cinder-c443-account-create-update-r8t8k\" (UID: \"16f6c965-932a-45fa-9ad7-608779e0bf25\") " pod="openstack/cinder-c443-account-create-update-r8t8k" Mar 11 09:16:54 crc kubenswrapper[4840]: I0311 09:16:54.568146 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/16f6c965-932a-45fa-9ad7-608779e0bf25-operator-scripts\") pod \"cinder-c443-account-create-update-r8t8k\" (UID: \"16f6c965-932a-45fa-9ad7-608779e0bf25\") " pod="openstack/cinder-c443-account-create-update-r8t8k" Mar 11 09:16:54 crc kubenswrapper[4840]: I0311 09:16:54.570060 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f3b6268-b59e-449e-a7d4-ecb9e26e1f39-operator-scripts\") pod \"cinder-db-create-j24h9\" (UID: \"1f3b6268-b59e-449e-a7d4-ecb9e26e1f39\") " pod="openstack/cinder-db-create-j24h9" Mar 11 09:16:54 crc kubenswrapper[4840]: I0311 09:16:54.593423 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bfgpj\" (UniqueName: \"kubernetes.io/projected/1f3b6268-b59e-449e-a7d4-ecb9e26e1f39-kube-api-access-bfgpj\") pod \"cinder-db-create-j24h9\" (UID: \"1f3b6268-b59e-449e-a7d4-ecb9e26e1f39\") " pod="openstack/cinder-db-create-j24h9" Mar 11 09:16:54 crc kubenswrapper[4840]: I0311 09:16:54.655901 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-9c9b9"] Mar 11 09:16:54 crc kubenswrapper[4840]: I0311 09:16:54.657783 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-9c9b9" Mar 11 09:16:54 crc kubenswrapper[4840]: I0311 09:16:54.671570 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vv5j2\" (UniqueName: \"kubernetes.io/projected/16f6c965-932a-45fa-9ad7-608779e0bf25-kube-api-access-vv5j2\") pod \"cinder-c443-account-create-update-r8t8k\" (UID: \"16f6c965-932a-45fa-9ad7-608779e0bf25\") " pod="openstack/cinder-c443-account-create-update-r8t8k" Mar 11 09:16:54 crc kubenswrapper[4840]: I0311 09:16:54.671636 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/16f6c965-932a-45fa-9ad7-608779e0bf25-operator-scripts\") pod \"cinder-c443-account-create-update-r8t8k\" (UID: \"16f6c965-932a-45fa-9ad7-608779e0bf25\") " pod="openstack/cinder-c443-account-create-update-r8t8k" Mar 11 09:16:54 crc kubenswrapper[4840]: I0311 09:16:54.672707 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/16f6c965-932a-45fa-9ad7-608779e0bf25-operator-scripts\") pod \"cinder-c443-account-create-update-r8t8k\" (UID: \"16f6c965-932a-45fa-9ad7-608779e0bf25\") " pod="openstack/cinder-c443-account-create-update-r8t8k" Mar 11 09:16:54 crc kubenswrapper[4840]: I0311 09:16:54.675807 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-9c9b9"] Mar 11 09:16:54 crc kubenswrapper[4840]: I0311 09:16:54.698026 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-j24h9" Mar 11 09:16:54 crc kubenswrapper[4840]: I0311 09:16:54.704310 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-8daa-account-create-update-rhmhk"] Mar 11 09:16:54 crc kubenswrapper[4840]: I0311 09:16:54.713549 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-8daa-account-create-update-rhmhk" Mar 11 09:16:54 crc kubenswrapper[4840]: I0311 09:16:54.714833 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vv5j2\" (UniqueName: \"kubernetes.io/projected/16f6c965-932a-45fa-9ad7-608779e0bf25-kube-api-access-vv5j2\") pod \"cinder-c443-account-create-update-r8t8k\" (UID: \"16f6c965-932a-45fa-9ad7-608779e0bf25\") " pod="openstack/cinder-c443-account-create-update-r8t8k" Mar 11 09:16:54 crc kubenswrapper[4840]: I0311 09:16:54.717875 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Mar 11 09:16:54 crc kubenswrapper[4840]: I0311 09:16:54.721110 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-8daa-account-create-update-rhmhk"] Mar 11 09:16:54 crc kubenswrapper[4840]: I0311 09:16:54.773325 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjzth\" (UniqueName: \"kubernetes.io/projected/2b934acf-2bd8-420b-803b-dd9c6e993fe7-kube-api-access-sjzth\") pod \"neutron-8daa-account-create-update-rhmhk\" (UID: \"2b934acf-2bd8-420b-803b-dd9c6e993fe7\") " pod="openstack/neutron-8daa-account-create-update-rhmhk" Mar 11 09:16:54 crc kubenswrapper[4840]: I0311 09:16:54.773399 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2c581ad-e4bb-40c7-aa81-8937e2aab87b-operator-scripts\") pod \"barbican-db-create-9c9b9\" (UID: \"e2c581ad-e4bb-40c7-aa81-8937e2aab87b\") " pod="openstack/barbican-db-create-9c9b9" Mar 11 09:16:54 crc kubenswrapper[4840]: I0311 09:16:54.773420 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tm29\" (UniqueName: \"kubernetes.io/projected/e2c581ad-e4bb-40c7-aa81-8937e2aab87b-kube-api-access-5tm29\") pod 
\"barbican-db-create-9c9b9\" (UID: \"e2c581ad-e4bb-40c7-aa81-8937e2aab87b\") " pod="openstack/barbican-db-create-9c9b9" Mar 11 09:16:54 crc kubenswrapper[4840]: I0311 09:16:54.773494 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b934acf-2bd8-420b-803b-dd9c6e993fe7-operator-scripts\") pod \"neutron-8daa-account-create-update-rhmhk\" (UID: \"2b934acf-2bd8-420b-803b-dd9c6e993fe7\") " pod="openstack/neutron-8daa-account-create-update-rhmhk" Mar 11 09:16:54 crc kubenswrapper[4840]: I0311 09:16:54.783017 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-cc8xs"] Mar 11 09:16:54 crc kubenswrapper[4840]: I0311 09:16:54.784242 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-cc8xs" Mar 11 09:16:54 crc kubenswrapper[4840]: I0311 09:16:54.789135 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-c443-account-create-update-r8t8k" Mar 11 09:16:54 crc kubenswrapper[4840]: I0311 09:16:54.796280 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-bc41-account-create-update-cpkrn"] Mar 11 09:16:54 crc kubenswrapper[4840]: I0311 09:16:54.797884 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-bc41-account-create-update-cpkrn" Mar 11 09:16:54 crc kubenswrapper[4840]: I0311 09:16:54.809671 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Mar 11 09:16:54 crc kubenswrapper[4840]: I0311 09:16:54.817046 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-bc41-account-create-update-cpkrn"] Mar 11 09:16:54 crc kubenswrapper[4840]: I0311 09:16:54.847794 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-cc8xs"] Mar 11 09:16:54 crc kubenswrapper[4840]: I0311 09:16:54.882010 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-qb59h"] Mar 11 09:16:54 crc kubenswrapper[4840]: I0311 09:16:54.885396 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-qb59h" Mar 11 09:16:54 crc kubenswrapper[4840]: I0311 09:16:54.887910 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Mar 11 09:16:54 crc kubenswrapper[4840]: I0311 09:16:54.889274 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-gtqwm" Mar 11 09:16:54 crc kubenswrapper[4840]: I0311 09:16:54.889498 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Mar 11 09:16:54 crc kubenswrapper[4840]: I0311 09:16:54.892122 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Mar 11 09:16:54 crc kubenswrapper[4840]: I0311 09:16:54.897275 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkxzw\" (UniqueName: \"kubernetes.io/projected/66831303-1e03-43d2-aabb-d98617b373a0-kube-api-access-xkxzw\") pod \"barbican-bc41-account-create-update-cpkrn\" (UID: \"66831303-1e03-43d2-aabb-d98617b373a0\") " 
pod="openstack/barbican-bc41-account-create-update-cpkrn" Mar 11 09:16:54 crc kubenswrapper[4840]: I0311 09:16:54.897424 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b934acf-2bd8-420b-803b-dd9c6e993fe7-operator-scripts\") pod \"neutron-8daa-account-create-update-rhmhk\" (UID: \"2b934acf-2bd8-420b-803b-dd9c6e993fe7\") " pod="openstack/neutron-8daa-account-create-update-rhmhk" Mar 11 09:16:54 crc kubenswrapper[4840]: I0311 09:16:54.897495 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7pgc\" (UniqueName: \"kubernetes.io/projected/206be006-d33b-43d1-8fae-ecc49291d0a4-kube-api-access-z7pgc\") pod \"neutron-db-create-cc8xs\" (UID: \"206be006-d33b-43d1-8fae-ecc49291d0a4\") " pod="openstack/neutron-db-create-cc8xs" Mar 11 09:16:54 crc kubenswrapper[4840]: I0311 09:16:54.897629 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66831303-1e03-43d2-aabb-d98617b373a0-operator-scripts\") pod \"barbican-bc41-account-create-update-cpkrn\" (UID: \"66831303-1e03-43d2-aabb-d98617b373a0\") " pod="openstack/barbican-bc41-account-create-update-cpkrn" Mar 11 09:16:54 crc kubenswrapper[4840]: I0311 09:16:54.898453 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b934acf-2bd8-420b-803b-dd9c6e993fe7-operator-scripts\") pod \"neutron-8daa-account-create-update-rhmhk\" (UID: \"2b934acf-2bd8-420b-803b-dd9c6e993fe7\") " pod="openstack/neutron-8daa-account-create-update-rhmhk" Mar 11 09:16:54 crc kubenswrapper[4840]: I0311 09:16:54.898794 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjzth\" (UniqueName: 
\"kubernetes.io/projected/2b934acf-2bd8-420b-803b-dd9c6e993fe7-kube-api-access-sjzth\") pod \"neutron-8daa-account-create-update-rhmhk\" (UID: \"2b934acf-2bd8-420b-803b-dd9c6e993fe7\") " pod="openstack/neutron-8daa-account-create-update-rhmhk" Mar 11 09:16:54 crc kubenswrapper[4840]: I0311 09:16:54.899131 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/206be006-d33b-43d1-8fae-ecc49291d0a4-operator-scripts\") pod \"neutron-db-create-cc8xs\" (UID: \"206be006-d33b-43d1-8fae-ecc49291d0a4\") " pod="openstack/neutron-db-create-cc8xs" Mar 11 09:16:54 crc kubenswrapper[4840]: I0311 09:16:54.899330 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2c581ad-e4bb-40c7-aa81-8937e2aab87b-operator-scripts\") pod \"barbican-db-create-9c9b9\" (UID: \"e2c581ad-e4bb-40c7-aa81-8937e2aab87b\") " pod="openstack/barbican-db-create-9c9b9" Mar 11 09:16:54 crc kubenswrapper[4840]: I0311 09:16:54.899515 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5tm29\" (UniqueName: \"kubernetes.io/projected/e2c581ad-e4bb-40c7-aa81-8937e2aab87b-kube-api-access-5tm29\") pod \"barbican-db-create-9c9b9\" (UID: \"e2c581ad-e4bb-40c7-aa81-8937e2aab87b\") " pod="openstack/barbican-db-create-9c9b9" Mar 11 09:16:54 crc kubenswrapper[4840]: I0311 09:16:54.900070 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2c581ad-e4bb-40c7-aa81-8937e2aab87b-operator-scripts\") pod \"barbican-db-create-9c9b9\" (UID: \"e2c581ad-e4bb-40c7-aa81-8937e2aab87b\") " pod="openstack/barbican-db-create-9c9b9" Mar 11 09:16:54 crc kubenswrapper[4840]: I0311 09:16:54.921825 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjzth\" (UniqueName: 
\"kubernetes.io/projected/2b934acf-2bd8-420b-803b-dd9c6e993fe7-kube-api-access-sjzth\") pod \"neutron-8daa-account-create-update-rhmhk\" (UID: \"2b934acf-2bd8-420b-803b-dd9c6e993fe7\") " pod="openstack/neutron-8daa-account-create-update-rhmhk" Mar 11 09:16:54 crc kubenswrapper[4840]: I0311 09:16:54.926413 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5tm29\" (UniqueName: \"kubernetes.io/projected/e2c581ad-e4bb-40c7-aa81-8937e2aab87b-kube-api-access-5tm29\") pod \"barbican-db-create-9c9b9\" (UID: \"e2c581ad-e4bb-40c7-aa81-8937e2aab87b\") " pod="openstack/barbican-db-create-9c9b9" Mar 11 09:16:54 crc kubenswrapper[4840]: I0311 09:16:54.929990 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-qb59h"] Mar 11 09:16:54 crc kubenswrapper[4840]: I0311 09:16:54.979718 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-9c9b9" Mar 11 09:16:55 crc kubenswrapper[4840]: I0311 09:16:55.001392 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66831303-1e03-43d2-aabb-d98617b373a0-operator-scripts\") pod \"barbican-bc41-account-create-update-cpkrn\" (UID: \"66831303-1e03-43d2-aabb-d98617b373a0\") " pod="openstack/barbican-bc41-account-create-update-cpkrn" Mar 11 09:16:55 crc kubenswrapper[4840]: I0311 09:16:55.001486 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dj4q\" (UniqueName: \"kubernetes.io/projected/462d670d-c6b2-4c12-9d5c-183069b04200-kube-api-access-5dj4q\") pod \"keystone-db-sync-qb59h\" (UID: \"462d670d-c6b2-4c12-9d5c-183069b04200\") " pod="openstack/keystone-db-sync-qb59h" Mar 11 09:16:55 crc kubenswrapper[4840]: I0311 09:16:55.001510 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/462d670d-c6b2-4c12-9d5c-183069b04200-combined-ca-bundle\") pod \"keystone-db-sync-qb59h\" (UID: \"462d670d-c6b2-4c12-9d5c-183069b04200\") " pod="openstack/keystone-db-sync-qb59h" Mar 11 09:16:55 crc kubenswrapper[4840]: I0311 09:16:55.001581 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/206be006-d33b-43d1-8fae-ecc49291d0a4-operator-scripts\") pod \"neutron-db-create-cc8xs\" (UID: \"206be006-d33b-43d1-8fae-ecc49291d0a4\") " pod="openstack/neutron-db-create-cc8xs" Mar 11 09:16:55 crc kubenswrapper[4840]: I0311 09:16:55.001615 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/462d670d-c6b2-4c12-9d5c-183069b04200-config-data\") pod \"keystone-db-sync-qb59h\" (UID: \"462d670d-c6b2-4c12-9d5c-183069b04200\") " pod="openstack/keystone-db-sync-qb59h" Mar 11 09:16:55 crc kubenswrapper[4840]: I0311 09:16:55.001649 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkxzw\" (UniqueName: \"kubernetes.io/projected/66831303-1e03-43d2-aabb-d98617b373a0-kube-api-access-xkxzw\") pod \"barbican-bc41-account-create-update-cpkrn\" (UID: \"66831303-1e03-43d2-aabb-d98617b373a0\") " pod="openstack/barbican-bc41-account-create-update-cpkrn" Mar 11 09:16:55 crc kubenswrapper[4840]: I0311 09:16:55.001674 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7pgc\" (UniqueName: \"kubernetes.io/projected/206be006-d33b-43d1-8fae-ecc49291d0a4-kube-api-access-z7pgc\") pod \"neutron-db-create-cc8xs\" (UID: \"206be006-d33b-43d1-8fae-ecc49291d0a4\") " pod="openstack/neutron-db-create-cc8xs" Mar 11 09:16:55 crc kubenswrapper[4840]: I0311 09:16:55.002631 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/66831303-1e03-43d2-aabb-d98617b373a0-operator-scripts\") pod \"barbican-bc41-account-create-update-cpkrn\" (UID: \"66831303-1e03-43d2-aabb-d98617b373a0\") " pod="openstack/barbican-bc41-account-create-update-cpkrn" Mar 11 09:16:55 crc kubenswrapper[4840]: I0311 09:16:55.003327 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/206be006-d33b-43d1-8fae-ecc49291d0a4-operator-scripts\") pod \"neutron-db-create-cc8xs\" (UID: \"206be006-d33b-43d1-8fae-ecc49291d0a4\") " pod="openstack/neutron-db-create-cc8xs" Mar 11 09:16:55 crc kubenswrapper[4840]: I0311 09:16:55.020005 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7pgc\" (UniqueName: \"kubernetes.io/projected/206be006-d33b-43d1-8fae-ecc49291d0a4-kube-api-access-z7pgc\") pod \"neutron-db-create-cc8xs\" (UID: \"206be006-d33b-43d1-8fae-ecc49291d0a4\") " pod="openstack/neutron-db-create-cc8xs" Mar 11 09:16:55 crc kubenswrapper[4840]: I0311 09:16:55.022709 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkxzw\" (UniqueName: \"kubernetes.io/projected/66831303-1e03-43d2-aabb-d98617b373a0-kube-api-access-xkxzw\") pod \"barbican-bc41-account-create-update-cpkrn\" (UID: \"66831303-1e03-43d2-aabb-d98617b373a0\") " pod="openstack/barbican-bc41-account-create-update-cpkrn" Mar 11 09:16:55 crc kubenswrapper[4840]: I0311 09:16:55.077363 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-8daa-account-create-update-rhmhk" Mar 11 09:16:55 crc kubenswrapper[4840]: I0311 09:16:55.104240 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/462d670d-c6b2-4c12-9d5c-183069b04200-config-data\") pod \"keystone-db-sync-qb59h\" (UID: \"462d670d-c6b2-4c12-9d5c-183069b04200\") " pod="openstack/keystone-db-sync-qb59h" Mar 11 09:16:55 crc kubenswrapper[4840]: I0311 09:16:55.104541 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dj4q\" (UniqueName: \"kubernetes.io/projected/462d670d-c6b2-4c12-9d5c-183069b04200-kube-api-access-5dj4q\") pod \"keystone-db-sync-qb59h\" (UID: \"462d670d-c6b2-4c12-9d5c-183069b04200\") " pod="openstack/keystone-db-sync-qb59h" Mar 11 09:16:55 crc kubenswrapper[4840]: I0311 09:16:55.104570 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/462d670d-c6b2-4c12-9d5c-183069b04200-combined-ca-bundle\") pod \"keystone-db-sync-qb59h\" (UID: \"462d670d-c6b2-4c12-9d5c-183069b04200\") " pod="openstack/keystone-db-sync-qb59h" Mar 11 09:16:55 crc kubenswrapper[4840]: I0311 09:16:55.109344 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/462d670d-c6b2-4c12-9d5c-183069b04200-config-data\") pod \"keystone-db-sync-qb59h\" (UID: \"462d670d-c6b2-4c12-9d5c-183069b04200\") " pod="openstack/keystone-db-sync-qb59h" Mar 11 09:16:55 crc kubenswrapper[4840]: I0311 09:16:55.109611 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-cc8xs" Mar 11 09:16:55 crc kubenswrapper[4840]: I0311 09:16:55.112844 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/462d670d-c6b2-4c12-9d5c-183069b04200-combined-ca-bundle\") pod \"keystone-db-sync-qb59h\" (UID: \"462d670d-c6b2-4c12-9d5c-183069b04200\") " pod="openstack/keystone-db-sync-qb59h" Mar 11 09:16:55 crc kubenswrapper[4840]: I0311 09:16:55.123091 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-bc41-account-create-update-cpkrn" Mar 11 09:16:55 crc kubenswrapper[4840]: I0311 09:16:55.127211 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dj4q\" (UniqueName: \"kubernetes.io/projected/462d670d-c6b2-4c12-9d5c-183069b04200-kube-api-access-5dj4q\") pod \"keystone-db-sync-qb59h\" (UID: \"462d670d-c6b2-4c12-9d5c-183069b04200\") " pod="openstack/keystone-db-sync-qb59h" Mar 11 09:16:55 crc kubenswrapper[4840]: I0311 09:16:55.218570 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-qb59h" Mar 11 09:16:56 crc kubenswrapper[4840]: I0311 09:16:56.059693 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-px5zj" Mar 11 09:16:56 crc kubenswrapper[4840]: I0311 09:16:56.122352 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/757b3a3d-0655-4517-a752-b944899642c9-ring-data-devices\") pod \"757b3a3d-0655-4517-a752-b944899642c9\" (UID: \"757b3a3d-0655-4517-a752-b944899642c9\") " Mar 11 09:16:56 crc kubenswrapper[4840]: I0311 09:16:56.122453 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/757b3a3d-0655-4517-a752-b944899642c9-etc-swift\") pod \"757b3a3d-0655-4517-a752-b944899642c9\" (UID: \"757b3a3d-0655-4517-a752-b944899642c9\") " Mar 11 09:16:56 crc kubenswrapper[4840]: I0311 09:16:56.122566 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/757b3a3d-0655-4517-a752-b944899642c9-scripts\") pod \"757b3a3d-0655-4517-a752-b944899642c9\" (UID: \"757b3a3d-0655-4517-a752-b944899642c9\") " Mar 11 09:16:56 crc kubenswrapper[4840]: I0311 09:16:56.122631 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/757b3a3d-0655-4517-a752-b944899642c9-dispersionconf\") pod \"757b3a3d-0655-4517-a752-b944899642c9\" (UID: \"757b3a3d-0655-4517-a752-b944899642c9\") " Mar 11 09:16:56 crc kubenswrapper[4840]: I0311 09:16:56.122678 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/757b3a3d-0655-4517-a752-b944899642c9-combined-ca-bundle\") pod \"757b3a3d-0655-4517-a752-b944899642c9\" (UID: \"757b3a3d-0655-4517-a752-b944899642c9\") " Mar 11 09:16:56 crc kubenswrapper[4840]: I0311 09:16:56.122707 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: 
\"kubernetes.io/secret/757b3a3d-0655-4517-a752-b944899642c9-swiftconf\") pod \"757b3a3d-0655-4517-a752-b944899642c9\" (UID: \"757b3a3d-0655-4517-a752-b944899642c9\") " Mar 11 09:16:56 crc kubenswrapper[4840]: I0311 09:16:56.122805 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jjnm5\" (UniqueName: \"kubernetes.io/projected/757b3a3d-0655-4517-a752-b944899642c9-kube-api-access-jjnm5\") pod \"757b3a3d-0655-4517-a752-b944899642c9\" (UID: \"757b3a3d-0655-4517-a752-b944899642c9\") " Mar 11 09:16:56 crc kubenswrapper[4840]: I0311 09:16:56.123784 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/757b3a3d-0655-4517-a752-b944899642c9-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "757b3a3d-0655-4517-a752-b944899642c9" (UID: "757b3a3d-0655-4517-a752-b944899642c9"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:16:56 crc kubenswrapper[4840]: I0311 09:16:56.138339 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/757b3a3d-0655-4517-a752-b944899642c9-kube-api-access-jjnm5" (OuterVolumeSpecName: "kube-api-access-jjnm5") pod "757b3a3d-0655-4517-a752-b944899642c9" (UID: "757b3a3d-0655-4517-a752-b944899642c9"). InnerVolumeSpecName "kube-api-access-jjnm5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:16:56 crc kubenswrapper[4840]: I0311 09:16:56.143072 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/757b3a3d-0655-4517-a752-b944899642c9-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "757b3a3d-0655-4517-a752-b944899642c9" (UID: "757b3a3d-0655-4517-a752-b944899642c9"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:16:56 crc kubenswrapper[4840]: I0311 09:16:56.165308 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/757b3a3d-0655-4517-a752-b944899642c9-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "757b3a3d-0655-4517-a752-b944899642c9" (UID: "757b3a3d-0655-4517-a752-b944899642c9"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:16:56 crc kubenswrapper[4840]: I0311 09:16:56.203237 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/757b3a3d-0655-4517-a752-b944899642c9-scripts" (OuterVolumeSpecName: "scripts") pod "757b3a3d-0655-4517-a752-b944899642c9" (UID: "757b3a3d-0655-4517-a752-b944899642c9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:16:56 crc kubenswrapper[4840]: I0311 09:16:56.224692 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jjnm5\" (UniqueName: \"kubernetes.io/projected/757b3a3d-0655-4517-a752-b944899642c9-kube-api-access-jjnm5\") on node \"crc\" DevicePath \"\"" Mar 11 09:16:56 crc kubenswrapper[4840]: I0311 09:16:56.224720 4840 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/757b3a3d-0655-4517-a752-b944899642c9-ring-data-devices\") on node \"crc\" DevicePath \"\"" Mar 11 09:16:56 crc kubenswrapper[4840]: I0311 09:16:56.224730 4840 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/757b3a3d-0655-4517-a752-b944899642c9-etc-swift\") on node \"crc\" DevicePath \"\"" Mar 11 09:16:56 crc kubenswrapper[4840]: I0311 09:16:56.224739 4840 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/757b3a3d-0655-4517-a752-b944899642c9-scripts\") on node \"crc\" DevicePath \"\"" Mar 11 09:16:56 
crc kubenswrapper[4840]: I0311 09:16:56.224747 4840 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/757b3a3d-0655-4517-a752-b944899642c9-dispersionconf\") on node \"crc\" DevicePath \"\"" Mar 11 09:16:56 crc kubenswrapper[4840]: I0311 09:16:56.229715 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/757b3a3d-0655-4517-a752-b944899642c9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "757b3a3d-0655-4517-a752-b944899642c9" (UID: "757b3a3d-0655-4517-a752-b944899642c9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:16:56 crc kubenswrapper[4840]: I0311 09:16:56.229890 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/757b3a3d-0655-4517-a752-b944899642c9-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "757b3a3d-0655-4517-a752-b944899642c9" (UID: "757b3a3d-0655-4517-a752-b944899642c9"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:16:56 crc kubenswrapper[4840]: I0311 09:16:56.326920 4840 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/757b3a3d-0655-4517-a752-b944899642c9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 09:16:56 crc kubenswrapper[4840]: I0311 09:16:56.326956 4840 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/757b3a3d-0655-4517-a752-b944899642c9-swiftconf\") on node \"crc\" DevicePath \"\"" Mar 11 09:16:56 crc kubenswrapper[4840]: I0311 09:16:56.376193 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-px5zj" event={"ID":"757b3a3d-0655-4517-a752-b944899642c9","Type":"ContainerDied","Data":"301e3877ef8b749903e8b9831a8c5796721697e838c804905d1abd0642a708a1"} Mar 11 09:16:56 crc kubenswrapper[4840]: I0311 09:16:56.376239 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="301e3877ef8b749903e8b9831a8c5796721697e838c804905d1abd0642a708a1" Mar 11 09:16:56 crc kubenswrapper[4840]: I0311 09:16:56.376296 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-px5zj" Mar 11 09:16:56 crc kubenswrapper[4840]: I0311 09:16:56.509855 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-8daa-account-create-update-rhmhk"] Mar 11 09:16:56 crc kubenswrapper[4840]: I0311 09:16:56.715039 4840 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-g2p7c" podUID="056df7c0-d577-4908-91a8-b5dfb95e0316" containerName="ovn-controller" probeResult="failure" output=< Mar 11 09:16:56 crc kubenswrapper[4840]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Mar 11 09:16:56 crc kubenswrapper[4840]: > Mar 11 09:16:56 crc kubenswrapper[4840]: I0311 09:16:56.789728 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-c443-account-create-update-r8t8k"] Mar 11 09:16:56 crc kubenswrapper[4840]: I0311 09:16:56.839538 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-bc41-account-create-update-cpkrn"] Mar 11 09:16:56 crc kubenswrapper[4840]: I0311 09:16:56.877271 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-9c9b9"] Mar 11 09:16:56 crc kubenswrapper[4840]: I0311 09:16:56.918013 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-jzpgq"] Mar 11 09:16:57 crc kubenswrapper[4840]: I0311 09:16:57.074451 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-qb59h"] Mar 11 09:16:57 crc kubenswrapper[4840]: I0311 09:16:57.118568 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-g2p7c-config-k6wz4"] Mar 11 09:16:57 crc kubenswrapper[4840]: I0311 09:16:57.140513 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-cc8xs"] Mar 11 09:16:57 crc kubenswrapper[4840]: I0311 09:16:57.157315 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/cinder-db-create-j24h9"] Mar 11 09:16:57 crc kubenswrapper[4840]: I0311 09:16:57.405391 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8daa-account-create-update-rhmhk" event={"ID":"2b934acf-2bd8-420b-803b-dd9c6e993fe7","Type":"ContainerStarted","Data":"c0f39a62c6f22a1acbea4f945d5fd15a25e7d9220f312f83f9f23b88277b126a"} Mar 11 09:16:57 crc kubenswrapper[4840]: I0311 09:16:57.405884 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8daa-account-create-update-rhmhk" event={"ID":"2b934acf-2bd8-420b-803b-dd9c6e993fe7","Type":"ContainerStarted","Data":"86bf04339d7dbb462a91f0b0b01d20fe7435a83236d31a1fa3cf7b909f112b10"} Mar 11 09:16:57 crc kubenswrapper[4840]: I0311 09:16:57.445013 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-9c9b9" event={"ID":"e2c581ad-e4bb-40c7-aa81-8937e2aab87b","Type":"ContainerStarted","Data":"e37a7b6e4c5c8187dd8cc8bdf9f1f6808bdb342b36bfcd3f5d8ce87ccfbf0335"} Mar 11 09:16:57 crc kubenswrapper[4840]: I0311 09:16:57.445941 4840 patch_prober.go:28] interesting pod/machine-config-daemon-brtht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 11 09:16:57 crc kubenswrapper[4840]: I0311 09:16:57.446033 4840 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 11 09:16:57 crc kubenswrapper[4840]: I0311 09:16:57.446035 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-8daa-account-create-update-rhmhk" podStartSLOduration=3.446005097 
podStartE2EDuration="3.446005097s" podCreationTimestamp="2026-03-11 09:16:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:16:57.433930883 +0000 UTC m=+1216.099600698" watchObservedRunningTime="2026-03-11 09:16:57.446005097 +0000 UTC m=+1216.111674912" Mar 11 09:16:57 crc kubenswrapper[4840]: I0311 09:16:57.446105 4840 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-brtht" Mar 11 09:16:57 crc kubenswrapper[4840]: I0311 09:16:57.447403 4840 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c4693d2b28f962f129bc06a9e78798a61e61b356cfdcc19696be10d164614b1d"} pod="openshift-machine-config-operator/machine-config-daemon-brtht" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 11 09:16:57 crc kubenswrapper[4840]: I0311 09:16:57.447503 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" containerName="machine-config-daemon" containerID="cri-o://c4693d2b28f962f129bc06a9e78798a61e61b356cfdcc19696be10d164614b1d" gracePeriod=600 Mar 11 09:16:57 crc kubenswrapper[4840]: I0311 09:16:57.470294 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-j24h9" event={"ID":"1f3b6268-b59e-449e-a7d4-ecb9e26e1f39","Type":"ContainerStarted","Data":"0a9a6f0d4489b3b597da0d07c1eaf97d25c1aac8e9c8bd6427007d9ef74e7b43"} Mar 11 09:16:57 crc kubenswrapper[4840]: I0311 09:16:57.474681 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-bc41-account-create-update-cpkrn" 
event={"ID":"66831303-1e03-43d2-aabb-d98617b373a0","Type":"ContainerStarted","Data":"64647b67318809406eaf3e03cebbd70bc8f163262ca3707b419465867dba3ba3"} Mar 11 09:16:57 crc kubenswrapper[4840]: I0311 09:16:57.480664 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-qb59h" event={"ID":"462d670d-c6b2-4c12-9d5c-183069b04200","Type":"ContainerStarted","Data":"afac74ae78d09a0387b687e7c3cffb9f15469133e34fb323971aaf05411411bf"} Mar 11 09:16:57 crc kubenswrapper[4840]: I0311 09:16:57.489311 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-c443-account-create-update-r8t8k" event={"ID":"16f6c965-932a-45fa-9ad7-608779e0bf25","Type":"ContainerStarted","Data":"8551ecd9337f942d6c61636f9ae5d3b6cd5d9a84c9b912aa6cbf186033a4a63b"} Mar 11 09:16:57 crc kubenswrapper[4840]: I0311 09:16:57.489374 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-c443-account-create-update-r8t8k" event={"ID":"16f6c965-932a-45fa-9ad7-608779e0bf25","Type":"ContainerStarted","Data":"d649d5c78c3e315b081cccde8ca3213acc719cd04105e05ff938a5d101283db7"} Mar 11 09:16:57 crc kubenswrapper[4840]: I0311 09:16:57.494391 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-g2p7c-config-k6wz4" event={"ID":"0ae5d630-389b-46e9-b937-11cf0448b6b3","Type":"ContainerStarted","Data":"6fe34fc5423e55b2db52ac28dce7b38305a67294b2105b743745811fa9c8189f"} Mar 11 09:16:57 crc kubenswrapper[4840]: I0311 09:16:57.497806 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-jzpgq" event={"ID":"44b4b840-62df-4871-b50d-30b3a8123389","Type":"ContainerStarted","Data":"bf5737bfc6417705925d4d1770df53715739c0927d7e276a7cc8e393c597883f"} Mar 11 09:16:57 crc kubenswrapper[4840]: I0311 09:16:57.506780 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-cc8xs" 
event={"ID":"206be006-d33b-43d1-8fae-ecc49291d0a4","Type":"ContainerStarted","Data":"7ee198372cdea923aebabab4b54b905006946edba2127e8d622f19ff5c7e6caf"} Mar 11 09:16:57 crc kubenswrapper[4840]: I0311 09:16:57.526722 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-c443-account-create-update-r8t8k" podStartSLOduration=3.526693846 podStartE2EDuration="3.526693846s" podCreationTimestamp="2026-03-11 09:16:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:16:57.514715155 +0000 UTC m=+1216.180384990" watchObservedRunningTime="2026-03-11 09:16:57.526693846 +0000 UTC m=+1216.192363681" Mar 11 09:16:57 crc kubenswrapper[4840]: I0311 09:16:57.537751 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-jzpgq" podStartSLOduration=9.537731964 podStartE2EDuration="9.537731964s" podCreationTimestamp="2026-03-11 09:16:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:16:57.535506808 +0000 UTC m=+1216.201176633" watchObservedRunningTime="2026-03-11 09:16:57.537731964 +0000 UTC m=+1216.203401779" Mar 11 09:16:58 crc kubenswrapper[4840]: I0311 09:16:58.516550 4840 generic.go:334] "Generic (PLEG): container finished" podID="2b934acf-2bd8-420b-803b-dd9c6e993fe7" containerID="c0f39a62c6f22a1acbea4f945d5fd15a25e7d9220f312f83f9f23b88277b126a" exitCode=0 Mar 11 09:16:58 crc kubenswrapper[4840]: I0311 09:16:58.520002 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8daa-account-create-update-rhmhk" event={"ID":"2b934acf-2bd8-420b-803b-dd9c6e993fe7","Type":"ContainerDied","Data":"c0f39a62c6f22a1acbea4f945d5fd15a25e7d9220f312f83f9f23b88277b126a"} Mar 11 09:16:58 crc kubenswrapper[4840]: I0311 09:16:58.533719 4840 generic.go:334] "Generic (PLEG): container 
finished" podID="16f6c965-932a-45fa-9ad7-608779e0bf25" containerID="8551ecd9337f942d6c61636f9ae5d3b6cd5d9a84c9b912aa6cbf186033a4a63b" exitCode=0 Mar 11 09:16:58 crc kubenswrapper[4840]: I0311 09:16:58.533815 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-c443-account-create-update-r8t8k" event={"ID":"16f6c965-932a-45fa-9ad7-608779e0bf25","Type":"ContainerDied","Data":"8551ecd9337f942d6c61636f9ae5d3b6cd5d9a84c9b912aa6cbf186033a4a63b"} Mar 11 09:16:58 crc kubenswrapper[4840]: I0311 09:16:58.546194 4840 generic.go:334] "Generic (PLEG): container finished" podID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" containerID="c4693d2b28f962f129bc06a9e78798a61e61b356cfdcc19696be10d164614b1d" exitCode=0 Mar 11 09:16:58 crc kubenswrapper[4840]: I0311 09:16:58.546306 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-brtht" event={"ID":"8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d","Type":"ContainerDied","Data":"c4693d2b28f962f129bc06a9e78798a61e61b356cfdcc19696be10d164614b1d"} Mar 11 09:16:58 crc kubenswrapper[4840]: I0311 09:16:58.546342 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-brtht" event={"ID":"8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d","Type":"ContainerStarted","Data":"6d23981dcac257731751dcd40b5609471911fb1c4a3f0ea9223434b40578e948"} Mar 11 09:16:58 crc kubenswrapper[4840]: I0311 09:16:58.546362 4840 scope.go:117] "RemoveContainer" containerID="f08eeaa4fe8ff05d2389b41d94a12307326129a99c2a57e3c9c13f2ab4a219eb" Mar 11 09:16:58 crc kubenswrapper[4840]: I0311 09:16:58.551662 4840 generic.go:334] "Generic (PLEG): container finished" podID="206be006-d33b-43d1-8fae-ecc49291d0a4" containerID="5a178fa21bb4e04275cb65def582e45646bde0ab8e88575805ce80df87b91ebb" exitCode=0 Mar 11 09:16:58 crc kubenswrapper[4840]: I0311 09:16:58.552064 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-cc8xs" 
event={"ID":"206be006-d33b-43d1-8fae-ecc49291d0a4","Type":"ContainerDied","Data":"5a178fa21bb4e04275cb65def582e45646bde0ab8e88575805ce80df87b91ebb"} Mar 11 09:16:58 crc kubenswrapper[4840]: I0311 09:16:58.555274 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-d6pcw" event={"ID":"51c6e6d6-9fb9-4042-a8c7-d14ad4067afb","Type":"ContainerStarted","Data":"2f7ff4f6ca7eaa7a90b03a36ff78a6df0e5070796f410f01246f899e6c6e9f45"} Mar 11 09:16:58 crc kubenswrapper[4840]: I0311 09:16:58.558187 4840 generic.go:334] "Generic (PLEG): container finished" podID="0ae5d630-389b-46e9-b937-11cf0448b6b3" containerID="199a0328b700e036e67f97e8adc8b8b9207dfa1a9a9eb09434590923473cd2bb" exitCode=0 Mar 11 09:16:58 crc kubenswrapper[4840]: I0311 09:16:58.558378 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-g2p7c-config-k6wz4" event={"ID":"0ae5d630-389b-46e9-b937-11cf0448b6b3","Type":"ContainerDied","Data":"199a0328b700e036e67f97e8adc8b8b9207dfa1a9a9eb09434590923473cd2bb"} Mar 11 09:16:58 crc kubenswrapper[4840]: I0311 09:16:58.561024 4840 generic.go:334] "Generic (PLEG): container finished" podID="66831303-1e03-43d2-aabb-d98617b373a0" containerID="0cc7935f4bd52c3f2596d14db90ae0d5b79d3c959d9c7ad317c5181908f66f7d" exitCode=0 Mar 11 09:16:58 crc kubenswrapper[4840]: I0311 09:16:58.561103 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-bc41-account-create-update-cpkrn" event={"ID":"66831303-1e03-43d2-aabb-d98617b373a0","Type":"ContainerDied","Data":"0cc7935f4bd52c3f2596d14db90ae0d5b79d3c959d9c7ad317c5181908f66f7d"} Mar 11 09:16:58 crc kubenswrapper[4840]: I0311 09:16:58.566874 4840 generic.go:334] "Generic (PLEG): container finished" podID="1f3b6268-b59e-449e-a7d4-ecb9e26e1f39" containerID="3bad46acf302210e1c1e5e33a7fd79daab79e6bbe2d7d0b0cff1789d3f58adb7" exitCode=0 Mar 11 09:16:58 crc kubenswrapper[4840]: I0311 09:16:58.567094 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cinder-db-create-j24h9" event={"ID":"1f3b6268-b59e-449e-a7d4-ecb9e26e1f39","Type":"ContainerDied","Data":"3bad46acf302210e1c1e5e33a7fd79daab79e6bbe2d7d0b0cff1789d3f58adb7"} Mar 11 09:16:58 crc kubenswrapper[4840]: I0311 09:16:58.576212 4840 generic.go:334] "Generic (PLEG): container finished" podID="44b4b840-62df-4871-b50d-30b3a8123389" containerID="1a1f1b693e6c0b1951096fc3d520bfefde55c6b825a9679fa0d7422d3542e748" exitCode=0 Mar 11 09:16:58 crc kubenswrapper[4840]: I0311 09:16:58.576327 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-jzpgq" event={"ID":"44b4b840-62df-4871-b50d-30b3a8123389","Type":"ContainerDied","Data":"1a1f1b693e6c0b1951096fc3d520bfefde55c6b825a9679fa0d7422d3542e748"} Mar 11 09:16:58 crc kubenswrapper[4840]: I0311 09:16:58.579233 4840 generic.go:334] "Generic (PLEG): container finished" podID="e2c581ad-e4bb-40c7-aa81-8937e2aab87b" containerID="b24882edefa76ca986a45d7c188c0776d468318b10c2284d9028365926598890" exitCode=0 Mar 11 09:16:58 crc kubenswrapper[4840]: I0311 09:16:58.579322 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-9c9b9" event={"ID":"e2c581ad-e4bb-40c7-aa81-8937e2aab87b","Type":"ContainerDied","Data":"b24882edefa76ca986a45d7c188c0776d468318b10c2284d9028365926598890"} Mar 11 09:16:58 crc kubenswrapper[4840]: I0311 09:16:58.653972 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-d6pcw" podStartSLOduration=4.708448701 podStartE2EDuration="17.653952615s" podCreationTimestamp="2026-03-11 09:16:41 +0000 UTC" firstStartedPulling="2026-03-11 09:16:43.196417855 +0000 UTC m=+1201.862087670" lastFinishedPulling="2026-03-11 09:16:56.141921769 +0000 UTC m=+1214.807591584" observedRunningTime="2026-03-11 09:16:58.642319603 +0000 UTC m=+1217.307989418" watchObservedRunningTime="2026-03-11 09:16:58.653952615 +0000 UTC m=+1217.319622440" Mar 11 09:17:00 crc kubenswrapper[4840]: I0311 
09:17:00.101053 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-j24h9" Mar 11 09:17:00 crc kubenswrapper[4840]: I0311 09:17:00.226139 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f3b6268-b59e-449e-a7d4-ecb9e26e1f39-operator-scripts\") pod \"1f3b6268-b59e-449e-a7d4-ecb9e26e1f39\" (UID: \"1f3b6268-b59e-449e-a7d4-ecb9e26e1f39\") " Mar 11 09:17:00 crc kubenswrapper[4840]: I0311 09:17:00.226219 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bfgpj\" (UniqueName: \"kubernetes.io/projected/1f3b6268-b59e-449e-a7d4-ecb9e26e1f39-kube-api-access-bfgpj\") pod \"1f3b6268-b59e-449e-a7d4-ecb9e26e1f39\" (UID: \"1f3b6268-b59e-449e-a7d4-ecb9e26e1f39\") " Mar 11 09:17:00 crc kubenswrapper[4840]: I0311 09:17:00.228813 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f3b6268-b59e-449e-a7d4-ecb9e26e1f39-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1f3b6268-b59e-449e-a7d4-ecb9e26e1f39" (UID: "1f3b6268-b59e-449e-a7d4-ecb9e26e1f39"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:17:00 crc kubenswrapper[4840]: I0311 09:17:00.252289 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f3b6268-b59e-449e-a7d4-ecb9e26e1f39-kube-api-access-bfgpj" (OuterVolumeSpecName: "kube-api-access-bfgpj") pod "1f3b6268-b59e-449e-a7d4-ecb9e26e1f39" (UID: "1f3b6268-b59e-449e-a7d4-ecb9e26e1f39"). InnerVolumeSpecName "kube-api-access-bfgpj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:17:00 crc kubenswrapper[4840]: I0311 09:17:00.328946 4840 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1f3b6268-b59e-449e-a7d4-ecb9e26e1f39-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 11 09:17:00 crc kubenswrapper[4840]: I0311 09:17:00.328971 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bfgpj\" (UniqueName: \"kubernetes.io/projected/1f3b6268-b59e-449e-a7d4-ecb9e26e1f39-kube-api-access-bfgpj\") on node \"crc\" DevicePath \"\"" Mar 11 09:17:00 crc kubenswrapper[4840]: I0311 09:17:00.386310 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-c443-account-create-update-r8t8k" Mar 11 09:17:00 crc kubenswrapper[4840]: I0311 09:17:00.391417 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-bc41-account-create-update-cpkrn" Mar 11 09:17:00 crc kubenswrapper[4840]: I0311 09:17:00.420014 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-cc8xs" Mar 11 09:17:00 crc kubenswrapper[4840]: I0311 09:17:00.428650 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-g2p7c-config-k6wz4" Mar 11 09:17:00 crc kubenswrapper[4840]: I0311 09:17:00.430928 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-9c9b9" Mar 11 09:17:00 crc kubenswrapper[4840]: I0311 09:17:00.446852 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-8daa-account-create-update-rhmhk" Mar 11 09:17:00 crc kubenswrapper[4840]: I0311 09:17:00.465658 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-jzpgq" Mar 11 09:17:00 crc kubenswrapper[4840]: I0311 09:17:00.531879 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/44b4b840-62df-4871-b50d-30b3a8123389-operator-scripts\") pod \"44b4b840-62df-4871-b50d-30b3a8123389\" (UID: \"44b4b840-62df-4871-b50d-30b3a8123389\") " Mar 11 09:17:00 crc kubenswrapper[4840]: I0311 09:17:00.531994 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/16f6c965-932a-45fa-9ad7-608779e0bf25-operator-scripts\") pod \"16f6c965-932a-45fa-9ad7-608779e0bf25\" (UID: \"16f6c965-932a-45fa-9ad7-608779e0bf25\") " Mar 11 09:17:00 crc kubenswrapper[4840]: I0311 09:17:00.532035 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0ae5d630-389b-46e9-b937-11cf0448b6b3-var-log-ovn\") pod \"0ae5d630-389b-46e9-b937-11cf0448b6b3\" (UID: \"0ae5d630-389b-46e9-b937-11cf0448b6b3\") " Mar 11 09:17:00 crc kubenswrapper[4840]: I0311 09:17:00.532064 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sjzth\" (UniqueName: \"kubernetes.io/projected/2b934acf-2bd8-420b-803b-dd9c6e993fe7-kube-api-access-sjzth\") pod \"2b934acf-2bd8-420b-803b-dd9c6e993fe7\" (UID: \"2b934acf-2bd8-420b-803b-dd9c6e993fe7\") " Mar 11 09:17:00 crc kubenswrapper[4840]: I0311 09:17:00.532089 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5tm29\" (UniqueName: \"kubernetes.io/projected/e2c581ad-e4bb-40c7-aa81-8937e2aab87b-kube-api-access-5tm29\") pod \"e2c581ad-e4bb-40c7-aa81-8937e2aab87b\" (UID: \"e2c581ad-e4bb-40c7-aa81-8937e2aab87b\") " Mar 11 09:17:00 crc kubenswrapper[4840]: I0311 09:17:00.532125 4840 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-6w665\" (UniqueName: \"kubernetes.io/projected/44b4b840-62df-4871-b50d-30b3a8123389-kube-api-access-6w665\") pod \"44b4b840-62df-4871-b50d-30b3a8123389\" (UID: \"44b4b840-62df-4871-b50d-30b3a8123389\") " Mar 11 09:17:00 crc kubenswrapper[4840]: I0311 09:17:00.532157 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b934acf-2bd8-420b-803b-dd9c6e993fe7-operator-scripts\") pod \"2b934acf-2bd8-420b-803b-dd9c6e993fe7\" (UID: \"2b934acf-2bd8-420b-803b-dd9c6e993fe7\") " Mar 11 09:17:00 crc kubenswrapper[4840]: I0311 09:17:00.532184 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0ae5d630-389b-46e9-b937-11cf0448b6b3-scripts\") pod \"0ae5d630-389b-46e9-b937-11cf0448b6b3\" (UID: \"0ae5d630-389b-46e9-b937-11cf0448b6b3\") " Mar 11 09:17:00 crc kubenswrapper[4840]: I0311 09:17:00.532210 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xkxzw\" (UniqueName: \"kubernetes.io/projected/66831303-1e03-43d2-aabb-d98617b373a0-kube-api-access-xkxzw\") pod \"66831303-1e03-43d2-aabb-d98617b373a0\" (UID: \"66831303-1e03-43d2-aabb-d98617b373a0\") " Mar 11 09:17:00 crc kubenswrapper[4840]: I0311 09:17:00.532243 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2c581ad-e4bb-40c7-aa81-8937e2aab87b-operator-scripts\") pod \"e2c581ad-e4bb-40c7-aa81-8937e2aab87b\" (UID: \"e2c581ad-e4bb-40c7-aa81-8937e2aab87b\") " Mar 11 09:17:00 crc kubenswrapper[4840]: I0311 09:17:00.532306 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vv5j2\" (UniqueName: \"kubernetes.io/projected/16f6c965-932a-45fa-9ad7-608779e0bf25-kube-api-access-vv5j2\") pod 
\"16f6c965-932a-45fa-9ad7-608779e0bf25\" (UID: \"16f6c965-932a-45fa-9ad7-608779e0bf25\") " Mar 11 09:17:00 crc kubenswrapper[4840]: I0311 09:17:00.532333 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0ae5d630-389b-46e9-b937-11cf0448b6b3-var-run\") pod \"0ae5d630-389b-46e9-b937-11cf0448b6b3\" (UID: \"0ae5d630-389b-46e9-b937-11cf0448b6b3\") " Mar 11 09:17:00 crc kubenswrapper[4840]: I0311 09:17:00.532385 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0ae5d630-389b-46e9-b937-11cf0448b6b3-var-run-ovn\") pod \"0ae5d630-389b-46e9-b937-11cf0448b6b3\" (UID: \"0ae5d630-389b-46e9-b937-11cf0448b6b3\") " Mar 11 09:17:00 crc kubenswrapper[4840]: I0311 09:17:00.532406 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44b4b840-62df-4871-b50d-30b3a8123389-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "44b4b840-62df-4871-b50d-30b3a8123389" (UID: "44b4b840-62df-4871-b50d-30b3a8123389"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:17:00 crc kubenswrapper[4840]: I0311 09:17:00.532626 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/206be006-d33b-43d1-8fae-ecc49291d0a4-operator-scripts\") pod \"206be006-d33b-43d1-8fae-ecc49291d0a4\" (UID: \"206be006-d33b-43d1-8fae-ecc49291d0a4\") " Mar 11 09:17:00 crc kubenswrapper[4840]: I0311 09:17:00.532654 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z7pgc\" (UniqueName: \"kubernetes.io/projected/206be006-d33b-43d1-8fae-ecc49291d0a4-kube-api-access-z7pgc\") pod \"206be006-d33b-43d1-8fae-ecc49291d0a4\" (UID: \"206be006-d33b-43d1-8fae-ecc49291d0a4\") " Mar 11 09:17:00 crc kubenswrapper[4840]: I0311 09:17:00.532678 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b5r4c\" (UniqueName: \"kubernetes.io/projected/0ae5d630-389b-46e9-b937-11cf0448b6b3-kube-api-access-b5r4c\") pod \"0ae5d630-389b-46e9-b937-11cf0448b6b3\" (UID: \"0ae5d630-389b-46e9-b937-11cf0448b6b3\") " Mar 11 09:17:00 crc kubenswrapper[4840]: I0311 09:17:00.532697 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66831303-1e03-43d2-aabb-d98617b373a0-operator-scripts\") pod \"66831303-1e03-43d2-aabb-d98617b373a0\" (UID: \"66831303-1e03-43d2-aabb-d98617b373a0\") " Mar 11 09:17:00 crc kubenswrapper[4840]: I0311 09:17:00.532720 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/0ae5d630-389b-46e9-b937-11cf0448b6b3-additional-scripts\") pod \"0ae5d630-389b-46e9-b937-11cf0448b6b3\" (UID: \"0ae5d630-389b-46e9-b937-11cf0448b6b3\") " Mar 11 09:17:00 crc kubenswrapper[4840]: I0311 09:17:00.532850 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/configmap/2b934acf-2bd8-420b-803b-dd9c6e993fe7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2b934acf-2bd8-420b-803b-dd9c6e993fe7" (UID: "2b934acf-2bd8-420b-803b-dd9c6e993fe7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:17:00 crc kubenswrapper[4840]: I0311 09:17:00.533110 4840 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/44b4b840-62df-4871-b50d-30b3a8123389-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 11 09:17:00 crc kubenswrapper[4840]: I0311 09:17:00.533129 4840 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b934acf-2bd8-420b-803b-dd9c6e993fe7-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 11 09:17:00 crc kubenswrapper[4840]: I0311 09:17:00.533207 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16f6c965-932a-45fa-9ad7-608779e0bf25-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "16f6c965-932a-45fa-9ad7-608779e0bf25" (UID: "16f6c965-932a-45fa-9ad7-608779e0bf25"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:17:00 crc kubenswrapper[4840]: I0311 09:17:00.533237 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0ae5d630-389b-46e9-b937-11cf0448b6b3-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "0ae5d630-389b-46e9-b937-11cf0448b6b3" (UID: "0ae5d630-389b-46e9-b937-11cf0448b6b3"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 11 09:17:00 crc kubenswrapper[4840]: I0311 09:17:00.533936 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0ae5d630-389b-46e9-b937-11cf0448b6b3-var-run" (OuterVolumeSpecName: "var-run") pod "0ae5d630-389b-46e9-b937-11cf0448b6b3" (UID: "0ae5d630-389b-46e9-b937-11cf0448b6b3"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 11 09:17:00 crc kubenswrapper[4840]: I0311 09:17:00.534616 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2c581ad-e4bb-40c7-aa81-8937e2aab87b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e2c581ad-e4bb-40c7-aa81-8937e2aab87b" (UID: "e2c581ad-e4bb-40c7-aa81-8937e2aab87b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:17:00 crc kubenswrapper[4840]: I0311 09:17:00.534766 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0ae5d630-389b-46e9-b937-11cf0448b6b3-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "0ae5d630-389b-46e9-b937-11cf0448b6b3" (UID: "0ae5d630-389b-46e9-b937-11cf0448b6b3"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 11 09:17:00 crc kubenswrapper[4840]: I0311 09:17:00.535553 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/206be006-d33b-43d1-8fae-ecc49291d0a4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "206be006-d33b-43d1-8fae-ecc49291d0a4" (UID: "206be006-d33b-43d1-8fae-ecc49291d0a4"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:17:00 crc kubenswrapper[4840]: I0311 09:17:00.535642 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/66831303-1e03-43d2-aabb-d98617b373a0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "66831303-1e03-43d2-aabb-d98617b373a0" (UID: "66831303-1e03-43d2-aabb-d98617b373a0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:17:00 crc kubenswrapper[4840]: I0311 09:17:00.535751 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ae5d630-389b-46e9-b937-11cf0448b6b3-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "0ae5d630-389b-46e9-b937-11cf0448b6b3" (UID: "0ae5d630-389b-46e9-b937-11cf0448b6b3"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:17:00 crc kubenswrapper[4840]: I0311 09:17:00.536234 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ae5d630-389b-46e9-b937-11cf0448b6b3-scripts" (OuterVolumeSpecName: "scripts") pod "0ae5d630-389b-46e9-b937-11cf0448b6b3" (UID: "0ae5d630-389b-46e9-b937-11cf0448b6b3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:17:00 crc kubenswrapper[4840]: I0311 09:17:00.539684 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2c581ad-e4bb-40c7-aa81-8937e2aab87b-kube-api-access-5tm29" (OuterVolumeSpecName: "kube-api-access-5tm29") pod "e2c581ad-e4bb-40c7-aa81-8937e2aab87b" (UID: "e2c581ad-e4bb-40c7-aa81-8937e2aab87b"). InnerVolumeSpecName "kube-api-access-5tm29". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:17:00 crc kubenswrapper[4840]: I0311 09:17:00.539800 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16f6c965-932a-45fa-9ad7-608779e0bf25-kube-api-access-vv5j2" (OuterVolumeSpecName: "kube-api-access-vv5j2") pod "16f6c965-932a-45fa-9ad7-608779e0bf25" (UID: "16f6c965-932a-45fa-9ad7-608779e0bf25"). InnerVolumeSpecName "kube-api-access-vv5j2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:17:00 crc kubenswrapper[4840]: I0311 09:17:00.539864 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66831303-1e03-43d2-aabb-d98617b373a0-kube-api-access-xkxzw" (OuterVolumeSpecName: "kube-api-access-xkxzw") pod "66831303-1e03-43d2-aabb-d98617b373a0" (UID: "66831303-1e03-43d2-aabb-d98617b373a0"). InnerVolumeSpecName "kube-api-access-xkxzw". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:17:00 crc kubenswrapper[4840]: I0311 09:17:00.540032 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ae5d630-389b-46e9-b937-11cf0448b6b3-kube-api-access-b5r4c" (OuterVolumeSpecName: "kube-api-access-b5r4c") pod "0ae5d630-389b-46e9-b937-11cf0448b6b3" (UID: "0ae5d630-389b-46e9-b937-11cf0448b6b3"). InnerVolumeSpecName "kube-api-access-b5r4c". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:17:00 crc kubenswrapper[4840]: I0311 09:17:00.543190 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b934acf-2bd8-420b-803b-dd9c6e993fe7-kube-api-access-sjzth" (OuterVolumeSpecName: "kube-api-access-sjzth") pod "2b934acf-2bd8-420b-803b-dd9c6e993fe7" (UID: "2b934acf-2bd8-420b-803b-dd9c6e993fe7"). InnerVolumeSpecName "kube-api-access-sjzth". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:17:00 crc kubenswrapper[4840]: I0311 09:17:00.543323 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/206be006-d33b-43d1-8fae-ecc49291d0a4-kube-api-access-z7pgc" (OuterVolumeSpecName: "kube-api-access-z7pgc") pod "206be006-d33b-43d1-8fae-ecc49291d0a4" (UID: "206be006-d33b-43d1-8fae-ecc49291d0a4"). InnerVolumeSpecName "kube-api-access-z7pgc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:17:00 crc kubenswrapper[4840]: I0311 09:17:00.552350 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44b4b840-62df-4871-b50d-30b3a8123389-kube-api-access-6w665" (OuterVolumeSpecName: "kube-api-access-6w665") pod "44b4b840-62df-4871-b50d-30b3a8123389" (UID: "44b4b840-62df-4871-b50d-30b3a8123389"). InnerVolumeSpecName "kube-api-access-6w665". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:17:00 crc kubenswrapper[4840]: I0311 09:17:00.602400 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-j24h9" event={"ID":"1f3b6268-b59e-449e-a7d4-ecb9e26e1f39","Type":"ContainerDied","Data":"0a9a6f0d4489b3b597da0d07c1eaf97d25c1aac8e9c8bd6427007d9ef74e7b43"} Mar 11 09:17:00 crc kubenswrapper[4840]: I0311 09:17:00.602496 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0a9a6f0d4489b3b597da0d07c1eaf97d25c1aac8e9c8bd6427007d9ef74e7b43" Mar 11 09:17:00 crc kubenswrapper[4840]: I0311 09:17:00.602581 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-j24h9" Mar 11 09:17:00 crc kubenswrapper[4840]: I0311 09:17:00.603894 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-bc41-account-create-update-cpkrn" event={"ID":"66831303-1e03-43d2-aabb-d98617b373a0","Type":"ContainerDied","Data":"64647b67318809406eaf3e03cebbd70bc8f163262ca3707b419465867dba3ba3"} Mar 11 09:17:00 crc kubenswrapper[4840]: I0311 09:17:00.603920 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64647b67318809406eaf3e03cebbd70bc8f163262ca3707b419465867dba3ba3" Mar 11 09:17:00 crc kubenswrapper[4840]: I0311 09:17:00.603977 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-bc41-account-create-update-cpkrn" Mar 11 09:17:00 crc kubenswrapper[4840]: I0311 09:17:00.608724 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-jzpgq" event={"ID":"44b4b840-62df-4871-b50d-30b3a8123389","Type":"ContainerDied","Data":"bf5737bfc6417705925d4d1770df53715739c0927d7e276a7cc8e393c597883f"} Mar 11 09:17:00 crc kubenswrapper[4840]: I0311 09:17:00.608768 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf5737bfc6417705925d4d1770df53715739c0927d7e276a7cc8e393c597883f" Mar 11 09:17:00 crc kubenswrapper[4840]: I0311 09:17:00.608834 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-jzpgq" Mar 11 09:17:00 crc kubenswrapper[4840]: I0311 09:17:00.617398 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-cc8xs" Mar 11 09:17:00 crc kubenswrapper[4840]: I0311 09:17:00.617373 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-cc8xs" event={"ID":"206be006-d33b-43d1-8fae-ecc49291d0a4","Type":"ContainerDied","Data":"7ee198372cdea923aebabab4b54b905006946edba2127e8d622f19ff5c7e6caf"} Mar 11 09:17:00 crc kubenswrapper[4840]: I0311 09:17:00.617452 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7ee198372cdea923aebabab4b54b905006946edba2127e8d622f19ff5c7e6caf" Mar 11 09:17:00 crc kubenswrapper[4840]: I0311 09:17:00.623300 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-8daa-account-create-update-rhmhk" Mar 11 09:17:00 crc kubenswrapper[4840]: I0311 09:17:00.623307 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8daa-account-create-update-rhmhk" event={"ID":"2b934acf-2bd8-420b-803b-dd9c6e993fe7","Type":"ContainerDied","Data":"86bf04339d7dbb462a91f0b0b01d20fe7435a83236d31a1fa3cf7b909f112b10"} Mar 11 09:17:00 crc kubenswrapper[4840]: I0311 09:17:00.623415 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86bf04339d7dbb462a91f0b0b01d20fe7435a83236d31a1fa3cf7b909f112b10" Mar 11 09:17:00 crc kubenswrapper[4840]: I0311 09:17:00.625600 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-c443-account-create-update-r8t8k" event={"ID":"16f6c965-932a-45fa-9ad7-608779e0bf25","Type":"ContainerDied","Data":"d649d5c78c3e315b081cccde8ca3213acc719cd04105e05ff938a5d101283db7"} Mar 11 09:17:00 crc kubenswrapper[4840]: I0311 09:17:00.625634 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d649d5c78c3e315b081cccde8ca3213acc719cd04105e05ff938a5d101283db7" Mar 11 09:17:00 crc kubenswrapper[4840]: I0311 09:17:00.625724 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-c443-account-create-update-r8t8k" Mar 11 09:17:00 crc kubenswrapper[4840]: I0311 09:17:00.628834 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-9c9b9" event={"ID":"e2c581ad-e4bb-40c7-aa81-8937e2aab87b","Type":"ContainerDied","Data":"e37a7b6e4c5c8187dd8cc8bdf9f1f6808bdb342b36bfcd3f5d8ce87ccfbf0335"} Mar 11 09:17:00 crc kubenswrapper[4840]: I0311 09:17:00.628873 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e37a7b6e4c5c8187dd8cc8bdf9f1f6808bdb342b36bfcd3f5d8ce87ccfbf0335" Mar 11 09:17:00 crc kubenswrapper[4840]: I0311 09:17:00.628945 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-9c9b9" Mar 11 09:17:00 crc kubenswrapper[4840]: I0311 09:17:00.631571 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-g2p7c-config-k6wz4" event={"ID":"0ae5d630-389b-46e9-b937-11cf0448b6b3","Type":"ContainerDied","Data":"6fe34fc5423e55b2db52ac28dce7b38305a67294b2105b743745811fa9c8189f"} Mar 11 09:17:00 crc kubenswrapper[4840]: I0311 09:17:00.631607 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6fe34fc5423e55b2db52ac28dce7b38305a67294b2105b743745811fa9c8189f" Mar 11 09:17:00 crc kubenswrapper[4840]: I0311 09:17:00.631664 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-g2p7c-config-k6wz4" Mar 11 09:17:00 crc kubenswrapper[4840]: I0311 09:17:00.637556 4840 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0ae5d630-389b-46e9-b937-11cf0448b6b3-var-run-ovn\") on node \"crc\" DevicePath \"\"" Mar 11 09:17:00 crc kubenswrapper[4840]: I0311 09:17:00.637585 4840 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/206be006-d33b-43d1-8fae-ecc49291d0a4-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 11 09:17:00 crc kubenswrapper[4840]: I0311 09:17:00.637601 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z7pgc\" (UniqueName: \"kubernetes.io/projected/206be006-d33b-43d1-8fae-ecc49291d0a4-kube-api-access-z7pgc\") on node \"crc\" DevicePath \"\"" Mar 11 09:17:00 crc kubenswrapper[4840]: I0311 09:17:00.637613 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b5r4c\" (UniqueName: \"kubernetes.io/projected/0ae5d630-389b-46e9-b937-11cf0448b6b3-kube-api-access-b5r4c\") on node \"crc\" DevicePath \"\"" Mar 11 09:17:00 crc kubenswrapper[4840]: I0311 09:17:00.637624 4840 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66831303-1e03-43d2-aabb-d98617b373a0-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 11 09:17:00 crc kubenswrapper[4840]: I0311 09:17:00.637683 4840 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/0ae5d630-389b-46e9-b937-11cf0448b6b3-additional-scripts\") on node \"crc\" DevicePath \"\"" Mar 11 09:17:00 crc kubenswrapper[4840]: I0311 09:17:00.637695 4840 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/16f6c965-932a-45fa-9ad7-608779e0bf25-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 11 
09:17:00 crc kubenswrapper[4840]: I0311 09:17:00.637706 4840 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0ae5d630-389b-46e9-b937-11cf0448b6b3-var-log-ovn\") on node \"crc\" DevicePath \"\"" Mar 11 09:17:00 crc kubenswrapper[4840]: I0311 09:17:00.637718 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sjzth\" (UniqueName: \"kubernetes.io/projected/2b934acf-2bd8-420b-803b-dd9c6e993fe7-kube-api-access-sjzth\") on node \"crc\" DevicePath \"\"" Mar 11 09:17:00 crc kubenswrapper[4840]: I0311 09:17:00.637729 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5tm29\" (UniqueName: \"kubernetes.io/projected/e2c581ad-e4bb-40c7-aa81-8937e2aab87b-kube-api-access-5tm29\") on node \"crc\" DevicePath \"\"" Mar 11 09:17:00 crc kubenswrapper[4840]: I0311 09:17:00.637740 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6w665\" (UniqueName: \"kubernetes.io/projected/44b4b840-62df-4871-b50d-30b3a8123389-kube-api-access-6w665\") on node \"crc\" DevicePath \"\"" Mar 11 09:17:00 crc kubenswrapper[4840]: I0311 09:17:00.637776 4840 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0ae5d630-389b-46e9-b937-11cf0448b6b3-scripts\") on node \"crc\" DevicePath \"\"" Mar 11 09:17:00 crc kubenswrapper[4840]: I0311 09:17:00.637788 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xkxzw\" (UniqueName: \"kubernetes.io/projected/66831303-1e03-43d2-aabb-d98617b373a0-kube-api-access-xkxzw\") on node \"crc\" DevicePath \"\"" Mar 11 09:17:00 crc kubenswrapper[4840]: I0311 09:17:00.637801 4840 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2c581ad-e4bb-40c7-aa81-8937e2aab87b-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 11 09:17:00 crc kubenswrapper[4840]: I0311 09:17:00.637814 4840 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vv5j2\" (UniqueName: \"kubernetes.io/projected/16f6c965-932a-45fa-9ad7-608779e0bf25-kube-api-access-vv5j2\") on node \"crc\" DevicePath \"\"" Mar 11 09:17:00 crc kubenswrapper[4840]: I0311 09:17:00.637826 4840 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0ae5d630-389b-46e9-b937-11cf0448b6b3-var-run\") on node \"crc\" DevicePath \"\"" Mar 11 09:17:01 crc kubenswrapper[4840]: I0311 09:17:01.542936 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-g2p7c-config-k6wz4"] Mar 11 09:17:01 crc kubenswrapper[4840]: I0311 09:17:01.577396 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-g2p7c-config-k6wz4"] Mar 11 09:17:01 crc kubenswrapper[4840]: I0311 09:17:01.684528 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-g2p7c" Mar 11 09:17:01 crc kubenswrapper[4840]: I0311 09:17:01.812857 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-g2p7c-config-fjdgt"] Mar 11 09:17:01 crc kubenswrapper[4840]: E0311 09:17:01.813700 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="206be006-d33b-43d1-8fae-ecc49291d0a4" containerName="mariadb-database-create" Mar 11 09:17:01 crc kubenswrapper[4840]: I0311 09:17:01.813817 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="206be006-d33b-43d1-8fae-ecc49291d0a4" containerName="mariadb-database-create" Mar 11 09:17:01 crc kubenswrapper[4840]: E0311 09:17:01.813928 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44b4b840-62df-4871-b50d-30b3a8123389" containerName="mariadb-account-create-update" Mar 11 09:17:01 crc kubenswrapper[4840]: I0311 09:17:01.814004 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="44b4b840-62df-4871-b50d-30b3a8123389" containerName="mariadb-account-create-update" Mar 11 09:17:01 
crc kubenswrapper[4840]: E0311 09:17:01.814073 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ae5d630-389b-46e9-b937-11cf0448b6b3" containerName="ovn-config" Mar 11 09:17:01 crc kubenswrapper[4840]: I0311 09:17:01.814133 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ae5d630-389b-46e9-b937-11cf0448b6b3" containerName="ovn-config" Mar 11 09:17:01 crc kubenswrapper[4840]: E0311 09:17:01.814208 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2c581ad-e4bb-40c7-aa81-8937e2aab87b" containerName="mariadb-database-create" Mar 11 09:17:01 crc kubenswrapper[4840]: I0311 09:17:01.814271 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2c581ad-e4bb-40c7-aa81-8937e2aab87b" containerName="mariadb-database-create" Mar 11 09:17:01 crc kubenswrapper[4840]: E0311 09:17:01.814344 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="757b3a3d-0655-4517-a752-b944899642c9" containerName="swift-ring-rebalance" Mar 11 09:17:01 crc kubenswrapper[4840]: I0311 09:17:01.814399 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="757b3a3d-0655-4517-a752-b944899642c9" containerName="swift-ring-rebalance" Mar 11 09:17:01 crc kubenswrapper[4840]: E0311 09:17:01.814513 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f3b6268-b59e-449e-a7d4-ecb9e26e1f39" containerName="mariadb-database-create" Mar 11 09:17:01 crc kubenswrapper[4840]: I0311 09:17:01.814591 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f3b6268-b59e-449e-a7d4-ecb9e26e1f39" containerName="mariadb-database-create" Mar 11 09:17:01 crc kubenswrapper[4840]: E0311 09:17:01.814656 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b934acf-2bd8-420b-803b-dd9c6e993fe7" containerName="mariadb-account-create-update" Mar 11 09:17:01 crc kubenswrapper[4840]: I0311 09:17:01.814708 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b934acf-2bd8-420b-803b-dd9c6e993fe7" 
containerName="mariadb-account-create-update" Mar 11 09:17:01 crc kubenswrapper[4840]: E0311 09:17:01.814770 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66831303-1e03-43d2-aabb-d98617b373a0" containerName="mariadb-account-create-update" Mar 11 09:17:01 crc kubenswrapper[4840]: I0311 09:17:01.814827 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="66831303-1e03-43d2-aabb-d98617b373a0" containerName="mariadb-account-create-update" Mar 11 09:17:01 crc kubenswrapper[4840]: E0311 09:17:01.814883 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16f6c965-932a-45fa-9ad7-608779e0bf25" containerName="mariadb-account-create-update" Mar 11 09:17:01 crc kubenswrapper[4840]: I0311 09:17:01.814936 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="16f6c965-932a-45fa-9ad7-608779e0bf25" containerName="mariadb-account-create-update" Mar 11 09:17:01 crc kubenswrapper[4840]: I0311 09:17:01.815230 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ae5d630-389b-46e9-b937-11cf0448b6b3" containerName="ovn-config" Mar 11 09:17:01 crc kubenswrapper[4840]: I0311 09:17:01.815346 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="206be006-d33b-43d1-8fae-ecc49291d0a4" containerName="mariadb-database-create" Mar 11 09:17:01 crc kubenswrapper[4840]: I0311 09:17:01.815412 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b934acf-2bd8-420b-803b-dd9c6e993fe7" containerName="mariadb-account-create-update" Mar 11 09:17:01 crc kubenswrapper[4840]: I0311 09:17:01.815505 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="66831303-1e03-43d2-aabb-d98617b373a0" containerName="mariadb-account-create-update" Mar 11 09:17:01 crc kubenswrapper[4840]: I0311 09:17:01.815567 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="757b3a3d-0655-4517-a752-b944899642c9" containerName="swift-ring-rebalance" Mar 11 09:17:01 crc kubenswrapper[4840]: I0311 
09:17:01.815640 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="16f6c965-932a-45fa-9ad7-608779e0bf25" containerName="mariadb-account-create-update" Mar 11 09:17:01 crc kubenswrapper[4840]: I0311 09:17:01.815720 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2c581ad-e4bb-40c7-aa81-8937e2aab87b" containerName="mariadb-database-create" Mar 11 09:17:01 crc kubenswrapper[4840]: I0311 09:17:01.815785 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f3b6268-b59e-449e-a7d4-ecb9e26e1f39" containerName="mariadb-database-create" Mar 11 09:17:01 crc kubenswrapper[4840]: I0311 09:17:01.815845 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="44b4b840-62df-4871-b50d-30b3a8123389" containerName="mariadb-account-create-update" Mar 11 09:17:01 crc kubenswrapper[4840]: I0311 09:17:01.816601 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-g2p7c-config-fjdgt" Mar 11 09:17:01 crc kubenswrapper[4840]: I0311 09:17:01.824654 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Mar 11 09:17:01 crc kubenswrapper[4840]: I0311 09:17:01.831191 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-g2p7c-config-fjdgt"] Mar 11 09:17:01 crc kubenswrapper[4840]: I0311 09:17:01.965852 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/662bb0cd-c6d7-40bb-a7cc-d3a4c1ab22db-var-log-ovn\") pod \"ovn-controller-g2p7c-config-fjdgt\" (UID: \"662bb0cd-c6d7-40bb-a7cc-d3a4c1ab22db\") " pod="openstack/ovn-controller-g2p7c-config-fjdgt" Mar 11 09:17:01 crc kubenswrapper[4840]: I0311 09:17:01.965949 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/662bb0cd-c6d7-40bb-a7cc-d3a4c1ab22db-scripts\") pod \"ovn-controller-g2p7c-config-fjdgt\" (UID: \"662bb0cd-c6d7-40bb-a7cc-d3a4c1ab22db\") " pod="openstack/ovn-controller-g2p7c-config-fjdgt" Mar 11 09:17:01 crc kubenswrapper[4840]: I0311 09:17:01.966056 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/662bb0cd-c6d7-40bb-a7cc-d3a4c1ab22db-additional-scripts\") pod \"ovn-controller-g2p7c-config-fjdgt\" (UID: \"662bb0cd-c6d7-40bb-a7cc-d3a4c1ab22db\") " pod="openstack/ovn-controller-g2p7c-config-fjdgt" Mar 11 09:17:01 crc kubenswrapper[4840]: I0311 09:17:01.966122 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqx8s\" (UniqueName: \"kubernetes.io/projected/662bb0cd-c6d7-40bb-a7cc-d3a4c1ab22db-kube-api-access-wqx8s\") pod \"ovn-controller-g2p7c-config-fjdgt\" (UID: \"662bb0cd-c6d7-40bb-a7cc-d3a4c1ab22db\") " pod="openstack/ovn-controller-g2p7c-config-fjdgt" Mar 11 09:17:01 crc kubenswrapper[4840]: I0311 09:17:01.966150 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/662bb0cd-c6d7-40bb-a7cc-d3a4c1ab22db-var-run-ovn\") pod \"ovn-controller-g2p7c-config-fjdgt\" (UID: \"662bb0cd-c6d7-40bb-a7cc-d3a4c1ab22db\") " pod="openstack/ovn-controller-g2p7c-config-fjdgt" Mar 11 09:17:01 crc kubenswrapper[4840]: I0311 09:17:01.966263 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/662bb0cd-c6d7-40bb-a7cc-d3a4c1ab22db-var-run\") pod \"ovn-controller-g2p7c-config-fjdgt\" (UID: \"662bb0cd-c6d7-40bb-a7cc-d3a4c1ab22db\") " pod="openstack/ovn-controller-g2p7c-config-fjdgt" Mar 11 09:17:02 crc kubenswrapper[4840]: I0311 09:17:02.067347 4840 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/662bb0cd-c6d7-40bb-a7cc-d3a4c1ab22db-additional-scripts\") pod \"ovn-controller-g2p7c-config-fjdgt\" (UID: \"662bb0cd-c6d7-40bb-a7cc-d3a4c1ab22db\") " pod="openstack/ovn-controller-g2p7c-config-fjdgt" Mar 11 09:17:02 crc kubenswrapper[4840]: I0311 09:17:02.067748 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqx8s\" (UniqueName: \"kubernetes.io/projected/662bb0cd-c6d7-40bb-a7cc-d3a4c1ab22db-kube-api-access-wqx8s\") pod \"ovn-controller-g2p7c-config-fjdgt\" (UID: \"662bb0cd-c6d7-40bb-a7cc-d3a4c1ab22db\") " pod="openstack/ovn-controller-g2p7c-config-fjdgt" Mar 11 09:17:02 crc kubenswrapper[4840]: I0311 09:17:02.067772 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/662bb0cd-c6d7-40bb-a7cc-d3a4c1ab22db-var-run-ovn\") pod \"ovn-controller-g2p7c-config-fjdgt\" (UID: \"662bb0cd-c6d7-40bb-a7cc-d3a4c1ab22db\") " pod="openstack/ovn-controller-g2p7c-config-fjdgt" Mar 11 09:17:02 crc kubenswrapper[4840]: I0311 09:17:02.067804 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/662bb0cd-c6d7-40bb-a7cc-d3a4c1ab22db-var-run\") pod \"ovn-controller-g2p7c-config-fjdgt\" (UID: \"662bb0cd-c6d7-40bb-a7cc-d3a4c1ab22db\") " pod="openstack/ovn-controller-g2p7c-config-fjdgt" Mar 11 09:17:02 crc kubenswrapper[4840]: I0311 09:17:02.067855 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/662bb0cd-c6d7-40bb-a7cc-d3a4c1ab22db-var-log-ovn\") pod \"ovn-controller-g2p7c-config-fjdgt\" (UID: \"662bb0cd-c6d7-40bb-a7cc-d3a4c1ab22db\") " pod="openstack/ovn-controller-g2p7c-config-fjdgt" Mar 11 09:17:02 crc kubenswrapper[4840]: I0311 09:17:02.067882 4840 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/662bb0cd-c6d7-40bb-a7cc-d3a4c1ab22db-scripts\") pod \"ovn-controller-g2p7c-config-fjdgt\" (UID: \"662bb0cd-c6d7-40bb-a7cc-d3a4c1ab22db\") " pod="openstack/ovn-controller-g2p7c-config-fjdgt" Mar 11 09:17:02 crc kubenswrapper[4840]: I0311 09:17:02.068161 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/662bb0cd-c6d7-40bb-a7cc-d3a4c1ab22db-additional-scripts\") pod \"ovn-controller-g2p7c-config-fjdgt\" (UID: \"662bb0cd-c6d7-40bb-a7cc-d3a4c1ab22db\") " pod="openstack/ovn-controller-g2p7c-config-fjdgt" Mar 11 09:17:02 crc kubenswrapper[4840]: I0311 09:17:02.068180 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/662bb0cd-c6d7-40bb-a7cc-d3a4c1ab22db-var-run-ovn\") pod \"ovn-controller-g2p7c-config-fjdgt\" (UID: \"662bb0cd-c6d7-40bb-a7cc-d3a4c1ab22db\") " pod="openstack/ovn-controller-g2p7c-config-fjdgt" Mar 11 09:17:02 crc kubenswrapper[4840]: I0311 09:17:02.068209 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/662bb0cd-c6d7-40bb-a7cc-d3a4c1ab22db-var-run\") pod \"ovn-controller-g2p7c-config-fjdgt\" (UID: \"662bb0cd-c6d7-40bb-a7cc-d3a4c1ab22db\") " pod="openstack/ovn-controller-g2p7c-config-fjdgt" Mar 11 09:17:02 crc kubenswrapper[4840]: I0311 09:17:02.069743 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/662bb0cd-c6d7-40bb-a7cc-d3a4c1ab22db-scripts\") pod \"ovn-controller-g2p7c-config-fjdgt\" (UID: \"662bb0cd-c6d7-40bb-a7cc-d3a4c1ab22db\") " pod="openstack/ovn-controller-g2p7c-config-fjdgt" Mar 11 09:17:02 crc kubenswrapper[4840]: I0311 09:17:02.071540 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: 
\"kubernetes.io/host-path/662bb0cd-c6d7-40bb-a7cc-d3a4c1ab22db-var-log-ovn\") pod \"ovn-controller-g2p7c-config-fjdgt\" (UID: \"662bb0cd-c6d7-40bb-a7cc-d3a4c1ab22db\") " pod="openstack/ovn-controller-g2p7c-config-fjdgt" Mar 11 09:17:02 crc kubenswrapper[4840]: I0311 09:17:02.072844 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ae5d630-389b-46e9-b937-11cf0448b6b3" path="/var/lib/kubelet/pods/0ae5d630-389b-46e9-b937-11cf0448b6b3/volumes" Mar 11 09:17:02 crc kubenswrapper[4840]: I0311 09:17:02.095436 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqx8s\" (UniqueName: \"kubernetes.io/projected/662bb0cd-c6d7-40bb-a7cc-d3a4c1ab22db-kube-api-access-wqx8s\") pod \"ovn-controller-g2p7c-config-fjdgt\" (UID: \"662bb0cd-c6d7-40bb-a7cc-d3a4c1ab22db\") " pod="openstack/ovn-controller-g2p7c-config-fjdgt" Mar 11 09:17:02 crc kubenswrapper[4840]: I0311 09:17:02.157322 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-g2p7c-config-fjdgt" Mar 11 09:17:02 crc kubenswrapper[4840]: I0311 09:17:02.170383 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f1c7d7f4-dc60-4703-b6c3-6cd626db11af-etc-swift\") pod \"swift-storage-0\" (UID: \"f1c7d7f4-dc60-4703-b6c3-6cd626db11af\") " pod="openstack/swift-storage-0" Mar 11 09:17:02 crc kubenswrapper[4840]: I0311 09:17:02.185730 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f1c7d7f4-dc60-4703-b6c3-6cd626db11af-etc-swift\") pod \"swift-storage-0\" (UID: \"f1c7d7f4-dc60-4703-b6c3-6cd626db11af\") " pod="openstack/swift-storage-0" Mar 11 09:17:02 crc kubenswrapper[4840]: I0311 09:17:02.278524 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Mar 11 09:17:02 crc kubenswrapper[4840]: I0311 09:17:02.405346 4840 scope.go:117] "RemoveContainer" containerID="d168afa9708d7e9797f8abbfd70ff9cf43042da62500a40e02f6c32967bf2557" Mar 11 09:17:05 crc kubenswrapper[4840]: I0311 09:17:05.127954 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-jzpgq"] Mar 11 09:17:05 crc kubenswrapper[4840]: I0311 09:17:05.137202 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-jzpgq"] Mar 11 09:17:06 crc kubenswrapper[4840]: I0311 09:17:05.703451 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-qb59h" event={"ID":"462d670d-c6b2-4c12-9d5c-183069b04200","Type":"ContainerStarted","Data":"706a39957f553a13cf274703cc198be45700e0b2ee1f59e77e71aaa0883c6b97"} Mar 11 09:17:06 crc kubenswrapper[4840]: I0311 09:17:05.760127 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-qb59h" podStartSLOduration=3.904970499 podStartE2EDuration="11.760099392s" podCreationTimestamp="2026-03-11 09:16:54 +0000 UTC" firstStartedPulling="2026-03-11 09:16:57.157572002 +0000 UTC m=+1215.823241817" lastFinishedPulling="2026-03-11 09:17:05.012700895 +0000 UTC m=+1223.678370710" observedRunningTime="2026-03-11 09:17:05.742967631 +0000 UTC m=+1224.408637456" watchObservedRunningTime="2026-03-11 09:17:05.760099392 +0000 UTC m=+1224.425769207" Mar 11 09:17:06 crc kubenswrapper[4840]: I0311 09:17:06.070094 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44b4b840-62df-4871-b50d-30b3a8123389" path="/var/lib/kubelet/pods/44b4b840-62df-4871-b50d-30b3a8123389/volumes" Mar 11 09:17:06 crc kubenswrapper[4840]: I0311 09:17:06.786431 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Mar 11 09:17:06 crc kubenswrapper[4840]: I0311 09:17:06.968299 4840 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openstack/ovn-controller-g2p7c-config-fjdgt"] Mar 11 09:17:07 crc kubenswrapper[4840]: I0311 09:17:07.733296 4840 generic.go:334] "Generic (PLEG): container finished" podID="662bb0cd-c6d7-40bb-a7cc-d3a4c1ab22db" containerID="3df4f12dfe2d857d0c6efd1fb792b32e835e818cf6e3f66e38349596457c709f" exitCode=0 Mar 11 09:17:07 crc kubenswrapper[4840]: I0311 09:17:07.733443 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-g2p7c-config-fjdgt" event={"ID":"662bb0cd-c6d7-40bb-a7cc-d3a4c1ab22db","Type":"ContainerDied","Data":"3df4f12dfe2d857d0c6efd1fb792b32e835e818cf6e3f66e38349596457c709f"} Mar 11 09:17:07 crc kubenswrapper[4840]: I0311 09:17:07.733951 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-g2p7c-config-fjdgt" event={"ID":"662bb0cd-c6d7-40bb-a7cc-d3a4c1ab22db","Type":"ContainerStarted","Data":"dae154f784a60bc9f7ecc54072051fcf669579b123aac36871b83da51f6bf9c3"} Mar 11 09:17:07 crc kubenswrapper[4840]: I0311 09:17:07.736944 4840 generic.go:334] "Generic (PLEG): container finished" podID="51c6e6d6-9fb9-4042-a8c7-d14ad4067afb" containerID="2f7ff4f6ca7eaa7a90b03a36ff78a6df0e5070796f410f01246f899e6c6e9f45" exitCode=0 Mar 11 09:17:07 crc kubenswrapper[4840]: I0311 09:17:07.737035 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-d6pcw" event={"ID":"51c6e6d6-9fb9-4042-a8c7-d14ad4067afb","Type":"ContainerDied","Data":"2f7ff4f6ca7eaa7a90b03a36ff78a6df0e5070796f410f01246f899e6c6e9f45"} Mar 11 09:17:07 crc kubenswrapper[4840]: I0311 09:17:07.741239 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f1c7d7f4-dc60-4703-b6c3-6cd626db11af","Type":"ContainerStarted","Data":"38a9c76d272ff241fea464bbc6ac3e0f3186e63c3ddfd314d564eeb1ec00ae9d"} Mar 11 09:17:08 crc kubenswrapper[4840]: I0311 09:17:08.753485 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"f1c7d7f4-dc60-4703-b6c3-6cd626db11af","Type":"ContainerStarted","Data":"d57ba4418c21d87822207480d1742dfe663f600ae09365578d4cc8cf912a7fa3"} Mar 11 09:17:08 crc kubenswrapper[4840]: I0311 09:17:08.754788 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f1c7d7f4-dc60-4703-b6c3-6cd626db11af","Type":"ContainerStarted","Data":"804f1935b7ae1994c956dc341f3afb51d2d0d59c2f35239a216e61bc6f1d79d2"} Mar 11 09:17:08 crc kubenswrapper[4840]: I0311 09:17:08.754809 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f1c7d7f4-dc60-4703-b6c3-6cd626db11af","Type":"ContainerStarted","Data":"7c83bfc113dd757cc74c4935f44d862430d391d2b34e5c7877464152ad5f1bdd"} Mar 11 09:17:08 crc kubenswrapper[4840]: I0311 09:17:08.754823 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f1c7d7f4-dc60-4703-b6c3-6cd626db11af","Type":"ContainerStarted","Data":"1d4b8fe0d33d601094bd4c0c808ede9155728eb0e8cb7eb8ed2a56feaa32a3ad"} Mar 11 09:17:08 crc kubenswrapper[4840]: I0311 09:17:08.756112 4840 generic.go:334] "Generic (PLEG): container finished" podID="462d670d-c6b2-4c12-9d5c-183069b04200" containerID="706a39957f553a13cf274703cc198be45700e0b2ee1f59e77e71aaa0883c6b97" exitCode=0 Mar 11 09:17:08 crc kubenswrapper[4840]: I0311 09:17:08.756231 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-qb59h" event={"ID":"462d670d-c6b2-4c12-9d5c-183069b04200","Type":"ContainerDied","Data":"706a39957f553a13cf274703cc198be45700e0b2ee1f59e77e71aaa0883c6b97"} Mar 11 09:17:09 crc kubenswrapper[4840]: I0311 09:17:09.171758 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-g2p7c-config-fjdgt" Mar 11 09:17:09 crc kubenswrapper[4840]: I0311 09:17:09.180660 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-d6pcw" Mar 11 09:17:09 crc kubenswrapper[4840]: I0311 09:17:09.318226 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/662bb0cd-c6d7-40bb-a7cc-d3a4c1ab22db-var-run-ovn\") pod \"662bb0cd-c6d7-40bb-a7cc-d3a4c1ab22db\" (UID: \"662bb0cd-c6d7-40bb-a7cc-d3a4c1ab22db\") " Mar 11 09:17:09 crc kubenswrapper[4840]: I0311 09:17:09.318330 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/662bb0cd-c6d7-40bb-a7cc-d3a4c1ab22db-scripts\") pod \"662bb0cd-c6d7-40bb-a7cc-d3a4c1ab22db\" (UID: \"662bb0cd-c6d7-40bb-a7cc-d3a4c1ab22db\") " Mar 11 09:17:09 crc kubenswrapper[4840]: I0311 09:17:09.318354 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/662bb0cd-c6d7-40bb-a7cc-d3a4c1ab22db-var-run\") pod \"662bb0cd-c6d7-40bb-a7cc-d3a4c1ab22db\" (UID: \"662bb0cd-c6d7-40bb-a7cc-d3a4c1ab22db\") " Mar 11 09:17:09 crc kubenswrapper[4840]: I0311 09:17:09.318408 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nrstq\" (UniqueName: \"kubernetes.io/projected/51c6e6d6-9fb9-4042-a8c7-d14ad4067afb-kube-api-access-nrstq\") pod \"51c6e6d6-9fb9-4042-a8c7-d14ad4067afb\" (UID: \"51c6e6d6-9fb9-4042-a8c7-d14ad4067afb\") " Mar 11 09:17:09 crc kubenswrapper[4840]: I0311 09:17:09.318457 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51c6e6d6-9fb9-4042-a8c7-d14ad4067afb-config-data\") pod \"51c6e6d6-9fb9-4042-a8c7-d14ad4067afb\" (UID: \"51c6e6d6-9fb9-4042-a8c7-d14ad4067afb\") " Mar 11 09:17:09 crc kubenswrapper[4840]: I0311 09:17:09.318565 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: 
\"kubernetes.io/configmap/662bb0cd-c6d7-40bb-a7cc-d3a4c1ab22db-additional-scripts\") pod \"662bb0cd-c6d7-40bb-a7cc-d3a4c1ab22db\" (UID: \"662bb0cd-c6d7-40bb-a7cc-d3a4c1ab22db\") " Mar 11 09:17:09 crc kubenswrapper[4840]: I0311 09:17:09.318607 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/662bb0cd-c6d7-40bb-a7cc-d3a4c1ab22db-var-log-ovn\") pod \"662bb0cd-c6d7-40bb-a7cc-d3a4c1ab22db\" (UID: \"662bb0cd-c6d7-40bb-a7cc-d3a4c1ab22db\") " Mar 11 09:17:09 crc kubenswrapper[4840]: I0311 09:17:09.318642 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/51c6e6d6-9fb9-4042-a8c7-d14ad4067afb-db-sync-config-data\") pod \"51c6e6d6-9fb9-4042-a8c7-d14ad4067afb\" (UID: \"51c6e6d6-9fb9-4042-a8c7-d14ad4067afb\") " Mar 11 09:17:09 crc kubenswrapper[4840]: I0311 09:17:09.318702 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wqx8s\" (UniqueName: \"kubernetes.io/projected/662bb0cd-c6d7-40bb-a7cc-d3a4c1ab22db-kube-api-access-wqx8s\") pod \"662bb0cd-c6d7-40bb-a7cc-d3a4c1ab22db\" (UID: \"662bb0cd-c6d7-40bb-a7cc-d3a4c1ab22db\") " Mar 11 09:17:09 crc kubenswrapper[4840]: I0311 09:17:09.318734 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51c6e6d6-9fb9-4042-a8c7-d14ad4067afb-combined-ca-bundle\") pod \"51c6e6d6-9fb9-4042-a8c7-d14ad4067afb\" (UID: \"51c6e6d6-9fb9-4042-a8c7-d14ad4067afb\") " Mar 11 09:17:09 crc kubenswrapper[4840]: I0311 09:17:09.318713 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/662bb0cd-c6d7-40bb-a7cc-d3a4c1ab22db-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "662bb0cd-c6d7-40bb-a7cc-d3a4c1ab22db" (UID: "662bb0cd-c6d7-40bb-a7cc-d3a4c1ab22db"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 11 09:17:09 crc kubenswrapper[4840]: I0311 09:17:09.318757 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/662bb0cd-c6d7-40bb-a7cc-d3a4c1ab22db-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "662bb0cd-c6d7-40bb-a7cc-d3a4c1ab22db" (UID: "662bb0cd-c6d7-40bb-a7cc-d3a4c1ab22db"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 11 09:17:09 crc kubenswrapper[4840]: I0311 09:17:09.319528 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/662bb0cd-c6d7-40bb-a7cc-d3a4c1ab22db-var-run" (OuterVolumeSpecName: "var-run") pod "662bb0cd-c6d7-40bb-a7cc-d3a4c1ab22db" (UID: "662bb0cd-c6d7-40bb-a7cc-d3a4c1ab22db"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 11 09:17:09 crc kubenswrapper[4840]: I0311 09:17:09.319581 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/662bb0cd-c6d7-40bb-a7cc-d3a4c1ab22db-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "662bb0cd-c6d7-40bb-a7cc-d3a4c1ab22db" (UID: "662bb0cd-c6d7-40bb-a7cc-d3a4c1ab22db"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:17:09 crc kubenswrapper[4840]: I0311 09:17:09.319822 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/662bb0cd-c6d7-40bb-a7cc-d3a4c1ab22db-scripts" (OuterVolumeSpecName: "scripts") pod "662bb0cd-c6d7-40bb-a7cc-d3a4c1ab22db" (UID: "662bb0cd-c6d7-40bb-a7cc-d3a4c1ab22db"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:17:09 crc kubenswrapper[4840]: I0311 09:17:09.324540 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/662bb0cd-c6d7-40bb-a7cc-d3a4c1ab22db-kube-api-access-wqx8s" (OuterVolumeSpecName: "kube-api-access-wqx8s") pod "662bb0cd-c6d7-40bb-a7cc-d3a4c1ab22db" (UID: "662bb0cd-c6d7-40bb-a7cc-d3a4c1ab22db"). InnerVolumeSpecName "kube-api-access-wqx8s". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:17:09 crc kubenswrapper[4840]: I0311 09:17:09.324809 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51c6e6d6-9fb9-4042-a8c7-d14ad4067afb-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "51c6e6d6-9fb9-4042-a8c7-d14ad4067afb" (UID: "51c6e6d6-9fb9-4042-a8c7-d14ad4067afb"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:17:09 crc kubenswrapper[4840]: I0311 09:17:09.325084 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51c6e6d6-9fb9-4042-a8c7-d14ad4067afb-kube-api-access-nrstq" (OuterVolumeSpecName: "kube-api-access-nrstq") pod "51c6e6d6-9fb9-4042-a8c7-d14ad4067afb" (UID: "51c6e6d6-9fb9-4042-a8c7-d14ad4067afb"). InnerVolumeSpecName "kube-api-access-nrstq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:17:09 crc kubenswrapper[4840]: I0311 09:17:09.363638 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51c6e6d6-9fb9-4042-a8c7-d14ad4067afb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "51c6e6d6-9fb9-4042-a8c7-d14ad4067afb" (UID: "51c6e6d6-9fb9-4042-a8c7-d14ad4067afb"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:17:09 crc kubenswrapper[4840]: I0311 09:17:09.404629 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51c6e6d6-9fb9-4042-a8c7-d14ad4067afb-config-data" (OuterVolumeSpecName: "config-data") pod "51c6e6d6-9fb9-4042-a8c7-d14ad4067afb" (UID: "51c6e6d6-9fb9-4042-a8c7-d14ad4067afb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:17:09 crc kubenswrapper[4840]: I0311 09:17:09.421195 4840 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/662bb0cd-c6d7-40bb-a7cc-d3a4c1ab22db-var-run-ovn\") on node \"crc\" DevicePath \"\"" Mar 11 09:17:09 crc kubenswrapper[4840]: I0311 09:17:09.421242 4840 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/662bb0cd-c6d7-40bb-a7cc-d3a4c1ab22db-scripts\") on node \"crc\" DevicePath \"\"" Mar 11 09:17:09 crc kubenswrapper[4840]: I0311 09:17:09.421256 4840 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/662bb0cd-c6d7-40bb-a7cc-d3a4c1ab22db-var-run\") on node \"crc\" DevicePath \"\"" Mar 11 09:17:09 crc kubenswrapper[4840]: I0311 09:17:09.421275 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nrstq\" (UniqueName: \"kubernetes.io/projected/51c6e6d6-9fb9-4042-a8c7-d14ad4067afb-kube-api-access-nrstq\") on node \"crc\" DevicePath \"\"" Mar 11 09:17:09 crc kubenswrapper[4840]: I0311 09:17:09.421290 4840 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51c6e6d6-9fb9-4042-a8c7-d14ad4067afb-config-data\") on node \"crc\" DevicePath \"\"" Mar 11 09:17:09 crc kubenswrapper[4840]: I0311 09:17:09.421303 4840 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: 
\"kubernetes.io/configmap/662bb0cd-c6d7-40bb-a7cc-d3a4c1ab22db-additional-scripts\") on node \"crc\" DevicePath \"\"" Mar 11 09:17:09 crc kubenswrapper[4840]: I0311 09:17:09.421316 4840 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/662bb0cd-c6d7-40bb-a7cc-d3a4c1ab22db-var-log-ovn\") on node \"crc\" DevicePath \"\"" Mar 11 09:17:09 crc kubenswrapper[4840]: I0311 09:17:09.421328 4840 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/51c6e6d6-9fb9-4042-a8c7-d14ad4067afb-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Mar 11 09:17:09 crc kubenswrapper[4840]: I0311 09:17:09.421340 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wqx8s\" (UniqueName: \"kubernetes.io/projected/662bb0cd-c6d7-40bb-a7cc-d3a4c1ab22db-kube-api-access-wqx8s\") on node \"crc\" DevicePath \"\"" Mar 11 09:17:09 crc kubenswrapper[4840]: I0311 09:17:09.421352 4840 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51c6e6d6-9fb9-4042-a8c7-d14ad4067afb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 09:17:09 crc kubenswrapper[4840]: I0311 09:17:09.768744 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-g2p7c-config-fjdgt" event={"ID":"662bb0cd-c6d7-40bb-a7cc-d3a4c1ab22db","Type":"ContainerDied","Data":"dae154f784a60bc9f7ecc54072051fcf669579b123aac36871b83da51f6bf9c3"} Mar 11 09:17:09 crc kubenswrapper[4840]: I0311 09:17:09.768790 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dae154f784a60bc9f7ecc54072051fcf669579b123aac36871b83da51f6bf9c3" Mar 11 09:17:09 crc kubenswrapper[4840]: I0311 09:17:09.768858 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-g2p7c-config-fjdgt" Mar 11 09:17:09 crc kubenswrapper[4840]: I0311 09:17:09.774621 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-d6pcw" Mar 11 09:17:09 crc kubenswrapper[4840]: I0311 09:17:09.778242 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-d6pcw" event={"ID":"51c6e6d6-9fb9-4042-a8c7-d14ad4067afb","Type":"ContainerDied","Data":"5c4b641fc01e6d28be1daf3106a41ad8f623151d9075310e1fc70d1d270217c0"} Mar 11 09:17:09 crc kubenswrapper[4840]: I0311 09:17:09.778292 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c4b641fc01e6d28be1daf3106a41ad8f623151d9075310e1fc70d1d270217c0" Mar 11 09:17:10 crc kubenswrapper[4840]: I0311 09:17:10.154805 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-8dnn2"] Mar 11 09:17:10 crc kubenswrapper[4840]: E0311 09:17:10.155388 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="662bb0cd-c6d7-40bb-a7cc-d3a4c1ab22db" containerName="ovn-config" Mar 11 09:17:10 crc kubenswrapper[4840]: I0311 09:17:10.155405 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="662bb0cd-c6d7-40bb-a7cc-d3a4c1ab22db" containerName="ovn-config" Mar 11 09:17:10 crc kubenswrapper[4840]: E0311 09:17:10.155567 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51c6e6d6-9fb9-4042-a8c7-d14ad4067afb" containerName="glance-db-sync" Mar 11 09:17:10 crc kubenswrapper[4840]: I0311 09:17:10.155577 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="51c6e6d6-9fb9-4042-a8c7-d14ad4067afb" containerName="glance-db-sync" Mar 11 09:17:10 crc kubenswrapper[4840]: I0311 09:17:10.157343 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="51c6e6d6-9fb9-4042-a8c7-d14ad4067afb" containerName="glance-db-sync" Mar 11 09:17:10 crc kubenswrapper[4840]: I0311 09:17:10.157393 4840 
memory_manager.go:354] "RemoveStaleState removing state" podUID="662bb0cd-c6d7-40bb-a7cc-d3a4c1ab22db" containerName="ovn-config" Mar 11 09:17:10 crc kubenswrapper[4840]: I0311 09:17:10.158279 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-8dnn2" Mar 11 09:17:10 crc kubenswrapper[4840]: I0311 09:17:10.161594 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Mar 11 09:17:10 crc kubenswrapper[4840]: I0311 09:17:10.188041 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-8dnn2"] Mar 11 09:17:10 crc kubenswrapper[4840]: I0311 09:17:10.231279 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-qb59h" Mar 11 09:17:10 crc kubenswrapper[4840]: I0311 09:17:10.236778 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0aed3b47-186d-4af2-b83e-c04526a43095-operator-scripts\") pod \"root-account-create-update-8dnn2\" (UID: \"0aed3b47-186d-4af2-b83e-c04526a43095\") " pod="openstack/root-account-create-update-8dnn2" Mar 11 09:17:10 crc kubenswrapper[4840]: I0311 09:17:10.236857 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrbmv\" (UniqueName: \"kubernetes.io/projected/0aed3b47-186d-4af2-b83e-c04526a43095-kube-api-access-mrbmv\") pod \"root-account-create-update-8dnn2\" (UID: \"0aed3b47-186d-4af2-b83e-c04526a43095\") " pod="openstack/root-account-create-update-8dnn2" Mar 11 09:17:10 crc kubenswrapper[4840]: I0311 09:17:10.247707 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7f58d6bb6f-xrjk6"] Mar 11 09:17:10 crc kubenswrapper[4840]: E0311 09:17:10.254626 4840 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="462d670d-c6b2-4c12-9d5c-183069b04200" containerName="keystone-db-sync" Mar 11 09:17:10 crc kubenswrapper[4840]: I0311 09:17:10.254662 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="462d670d-c6b2-4c12-9d5c-183069b04200" containerName="keystone-db-sync" Mar 11 09:17:10 crc kubenswrapper[4840]: I0311 09:17:10.255014 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="462d670d-c6b2-4c12-9d5c-183069b04200" containerName="keystone-db-sync" Mar 11 09:17:10 crc kubenswrapper[4840]: I0311 09:17:10.255884 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f58d6bb6f-xrjk6" Mar 11 09:17:10 crc kubenswrapper[4840]: I0311 09:17:10.284234 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f58d6bb6f-xrjk6"] Mar 11 09:17:10 crc kubenswrapper[4840]: I0311 09:17:10.337756 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5dj4q\" (UniqueName: \"kubernetes.io/projected/462d670d-c6b2-4c12-9d5c-183069b04200-kube-api-access-5dj4q\") pod \"462d670d-c6b2-4c12-9d5c-183069b04200\" (UID: \"462d670d-c6b2-4c12-9d5c-183069b04200\") " Mar 11 09:17:10 crc kubenswrapper[4840]: I0311 09:17:10.337797 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/462d670d-c6b2-4c12-9d5c-183069b04200-combined-ca-bundle\") pod \"462d670d-c6b2-4c12-9d5c-183069b04200\" (UID: \"462d670d-c6b2-4c12-9d5c-183069b04200\") " Mar 11 09:17:10 crc kubenswrapper[4840]: I0311 09:17:10.337818 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/462d670d-c6b2-4c12-9d5c-183069b04200-config-data\") pod \"462d670d-c6b2-4c12-9d5c-183069b04200\" (UID: \"462d670d-c6b2-4c12-9d5c-183069b04200\") " Mar 11 09:17:10 crc kubenswrapper[4840]: I0311 09:17:10.337997 4840 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0aed3b47-186d-4af2-b83e-c04526a43095-operator-scripts\") pod \"root-account-create-update-8dnn2\" (UID: \"0aed3b47-186d-4af2-b83e-c04526a43095\") " pod="openstack/root-account-create-update-8dnn2" Mar 11 09:17:10 crc kubenswrapper[4840]: I0311 09:17:10.338049 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrbmv\" (UniqueName: \"kubernetes.io/projected/0aed3b47-186d-4af2-b83e-c04526a43095-kube-api-access-mrbmv\") pod \"root-account-create-update-8dnn2\" (UID: \"0aed3b47-186d-4af2-b83e-c04526a43095\") " pod="openstack/root-account-create-update-8dnn2" Mar 11 09:17:10 crc kubenswrapper[4840]: I0311 09:17:10.338083 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6d25eab0-c4c4-4227-aeaa-2cb67a78e8b1-ovsdbserver-sb\") pod \"dnsmasq-dns-7f58d6bb6f-xrjk6\" (UID: \"6d25eab0-c4c4-4227-aeaa-2cb67a78e8b1\") " pod="openstack/dnsmasq-dns-7f58d6bb6f-xrjk6" Mar 11 09:17:10 crc kubenswrapper[4840]: I0311 09:17:10.338112 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6d25eab0-c4c4-4227-aeaa-2cb67a78e8b1-ovsdbserver-nb\") pod \"dnsmasq-dns-7f58d6bb6f-xrjk6\" (UID: \"6d25eab0-c4c4-4227-aeaa-2cb67a78e8b1\") " pod="openstack/dnsmasq-dns-7f58d6bb6f-xrjk6" Mar 11 09:17:10 crc kubenswrapper[4840]: I0311 09:17:10.338139 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jm4zr\" (UniqueName: \"kubernetes.io/projected/6d25eab0-c4c4-4227-aeaa-2cb67a78e8b1-kube-api-access-jm4zr\") pod \"dnsmasq-dns-7f58d6bb6f-xrjk6\" (UID: \"6d25eab0-c4c4-4227-aeaa-2cb67a78e8b1\") " pod="openstack/dnsmasq-dns-7f58d6bb6f-xrjk6" Mar 11 09:17:10 crc kubenswrapper[4840]: 
I0311 09:17:10.338182 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6d25eab0-c4c4-4227-aeaa-2cb67a78e8b1-dns-svc\") pod \"dnsmasq-dns-7f58d6bb6f-xrjk6\" (UID: \"6d25eab0-c4c4-4227-aeaa-2cb67a78e8b1\") " pod="openstack/dnsmasq-dns-7f58d6bb6f-xrjk6" Mar 11 09:17:10 crc kubenswrapper[4840]: I0311 09:17:10.338205 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d25eab0-c4c4-4227-aeaa-2cb67a78e8b1-config\") pod \"dnsmasq-dns-7f58d6bb6f-xrjk6\" (UID: \"6d25eab0-c4c4-4227-aeaa-2cb67a78e8b1\") " pod="openstack/dnsmasq-dns-7f58d6bb6f-xrjk6" Mar 11 09:17:10 crc kubenswrapper[4840]: I0311 09:17:10.339903 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0aed3b47-186d-4af2-b83e-c04526a43095-operator-scripts\") pod \"root-account-create-update-8dnn2\" (UID: \"0aed3b47-186d-4af2-b83e-c04526a43095\") " pod="openstack/root-account-create-update-8dnn2" Mar 11 09:17:10 crc kubenswrapper[4840]: I0311 09:17:10.343436 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/462d670d-c6b2-4c12-9d5c-183069b04200-kube-api-access-5dj4q" (OuterVolumeSpecName: "kube-api-access-5dj4q") pod "462d670d-c6b2-4c12-9d5c-183069b04200" (UID: "462d670d-c6b2-4c12-9d5c-183069b04200"). InnerVolumeSpecName "kube-api-access-5dj4q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:17:10 crc kubenswrapper[4840]: I0311 09:17:10.360349 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-g2p7c-config-fjdgt"] Mar 11 09:17:10 crc kubenswrapper[4840]: I0311 09:17:10.378727 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrbmv\" (UniqueName: \"kubernetes.io/projected/0aed3b47-186d-4af2-b83e-c04526a43095-kube-api-access-mrbmv\") pod \"root-account-create-update-8dnn2\" (UID: \"0aed3b47-186d-4af2-b83e-c04526a43095\") " pod="openstack/root-account-create-update-8dnn2" Mar 11 09:17:10 crc kubenswrapper[4840]: I0311 09:17:10.381889 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-8dnn2" Mar 11 09:17:10 crc kubenswrapper[4840]: I0311 09:17:10.386245 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-g2p7c-config-fjdgt"] Mar 11 09:17:10 crc kubenswrapper[4840]: I0311 09:17:10.427975 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/462d670d-c6b2-4c12-9d5c-183069b04200-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "462d670d-c6b2-4c12-9d5c-183069b04200" (UID: "462d670d-c6b2-4c12-9d5c-183069b04200"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:17:10 crc kubenswrapper[4840]: I0311 09:17:10.440249 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6d25eab0-c4c4-4227-aeaa-2cb67a78e8b1-dns-svc\") pod \"dnsmasq-dns-7f58d6bb6f-xrjk6\" (UID: \"6d25eab0-c4c4-4227-aeaa-2cb67a78e8b1\") " pod="openstack/dnsmasq-dns-7f58d6bb6f-xrjk6" Mar 11 09:17:10 crc kubenswrapper[4840]: I0311 09:17:10.440316 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d25eab0-c4c4-4227-aeaa-2cb67a78e8b1-config\") pod \"dnsmasq-dns-7f58d6bb6f-xrjk6\" (UID: \"6d25eab0-c4c4-4227-aeaa-2cb67a78e8b1\") " pod="openstack/dnsmasq-dns-7f58d6bb6f-xrjk6" Mar 11 09:17:10 crc kubenswrapper[4840]: I0311 09:17:10.440490 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6d25eab0-c4c4-4227-aeaa-2cb67a78e8b1-ovsdbserver-sb\") pod \"dnsmasq-dns-7f58d6bb6f-xrjk6\" (UID: \"6d25eab0-c4c4-4227-aeaa-2cb67a78e8b1\") " pod="openstack/dnsmasq-dns-7f58d6bb6f-xrjk6" Mar 11 09:17:10 crc kubenswrapper[4840]: I0311 09:17:10.440536 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6d25eab0-c4c4-4227-aeaa-2cb67a78e8b1-ovsdbserver-nb\") pod \"dnsmasq-dns-7f58d6bb6f-xrjk6\" (UID: \"6d25eab0-c4c4-4227-aeaa-2cb67a78e8b1\") " pod="openstack/dnsmasq-dns-7f58d6bb6f-xrjk6" Mar 11 09:17:10 crc kubenswrapper[4840]: I0311 09:17:10.440592 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jm4zr\" (UniqueName: \"kubernetes.io/projected/6d25eab0-c4c4-4227-aeaa-2cb67a78e8b1-kube-api-access-jm4zr\") pod \"dnsmasq-dns-7f58d6bb6f-xrjk6\" (UID: \"6d25eab0-c4c4-4227-aeaa-2cb67a78e8b1\") " pod="openstack/dnsmasq-dns-7f58d6bb6f-xrjk6" Mar 11 09:17:10 
crc kubenswrapper[4840]: I0311 09:17:10.440679 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5dj4q\" (UniqueName: \"kubernetes.io/projected/462d670d-c6b2-4c12-9d5c-183069b04200-kube-api-access-5dj4q\") on node \"crc\" DevicePath \"\"" Mar 11 09:17:10 crc kubenswrapper[4840]: I0311 09:17:10.440697 4840 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/462d670d-c6b2-4c12-9d5c-183069b04200-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 09:17:10 crc kubenswrapper[4840]: I0311 09:17:10.441756 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d25eab0-c4c4-4227-aeaa-2cb67a78e8b1-config\") pod \"dnsmasq-dns-7f58d6bb6f-xrjk6\" (UID: \"6d25eab0-c4c4-4227-aeaa-2cb67a78e8b1\") " pod="openstack/dnsmasq-dns-7f58d6bb6f-xrjk6" Mar 11 09:17:10 crc kubenswrapper[4840]: I0311 09:17:10.442379 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6d25eab0-c4c4-4227-aeaa-2cb67a78e8b1-dns-svc\") pod \"dnsmasq-dns-7f58d6bb6f-xrjk6\" (UID: \"6d25eab0-c4c4-4227-aeaa-2cb67a78e8b1\") " pod="openstack/dnsmasq-dns-7f58d6bb6f-xrjk6" Mar 11 09:17:10 crc kubenswrapper[4840]: I0311 09:17:10.442791 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6d25eab0-c4c4-4227-aeaa-2cb67a78e8b1-ovsdbserver-sb\") pod \"dnsmasq-dns-7f58d6bb6f-xrjk6\" (UID: \"6d25eab0-c4c4-4227-aeaa-2cb67a78e8b1\") " pod="openstack/dnsmasq-dns-7f58d6bb6f-xrjk6" Mar 11 09:17:10 crc kubenswrapper[4840]: I0311 09:17:10.443274 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6d25eab0-c4c4-4227-aeaa-2cb67a78e8b1-ovsdbserver-nb\") pod \"dnsmasq-dns-7f58d6bb6f-xrjk6\" (UID: \"6d25eab0-c4c4-4227-aeaa-2cb67a78e8b1\") " 
pod="openstack/dnsmasq-dns-7f58d6bb6f-xrjk6" Mar 11 09:17:10 crc kubenswrapper[4840]: I0311 09:17:10.447591 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/462d670d-c6b2-4c12-9d5c-183069b04200-config-data" (OuterVolumeSpecName: "config-data") pod "462d670d-c6b2-4c12-9d5c-183069b04200" (UID: "462d670d-c6b2-4c12-9d5c-183069b04200"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:17:10 crc kubenswrapper[4840]: I0311 09:17:10.464311 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jm4zr\" (UniqueName: \"kubernetes.io/projected/6d25eab0-c4c4-4227-aeaa-2cb67a78e8b1-kube-api-access-jm4zr\") pod \"dnsmasq-dns-7f58d6bb6f-xrjk6\" (UID: \"6d25eab0-c4c4-4227-aeaa-2cb67a78e8b1\") " pod="openstack/dnsmasq-dns-7f58d6bb6f-xrjk6" Mar 11 09:17:10 crc kubenswrapper[4840]: I0311 09:17:10.542457 4840 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/462d670d-c6b2-4c12-9d5c-183069b04200-config-data\") on node \"crc\" DevicePath \"\"" Mar 11 09:17:10 crc kubenswrapper[4840]: I0311 09:17:10.679179 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f58d6bb6f-xrjk6" Mar 11 09:17:10 crc kubenswrapper[4840]: I0311 09:17:10.795426 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-qb59h" Mar 11 09:17:10 crc kubenswrapper[4840]: I0311 09:17:10.796710 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-qb59h" event={"ID":"462d670d-c6b2-4c12-9d5c-183069b04200","Type":"ContainerDied","Data":"afac74ae78d09a0387b687e7c3cffb9f15469133e34fb323971aaf05411411bf"} Mar 11 09:17:10 crc kubenswrapper[4840]: I0311 09:17:10.796746 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="afac74ae78d09a0387b687e7c3cffb9f15469133e34fb323971aaf05411411bf" Mar 11 09:17:10 crc kubenswrapper[4840]: I0311 09:17:10.821405 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f1c7d7f4-dc60-4703-b6c3-6cd626db11af","Type":"ContainerStarted","Data":"9b3746cc36c2137ab5787473b5942125bb1e7796da2df7d082d9e9313eaffba4"} Mar 11 09:17:10 crc kubenswrapper[4840]: I0311 09:17:10.821453 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f1c7d7f4-dc60-4703-b6c3-6cd626db11af","Type":"ContainerStarted","Data":"7ce263946b4448b3a0388e295bd5ae37d6a5148f4b13708343302829cff90b62"} Mar 11 09:17:10 crc kubenswrapper[4840]: I0311 09:17:10.920744 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-8dnn2"] Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.122541 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-f7cv9"] Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.123887 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-f7cv9" Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.126720 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.127043 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.127374 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.127518 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.127645 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-gtqwm" Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.143634 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f58d6bb6f-xrjk6"] Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.157864 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/30ca9c9c-f433-4ecd-8049-2c1d0f4f39ae-fernet-keys\") pod \"keystone-bootstrap-f7cv9\" (UID: \"30ca9c9c-f433-4ecd-8049-2c1d0f4f39ae\") " pod="openstack/keystone-bootstrap-f7cv9" Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.157938 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ht9v8\" (UniqueName: \"kubernetes.io/projected/30ca9c9c-f433-4ecd-8049-2c1d0f4f39ae-kube-api-access-ht9v8\") pod \"keystone-bootstrap-f7cv9\" (UID: \"30ca9c9c-f433-4ecd-8049-2c1d0f4f39ae\") " pod="openstack/keystone-bootstrap-f7cv9" Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.157970 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30ca9c9c-f433-4ecd-8049-2c1d0f4f39ae-scripts\") pod \"keystone-bootstrap-f7cv9\" (UID: \"30ca9c9c-f433-4ecd-8049-2c1d0f4f39ae\") " pod="openstack/keystone-bootstrap-f7cv9" Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.157989 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30ca9c9c-f433-4ecd-8049-2c1d0f4f39ae-config-data\") pod \"keystone-bootstrap-f7cv9\" (UID: \"30ca9c9c-f433-4ecd-8049-2c1d0f4f39ae\") " pod="openstack/keystone-bootstrap-f7cv9" Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.158008 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30ca9c9c-f433-4ecd-8049-2c1d0f4f39ae-combined-ca-bundle\") pod \"keystone-bootstrap-f7cv9\" (UID: \"30ca9c9c-f433-4ecd-8049-2c1d0f4f39ae\") " pod="openstack/keystone-bootstrap-f7cv9" Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.158047 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/30ca9c9c-f433-4ecd-8049-2c1d0f4f39ae-credential-keys\") pod \"keystone-bootstrap-f7cv9\" (UID: \"30ca9c9c-f433-4ecd-8049-2c1d0f4f39ae\") " pod="openstack/keystone-bootstrap-f7cv9" Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.166094 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-f7cv9"] Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.206875 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-ffbd547-24lch"] Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.208256 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-ffbd547-24lch" Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.260018 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ed4c2dfa-3010-4eb1-8ad1-e7e183ec4ca3-dns-svc\") pod \"dnsmasq-dns-ffbd547-24lch\" (UID: \"ed4c2dfa-3010-4eb1-8ad1-e7e183ec4ca3\") " pod="openstack/dnsmasq-dns-ffbd547-24lch" Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.260411 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ht9v8\" (UniqueName: \"kubernetes.io/projected/30ca9c9c-f433-4ecd-8049-2c1d0f4f39ae-kube-api-access-ht9v8\") pod \"keystone-bootstrap-f7cv9\" (UID: \"30ca9c9c-f433-4ecd-8049-2c1d0f4f39ae\") " pod="openstack/keystone-bootstrap-f7cv9" Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.260448 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30ca9c9c-f433-4ecd-8049-2c1d0f4f39ae-scripts\") pod \"keystone-bootstrap-f7cv9\" (UID: \"30ca9c9c-f433-4ecd-8049-2c1d0f4f39ae\") " pod="openstack/keystone-bootstrap-f7cv9" Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.260481 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30ca9c9c-f433-4ecd-8049-2c1d0f4f39ae-config-data\") pod \"keystone-bootstrap-f7cv9\" (UID: \"30ca9c9c-f433-4ecd-8049-2c1d0f4f39ae\") " pod="openstack/keystone-bootstrap-f7cv9" Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.260504 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30ca9c9c-f433-4ecd-8049-2c1d0f4f39ae-combined-ca-bundle\") pod \"keystone-bootstrap-f7cv9\" (UID: \"30ca9c9c-f433-4ecd-8049-2c1d0f4f39ae\") " pod="openstack/keystone-bootstrap-f7cv9" Mar 11 09:17:11 crc 
kubenswrapper[4840]: I0311 09:17:11.260560 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/30ca9c9c-f433-4ecd-8049-2c1d0f4f39ae-credential-keys\") pod \"keystone-bootstrap-f7cv9\" (UID: \"30ca9c9c-f433-4ecd-8049-2c1d0f4f39ae\") " pod="openstack/keystone-bootstrap-f7cv9" Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.260591 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlntd\" (UniqueName: \"kubernetes.io/projected/ed4c2dfa-3010-4eb1-8ad1-e7e183ec4ca3-kube-api-access-nlntd\") pod \"dnsmasq-dns-ffbd547-24lch\" (UID: \"ed4c2dfa-3010-4eb1-8ad1-e7e183ec4ca3\") " pod="openstack/dnsmasq-dns-ffbd547-24lch" Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.260644 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed4c2dfa-3010-4eb1-8ad1-e7e183ec4ca3-config\") pod \"dnsmasq-dns-ffbd547-24lch\" (UID: \"ed4c2dfa-3010-4eb1-8ad1-e7e183ec4ca3\") " pod="openstack/dnsmasq-dns-ffbd547-24lch" Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.260661 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ed4c2dfa-3010-4eb1-8ad1-e7e183ec4ca3-ovsdbserver-nb\") pod \"dnsmasq-dns-ffbd547-24lch\" (UID: \"ed4c2dfa-3010-4eb1-8ad1-e7e183ec4ca3\") " pod="openstack/dnsmasq-dns-ffbd547-24lch" Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.260684 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ed4c2dfa-3010-4eb1-8ad1-e7e183ec4ca3-ovsdbserver-sb\") pod \"dnsmasq-dns-ffbd547-24lch\" (UID: \"ed4c2dfa-3010-4eb1-8ad1-e7e183ec4ca3\") " pod="openstack/dnsmasq-dns-ffbd547-24lch" Mar 11 09:17:11 crc 
kubenswrapper[4840]: I0311 09:17:11.260721 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/30ca9c9c-f433-4ecd-8049-2c1d0f4f39ae-fernet-keys\") pod \"keystone-bootstrap-f7cv9\" (UID: \"30ca9c9c-f433-4ecd-8049-2c1d0f4f39ae\") " pod="openstack/keystone-bootstrap-f7cv9" Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.269564 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-ffbd547-24lch"] Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.285643 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/30ca9c9c-f433-4ecd-8049-2c1d0f4f39ae-fernet-keys\") pod \"keystone-bootstrap-f7cv9\" (UID: \"30ca9c9c-f433-4ecd-8049-2c1d0f4f39ae\") " pod="openstack/keystone-bootstrap-f7cv9" Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.285755 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f58d6bb6f-xrjk6"] Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.295608 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/30ca9c9c-f433-4ecd-8049-2c1d0f4f39ae-credential-keys\") pod \"keystone-bootstrap-f7cv9\" (UID: \"30ca9c9c-f433-4ecd-8049-2c1d0f4f39ae\") " pod="openstack/keystone-bootstrap-f7cv9" Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.295759 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30ca9c9c-f433-4ecd-8049-2c1d0f4f39ae-scripts\") pod \"keystone-bootstrap-f7cv9\" (UID: \"30ca9c9c-f433-4ecd-8049-2c1d0f4f39ae\") " pod="openstack/keystone-bootstrap-f7cv9" Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.296145 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/30ca9c9c-f433-4ecd-8049-2c1d0f4f39ae-config-data\") pod \"keystone-bootstrap-f7cv9\" (UID: \"30ca9c9c-f433-4ecd-8049-2c1d0f4f39ae\") " pod="openstack/keystone-bootstrap-f7cv9" Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.296251 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30ca9c9c-f433-4ecd-8049-2c1d0f4f39ae-combined-ca-bundle\") pod \"keystone-bootstrap-f7cv9\" (UID: \"30ca9c9c-f433-4ecd-8049-2c1d0f4f39ae\") " pod="openstack/keystone-bootstrap-f7cv9" Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.308506 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ht9v8\" (UniqueName: \"kubernetes.io/projected/30ca9c9c-f433-4ecd-8049-2c1d0f4f39ae-kube-api-access-ht9v8\") pod \"keystone-bootstrap-f7cv9\" (UID: \"30ca9c9c-f433-4ecd-8049-2c1d0f4f39ae\") " pod="openstack/keystone-bootstrap-f7cv9" Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.365167 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ed4c2dfa-3010-4eb1-8ad1-e7e183ec4ca3-dns-svc\") pod \"dnsmasq-dns-ffbd547-24lch\" (UID: \"ed4c2dfa-3010-4eb1-8ad1-e7e183ec4ca3\") " pod="openstack/dnsmasq-dns-ffbd547-24lch" Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.365270 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nlntd\" (UniqueName: \"kubernetes.io/projected/ed4c2dfa-3010-4eb1-8ad1-e7e183ec4ca3-kube-api-access-nlntd\") pod \"dnsmasq-dns-ffbd547-24lch\" (UID: \"ed4c2dfa-3010-4eb1-8ad1-e7e183ec4ca3\") " pod="openstack/dnsmasq-dns-ffbd547-24lch" Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.365328 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed4c2dfa-3010-4eb1-8ad1-e7e183ec4ca3-config\") pod 
\"dnsmasq-dns-ffbd547-24lch\" (UID: \"ed4c2dfa-3010-4eb1-8ad1-e7e183ec4ca3\") " pod="openstack/dnsmasq-dns-ffbd547-24lch" Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.365344 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ed4c2dfa-3010-4eb1-8ad1-e7e183ec4ca3-ovsdbserver-nb\") pod \"dnsmasq-dns-ffbd547-24lch\" (UID: \"ed4c2dfa-3010-4eb1-8ad1-e7e183ec4ca3\") " pod="openstack/dnsmasq-dns-ffbd547-24lch" Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.365365 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ed4c2dfa-3010-4eb1-8ad1-e7e183ec4ca3-ovsdbserver-sb\") pod \"dnsmasq-dns-ffbd547-24lch\" (UID: \"ed4c2dfa-3010-4eb1-8ad1-e7e183ec4ca3\") " pod="openstack/dnsmasq-dns-ffbd547-24lch" Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.366456 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ed4c2dfa-3010-4eb1-8ad1-e7e183ec4ca3-ovsdbserver-sb\") pod \"dnsmasq-dns-ffbd547-24lch\" (UID: \"ed4c2dfa-3010-4eb1-8ad1-e7e183ec4ca3\") " pod="openstack/dnsmasq-dns-ffbd547-24lch" Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.367072 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ed4c2dfa-3010-4eb1-8ad1-e7e183ec4ca3-dns-svc\") pod \"dnsmasq-dns-ffbd547-24lch\" (UID: \"ed4c2dfa-3010-4eb1-8ad1-e7e183ec4ca3\") " pod="openstack/dnsmasq-dns-ffbd547-24lch" Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.368199 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed4c2dfa-3010-4eb1-8ad1-e7e183ec4ca3-config\") pod \"dnsmasq-dns-ffbd547-24lch\" (UID: \"ed4c2dfa-3010-4eb1-8ad1-e7e183ec4ca3\") " pod="openstack/dnsmasq-dns-ffbd547-24lch" Mar 11 09:17:11 
crc kubenswrapper[4840]: I0311 09:17:11.368543 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ed4c2dfa-3010-4eb1-8ad1-e7e183ec4ca3-ovsdbserver-nb\") pod \"dnsmasq-dns-ffbd547-24lch\" (UID: \"ed4c2dfa-3010-4eb1-8ad1-e7e183ec4ca3\") " pod="openstack/dnsmasq-dns-ffbd547-24lch" Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.388673 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-k7tgg"] Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.396548 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-k7tgg" Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.405451 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.405526 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.405788 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-7djjf" Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.405784 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nlntd\" (UniqueName: \"kubernetes.io/projected/ed4c2dfa-3010-4eb1-8ad1-e7e183ec4ca3-kube-api-access-nlntd\") pod \"dnsmasq-dns-ffbd547-24lch\" (UID: \"ed4c2dfa-3010-4eb1-8ad1-e7e183ec4ca3\") " pod="openstack/dnsmasq-dns-ffbd547-24lch" Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.411923 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-dghqs"] Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.413175 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-dghqs" Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.426877 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.433197 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.433407 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-vpz57" Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.433565 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.451728 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.455973 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.456354 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.463321 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-f7cv9" Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.469955 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-k7tgg"] Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.471008 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/adf5efde-cb19-4870-9c8e-e7e139523238-db-sync-config-data\") pod \"cinder-db-sync-k7tgg\" (UID: \"adf5efde-cb19-4870-9c8e-e7e139523238\") " pod="openstack/cinder-db-sync-k7tgg" Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.471047 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/02a9854a-9978-4414-972d-37c0b579b03b-log-httpd\") pod \"ceilometer-0\" (UID: \"02a9854a-9978-4414-972d-37c0b579b03b\") " pod="openstack/ceilometer-0" Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.471073 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/02a9854a-9978-4414-972d-37c0b579b03b-scripts\") pod \"ceilometer-0\" (UID: \"02a9854a-9978-4414-972d-37c0b579b03b\") " pod="openstack/ceilometer-0" Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.471087 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02a9854a-9978-4414-972d-37c0b579b03b-config-data\") pod \"ceilometer-0\" (UID: \"02a9854a-9978-4414-972d-37c0b579b03b\") " pod="openstack/ceilometer-0" Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.471103 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/02a9854a-9978-4414-972d-37c0b579b03b-sg-core-conf-yaml\") pod 
\"ceilometer-0\" (UID: \"02a9854a-9978-4414-972d-37c0b579b03b\") " pod="openstack/ceilometer-0" Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.471128 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlnft\" (UniqueName: \"kubernetes.io/projected/68dbceea-1597-4236-b2d6-ca071bf86d5a-kube-api-access-qlnft\") pod \"neutron-db-sync-dghqs\" (UID: \"68dbceea-1597-4236-b2d6-ca071bf86d5a\") " pod="openstack/neutron-db-sync-dghqs" Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.471149 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adf5efde-cb19-4870-9c8e-e7e139523238-combined-ca-bundle\") pod \"cinder-db-sync-k7tgg\" (UID: \"adf5efde-cb19-4870-9c8e-e7e139523238\") " pod="openstack/cinder-db-sync-k7tgg" Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.471169 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02a9854a-9978-4414-972d-37c0b579b03b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"02a9854a-9978-4414-972d-37c0b579b03b\") " pod="openstack/ceilometer-0" Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.471195 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68dbceea-1597-4236-b2d6-ca071bf86d5a-combined-ca-bundle\") pod \"neutron-db-sync-dghqs\" (UID: \"68dbceea-1597-4236-b2d6-ca071bf86d5a\") " pod="openstack/neutron-db-sync-dghqs" Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.471224 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/adf5efde-cb19-4870-9c8e-e7e139523238-config-data\") pod \"cinder-db-sync-k7tgg\" (UID: 
\"adf5efde-cb19-4870-9c8e-e7e139523238\") " pod="openstack/cinder-db-sync-k7tgg" Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.471246 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkj7g\" (UniqueName: \"kubernetes.io/projected/02a9854a-9978-4414-972d-37c0b579b03b-kube-api-access-tkj7g\") pod \"ceilometer-0\" (UID: \"02a9854a-9978-4414-972d-37c0b579b03b\") " pod="openstack/ceilometer-0" Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.471274 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvtvx\" (UniqueName: \"kubernetes.io/projected/adf5efde-cb19-4870-9c8e-e7e139523238-kube-api-access-gvtvx\") pod \"cinder-db-sync-k7tgg\" (UID: \"adf5efde-cb19-4870-9c8e-e7e139523238\") " pod="openstack/cinder-db-sync-k7tgg" Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.471316 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/adf5efde-cb19-4870-9c8e-e7e139523238-etc-machine-id\") pod \"cinder-db-sync-k7tgg\" (UID: \"adf5efde-cb19-4870-9c8e-e7e139523238\") " pod="openstack/cinder-db-sync-k7tgg" Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.471341 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/68dbceea-1597-4236-b2d6-ca071bf86d5a-config\") pod \"neutron-db-sync-dghqs\" (UID: \"68dbceea-1597-4236-b2d6-ca071bf86d5a\") " pod="openstack/neutron-db-sync-dghqs" Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.471379 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/adf5efde-cb19-4870-9c8e-e7e139523238-scripts\") pod \"cinder-db-sync-k7tgg\" (UID: \"adf5efde-cb19-4870-9c8e-e7e139523238\") " 
pod="openstack/cinder-db-sync-k7tgg"
Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.471404 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/02a9854a-9978-4414-972d-37c0b579b03b-run-httpd\") pod \"ceilometer-0\" (UID: \"02a9854a-9978-4414-972d-37c0b579b03b\") " pod="openstack/ceilometer-0"
Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.486657 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-dghqs"]
Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.496654 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-r5wvc"]
Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.498355 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-r5wvc"
Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.505002 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data"
Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.505209 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-4knzv"
Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.531297 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.555942 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-r5wvc"]
Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.572847 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jgnb\" (UniqueName: \"kubernetes.io/projected/eb9a61ad-a8fb-4968-b32a-ff5756add27b-kube-api-access-2jgnb\") pod \"barbican-db-sync-r5wvc\" (UID: \"eb9a61ad-a8fb-4968-b32a-ff5756add27b\") " pod="openstack/barbican-db-sync-r5wvc"
Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.572893 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tkj7g\" (UniqueName: \"kubernetes.io/projected/02a9854a-9978-4414-972d-37c0b579b03b-kube-api-access-tkj7g\") pod \"ceilometer-0\" (UID: \"02a9854a-9978-4414-972d-37c0b579b03b\") " pod="openstack/ceilometer-0"
Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.572923 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/eb9a61ad-a8fb-4968-b32a-ff5756add27b-db-sync-config-data\") pod \"barbican-db-sync-r5wvc\" (UID: \"eb9a61ad-a8fb-4968-b32a-ff5756add27b\") " pod="openstack/barbican-db-sync-r5wvc"
Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.572952 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvtvx\" (UniqueName: \"kubernetes.io/projected/adf5efde-cb19-4870-9c8e-e7e139523238-kube-api-access-gvtvx\") pod \"cinder-db-sync-k7tgg\" (UID: \"adf5efde-cb19-4870-9c8e-e7e139523238\") " pod="openstack/cinder-db-sync-k7tgg"
Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.572997 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/adf5efde-cb19-4870-9c8e-e7e139523238-etc-machine-id\") pod \"cinder-db-sync-k7tgg\" (UID: \"adf5efde-cb19-4870-9c8e-e7e139523238\") " pod="openstack/cinder-db-sync-k7tgg"
Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.573023 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/68dbceea-1597-4236-b2d6-ca071bf86d5a-config\") pod \"neutron-db-sync-dghqs\" (UID: \"68dbceea-1597-4236-b2d6-ca071bf86d5a\") " pod="openstack/neutron-db-sync-dghqs"
Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.573063 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/adf5efde-cb19-4870-9c8e-e7e139523238-scripts\") pod \"cinder-db-sync-k7tgg\" (UID: \"adf5efde-cb19-4870-9c8e-e7e139523238\") " pod="openstack/cinder-db-sync-k7tgg"
Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.573091 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb9a61ad-a8fb-4968-b32a-ff5756add27b-combined-ca-bundle\") pod \"barbican-db-sync-r5wvc\" (UID: \"eb9a61ad-a8fb-4968-b32a-ff5756add27b\") " pod="openstack/barbican-db-sync-r5wvc"
Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.573116 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/02a9854a-9978-4414-972d-37c0b579b03b-run-httpd\") pod \"ceilometer-0\" (UID: \"02a9854a-9978-4414-972d-37c0b579b03b\") " pod="openstack/ceilometer-0"
Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.573145 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/adf5efde-cb19-4870-9c8e-e7e139523238-db-sync-config-data\") pod \"cinder-db-sync-k7tgg\" (UID: \"adf5efde-cb19-4870-9c8e-e7e139523238\") " pod="openstack/cinder-db-sync-k7tgg"
Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.573171 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/02a9854a-9978-4414-972d-37c0b579b03b-log-httpd\") pod \"ceilometer-0\" (UID: \"02a9854a-9978-4414-972d-37c0b579b03b\") " pod="openstack/ceilometer-0"
Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.573203 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/02a9854a-9978-4414-972d-37c0b579b03b-scripts\") pod \"ceilometer-0\" (UID: \"02a9854a-9978-4414-972d-37c0b579b03b\") " pod="openstack/ceilometer-0"
Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.573224 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02a9854a-9978-4414-972d-37c0b579b03b-config-data\") pod \"ceilometer-0\" (UID: \"02a9854a-9978-4414-972d-37c0b579b03b\") " pod="openstack/ceilometer-0"
Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.573245 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/02a9854a-9978-4414-972d-37c0b579b03b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"02a9854a-9978-4414-972d-37c0b579b03b\") " pod="openstack/ceilometer-0"
Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.573276 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qlnft\" (UniqueName: \"kubernetes.io/projected/68dbceea-1597-4236-b2d6-ca071bf86d5a-kube-api-access-qlnft\") pod \"neutron-db-sync-dghqs\" (UID: \"68dbceea-1597-4236-b2d6-ca071bf86d5a\") " pod="openstack/neutron-db-sync-dghqs"
Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.573305 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adf5efde-cb19-4870-9c8e-e7e139523238-combined-ca-bundle\") pod \"cinder-db-sync-k7tgg\" (UID: \"adf5efde-cb19-4870-9c8e-e7e139523238\") " pod="openstack/cinder-db-sync-k7tgg"
Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.573337 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02a9854a-9978-4414-972d-37c0b579b03b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"02a9854a-9978-4414-972d-37c0b579b03b\") " pod="openstack/ceilometer-0"
Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.573368 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68dbceea-1597-4236-b2d6-ca071bf86d5a-combined-ca-bundle\") pod \"neutron-db-sync-dghqs\" (UID: \"68dbceea-1597-4236-b2d6-ca071bf86d5a\") " pod="openstack/neutron-db-sync-dghqs"
Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.573406 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/adf5efde-cb19-4870-9c8e-e7e139523238-config-data\") pod \"cinder-db-sync-k7tgg\" (UID: \"adf5efde-cb19-4870-9c8e-e7e139523238\") " pod="openstack/cinder-db-sync-k7tgg"
Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.574267 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/02a9854a-9978-4414-972d-37c0b579b03b-run-httpd\") pod \"ceilometer-0\" (UID: \"02a9854a-9978-4414-972d-37c0b579b03b\") " pod="openstack/ceilometer-0"
Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.574757 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/adf5efde-cb19-4870-9c8e-e7e139523238-etc-machine-id\") pod \"cinder-db-sync-k7tgg\" (UID: \"adf5efde-cb19-4870-9c8e-e7e139523238\") " pod="openstack/cinder-db-sync-k7tgg"
Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.578371 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/02a9854a-9978-4414-972d-37c0b579b03b-log-httpd\") pod \"ceilometer-0\" (UID: \"02a9854a-9978-4414-972d-37c0b579b03b\") " pod="openstack/ceilometer-0"
Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.600798 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68dbceea-1597-4236-b2d6-ca071bf86d5a-combined-ca-bundle\") pod \"neutron-db-sync-dghqs\" (UID: \"68dbceea-1597-4236-b2d6-ca071bf86d5a\") " pod="openstack/neutron-db-sync-dghqs"
Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.601627 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-ffbd547-24lch"
Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.604008 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adf5efde-cb19-4870-9c8e-e7e139523238-combined-ca-bundle\") pod \"cinder-db-sync-k7tgg\" (UID: \"adf5efde-cb19-4870-9c8e-e7e139523238\") " pod="openstack/cinder-db-sync-k7tgg"
Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.606800 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02a9854a-9978-4414-972d-37c0b579b03b-config-data\") pod \"ceilometer-0\" (UID: \"02a9854a-9978-4414-972d-37c0b579b03b\") " pod="openstack/ceilometer-0"
Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.608340 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/02a9854a-9978-4414-972d-37c0b579b03b-scripts\") pod \"ceilometer-0\" (UID: \"02a9854a-9978-4414-972d-37c0b579b03b\") " pod="openstack/ceilometer-0"
Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.610083 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/02a9854a-9978-4414-972d-37c0b579b03b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"02a9854a-9978-4414-972d-37c0b579b03b\") " pod="openstack/ceilometer-0"
Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.621271 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/68dbceea-1597-4236-b2d6-ca071bf86d5a-config\") pod \"neutron-db-sync-dghqs\" (UID: \"68dbceea-1597-4236-b2d6-ca071bf86d5a\") " pod="openstack/neutron-db-sync-dghqs"
Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.622940 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/adf5efde-cb19-4870-9c8e-e7e139523238-config-data\") pod \"cinder-db-sync-k7tgg\" (UID: \"adf5efde-cb19-4870-9c8e-e7e139523238\") " pod="openstack/cinder-db-sync-k7tgg"
Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.623498 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/adf5efde-cb19-4870-9c8e-e7e139523238-scripts\") pod \"cinder-db-sync-k7tgg\" (UID: \"adf5efde-cb19-4870-9c8e-e7e139523238\") " pod="openstack/cinder-db-sync-k7tgg"
Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.650575 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkj7g\" (UniqueName: \"kubernetes.io/projected/02a9854a-9978-4414-972d-37c0b579b03b-kube-api-access-tkj7g\") pod \"ceilometer-0\" (UID: \"02a9854a-9978-4414-972d-37c0b579b03b\") " pod="openstack/ceilometer-0"
Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.661930 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/adf5efde-cb19-4870-9c8e-e7e139523238-db-sync-config-data\") pod \"cinder-db-sync-k7tgg\" (UID: \"adf5efde-cb19-4870-9c8e-e7e139523238\") " pod="openstack/cinder-db-sync-k7tgg"
Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.662884 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02a9854a-9978-4414-972d-37c0b579b03b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"02a9854a-9978-4414-972d-37c0b579b03b\") " pod="openstack/ceilometer-0"
Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.672012 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qlnft\" (UniqueName: \"kubernetes.io/projected/68dbceea-1597-4236-b2d6-ca071bf86d5a-kube-api-access-qlnft\") pod \"neutron-db-sync-dghqs\" (UID: \"68dbceea-1597-4236-b2d6-ca071bf86d5a\") " pod="openstack/neutron-db-sync-dghqs"
Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.676558 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb9a61ad-a8fb-4968-b32a-ff5756add27b-combined-ca-bundle\") pod \"barbican-db-sync-r5wvc\" (UID: \"eb9a61ad-a8fb-4968-b32a-ff5756add27b\") " pod="openstack/barbican-db-sync-r5wvc"
Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.676693 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jgnb\" (UniqueName: \"kubernetes.io/projected/eb9a61ad-a8fb-4968-b32a-ff5756add27b-kube-api-access-2jgnb\") pod \"barbican-db-sync-r5wvc\" (UID: \"eb9a61ad-a8fb-4968-b32a-ff5756add27b\") " pod="openstack/barbican-db-sync-r5wvc"
Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.676738 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/eb9a61ad-a8fb-4968-b32a-ff5756add27b-db-sync-config-data\") pod \"barbican-db-sync-r5wvc\" (UID: \"eb9a61ad-a8fb-4968-b32a-ff5756add27b\") " pod="openstack/barbican-db-sync-r5wvc"
Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.687734 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/eb9a61ad-a8fb-4968-b32a-ff5756add27b-db-sync-config-data\") pod \"barbican-db-sync-r5wvc\" (UID: \"eb9a61ad-a8fb-4968-b32a-ff5756add27b\") " pod="openstack/barbican-db-sync-r5wvc"
Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.700770 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvtvx\" (UniqueName: \"kubernetes.io/projected/adf5efde-cb19-4870-9c8e-e7e139523238-kube-api-access-gvtvx\") pod \"cinder-db-sync-k7tgg\" (UID: \"adf5efde-cb19-4870-9c8e-e7e139523238\") " pod="openstack/cinder-db-sync-k7tgg"
Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.719198 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.721289 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb9a61ad-a8fb-4968-b32a-ff5756add27b-combined-ca-bundle\") pod \"barbican-db-sync-r5wvc\" (UID: \"eb9a61ad-a8fb-4968-b32a-ff5756add27b\") " pod="openstack/barbican-db-sync-r5wvc"
Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.759123 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jgnb\" (UniqueName: \"kubernetes.io/projected/eb9a61ad-a8fb-4968-b32a-ff5756add27b-kube-api-access-2jgnb\") pod \"barbican-db-sync-r5wvc\" (UID: \"eb9a61ad-a8fb-4968-b32a-ff5756add27b\") " pod="openstack/barbican-db-sync-r5wvc"
Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.764536 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-f7z67"]
Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.765710 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-f7z67"
Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.766773 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-dghqs"
Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.801366 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data"
Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.802677 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-4q5fm"
Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.802884 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts"
Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.883248 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-f7z67"]
Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.941968 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/573b362e-582b-43f0-afae-c038cf95f625-config-data\") pod \"placement-db-sync-f7z67\" (UID: \"573b362e-582b-43f0-afae-c038cf95f625\") " pod="openstack/placement-db-sync-f7z67"
Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.942017 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/573b362e-582b-43f0-afae-c038cf95f625-combined-ca-bundle\") pod \"placement-db-sync-f7z67\" (UID: \"573b362e-582b-43f0-afae-c038cf95f625\") " pod="openstack/placement-db-sync-f7z67"
Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.942056 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnbvw\" (UniqueName: \"kubernetes.io/projected/573b362e-582b-43f0-afae-c038cf95f625-kube-api-access-fnbvw\") pod \"placement-db-sync-f7z67\" (UID: \"573b362e-582b-43f0-afae-c038cf95f625\") " pod="openstack/placement-db-sync-f7z67"
Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.942076 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/573b362e-582b-43f0-afae-c038cf95f625-logs\") pod \"placement-db-sync-f7z67\" (UID: \"573b362e-582b-43f0-afae-c038cf95f625\") " pod="openstack/placement-db-sync-f7z67"
Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.942188 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/573b362e-582b-43f0-afae-c038cf95f625-scripts\") pod \"placement-db-sync-f7z67\" (UID: \"573b362e-582b-43f0-afae-c038cf95f625\") " pod="openstack/placement-db-sync-f7z67"
Mar 11 09:17:11 crc kubenswrapper[4840]: I0311 09:17:11.944007 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-k7tgg"
Mar 11 09:17:12 crc kubenswrapper[4840]: I0311 09:17:12.016244 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-ffbd547-24lch"]
Mar 11 09:17:12 crc kubenswrapper[4840]: I0311 09:17:12.024280 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-r5wvc"
Mar 11 09:17:12 crc kubenswrapper[4840]: I0311 09:17:12.048866 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/573b362e-582b-43f0-afae-c038cf95f625-combined-ca-bundle\") pod \"placement-db-sync-f7z67\" (UID: \"573b362e-582b-43f0-afae-c038cf95f625\") " pod="openstack/placement-db-sync-f7z67"
Mar 11 09:17:12 crc kubenswrapper[4840]: I0311 09:17:12.048930 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fnbvw\" (UniqueName: \"kubernetes.io/projected/573b362e-582b-43f0-afae-c038cf95f625-kube-api-access-fnbvw\") pod \"placement-db-sync-f7z67\" (UID: \"573b362e-582b-43f0-afae-c038cf95f625\") " pod="openstack/placement-db-sync-f7z67"
Mar 11 09:17:12 crc kubenswrapper[4840]: I0311 09:17:12.048951 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/573b362e-582b-43f0-afae-c038cf95f625-logs\") pod \"placement-db-sync-f7z67\" (UID: \"573b362e-582b-43f0-afae-c038cf95f625\") " pod="openstack/placement-db-sync-f7z67"
Mar 11 09:17:12 crc kubenswrapper[4840]: I0311 09:17:12.049067 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/573b362e-582b-43f0-afae-c038cf95f625-scripts\") pod \"placement-db-sync-f7z67\" (UID: \"573b362e-582b-43f0-afae-c038cf95f625\") " pod="openstack/placement-db-sync-f7z67"
Mar 11 09:17:12 crc kubenswrapper[4840]: I0311 09:17:12.049169 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/573b362e-582b-43f0-afae-c038cf95f625-config-data\") pod \"placement-db-sync-f7z67\" (UID: \"573b362e-582b-43f0-afae-c038cf95f625\") " pod="openstack/placement-db-sync-f7z67"
Mar 11 09:17:12 crc kubenswrapper[4840]: I0311 09:17:12.053071 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/573b362e-582b-43f0-afae-c038cf95f625-logs\") pod \"placement-db-sync-f7z67\" (UID: \"573b362e-582b-43f0-afae-c038cf95f625\") " pod="openstack/placement-db-sync-f7z67"
Mar 11 09:17:12 crc kubenswrapper[4840]: I0311 09:17:12.068000 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/573b362e-582b-43f0-afae-c038cf95f625-combined-ca-bundle\") pod \"placement-db-sync-f7z67\" (UID: \"573b362e-582b-43f0-afae-c038cf95f625\") " pod="openstack/placement-db-sync-f7z67"
Mar 11 09:17:12 crc kubenswrapper[4840]: I0311 09:17:12.071021 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/573b362e-582b-43f0-afae-c038cf95f625-scripts\") pod \"placement-db-sync-f7z67\" (UID: \"573b362e-582b-43f0-afae-c038cf95f625\") " pod="openstack/placement-db-sync-f7z67"
Mar 11 09:17:12 crc kubenswrapper[4840]: I0311 09:17:12.074325 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/573b362e-582b-43f0-afae-c038cf95f625-config-data\") pod \"placement-db-sync-f7z67\" (UID: \"573b362e-582b-43f0-afae-c038cf95f625\") " pod="openstack/placement-db-sync-f7z67"
Mar 11 09:17:12 crc kubenswrapper[4840]: I0311 09:17:12.121022 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="662bb0cd-c6d7-40bb-a7cc-d3a4c1ab22db" path="/var/lib/kubelet/pods/662bb0cd-c6d7-40bb-a7cc-d3a4c1ab22db/volumes"
Mar 11 09:17:12 crc kubenswrapper[4840]: I0311 09:17:12.121819 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-74b48649cc-qqk8g"]
Mar 11 09:17:12 crc kubenswrapper[4840]: I0311 09:17:12.125499 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fnbvw\" (UniqueName: \"kubernetes.io/projected/573b362e-582b-43f0-afae-c038cf95f625-kube-api-access-fnbvw\") pod \"placement-db-sync-f7z67\" (UID: \"573b362e-582b-43f0-afae-c038cf95f625\") " pod="openstack/placement-db-sync-f7z67"
Mar 11 09:17:12 crc kubenswrapper[4840]: I0311 09:17:12.159321 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f1c7d7f4-dc60-4703-b6c3-6cd626db11af","Type":"ContainerStarted","Data":"25eb7124241e743ab0f23cf5b6fd16ff25372ee1ef90290032d988d844df4427"}
Mar 11 09:17:12 crc kubenswrapper[4840]: I0311 09:17:12.159371 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74b48649cc-qqk8g"]
Mar 11 09:17:12 crc kubenswrapper[4840]: I0311 09:17:12.159387 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-8dnn2" event={"ID":"0aed3b47-186d-4af2-b83e-c04526a43095","Type":"ContainerStarted","Data":"d497f54d8fae9f2a73679effc95823f4140980c021583551b3be7bba6bfbf0a9"}
Mar 11 09:17:12 crc kubenswrapper[4840]: I0311 09:17:12.159400 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-8dnn2" event={"ID":"0aed3b47-186d-4af2-b83e-c04526a43095","Type":"ContainerStarted","Data":"93cc0229be69167f4799b249425bf5289654f9fc1ebbe0eb382cbe75b91066e1"}
Mar 11 09:17:12 crc kubenswrapper[4840]: I0311 09:17:12.159412 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f58d6bb6f-xrjk6" event={"ID":"6d25eab0-c4c4-4227-aeaa-2cb67a78e8b1","Type":"ContainerStarted","Data":"f1ee8efee92daf9aaa09ae341ed962140ff5087aa7162d3f0b107b4b7744246f"}
Mar 11 09:17:12 crc kubenswrapper[4840]: I0311 09:17:12.159519 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74b48649cc-qqk8g"
Mar 11 09:17:12 crc kubenswrapper[4840]: I0311 09:17:12.169965 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-f7z67"
Mar 11 09:17:12 crc kubenswrapper[4840]: I0311 09:17:12.237542 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Mar 11 09:17:12 crc kubenswrapper[4840]: I0311 09:17:12.239114 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Mar 11 09:17:12 crc kubenswrapper[4840]: I0311 09:17:12.243521 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts"
Mar 11 09:17:12 crc kubenswrapper[4840]: I0311 09:17:12.245616 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-8l29s"
Mar 11 09:17:12 crc kubenswrapper[4840]: I0311 09:17:12.245917 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Mar 11 09:17:12 crc kubenswrapper[4840]: I0311 09:17:12.258121 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bcd01ea-4ad0-4aba-8acd-c509e2e5225d-config\") pod \"dnsmasq-dns-74b48649cc-qqk8g\" (UID: \"1bcd01ea-4ad0-4aba-8acd-c509e2e5225d\") " pod="openstack/dnsmasq-dns-74b48649cc-qqk8g"
Mar 11 09:17:12 crc kubenswrapper[4840]: I0311 09:17:12.258192 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1bcd01ea-4ad0-4aba-8acd-c509e2e5225d-dns-svc\") pod \"dnsmasq-dns-74b48649cc-qqk8g\" (UID: \"1bcd01ea-4ad0-4aba-8acd-c509e2e5225d\") " pod="openstack/dnsmasq-dns-74b48649cc-qqk8g"
Mar 11 09:17:12 crc kubenswrapper[4840]: I0311 09:17:12.258227 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1bcd01ea-4ad0-4aba-8acd-c509e2e5225d-ovsdbserver-sb\") pod \"dnsmasq-dns-74b48649cc-qqk8g\" (UID: \"1bcd01ea-4ad0-4aba-8acd-c509e2e5225d\") " pod="openstack/dnsmasq-dns-74b48649cc-qqk8g"
Mar 11 09:17:12 crc kubenswrapper[4840]: I0311 09:17:12.258303 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1bcd01ea-4ad0-4aba-8acd-c509e2e5225d-ovsdbserver-nb\") pod \"dnsmasq-dns-74b48649cc-qqk8g\" (UID: \"1bcd01ea-4ad0-4aba-8acd-c509e2e5225d\") " pod="openstack/dnsmasq-dns-74b48649cc-qqk8g"
Mar 11 09:17:12 crc kubenswrapper[4840]: I0311 09:17:12.258518 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcl9t\" (UniqueName: \"kubernetes.io/projected/1bcd01ea-4ad0-4aba-8acd-c509e2e5225d-kube-api-access-dcl9t\") pod \"dnsmasq-dns-74b48649cc-qqk8g\" (UID: \"1bcd01ea-4ad0-4aba-8acd-c509e2e5225d\") " pod="openstack/dnsmasq-dns-74b48649cc-qqk8g"
Mar 11 09:17:12 crc kubenswrapper[4840]: I0311 09:17:12.260265 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Mar 11 09:17:12 crc kubenswrapper[4840]: I0311 09:17:12.293448 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-8dnn2" podStartSLOduration=2.293426803 podStartE2EDuration="2.293426803s" podCreationTimestamp="2026-03-11 09:17:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:17:12.245497727 +0000 UTC m=+1230.911167532" watchObservedRunningTime="2026-03-11 09:17:12.293426803 +0000 UTC m=+1230.959096618"
Mar 11 09:17:12 crc kubenswrapper[4840]: I0311 09:17:12.360520 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc589dfd-ede2-4eff-b9df-1f530ceeabfb-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"cc589dfd-ede2-4eff-b9df-1f530ceeabfb\") " pod="openstack/glance-default-external-api-0"
Mar 11 09:17:12 crc kubenswrapper[4840]: I0311 09:17:12.360609 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dcl9t\" (UniqueName: \"kubernetes.io/projected/1bcd01ea-4ad0-4aba-8acd-c509e2e5225d-kube-api-access-dcl9t\") pod \"dnsmasq-dns-74b48649cc-qqk8g\" (UID: \"1bcd01ea-4ad0-4aba-8acd-c509e2e5225d\") " pod="openstack/dnsmasq-dns-74b48649cc-qqk8g"
Mar 11 09:17:12 crc kubenswrapper[4840]: I0311 09:17:12.360652 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cc589dfd-ede2-4eff-b9df-1f530ceeabfb-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"cc589dfd-ede2-4eff-b9df-1f530ceeabfb\") " pod="openstack/glance-default-external-api-0"
Mar 11 09:17:12 crc kubenswrapper[4840]: I0311 09:17:12.360692 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"cc589dfd-ede2-4eff-b9df-1f530ceeabfb\") " pod="openstack/glance-default-external-api-0"
Mar 11 09:17:12 crc kubenswrapper[4840]: I0311 09:17:12.360725 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cc589dfd-ede2-4eff-b9df-1f530ceeabfb-logs\") pod \"glance-default-external-api-0\" (UID: \"cc589dfd-ede2-4eff-b9df-1f530ceeabfb\") " pod="openstack/glance-default-external-api-0"
Mar 11 09:17:12 crc kubenswrapper[4840]: I0311 09:17:12.360780 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bcd01ea-4ad0-4aba-8acd-c509e2e5225d-config\") pod \"dnsmasq-dns-74b48649cc-qqk8g\" (UID: \"1bcd01ea-4ad0-4aba-8acd-c509e2e5225d\") " pod="openstack/dnsmasq-dns-74b48649cc-qqk8g"
Mar 11 09:17:12 crc kubenswrapper[4840]: I0311 09:17:12.360821 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc589dfd-ede2-4eff-b9df-1f530ceeabfb-config-data\") pod \"glance-default-external-api-0\" (UID: \"cc589dfd-ede2-4eff-b9df-1f530ceeabfb\") " pod="openstack/glance-default-external-api-0"
Mar 11 09:17:12 crc kubenswrapper[4840]: I0311 09:17:12.360857 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1bcd01ea-4ad0-4aba-8acd-c509e2e5225d-dns-svc\") pod \"dnsmasq-dns-74b48649cc-qqk8g\" (UID: \"1bcd01ea-4ad0-4aba-8acd-c509e2e5225d\") " pod="openstack/dnsmasq-dns-74b48649cc-qqk8g"
Mar 11 09:17:12 crc kubenswrapper[4840]: I0311 09:17:12.360882 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1bcd01ea-4ad0-4aba-8acd-c509e2e5225d-ovsdbserver-sb\") pod \"dnsmasq-dns-74b48649cc-qqk8g\" (UID: \"1bcd01ea-4ad0-4aba-8acd-c509e2e5225d\") " pod="openstack/dnsmasq-dns-74b48649cc-qqk8g"
Mar 11 09:17:12 crc kubenswrapper[4840]: I0311 09:17:12.360904 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1bcd01ea-4ad0-4aba-8acd-c509e2e5225d-ovsdbserver-nb\") pod \"dnsmasq-dns-74b48649cc-qqk8g\" (UID: \"1bcd01ea-4ad0-4aba-8acd-c509e2e5225d\") " pod="openstack/dnsmasq-dns-74b48649cc-qqk8g"
Mar 11 09:17:12 crc kubenswrapper[4840]: I0311 09:17:12.360960 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gh2dn\" (UniqueName: \"kubernetes.io/projected/cc589dfd-ede2-4eff-b9df-1f530ceeabfb-kube-api-access-gh2dn\") pod \"glance-default-external-api-0\" (UID: \"cc589dfd-ede2-4eff-b9df-1f530ceeabfb\") " pod="openstack/glance-default-external-api-0"
Mar 11 09:17:12 crc kubenswrapper[4840]: I0311 09:17:12.361001 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc589dfd-ede2-4eff-b9df-1f530ceeabfb-scripts\") pod \"glance-default-external-api-0\" (UID: \"cc589dfd-ede2-4eff-b9df-1f530ceeabfb\") " pod="openstack/glance-default-external-api-0"
Mar 11 09:17:12 crc kubenswrapper[4840]: I0311 09:17:12.363160 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bcd01ea-4ad0-4aba-8acd-c509e2e5225d-config\") pod \"dnsmasq-dns-74b48649cc-qqk8g\" (UID: \"1bcd01ea-4ad0-4aba-8acd-c509e2e5225d\") " pod="openstack/dnsmasq-dns-74b48649cc-qqk8g"
Mar 11 09:17:12 crc kubenswrapper[4840]: I0311 09:17:12.363923 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1bcd01ea-4ad0-4aba-8acd-c509e2e5225d-dns-svc\") pod \"dnsmasq-dns-74b48649cc-qqk8g\" (UID: \"1bcd01ea-4ad0-4aba-8acd-c509e2e5225d\") " pod="openstack/dnsmasq-dns-74b48649cc-qqk8g"
Mar 11 09:17:12 crc kubenswrapper[4840]: I0311 09:17:12.364607 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1bcd01ea-4ad0-4aba-8acd-c509e2e5225d-ovsdbserver-sb\") pod \"dnsmasq-dns-74b48649cc-qqk8g\" (UID: \"1bcd01ea-4ad0-4aba-8acd-c509e2e5225d\") " pod="openstack/dnsmasq-dns-74b48649cc-qqk8g"
Mar 11 09:17:12 crc kubenswrapper[4840]: I0311 09:17:12.381859 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1bcd01ea-4ad0-4aba-8acd-c509e2e5225d-ovsdbserver-nb\") pod \"dnsmasq-dns-74b48649cc-qqk8g\" (UID: \"1bcd01ea-4ad0-4aba-8acd-c509e2e5225d\") " pod="openstack/dnsmasq-dns-74b48649cc-qqk8g"
Mar 11 09:17:12 crc kubenswrapper[4840]: I0311 09:17:12.411229 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dcl9t\" (UniqueName: \"kubernetes.io/projected/1bcd01ea-4ad0-4aba-8acd-c509e2e5225d-kube-api-access-dcl9t\") pod \"dnsmasq-dns-74b48649cc-qqk8g\" (UID: \"1bcd01ea-4ad0-4aba-8acd-c509e2e5225d\") " pod="openstack/dnsmasq-dns-74b48649cc-qqk8g"
Mar 11 09:17:12 crc kubenswrapper[4840]: I0311 09:17:12.413990 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Mar 11 09:17:12 crc kubenswrapper[4840]: I0311 09:17:12.415544 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Mar 11 09:17:12 crc kubenswrapper[4840]: I0311 09:17:12.422443 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Mar 11 09:17:12 crc kubenswrapper[4840]: I0311 09:17:12.452704 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Mar 11 09:17:12 crc kubenswrapper[4840]: I0311 09:17:12.465622 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gh2dn\" (UniqueName: \"kubernetes.io/projected/cc589dfd-ede2-4eff-b9df-1f530ceeabfb-kube-api-access-gh2dn\") pod \"glance-default-external-api-0\" (UID: \"cc589dfd-ede2-4eff-b9df-1f530ceeabfb\") " pod="openstack/glance-default-external-api-0"
Mar 11 09:17:12 crc kubenswrapper[4840]: I0311 09:17:12.466073 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc589dfd-ede2-4eff-b9df-1f530ceeabfb-scripts\") pod \"glance-default-external-api-0\" (UID: \"cc589dfd-ede2-4eff-b9df-1f530ceeabfb\") " pod="openstack/glance-default-external-api-0"
Mar 11 09:17:12 crc kubenswrapper[4840]: I0311 09:17:12.466210 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started
for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/03be0af0-186c-4c90-91d5-9895f6d87534-logs\") pod \"glance-default-internal-api-0\" (UID: \"03be0af0-186c-4c90-91d5-9895f6d87534\") " pod="openstack/glance-default-internal-api-0" Mar 11 09:17:12 crc kubenswrapper[4840]: I0311 09:17:12.466388 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dnzx\" (UniqueName: \"kubernetes.io/projected/03be0af0-186c-4c90-91d5-9895f6d87534-kube-api-access-7dnzx\") pod \"glance-default-internal-api-0\" (UID: \"03be0af0-186c-4c90-91d5-9895f6d87534\") " pod="openstack/glance-default-internal-api-0" Mar 11 09:17:12 crc kubenswrapper[4840]: I0311 09:17:12.466500 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/03be0af0-186c-4c90-91d5-9895f6d87534-scripts\") pod \"glance-default-internal-api-0\" (UID: \"03be0af0-186c-4c90-91d5-9895f6d87534\") " pod="openstack/glance-default-internal-api-0" Mar 11 09:17:12 crc kubenswrapper[4840]: I0311 09:17:12.466629 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc589dfd-ede2-4eff-b9df-1f530ceeabfb-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"cc589dfd-ede2-4eff-b9df-1f530ceeabfb\") " pod="openstack/glance-default-external-api-0" Mar 11 09:17:12 crc kubenswrapper[4840]: I0311 09:17:12.466792 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03be0af0-186c-4c90-91d5-9895f6d87534-config-data\") pod \"glance-default-internal-api-0\" (UID: \"03be0af0-186c-4c90-91d5-9895f6d87534\") " pod="openstack/glance-default-internal-api-0" Mar 11 09:17:12 crc kubenswrapper[4840]: I0311 09:17:12.466884 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cc589dfd-ede2-4eff-b9df-1f530ceeabfb-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"cc589dfd-ede2-4eff-b9df-1f530ceeabfb\") " pod="openstack/glance-default-external-api-0" Mar 11 09:17:12 crc kubenswrapper[4840]: I0311 09:17:12.466982 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"03be0af0-186c-4c90-91d5-9895f6d87534\") " pod="openstack/glance-default-internal-api-0" Mar 11 09:17:12 crc kubenswrapper[4840]: I0311 09:17:12.467064 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"cc589dfd-ede2-4eff-b9df-1f530ceeabfb\") " pod="openstack/glance-default-external-api-0" Mar 11 09:17:12 crc kubenswrapper[4840]: I0311 09:17:12.467167 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cc589dfd-ede2-4eff-b9df-1f530ceeabfb-logs\") pod \"glance-default-external-api-0\" (UID: \"cc589dfd-ede2-4eff-b9df-1f530ceeabfb\") " pod="openstack/glance-default-external-api-0" Mar 11 09:17:12 crc kubenswrapper[4840]: I0311 09:17:12.467327 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc589dfd-ede2-4eff-b9df-1f530ceeabfb-config-data\") pod \"glance-default-external-api-0\" (UID: \"cc589dfd-ede2-4eff-b9df-1f530ceeabfb\") " pod="openstack/glance-default-external-api-0" Mar 11 09:17:12 crc kubenswrapper[4840]: I0311 09:17:12.467477 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/03be0af0-186c-4c90-91d5-9895f6d87534-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"03be0af0-186c-4c90-91d5-9895f6d87534\") " pod="openstack/glance-default-internal-api-0" Mar 11 09:17:12 crc kubenswrapper[4840]: I0311 09:17:12.467590 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03be0af0-186c-4c90-91d5-9895f6d87534-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"03be0af0-186c-4c90-91d5-9895f6d87534\") " pod="openstack/glance-default-internal-api-0" Mar 11 09:17:12 crc kubenswrapper[4840]: I0311 09:17:12.468952 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74b48649cc-qqk8g" Mar 11 09:17:12 crc kubenswrapper[4840]: I0311 09:17:12.471625 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cc589dfd-ede2-4eff-b9df-1f530ceeabfb-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"cc589dfd-ede2-4eff-b9df-1f530ceeabfb\") " pod="openstack/glance-default-external-api-0" Mar 11 09:17:12 crc kubenswrapper[4840]: I0311 09:17:12.477623 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cc589dfd-ede2-4eff-b9df-1f530ceeabfb-logs\") pod \"glance-default-external-api-0\" (UID: \"cc589dfd-ede2-4eff-b9df-1f530ceeabfb\") " pod="openstack/glance-default-external-api-0" Mar 11 09:17:12 crc kubenswrapper[4840]: I0311 09:17:12.477972 4840 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"cc589dfd-ede2-4eff-b9df-1f530ceeabfb\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-external-api-0" Mar 11 09:17:12 crc kubenswrapper[4840]: I0311 
09:17:12.491336 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc589dfd-ede2-4eff-b9df-1f530ceeabfb-config-data\") pod \"glance-default-external-api-0\" (UID: \"cc589dfd-ede2-4eff-b9df-1f530ceeabfb\") " pod="openstack/glance-default-external-api-0" Mar 11 09:17:12 crc kubenswrapper[4840]: I0311 09:17:12.494384 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc589dfd-ede2-4eff-b9df-1f530ceeabfb-scripts\") pod \"glance-default-external-api-0\" (UID: \"cc589dfd-ede2-4eff-b9df-1f530ceeabfb\") " pod="openstack/glance-default-external-api-0" Mar 11 09:17:12 crc kubenswrapper[4840]: I0311 09:17:12.509446 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gh2dn\" (UniqueName: \"kubernetes.io/projected/cc589dfd-ede2-4eff-b9df-1f530ceeabfb-kube-api-access-gh2dn\") pod \"glance-default-external-api-0\" (UID: \"cc589dfd-ede2-4eff-b9df-1f530ceeabfb\") " pod="openstack/glance-default-external-api-0" Mar 11 09:17:12 crc kubenswrapper[4840]: I0311 09:17:12.512384 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc589dfd-ede2-4eff-b9df-1f530ceeabfb-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"cc589dfd-ede2-4eff-b9df-1f530ceeabfb\") " pod="openstack/glance-default-external-api-0" Mar 11 09:17:12 crc kubenswrapper[4840]: I0311 09:17:12.569721 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/03be0af0-186c-4c90-91d5-9895f6d87534-scripts\") pod \"glance-default-internal-api-0\" (UID: \"03be0af0-186c-4c90-91d5-9895f6d87534\") " pod="openstack/glance-default-internal-api-0" Mar 11 09:17:12 crc kubenswrapper[4840]: I0311 09:17:12.569802 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/03be0af0-186c-4c90-91d5-9895f6d87534-config-data\") pod \"glance-default-internal-api-0\" (UID: \"03be0af0-186c-4c90-91d5-9895f6d87534\") " pod="openstack/glance-default-internal-api-0" Mar 11 09:17:12 crc kubenswrapper[4840]: I0311 09:17:12.569829 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"03be0af0-186c-4c90-91d5-9895f6d87534\") " pod="openstack/glance-default-internal-api-0" Mar 11 09:17:12 crc kubenswrapper[4840]: I0311 09:17:12.569895 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/03be0af0-186c-4c90-91d5-9895f6d87534-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"03be0af0-186c-4c90-91d5-9895f6d87534\") " pod="openstack/glance-default-internal-api-0" Mar 11 09:17:12 crc kubenswrapper[4840]: I0311 09:17:12.569914 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03be0af0-186c-4c90-91d5-9895f6d87534-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"03be0af0-186c-4c90-91d5-9895f6d87534\") " pod="openstack/glance-default-internal-api-0" Mar 11 09:17:12 crc kubenswrapper[4840]: I0311 09:17:12.569969 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/03be0af0-186c-4c90-91d5-9895f6d87534-logs\") pod \"glance-default-internal-api-0\" (UID: \"03be0af0-186c-4c90-91d5-9895f6d87534\") " pod="openstack/glance-default-internal-api-0" Mar 11 09:17:12 crc kubenswrapper[4840]: I0311 09:17:12.570014 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dnzx\" (UniqueName: 
\"kubernetes.io/projected/03be0af0-186c-4c90-91d5-9895f6d87534-kube-api-access-7dnzx\") pod \"glance-default-internal-api-0\" (UID: \"03be0af0-186c-4c90-91d5-9895f6d87534\") " pod="openstack/glance-default-internal-api-0" Mar 11 09:17:12 crc kubenswrapper[4840]: I0311 09:17:12.571028 4840 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"03be0af0-186c-4c90-91d5-9895f6d87534\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-internal-api-0" Mar 11 09:17:12 crc kubenswrapper[4840]: I0311 09:17:12.571832 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/03be0af0-186c-4c90-91d5-9895f6d87534-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"03be0af0-186c-4c90-91d5-9895f6d87534\") " pod="openstack/glance-default-internal-api-0" Mar 11 09:17:12 crc kubenswrapper[4840]: I0311 09:17:12.572986 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/03be0af0-186c-4c90-91d5-9895f6d87534-logs\") pod \"glance-default-internal-api-0\" (UID: \"03be0af0-186c-4c90-91d5-9895f6d87534\") " pod="openstack/glance-default-internal-api-0" Mar 11 09:17:12 crc kubenswrapper[4840]: I0311 09:17:12.596171 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/03be0af0-186c-4c90-91d5-9895f6d87534-scripts\") pod \"glance-default-internal-api-0\" (UID: \"03be0af0-186c-4c90-91d5-9895f6d87534\") " pod="openstack/glance-default-internal-api-0" Mar 11 09:17:12 crc kubenswrapper[4840]: I0311 09:17:12.601049 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03be0af0-186c-4c90-91d5-9895f6d87534-config-data\") pod 
\"glance-default-internal-api-0\" (UID: \"03be0af0-186c-4c90-91d5-9895f6d87534\") " pod="openstack/glance-default-internal-api-0" Mar 11 09:17:12 crc kubenswrapper[4840]: I0311 09:17:12.601139 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03be0af0-186c-4c90-91d5-9895f6d87534-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"03be0af0-186c-4c90-91d5-9895f6d87534\") " pod="openstack/glance-default-internal-api-0" Mar 11 09:17:12 crc kubenswrapper[4840]: I0311 09:17:12.605981 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dnzx\" (UniqueName: \"kubernetes.io/projected/03be0af0-186c-4c90-91d5-9895f6d87534-kube-api-access-7dnzx\") pod \"glance-default-internal-api-0\" (UID: \"03be0af0-186c-4c90-91d5-9895f6d87534\") " pod="openstack/glance-default-internal-api-0" Mar 11 09:17:12 crc kubenswrapper[4840]: I0311 09:17:12.620693 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"cc589dfd-ede2-4eff-b9df-1f530ceeabfb\") " pod="openstack/glance-default-external-api-0" Mar 11 09:17:12 crc kubenswrapper[4840]: I0311 09:17:12.649089 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-f7cv9"] Mar 11 09:17:12 crc kubenswrapper[4840]: I0311 09:17:12.659039 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"03be0af0-186c-4c90-91d5-9895f6d87534\") " pod="openstack/glance-default-internal-api-0" Mar 11 09:17:12 crc kubenswrapper[4840]: I0311 09:17:12.736015 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-dghqs"] Mar 11 09:17:12 crc kubenswrapper[4840]: W0311 
09:17:12.738420 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod68dbceea_1597_4236_b2d6_ca071bf86d5a.slice/crio-554ad8c77383acfd1421868f5d5b96e6290ba042f9822e64fa192433fce3a8c2 WatchSource:0}: Error finding container 554ad8c77383acfd1421868f5d5b96e6290ba042f9822e64fa192433fce3a8c2: Status 404 returned error can't find the container with id 554ad8c77383acfd1421868f5d5b96e6290ba042f9822e64fa192433fce3a8c2 Mar 11 09:17:12 crc kubenswrapper[4840]: I0311 09:17:12.841867 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 11 09:17:12 crc kubenswrapper[4840]: I0311 09:17:12.917204 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 11 09:17:12 crc kubenswrapper[4840]: I0311 09:17:12.927128 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 11 09:17:12 crc kubenswrapper[4840]: I0311 09:17:12.937626 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-ffbd547-24lch"] Mar 11 09:17:12 crc kubenswrapper[4840]: W0311 09:17:12.951786 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod02a9854a_9978_4414_972d_37c0b579b03b.slice/crio-ee82f8bf6300d739d7f47e43679cfc6eb3d0f04478c9feb1e89fdac90b54a890 WatchSource:0}: Error finding container ee82f8bf6300d739d7f47e43679cfc6eb3d0f04478c9feb1e89fdac90b54a890: Status 404 returned error can't find the container with id ee82f8bf6300d739d7f47e43679cfc6eb3d0f04478c9feb1e89fdac90b54a890 Mar 11 09:17:13 crc kubenswrapper[4840]: I0311 09:17:13.111793 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-r5wvc"] Mar 11 09:17:13 crc kubenswrapper[4840]: I0311 09:17:13.131807 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/placement-db-sync-f7z67"] Mar 11 09:17:13 crc kubenswrapper[4840]: I0311 09:17:13.156238 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-k7tgg"] Mar 11 09:17:13 crc kubenswrapper[4840]: I0311 09:17:13.170667 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-dghqs" event={"ID":"68dbceea-1597-4236-b2d6-ca071bf86d5a","Type":"ContainerStarted","Data":"b4a7c49eb1ab96ca872e6f2ffbdf1839b577c44a108a40de7c18a2cb34f4e9f7"} Mar 11 09:17:13 crc kubenswrapper[4840]: I0311 09:17:13.170735 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-dghqs" event={"ID":"68dbceea-1597-4236-b2d6-ca071bf86d5a","Type":"ContainerStarted","Data":"554ad8c77383acfd1421868f5d5b96e6290ba042f9822e64fa192433fce3a8c2"} Mar 11 09:17:13 crc kubenswrapper[4840]: I0311 09:17:13.216174 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f1c7d7f4-dc60-4703-b6c3-6cd626db11af","Type":"ContainerStarted","Data":"22f7a187c7f01516652e793a521148fe2ac2ab6befee1d8ed2d686c2bc8b8adc"} Mar 11 09:17:13 crc kubenswrapper[4840]: I0311 09:17:13.220128 4840 generic.go:334] "Generic (PLEG): container finished" podID="0aed3b47-186d-4af2-b83e-c04526a43095" containerID="d497f54d8fae9f2a73679effc95823f4140980c021583551b3be7bba6bfbf0a9" exitCode=0 Mar 11 09:17:13 crc kubenswrapper[4840]: I0311 09:17:13.220201 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-8dnn2" event={"ID":"0aed3b47-186d-4af2-b83e-c04526a43095","Type":"ContainerDied","Data":"d497f54d8fae9f2a73679effc95823f4140980c021583551b3be7bba6bfbf0a9"} Mar 11 09:17:13 crc kubenswrapper[4840]: I0311 09:17:13.241184 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"02a9854a-9978-4414-972d-37c0b579b03b","Type":"ContainerStarted","Data":"ee82f8bf6300d739d7f47e43679cfc6eb3d0f04478c9feb1e89fdac90b54a890"} Mar 11 
09:17:13 crc kubenswrapper[4840]: I0311 09:17:13.247291 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-dghqs" podStartSLOduration=2.247259551 podStartE2EDuration="2.247259551s" podCreationTimestamp="2026-03-11 09:17:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:17:13.197444698 +0000 UTC m=+1231.863114513" watchObservedRunningTime="2026-03-11 09:17:13.247259551 +0000 UTC m=+1231.912929366" Mar 11 09:17:13 crc kubenswrapper[4840]: I0311 09:17:13.256791 4840 generic.go:334] "Generic (PLEG): container finished" podID="6d25eab0-c4c4-4227-aeaa-2cb67a78e8b1" containerID="836b9ee8d97fbda82441dfa2c250215584a3b55e48c06b48c457133bdcd06bc1" exitCode=0 Mar 11 09:17:13 crc kubenswrapper[4840]: I0311 09:17:13.256883 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f58d6bb6f-xrjk6" event={"ID":"6d25eab0-c4c4-4227-aeaa-2cb67a78e8b1","Type":"ContainerDied","Data":"836b9ee8d97fbda82441dfa2c250215584a3b55e48c06b48c457133bdcd06bc1"} Mar 11 09:17:13 crc kubenswrapper[4840]: I0311 09:17:13.269478 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-ffbd547-24lch" event={"ID":"ed4c2dfa-3010-4eb1-8ad1-e7e183ec4ca3","Type":"ContainerStarted","Data":"4b917085dba1648e4099a8fea040971f4cf96b38ffa6c19fbf404fd69644e3b6"} Mar 11 09:17:13 crc kubenswrapper[4840]: I0311 09:17:13.301759 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-f7cv9" event={"ID":"30ca9c9c-f433-4ecd-8049-2c1d0f4f39ae","Type":"ContainerStarted","Data":"d21c0f90e51840f3f77422c2bb05bbcae4d2b8d27155ff011bccb70548bf78f9"} Mar 11 09:17:13 crc kubenswrapper[4840]: I0311 09:17:13.301803 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-f7cv9" 
event={"ID":"30ca9c9c-f433-4ecd-8049-2c1d0f4f39ae","Type":"ContainerStarted","Data":"e182579b33db7ce0269e37927f313d401d274b9eb3441f6ad2b72c8ae43856e7"} Mar 11 09:17:13 crc kubenswrapper[4840]: I0311 09:17:13.322151 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74b48649cc-qqk8g"] Mar 11 09:17:13 crc kubenswrapper[4840]: I0311 09:17:13.482682 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-f7cv9" podStartSLOduration=2.482658172 podStartE2EDuration="2.482658172s" podCreationTimestamp="2026-03-11 09:17:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:17:13.33469649 +0000 UTC m=+1232.000366305" watchObservedRunningTime="2026-03-11 09:17:13.482658172 +0000 UTC m=+1232.148327987" Mar 11 09:17:13 crc kubenswrapper[4840]: I0311 09:17:13.497150 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 11 09:17:13 crc kubenswrapper[4840]: I0311 09:17:13.587915 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 11 09:17:13 crc kubenswrapper[4840]: W0311 09:17:13.618816 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcc589dfd_ede2_4eff_b9df_1f530ceeabfb.slice/crio-3c1d80befcc4ce8a0191ee3669ddbe7efc015da6f50911b61720b14825dade0a WatchSource:0}: Error finding container 3c1d80befcc4ce8a0191ee3669ddbe7efc015da6f50911b61720b14825dade0a: Status 404 returned error can't find the container with id 3c1d80befcc4ce8a0191ee3669ddbe7efc015da6f50911b61720b14825dade0a Mar 11 09:17:13 crc kubenswrapper[4840]: I0311 09:17:13.656215 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7f58d6bb6f-xrjk6" Mar 11 09:17:13 crc kubenswrapper[4840]: I0311 09:17:13.816080 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jm4zr\" (UniqueName: \"kubernetes.io/projected/6d25eab0-c4c4-4227-aeaa-2cb67a78e8b1-kube-api-access-jm4zr\") pod \"6d25eab0-c4c4-4227-aeaa-2cb67a78e8b1\" (UID: \"6d25eab0-c4c4-4227-aeaa-2cb67a78e8b1\") " Mar 11 09:17:13 crc kubenswrapper[4840]: I0311 09:17:13.816139 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d25eab0-c4c4-4227-aeaa-2cb67a78e8b1-config\") pod \"6d25eab0-c4c4-4227-aeaa-2cb67a78e8b1\" (UID: \"6d25eab0-c4c4-4227-aeaa-2cb67a78e8b1\") " Mar 11 09:17:13 crc kubenswrapper[4840]: I0311 09:17:13.816263 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6d25eab0-c4c4-4227-aeaa-2cb67a78e8b1-dns-svc\") pod \"6d25eab0-c4c4-4227-aeaa-2cb67a78e8b1\" (UID: \"6d25eab0-c4c4-4227-aeaa-2cb67a78e8b1\") " Mar 11 09:17:13 crc kubenswrapper[4840]: I0311 09:17:13.817237 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6d25eab0-c4c4-4227-aeaa-2cb67a78e8b1-ovsdbserver-sb\") pod \"6d25eab0-c4c4-4227-aeaa-2cb67a78e8b1\" (UID: \"6d25eab0-c4c4-4227-aeaa-2cb67a78e8b1\") " Mar 11 09:17:13 crc kubenswrapper[4840]: I0311 09:17:13.817312 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6d25eab0-c4c4-4227-aeaa-2cb67a78e8b1-ovsdbserver-nb\") pod \"6d25eab0-c4c4-4227-aeaa-2cb67a78e8b1\" (UID: \"6d25eab0-c4c4-4227-aeaa-2cb67a78e8b1\") " Mar 11 09:17:13 crc kubenswrapper[4840]: I0311 09:17:13.830771 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/6d25eab0-c4c4-4227-aeaa-2cb67a78e8b1-kube-api-access-jm4zr" (OuterVolumeSpecName: "kube-api-access-jm4zr") pod "6d25eab0-c4c4-4227-aeaa-2cb67a78e8b1" (UID: "6d25eab0-c4c4-4227-aeaa-2cb67a78e8b1"). InnerVolumeSpecName "kube-api-access-jm4zr". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:17:13 crc kubenswrapper[4840]: I0311 09:17:13.859739 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d25eab0-c4c4-4227-aeaa-2cb67a78e8b1-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6d25eab0-c4c4-4227-aeaa-2cb67a78e8b1" (UID: "6d25eab0-c4c4-4227-aeaa-2cb67a78e8b1"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:17:13 crc kubenswrapper[4840]: I0311 09:17:13.866606 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d25eab0-c4c4-4227-aeaa-2cb67a78e8b1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6d25eab0-c4c4-4227-aeaa-2cb67a78e8b1" (UID: "6d25eab0-c4c4-4227-aeaa-2cb67a78e8b1"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:17:13 crc kubenswrapper[4840]: I0311 09:17:13.874056 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d25eab0-c4c4-4227-aeaa-2cb67a78e8b1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6d25eab0-c4c4-4227-aeaa-2cb67a78e8b1" (UID: "6d25eab0-c4c4-4227-aeaa-2cb67a78e8b1"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:17:13 crc kubenswrapper[4840]: I0311 09:17:13.895586 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d25eab0-c4c4-4227-aeaa-2cb67a78e8b1-config" (OuterVolumeSpecName: "config") pod "6d25eab0-c4c4-4227-aeaa-2cb67a78e8b1" (UID: "6d25eab0-c4c4-4227-aeaa-2cb67a78e8b1"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:17:13 crc kubenswrapper[4840]: I0311 09:17:13.922299 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jm4zr\" (UniqueName: \"kubernetes.io/projected/6d25eab0-c4c4-4227-aeaa-2cb67a78e8b1-kube-api-access-jm4zr\") on node \"crc\" DevicePath \"\"" Mar 11 09:17:13 crc kubenswrapper[4840]: I0311 09:17:13.922373 4840 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d25eab0-c4c4-4227-aeaa-2cb67a78e8b1-config\") on node \"crc\" DevicePath \"\"" Mar 11 09:17:13 crc kubenswrapper[4840]: I0311 09:17:13.922383 4840 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6d25eab0-c4c4-4227-aeaa-2cb67a78e8b1-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 11 09:17:13 crc kubenswrapper[4840]: I0311 09:17:13.922393 4840 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6d25eab0-c4c4-4227-aeaa-2cb67a78e8b1-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 11 09:17:13 crc kubenswrapper[4840]: I0311 09:17:13.922404 4840 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6d25eab0-c4c4-4227-aeaa-2cb67a78e8b1-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 11 09:17:14 crc kubenswrapper[4840]: I0311 09:17:14.359294 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-r5wvc" event={"ID":"eb9a61ad-a8fb-4968-b32a-ff5756add27b","Type":"ContainerStarted","Data":"458533b995fb6adc2119952be162e4cfec45297d75bb9ff07b988c0fb474e5a9"} Mar 11 09:17:14 crc kubenswrapper[4840]: I0311 09:17:14.364941 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"03be0af0-186c-4c90-91d5-9895f6d87534","Type":"ContainerStarted","Data":"02cb91a6f3b13d173ddab7a0d07129849408fbc7d8b808375cb0660834b6dcdd"} Mar 11 09:17:14 crc kubenswrapper[4840]: I0311 09:17:14.368542 4840 generic.go:334] "Generic (PLEG): container finished" podID="ed4c2dfa-3010-4eb1-8ad1-e7e183ec4ca3" containerID="d95207171f15efedecedbe710f939ba187a64cfd7dc462fe67513e720c47dc4b" exitCode=0 Mar 11 09:17:14 crc kubenswrapper[4840]: I0311 09:17:14.368638 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-ffbd547-24lch" event={"ID":"ed4c2dfa-3010-4eb1-8ad1-e7e183ec4ca3","Type":"ContainerDied","Data":"d95207171f15efedecedbe710f939ba187a64cfd7dc462fe67513e720c47dc4b"} Mar 11 09:17:14 crc kubenswrapper[4840]: I0311 09:17:14.375717 4840 generic.go:334] "Generic (PLEG): container finished" podID="1bcd01ea-4ad0-4aba-8acd-c509e2e5225d" containerID="930fdc17eea09a6c237d3c90b6dd35e9890981d6f7813e5ca2fd41393daedef2" exitCode=0 Mar 11 09:17:14 crc kubenswrapper[4840]: I0311 09:17:14.375774 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74b48649cc-qqk8g" event={"ID":"1bcd01ea-4ad0-4aba-8acd-c509e2e5225d","Type":"ContainerDied","Data":"930fdc17eea09a6c237d3c90b6dd35e9890981d6f7813e5ca2fd41393daedef2"} Mar 11 09:17:14 crc kubenswrapper[4840]: I0311 09:17:14.375797 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74b48649cc-qqk8g" event={"ID":"1bcd01ea-4ad0-4aba-8acd-c509e2e5225d","Type":"ContainerStarted","Data":"113d4b742dd8ae19b28f146f0585ccc5cc9c6536af5db33c3900fb0851a5daf1"} Mar 11 09:17:14 crc kubenswrapper[4840]: I0311 09:17:14.378833 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"cc589dfd-ede2-4eff-b9df-1f530ceeabfb","Type":"ContainerStarted","Data":"3c1d80befcc4ce8a0191ee3669ddbe7efc015da6f50911b61720b14825dade0a"} Mar 11 09:17:14 crc kubenswrapper[4840]: I0311 09:17:14.382760 4840 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-k7tgg" event={"ID":"adf5efde-cb19-4870-9c8e-e7e139523238","Type":"ContainerStarted","Data":"f91e898c850f5653fa3835a7ec57434f65344b67721200e7cc0a02b60d136a97"} Mar 11 09:17:14 crc kubenswrapper[4840]: I0311 09:17:14.392766 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-f7z67" event={"ID":"573b362e-582b-43f0-afae-c038cf95f625","Type":"ContainerStarted","Data":"963dc77ee7668becb5b7fc2217ec39d066f33e2b0f510e5804d226bd63bc8ed6"} Mar 11 09:17:14 crc kubenswrapper[4840]: I0311 09:17:14.398360 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f58d6bb6f-xrjk6" Mar 11 09:17:14 crc kubenswrapper[4840]: I0311 09:17:14.400791 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f58d6bb6f-xrjk6" event={"ID":"6d25eab0-c4c4-4227-aeaa-2cb67a78e8b1","Type":"ContainerDied","Data":"f1ee8efee92daf9aaa09ae341ed962140ff5087aa7162d3f0b107b4b7744246f"} Mar 11 09:17:14 crc kubenswrapper[4840]: I0311 09:17:14.400860 4840 scope.go:117] "RemoveContainer" containerID="836b9ee8d97fbda82441dfa2c250215584a3b55e48c06b48c457133bdcd06bc1" Mar 11 09:17:14 crc kubenswrapper[4840]: I0311 09:17:14.468443 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f58d6bb6f-xrjk6"] Mar 11 09:17:14 crc kubenswrapper[4840]: I0311 09:17:14.475818 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7f58d6bb6f-xrjk6"] Mar 11 09:17:15 crc kubenswrapper[4840]: I0311 09:17:15.081534 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 11 09:17:15 crc kubenswrapper[4840]: I0311 09:17:15.216922 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 11 09:17:15 crc kubenswrapper[4840]: I0311 09:17:15.223323 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/glance-default-internal-api-0"] Mar 11 09:17:15 crc kubenswrapper[4840]: I0311 09:17:15.426400 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"03be0af0-186c-4c90-91d5-9895f6d87534","Type":"ContainerStarted","Data":"7bdb76b58979f6c59c045a8bb37ccd8a2f733bde7988ef6c779293c4330d3d38"} Mar 11 09:17:15 crc kubenswrapper[4840]: I0311 09:17:15.433243 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"cc589dfd-ede2-4eff-b9df-1f530ceeabfb","Type":"ContainerStarted","Data":"b189cbaf0d57dfc40acde85e58dca7c846b253a117e7b61c6343c88f00c64c74"} Mar 11 09:17:15 crc kubenswrapper[4840]: I0311 09:17:15.950974 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-ffbd547-24lch" Mar 11 09:17:15 crc kubenswrapper[4840]: I0311 09:17:15.957255 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-8dnn2" Mar 11 09:17:16 crc kubenswrapper[4840]: I0311 09:17:16.079569 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d25eab0-c4c4-4227-aeaa-2cb67a78e8b1" path="/var/lib/kubelet/pods/6d25eab0-c4c4-4227-aeaa-2cb67a78e8b1/volumes" Mar 11 09:17:16 crc kubenswrapper[4840]: I0311 09:17:16.084640 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nlntd\" (UniqueName: \"kubernetes.io/projected/ed4c2dfa-3010-4eb1-8ad1-e7e183ec4ca3-kube-api-access-nlntd\") pod \"ed4c2dfa-3010-4eb1-8ad1-e7e183ec4ca3\" (UID: \"ed4c2dfa-3010-4eb1-8ad1-e7e183ec4ca3\") " Mar 11 09:17:16 crc kubenswrapper[4840]: I0311 09:17:16.084747 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mrbmv\" (UniqueName: \"kubernetes.io/projected/0aed3b47-186d-4af2-b83e-c04526a43095-kube-api-access-mrbmv\") pod \"0aed3b47-186d-4af2-b83e-c04526a43095\" 
(UID: \"0aed3b47-186d-4af2-b83e-c04526a43095\") " Mar 11 09:17:16 crc kubenswrapper[4840]: I0311 09:17:16.084779 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ed4c2dfa-3010-4eb1-8ad1-e7e183ec4ca3-dns-svc\") pod \"ed4c2dfa-3010-4eb1-8ad1-e7e183ec4ca3\" (UID: \"ed4c2dfa-3010-4eb1-8ad1-e7e183ec4ca3\") " Mar 11 09:17:16 crc kubenswrapper[4840]: I0311 09:17:16.084869 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ed4c2dfa-3010-4eb1-8ad1-e7e183ec4ca3-ovsdbserver-sb\") pod \"ed4c2dfa-3010-4eb1-8ad1-e7e183ec4ca3\" (UID: \"ed4c2dfa-3010-4eb1-8ad1-e7e183ec4ca3\") " Mar 11 09:17:16 crc kubenswrapper[4840]: I0311 09:17:16.085131 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ed4c2dfa-3010-4eb1-8ad1-e7e183ec4ca3-ovsdbserver-nb\") pod \"ed4c2dfa-3010-4eb1-8ad1-e7e183ec4ca3\" (UID: \"ed4c2dfa-3010-4eb1-8ad1-e7e183ec4ca3\") " Mar 11 09:17:16 crc kubenswrapper[4840]: I0311 09:17:16.085169 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0aed3b47-186d-4af2-b83e-c04526a43095-operator-scripts\") pod \"0aed3b47-186d-4af2-b83e-c04526a43095\" (UID: \"0aed3b47-186d-4af2-b83e-c04526a43095\") " Mar 11 09:17:16 crc kubenswrapper[4840]: I0311 09:17:16.085267 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed4c2dfa-3010-4eb1-8ad1-e7e183ec4ca3-config\") pod \"ed4c2dfa-3010-4eb1-8ad1-e7e183ec4ca3\" (UID: \"ed4c2dfa-3010-4eb1-8ad1-e7e183ec4ca3\") " Mar 11 09:17:16 crc kubenswrapper[4840]: I0311 09:17:16.087623 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/0aed3b47-186d-4af2-b83e-c04526a43095-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0aed3b47-186d-4af2-b83e-c04526a43095" (UID: "0aed3b47-186d-4af2-b83e-c04526a43095"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:17:16 crc kubenswrapper[4840]: I0311 09:17:16.097087 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0aed3b47-186d-4af2-b83e-c04526a43095-kube-api-access-mrbmv" (OuterVolumeSpecName: "kube-api-access-mrbmv") pod "0aed3b47-186d-4af2-b83e-c04526a43095" (UID: "0aed3b47-186d-4af2-b83e-c04526a43095"). InnerVolumeSpecName "kube-api-access-mrbmv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:17:16 crc kubenswrapper[4840]: I0311 09:17:16.097149 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed4c2dfa-3010-4eb1-8ad1-e7e183ec4ca3-kube-api-access-nlntd" (OuterVolumeSpecName: "kube-api-access-nlntd") pod "ed4c2dfa-3010-4eb1-8ad1-e7e183ec4ca3" (UID: "ed4c2dfa-3010-4eb1-8ad1-e7e183ec4ca3"). InnerVolumeSpecName "kube-api-access-nlntd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:17:16 crc kubenswrapper[4840]: I0311 09:17:16.144857 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed4c2dfa-3010-4eb1-8ad1-e7e183ec4ca3-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ed4c2dfa-3010-4eb1-8ad1-e7e183ec4ca3" (UID: "ed4c2dfa-3010-4eb1-8ad1-e7e183ec4ca3"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:17:16 crc kubenswrapper[4840]: I0311 09:17:16.157883 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed4c2dfa-3010-4eb1-8ad1-e7e183ec4ca3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ed4c2dfa-3010-4eb1-8ad1-e7e183ec4ca3" (UID: "ed4c2dfa-3010-4eb1-8ad1-e7e183ec4ca3"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:17:16 crc kubenswrapper[4840]: I0311 09:17:16.169913 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed4c2dfa-3010-4eb1-8ad1-e7e183ec4ca3-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ed4c2dfa-3010-4eb1-8ad1-e7e183ec4ca3" (UID: "ed4c2dfa-3010-4eb1-8ad1-e7e183ec4ca3"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:17:16 crc kubenswrapper[4840]: I0311 09:17:16.177404 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed4c2dfa-3010-4eb1-8ad1-e7e183ec4ca3-config" (OuterVolumeSpecName: "config") pod "ed4c2dfa-3010-4eb1-8ad1-e7e183ec4ca3" (UID: "ed4c2dfa-3010-4eb1-8ad1-e7e183ec4ca3"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:17:16 crc kubenswrapper[4840]: I0311 09:17:16.189285 4840 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed4c2dfa-3010-4eb1-8ad1-e7e183ec4ca3-config\") on node \"crc\" DevicePath \"\"" Mar 11 09:17:16 crc kubenswrapper[4840]: I0311 09:17:16.189331 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nlntd\" (UniqueName: \"kubernetes.io/projected/ed4c2dfa-3010-4eb1-8ad1-e7e183ec4ca3-kube-api-access-nlntd\") on node \"crc\" DevicePath \"\"" Mar 11 09:17:16 crc kubenswrapper[4840]: I0311 09:17:16.189347 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mrbmv\" (UniqueName: \"kubernetes.io/projected/0aed3b47-186d-4af2-b83e-c04526a43095-kube-api-access-mrbmv\") on node \"crc\" DevicePath \"\"" Mar 11 09:17:16 crc kubenswrapper[4840]: I0311 09:17:16.189359 4840 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ed4c2dfa-3010-4eb1-8ad1-e7e183ec4ca3-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 11 09:17:16 crc kubenswrapper[4840]: I0311 09:17:16.189372 4840 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ed4c2dfa-3010-4eb1-8ad1-e7e183ec4ca3-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 11 09:17:16 crc kubenswrapper[4840]: I0311 09:17:16.189384 4840 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ed4c2dfa-3010-4eb1-8ad1-e7e183ec4ca3-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 11 09:17:16 crc kubenswrapper[4840]: I0311 09:17:16.189395 4840 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0aed3b47-186d-4af2-b83e-c04526a43095-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 11 09:17:16 crc kubenswrapper[4840]: I0311 09:17:16.452358 
4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-ffbd547-24lch" event={"ID":"ed4c2dfa-3010-4eb1-8ad1-e7e183ec4ca3","Type":"ContainerDied","Data":"4b917085dba1648e4099a8fea040971f4cf96b38ffa6c19fbf404fd69644e3b6"} Mar 11 09:17:16 crc kubenswrapper[4840]: I0311 09:17:16.452811 4840 scope.go:117] "RemoveContainer" containerID="d95207171f15efedecedbe710f939ba187a64cfd7dc462fe67513e720c47dc4b" Mar 11 09:17:16 crc kubenswrapper[4840]: I0311 09:17:16.452677 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-ffbd547-24lch" Mar 11 09:17:16 crc kubenswrapper[4840]: I0311 09:17:16.458723 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74b48649cc-qqk8g" event={"ID":"1bcd01ea-4ad0-4aba-8acd-c509e2e5225d","Type":"ContainerStarted","Data":"912b29bc7933f8813ea8a407e5436ed074c596d88ce388b752d3bede3b50533d"} Mar 11 09:17:16 crc kubenswrapper[4840]: I0311 09:17:16.459233 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-74b48649cc-qqk8g" Mar 11 09:17:16 crc kubenswrapper[4840]: I0311 09:17:16.473342 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f1c7d7f4-dc60-4703-b6c3-6cd626db11af","Type":"ContainerStarted","Data":"322dd29ccdd62a25f4af6457497d29a55dbd6bb69515090d89cfac3caafffe97"} Mar 11 09:17:16 crc kubenswrapper[4840]: I0311 09:17:16.478715 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-8dnn2" event={"ID":"0aed3b47-186d-4af2-b83e-c04526a43095","Type":"ContainerDied","Data":"93cc0229be69167f4799b249425bf5289654f9fc1ebbe0eb382cbe75b91066e1"} Mar 11 09:17:16 crc kubenswrapper[4840]: I0311 09:17:16.478766 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="93cc0229be69167f4799b249425bf5289654f9fc1ebbe0eb382cbe75b91066e1" Mar 11 09:17:16 crc kubenswrapper[4840]: I0311 
09:17:16.478820 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-8dnn2" Mar 11 09:17:16 crc kubenswrapper[4840]: I0311 09:17:16.482373 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-74b48649cc-qqk8g" podStartSLOduration=5.482347632 podStartE2EDuration="5.482347632s" podCreationTimestamp="2026-03-11 09:17:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:17:16.477180992 +0000 UTC m=+1235.142850807" watchObservedRunningTime="2026-03-11 09:17:16.482347632 +0000 UTC m=+1235.148017437" Mar 11 09:17:16 crc kubenswrapper[4840]: I0311 09:17:16.539691 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-ffbd547-24lch"] Mar 11 09:17:16 crc kubenswrapper[4840]: I0311 09:17:16.553614 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-ffbd547-24lch"] Mar 11 09:17:17 crc kubenswrapper[4840]: I0311 09:17:17.503066 4840 generic.go:334] "Generic (PLEG): container finished" podID="30ca9c9c-f433-4ecd-8049-2c1d0f4f39ae" containerID="d21c0f90e51840f3f77422c2bb05bbcae4d2b8d27155ff011bccb70548bf78f9" exitCode=0 Mar 11 09:17:17 crc kubenswrapper[4840]: I0311 09:17:17.503157 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-f7cv9" event={"ID":"30ca9c9c-f433-4ecd-8049-2c1d0f4f39ae","Type":"ContainerDied","Data":"d21c0f90e51840f3f77422c2bb05bbcae4d2b8d27155ff011bccb70548bf78f9"} Mar 11 09:17:17 crc kubenswrapper[4840]: I0311 09:17:17.507355 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"cc589dfd-ede2-4eff-b9df-1f530ceeabfb","Type":"ContainerStarted","Data":"636f6abc77dd1f3ce24ed033b717b5ac63c48352718bc3f1258e5e024fd36fb4"} Mar 11 09:17:17 crc kubenswrapper[4840]: I0311 09:17:17.507413 4840 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="cc589dfd-ede2-4eff-b9df-1f530ceeabfb" containerName="glance-log" containerID="cri-o://b189cbaf0d57dfc40acde85e58dca7c846b253a117e7b61c6343c88f00c64c74" gracePeriod=30 Mar 11 09:17:17 crc kubenswrapper[4840]: I0311 09:17:17.507494 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="cc589dfd-ede2-4eff-b9df-1f530ceeabfb" containerName="glance-httpd" containerID="cri-o://636f6abc77dd1f3ce24ed033b717b5ac63c48352718bc3f1258e5e024fd36fb4" gracePeriod=30 Mar 11 09:17:17 crc kubenswrapper[4840]: I0311 09:17:17.516427 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f1c7d7f4-dc60-4703-b6c3-6cd626db11af","Type":"ContainerStarted","Data":"e1983bac8c82d7bd2cb300b8c29f24106251022a1b8393dbcfa2f98ffc26c55b"} Mar 11 09:17:17 crc kubenswrapper[4840]: I0311 09:17:17.516491 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f1c7d7f4-dc60-4703-b6c3-6cd626db11af","Type":"ContainerStarted","Data":"51312dd1b22898c3fcfef3be786a86dd0bc450bace061ed340d18f1903ce9f72"} Mar 11 09:17:17 crc kubenswrapper[4840]: I0311 09:17:17.532953 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"03be0af0-186c-4c90-91d5-9895f6d87534","Type":"ContainerStarted","Data":"2784ee8b7cd4db78078c936947f2fd2782696aab173f8f098b21886da085a55e"} Mar 11 09:17:17 crc kubenswrapper[4840]: I0311 09:17:17.533201 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="03be0af0-186c-4c90-91d5-9895f6d87534" containerName="glance-log" containerID="cri-o://7bdb76b58979f6c59c045a8bb37ccd8a2f733bde7988ef6c779293c4330d3d38" gracePeriod=30 Mar 11 09:17:17 crc kubenswrapper[4840]: I0311 09:17:17.533317 4840 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="03be0af0-186c-4c90-91d5-9895f6d87534" containerName="glance-httpd" containerID="cri-o://2784ee8b7cd4db78078c936947f2fd2782696aab173f8f098b21886da085a55e" gracePeriod=30 Mar 11 09:17:17 crc kubenswrapper[4840]: I0311 09:17:17.554504 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.554468306 podStartE2EDuration="6.554468306s" podCreationTimestamp="2026-03-11 09:17:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:17:17.546393143 +0000 UTC m=+1236.212062958" watchObservedRunningTime="2026-03-11 09:17:17.554468306 +0000 UTC m=+1236.220138121" Mar 11 09:17:17 crc kubenswrapper[4840]: I0311 09:17:17.574148 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.57412144 podStartE2EDuration="6.57412144s" podCreationTimestamp="2026-03-11 09:17:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:17:17.567321909 +0000 UTC m=+1236.232991724" watchObservedRunningTime="2026-03-11 09:17:17.57412144 +0000 UTC m=+1236.239791265" Mar 11 09:17:18 crc kubenswrapper[4840]: I0311 09:17:18.077637 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed4c2dfa-3010-4eb1-8ad1-e7e183ec4ca3" path="/var/lib/kubelet/pods/ed4c2dfa-3010-4eb1-8ad1-e7e183ec4ca3/volumes" Mar 11 09:17:18 crc kubenswrapper[4840]: I0311 09:17:18.550119 4840 generic.go:334] "Generic (PLEG): container finished" podID="cc589dfd-ede2-4eff-b9df-1f530ceeabfb" containerID="636f6abc77dd1f3ce24ed033b717b5ac63c48352718bc3f1258e5e024fd36fb4" exitCode=0 Mar 11 09:17:18 crc kubenswrapper[4840]: I0311 
09:17:18.550154 4840 generic.go:334] "Generic (PLEG): container finished" podID="cc589dfd-ede2-4eff-b9df-1f530ceeabfb" containerID="b189cbaf0d57dfc40acde85e58dca7c846b253a117e7b61c6343c88f00c64c74" exitCode=143 Mar 11 09:17:18 crc kubenswrapper[4840]: I0311 09:17:18.550209 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"cc589dfd-ede2-4eff-b9df-1f530ceeabfb","Type":"ContainerDied","Data":"636f6abc77dd1f3ce24ed033b717b5ac63c48352718bc3f1258e5e024fd36fb4"} Mar 11 09:17:18 crc kubenswrapper[4840]: I0311 09:17:18.550342 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"cc589dfd-ede2-4eff-b9df-1f530ceeabfb","Type":"ContainerDied","Data":"b189cbaf0d57dfc40acde85e58dca7c846b253a117e7b61c6343c88f00c64c74"} Mar 11 09:17:18 crc kubenswrapper[4840]: I0311 09:17:18.553159 4840 generic.go:334] "Generic (PLEG): container finished" podID="03be0af0-186c-4c90-91d5-9895f6d87534" containerID="2784ee8b7cd4db78078c936947f2fd2782696aab173f8f098b21886da085a55e" exitCode=0 Mar 11 09:17:18 crc kubenswrapper[4840]: I0311 09:17:18.553225 4840 generic.go:334] "Generic (PLEG): container finished" podID="03be0af0-186c-4c90-91d5-9895f6d87534" containerID="7bdb76b58979f6c59c045a8bb37ccd8a2f733bde7988ef6c779293c4330d3d38" exitCode=143 Mar 11 09:17:18 crc kubenswrapper[4840]: I0311 09:17:18.553253 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"03be0af0-186c-4c90-91d5-9895f6d87534","Type":"ContainerDied","Data":"2784ee8b7cd4db78078c936947f2fd2782696aab173f8f098b21886da085a55e"} Mar 11 09:17:18 crc kubenswrapper[4840]: I0311 09:17:18.553315 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"03be0af0-186c-4c90-91d5-9895f6d87534","Type":"ContainerDied","Data":"7bdb76b58979f6c59c045a8bb37ccd8a2f733bde7988ef6c779293c4330d3d38"} Mar 11 09:17:21 crc 
kubenswrapper[4840]: I0311 09:17:21.592743 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-f7cv9" event={"ID":"30ca9c9c-f433-4ecd-8049-2c1d0f4f39ae","Type":"ContainerDied","Data":"e182579b33db7ce0269e37927f313d401d274b9eb3441f6ad2b72c8ae43856e7"} Mar 11 09:17:21 crc kubenswrapper[4840]: I0311 09:17:21.593519 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e182579b33db7ce0269e37927f313d401d274b9eb3441f6ad2b72c8ae43856e7" Mar 11 09:17:21 crc kubenswrapper[4840]: I0311 09:17:21.610952 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-f7cv9" Mar 11 09:17:21 crc kubenswrapper[4840]: I0311 09:17:21.712195 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30ca9c9c-f433-4ecd-8049-2c1d0f4f39ae-config-data\") pod \"30ca9c9c-f433-4ecd-8049-2c1d0f4f39ae\" (UID: \"30ca9c9c-f433-4ecd-8049-2c1d0f4f39ae\") " Mar 11 09:17:21 crc kubenswrapper[4840]: I0311 09:17:21.712443 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/30ca9c9c-f433-4ecd-8049-2c1d0f4f39ae-credential-keys\") pod \"30ca9c9c-f433-4ecd-8049-2c1d0f4f39ae\" (UID: \"30ca9c9c-f433-4ecd-8049-2c1d0f4f39ae\") " Mar 11 09:17:21 crc kubenswrapper[4840]: I0311 09:17:21.712583 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30ca9c9c-f433-4ecd-8049-2c1d0f4f39ae-combined-ca-bundle\") pod \"30ca9c9c-f433-4ecd-8049-2c1d0f4f39ae\" (UID: \"30ca9c9c-f433-4ecd-8049-2c1d0f4f39ae\") " Mar 11 09:17:21 crc kubenswrapper[4840]: I0311 09:17:21.712609 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30ca9c9c-f433-4ecd-8049-2c1d0f4f39ae-scripts\") pod 
\"30ca9c9c-f433-4ecd-8049-2c1d0f4f39ae\" (UID: \"30ca9c9c-f433-4ecd-8049-2c1d0f4f39ae\") " Mar 11 09:17:21 crc kubenswrapper[4840]: I0311 09:17:21.712655 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ht9v8\" (UniqueName: \"kubernetes.io/projected/30ca9c9c-f433-4ecd-8049-2c1d0f4f39ae-kube-api-access-ht9v8\") pod \"30ca9c9c-f433-4ecd-8049-2c1d0f4f39ae\" (UID: \"30ca9c9c-f433-4ecd-8049-2c1d0f4f39ae\") " Mar 11 09:17:21 crc kubenswrapper[4840]: I0311 09:17:21.712688 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/30ca9c9c-f433-4ecd-8049-2c1d0f4f39ae-fernet-keys\") pod \"30ca9c9c-f433-4ecd-8049-2c1d0f4f39ae\" (UID: \"30ca9c9c-f433-4ecd-8049-2c1d0f4f39ae\") " Mar 11 09:17:21 crc kubenswrapper[4840]: I0311 09:17:21.723217 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30ca9c9c-f433-4ecd-8049-2c1d0f4f39ae-scripts" (OuterVolumeSpecName: "scripts") pod "30ca9c9c-f433-4ecd-8049-2c1d0f4f39ae" (UID: "30ca9c9c-f433-4ecd-8049-2c1d0f4f39ae"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:17:21 crc kubenswrapper[4840]: I0311 09:17:21.723237 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30ca9c9c-f433-4ecd-8049-2c1d0f4f39ae-kube-api-access-ht9v8" (OuterVolumeSpecName: "kube-api-access-ht9v8") pod "30ca9c9c-f433-4ecd-8049-2c1d0f4f39ae" (UID: "30ca9c9c-f433-4ecd-8049-2c1d0f4f39ae"). InnerVolumeSpecName "kube-api-access-ht9v8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:17:21 crc kubenswrapper[4840]: I0311 09:17:21.730673 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30ca9c9c-f433-4ecd-8049-2c1d0f4f39ae-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "30ca9c9c-f433-4ecd-8049-2c1d0f4f39ae" (UID: "30ca9c9c-f433-4ecd-8049-2c1d0f4f39ae"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:17:21 crc kubenswrapper[4840]: I0311 09:17:21.732748 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30ca9c9c-f433-4ecd-8049-2c1d0f4f39ae-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "30ca9c9c-f433-4ecd-8049-2c1d0f4f39ae" (UID: "30ca9c9c-f433-4ecd-8049-2c1d0f4f39ae"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:17:21 crc kubenswrapper[4840]: I0311 09:17:21.747979 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30ca9c9c-f433-4ecd-8049-2c1d0f4f39ae-config-data" (OuterVolumeSpecName: "config-data") pod "30ca9c9c-f433-4ecd-8049-2c1d0f4f39ae" (UID: "30ca9c9c-f433-4ecd-8049-2c1d0f4f39ae"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:17:21 crc kubenswrapper[4840]: I0311 09:17:21.753635 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30ca9c9c-f433-4ecd-8049-2c1d0f4f39ae-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "30ca9c9c-f433-4ecd-8049-2c1d0f4f39ae" (UID: "30ca9c9c-f433-4ecd-8049-2c1d0f4f39ae"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:17:21 crc kubenswrapper[4840]: I0311 09:17:21.815068 4840 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/30ca9c9c-f433-4ecd-8049-2c1d0f4f39ae-credential-keys\") on node \"crc\" DevicePath \"\"" Mar 11 09:17:21 crc kubenswrapper[4840]: I0311 09:17:21.815102 4840 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30ca9c9c-f433-4ecd-8049-2c1d0f4f39ae-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 09:17:21 crc kubenswrapper[4840]: I0311 09:17:21.815112 4840 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30ca9c9c-f433-4ecd-8049-2c1d0f4f39ae-scripts\") on node \"crc\" DevicePath \"\"" Mar 11 09:17:21 crc kubenswrapper[4840]: I0311 09:17:21.815121 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ht9v8\" (UniqueName: \"kubernetes.io/projected/30ca9c9c-f433-4ecd-8049-2c1d0f4f39ae-kube-api-access-ht9v8\") on node \"crc\" DevicePath \"\"" Mar 11 09:17:21 crc kubenswrapper[4840]: I0311 09:17:21.815131 4840 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/30ca9c9c-f433-4ecd-8049-2c1d0f4f39ae-fernet-keys\") on node \"crc\" DevicePath \"\"" Mar 11 09:17:21 crc kubenswrapper[4840]: I0311 09:17:21.815139 4840 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30ca9c9c-f433-4ecd-8049-2c1d0f4f39ae-config-data\") on node \"crc\" DevicePath \"\"" Mar 11 09:17:22 crc kubenswrapper[4840]: I0311 09:17:22.471649 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-74b48649cc-qqk8g" Mar 11 09:17:22 crc kubenswrapper[4840]: I0311 09:17:22.531802 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f7dd995-5qwn2"] Mar 11 
09:17:22 crc kubenswrapper[4840]: I0311 09:17:22.532060 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-675f7dd995-5qwn2" podUID="ed273fa9-3890-4c96-ac5f-ac17d2b2dad9" containerName="dnsmasq-dns" containerID="cri-o://03ec2c771246b57617ea48edda4ac7074f5ae66716e89aabd682da671e7f2772" gracePeriod=10 Mar 11 09:17:22 crc kubenswrapper[4840]: I0311 09:17:22.605958 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-f7cv9" Mar 11 09:17:22 crc kubenswrapper[4840]: I0311 09:17:22.709352 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-f7cv9"] Mar 11 09:17:22 crc kubenswrapper[4840]: I0311 09:17:22.717280 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-f7cv9"] Mar 11 09:17:22 crc kubenswrapper[4840]: I0311 09:17:22.803244 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-jlpsf"] Mar 11 09:17:22 crc kubenswrapper[4840]: E0311 09:17:22.803742 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed4c2dfa-3010-4eb1-8ad1-e7e183ec4ca3" containerName="init" Mar 11 09:17:22 crc kubenswrapper[4840]: I0311 09:17:22.803762 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed4c2dfa-3010-4eb1-8ad1-e7e183ec4ca3" containerName="init" Mar 11 09:17:22 crc kubenswrapper[4840]: E0311 09:17:22.803777 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30ca9c9c-f433-4ecd-8049-2c1d0f4f39ae" containerName="keystone-bootstrap" Mar 11 09:17:22 crc kubenswrapper[4840]: I0311 09:17:22.803787 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="30ca9c9c-f433-4ecd-8049-2c1d0f4f39ae" containerName="keystone-bootstrap" Mar 11 09:17:22 crc kubenswrapper[4840]: E0311 09:17:22.803807 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d25eab0-c4c4-4227-aeaa-2cb67a78e8b1" containerName="init" Mar 11 
09:17:22 crc kubenswrapper[4840]: I0311 09:17:22.803816 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d25eab0-c4c4-4227-aeaa-2cb67a78e8b1" containerName="init" Mar 11 09:17:22 crc kubenswrapper[4840]: E0311 09:17:22.803849 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0aed3b47-186d-4af2-b83e-c04526a43095" containerName="mariadb-account-create-update" Mar 11 09:17:22 crc kubenswrapper[4840]: I0311 09:17:22.803856 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="0aed3b47-186d-4af2-b83e-c04526a43095" containerName="mariadb-account-create-update" Mar 11 09:17:22 crc kubenswrapper[4840]: I0311 09:17:22.804025 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="30ca9c9c-f433-4ecd-8049-2c1d0f4f39ae" containerName="keystone-bootstrap" Mar 11 09:17:22 crc kubenswrapper[4840]: I0311 09:17:22.804039 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed4c2dfa-3010-4eb1-8ad1-e7e183ec4ca3" containerName="init" Mar 11 09:17:22 crc kubenswrapper[4840]: I0311 09:17:22.804051 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="0aed3b47-186d-4af2-b83e-c04526a43095" containerName="mariadb-account-create-update" Mar 11 09:17:22 crc kubenswrapper[4840]: I0311 09:17:22.804063 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d25eab0-c4c4-4227-aeaa-2cb67a78e8b1" containerName="init" Mar 11 09:17:22 crc kubenswrapper[4840]: I0311 09:17:22.804787 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-jlpsf" Mar 11 09:17:22 crc kubenswrapper[4840]: I0311 09:17:22.808009 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Mar 11 09:17:22 crc kubenswrapper[4840]: I0311 09:17:22.812041 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-gtqwm" Mar 11 09:17:22 crc kubenswrapper[4840]: I0311 09:17:22.812236 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Mar 11 09:17:22 crc kubenswrapper[4840]: I0311 09:17:22.812354 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Mar 11 09:17:22 crc kubenswrapper[4840]: I0311 09:17:22.815087 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Mar 11 09:17:22 crc kubenswrapper[4840]: I0311 09:17:22.817831 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-jlpsf"] Mar 11 09:17:22 crc kubenswrapper[4840]: I0311 09:17:22.934411 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d2cbae7-4ece-49e8-b85e-30db29d6c172-combined-ca-bundle\") pod \"keystone-bootstrap-jlpsf\" (UID: \"4d2cbae7-4ece-49e8-b85e-30db29d6c172\") " pod="openstack/keystone-bootstrap-jlpsf" Mar 11 09:17:22 crc kubenswrapper[4840]: I0311 09:17:22.934700 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4d2cbae7-4ece-49e8-b85e-30db29d6c172-fernet-keys\") pod \"keystone-bootstrap-jlpsf\" (UID: \"4d2cbae7-4ece-49e8-b85e-30db29d6c172\") " pod="openstack/keystone-bootstrap-jlpsf" Mar 11 09:17:22 crc kubenswrapper[4840]: I0311 09:17:22.934764 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/4d2cbae7-4ece-49e8-b85e-30db29d6c172-scripts\") pod \"keystone-bootstrap-jlpsf\" (UID: \"4d2cbae7-4ece-49e8-b85e-30db29d6c172\") " pod="openstack/keystone-bootstrap-jlpsf" Mar 11 09:17:22 crc kubenswrapper[4840]: I0311 09:17:22.934809 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4d2cbae7-4ece-49e8-b85e-30db29d6c172-credential-keys\") pod \"keystone-bootstrap-jlpsf\" (UID: \"4d2cbae7-4ece-49e8-b85e-30db29d6c172\") " pod="openstack/keystone-bootstrap-jlpsf" Mar 11 09:17:22 crc kubenswrapper[4840]: I0311 09:17:22.934957 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d2cbae7-4ece-49e8-b85e-30db29d6c172-config-data\") pod \"keystone-bootstrap-jlpsf\" (UID: \"4d2cbae7-4ece-49e8-b85e-30db29d6c172\") " pod="openstack/keystone-bootstrap-jlpsf" Mar 11 09:17:22 crc kubenswrapper[4840]: I0311 09:17:22.935013 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xl5q\" (UniqueName: \"kubernetes.io/projected/4d2cbae7-4ece-49e8-b85e-30db29d6c172-kube-api-access-7xl5q\") pod \"keystone-bootstrap-jlpsf\" (UID: \"4d2cbae7-4ece-49e8-b85e-30db29d6c172\") " pod="openstack/keystone-bootstrap-jlpsf" Mar 11 09:17:23 crc kubenswrapper[4840]: I0311 09:17:23.037297 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4d2cbae7-4ece-49e8-b85e-30db29d6c172-fernet-keys\") pod \"keystone-bootstrap-jlpsf\" (UID: \"4d2cbae7-4ece-49e8-b85e-30db29d6c172\") " pod="openstack/keystone-bootstrap-jlpsf" Mar 11 09:17:23 crc kubenswrapper[4840]: I0311 09:17:23.037340 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/4d2cbae7-4ece-49e8-b85e-30db29d6c172-scripts\") pod \"keystone-bootstrap-jlpsf\" (UID: \"4d2cbae7-4ece-49e8-b85e-30db29d6c172\") " pod="openstack/keystone-bootstrap-jlpsf" Mar 11 09:17:23 crc kubenswrapper[4840]: I0311 09:17:23.037357 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4d2cbae7-4ece-49e8-b85e-30db29d6c172-credential-keys\") pod \"keystone-bootstrap-jlpsf\" (UID: \"4d2cbae7-4ece-49e8-b85e-30db29d6c172\") " pod="openstack/keystone-bootstrap-jlpsf" Mar 11 09:17:23 crc kubenswrapper[4840]: I0311 09:17:23.037397 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d2cbae7-4ece-49e8-b85e-30db29d6c172-config-data\") pod \"keystone-bootstrap-jlpsf\" (UID: \"4d2cbae7-4ece-49e8-b85e-30db29d6c172\") " pod="openstack/keystone-bootstrap-jlpsf" Mar 11 09:17:23 crc kubenswrapper[4840]: I0311 09:17:23.037418 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xl5q\" (UniqueName: \"kubernetes.io/projected/4d2cbae7-4ece-49e8-b85e-30db29d6c172-kube-api-access-7xl5q\") pod \"keystone-bootstrap-jlpsf\" (UID: \"4d2cbae7-4ece-49e8-b85e-30db29d6c172\") " pod="openstack/keystone-bootstrap-jlpsf" Mar 11 09:17:23 crc kubenswrapper[4840]: I0311 09:17:23.037529 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d2cbae7-4ece-49e8-b85e-30db29d6c172-combined-ca-bundle\") pod \"keystone-bootstrap-jlpsf\" (UID: \"4d2cbae7-4ece-49e8-b85e-30db29d6c172\") " pod="openstack/keystone-bootstrap-jlpsf" Mar 11 09:17:23 crc kubenswrapper[4840]: I0311 09:17:23.043184 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4d2cbae7-4ece-49e8-b85e-30db29d6c172-credential-keys\") pod 
\"keystone-bootstrap-jlpsf\" (UID: \"4d2cbae7-4ece-49e8-b85e-30db29d6c172\") " pod="openstack/keystone-bootstrap-jlpsf" Mar 11 09:17:23 crc kubenswrapper[4840]: I0311 09:17:23.043512 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d2cbae7-4ece-49e8-b85e-30db29d6c172-combined-ca-bundle\") pod \"keystone-bootstrap-jlpsf\" (UID: \"4d2cbae7-4ece-49e8-b85e-30db29d6c172\") " pod="openstack/keystone-bootstrap-jlpsf" Mar 11 09:17:23 crc kubenswrapper[4840]: I0311 09:17:23.044078 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d2cbae7-4ece-49e8-b85e-30db29d6c172-scripts\") pod \"keystone-bootstrap-jlpsf\" (UID: \"4d2cbae7-4ece-49e8-b85e-30db29d6c172\") " pod="openstack/keystone-bootstrap-jlpsf" Mar 11 09:17:23 crc kubenswrapper[4840]: I0311 09:17:23.044245 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4d2cbae7-4ece-49e8-b85e-30db29d6c172-fernet-keys\") pod \"keystone-bootstrap-jlpsf\" (UID: \"4d2cbae7-4ece-49e8-b85e-30db29d6c172\") " pod="openstack/keystone-bootstrap-jlpsf" Mar 11 09:17:23 crc kubenswrapper[4840]: I0311 09:17:23.044976 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d2cbae7-4ece-49e8-b85e-30db29d6c172-config-data\") pod \"keystone-bootstrap-jlpsf\" (UID: \"4d2cbae7-4ece-49e8-b85e-30db29d6c172\") " pod="openstack/keystone-bootstrap-jlpsf" Mar 11 09:17:23 crc kubenswrapper[4840]: I0311 09:17:23.064432 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xl5q\" (UniqueName: \"kubernetes.io/projected/4d2cbae7-4ece-49e8-b85e-30db29d6c172-kube-api-access-7xl5q\") pod \"keystone-bootstrap-jlpsf\" (UID: \"4d2cbae7-4ece-49e8-b85e-30db29d6c172\") " pod="openstack/keystone-bootstrap-jlpsf" Mar 11 09:17:23 crc 
kubenswrapper[4840]: I0311 09:17:23.132920 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-jlpsf" Mar 11 09:17:23 crc kubenswrapper[4840]: I0311 09:17:23.617544 4840 generic.go:334] "Generic (PLEG): container finished" podID="ed273fa9-3890-4c96-ac5f-ac17d2b2dad9" containerID="03ec2c771246b57617ea48edda4ac7074f5ae66716e89aabd682da671e7f2772" exitCode=0 Mar 11 09:17:23 crc kubenswrapper[4840]: I0311 09:17:23.617638 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f7dd995-5qwn2" event={"ID":"ed273fa9-3890-4c96-ac5f-ac17d2b2dad9","Type":"ContainerDied","Data":"03ec2c771246b57617ea48edda4ac7074f5ae66716e89aabd682da671e7f2772"} Mar 11 09:17:24 crc kubenswrapper[4840]: I0311 09:17:24.070338 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30ca9c9c-f433-4ecd-8049-2c1d0f4f39ae" path="/var/lib/kubelet/pods/30ca9c9c-f433-4ecd-8049-2c1d0f4f39ae/volumes" Mar 11 09:17:24 crc kubenswrapper[4840]: I0311 09:17:24.321483 4840 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-675f7dd995-5qwn2" podUID="ed273fa9-3890-4c96-ac5f-ac17d2b2dad9" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.119:5353: connect: connection refused" Mar 11 09:17:32 crc kubenswrapper[4840]: E0311 09:17:32.774291 4840 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api@sha256:2c52c0f4b4baa15796eb284522adff7fa9e5c85a2d77c2e47ef4afdf8e4a7c7d" Mar 11 09:17:32 crc kubenswrapper[4840]: E0311 09:17:32.775043 4840 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api@sha256:2c52c0f4b4baa15796eb284522adff7fa9e5c85a2d77c2e47ef4afdf8e4a7c7d,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2jgnb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-r5wvc_openstack(eb9a61ad-a8fb-4968-b32a-ff5756add27b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 11 09:17:32 crc kubenswrapper[4840]: E0311 09:17:32.776285 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-r5wvc" 
podUID="eb9a61ad-a8fb-4968-b32a-ff5756add27b" Mar 11 09:17:32 crc kubenswrapper[4840]: I0311 09:17:32.871406 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 11 09:17:32 crc kubenswrapper[4840]: I0311 09:17:32.882702 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f7dd995-5qwn2" Mar 11 09:17:32 crc kubenswrapper[4840]: I0311 09:17:32.909780 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 11 09:17:32 crc kubenswrapper[4840]: I0311 09:17:32.983546 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc589dfd-ede2-4eff-b9df-1f530ceeabfb-combined-ca-bundle\") pod \"cc589dfd-ede2-4eff-b9df-1f530ceeabfb\" (UID: \"cc589dfd-ede2-4eff-b9df-1f530ceeabfb\") " Mar 11 09:17:32 crc kubenswrapper[4840]: I0311 09:17:32.984133 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gh2dn\" (UniqueName: \"kubernetes.io/projected/cc589dfd-ede2-4eff-b9df-1f530ceeabfb-kube-api-access-gh2dn\") pod \"cc589dfd-ede2-4eff-b9df-1f530ceeabfb\" (UID: \"cc589dfd-ede2-4eff-b9df-1f530ceeabfb\") " Mar 11 09:17:32 crc kubenswrapper[4840]: I0311 09:17:32.984176 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cc589dfd-ede2-4eff-b9df-1f530ceeabfb-logs\") pod \"cc589dfd-ede2-4eff-b9df-1f530ceeabfb\" (UID: \"cc589dfd-ede2-4eff-b9df-1f530ceeabfb\") " Mar 11 09:17:32 crc kubenswrapper[4840]: I0311 09:17:32.984248 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ed273fa9-3890-4c96-ac5f-ac17d2b2dad9-ovsdbserver-sb\") pod \"ed273fa9-3890-4c96-ac5f-ac17d2b2dad9\" (UID: 
\"ed273fa9-3890-4c96-ac5f-ac17d2b2dad9\") " Mar 11 09:17:32 crc kubenswrapper[4840]: I0311 09:17:32.984297 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc589dfd-ede2-4eff-b9df-1f530ceeabfb-scripts\") pod \"cc589dfd-ede2-4eff-b9df-1f530ceeabfb\" (UID: \"cc589dfd-ede2-4eff-b9df-1f530ceeabfb\") " Mar 11 09:17:32 crc kubenswrapper[4840]: I0311 09:17:32.984337 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"cc589dfd-ede2-4eff-b9df-1f530ceeabfb\" (UID: \"cc589dfd-ede2-4eff-b9df-1f530ceeabfb\") " Mar 11 09:17:32 crc kubenswrapper[4840]: I0311 09:17:32.984362 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc589dfd-ede2-4eff-b9df-1f530ceeabfb-config-data\") pod \"cc589dfd-ede2-4eff-b9df-1f530ceeabfb\" (UID: \"cc589dfd-ede2-4eff-b9df-1f530ceeabfb\") " Mar 11 09:17:32 crc kubenswrapper[4840]: I0311 09:17:32.984406 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x8b85\" (UniqueName: \"kubernetes.io/projected/ed273fa9-3890-4c96-ac5f-ac17d2b2dad9-kube-api-access-x8b85\") pod \"ed273fa9-3890-4c96-ac5f-ac17d2b2dad9\" (UID: \"ed273fa9-3890-4c96-ac5f-ac17d2b2dad9\") " Mar 11 09:17:32 crc kubenswrapper[4840]: I0311 09:17:32.984615 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cc589dfd-ede2-4eff-b9df-1f530ceeabfb-httpd-run\") pod \"cc589dfd-ede2-4eff-b9df-1f530ceeabfb\" (UID: \"cc589dfd-ede2-4eff-b9df-1f530ceeabfb\") " Mar 11 09:17:32 crc kubenswrapper[4840]: I0311 09:17:32.984686 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ed273fa9-3890-4c96-ac5f-ac17d2b2dad9-dns-svc\") 
pod \"ed273fa9-3890-4c96-ac5f-ac17d2b2dad9\" (UID: \"ed273fa9-3890-4c96-ac5f-ac17d2b2dad9\") " Mar 11 09:17:32 crc kubenswrapper[4840]: I0311 09:17:32.984780 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ed273fa9-3890-4c96-ac5f-ac17d2b2dad9-ovsdbserver-nb\") pod \"ed273fa9-3890-4c96-ac5f-ac17d2b2dad9\" (UID: \"ed273fa9-3890-4c96-ac5f-ac17d2b2dad9\") " Mar 11 09:17:32 crc kubenswrapper[4840]: I0311 09:17:32.985045 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed273fa9-3890-4c96-ac5f-ac17d2b2dad9-config\") pod \"ed273fa9-3890-4c96-ac5f-ac17d2b2dad9\" (UID: \"ed273fa9-3890-4c96-ac5f-ac17d2b2dad9\") " Mar 11 09:17:32 crc kubenswrapper[4840]: I0311 09:17:32.986393 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc589dfd-ede2-4eff-b9df-1f530ceeabfb-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "cc589dfd-ede2-4eff-b9df-1f530ceeabfb" (UID: "cc589dfd-ede2-4eff-b9df-1f530ceeabfb"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:17:32 crc kubenswrapper[4840]: I0311 09:17:32.986430 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc589dfd-ede2-4eff-b9df-1f530ceeabfb-logs" (OuterVolumeSpecName: "logs") pod "cc589dfd-ede2-4eff-b9df-1f530ceeabfb" (UID: "cc589dfd-ede2-4eff-b9df-1f530ceeabfb"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:17:33 crc kubenswrapper[4840]: I0311 09:17:33.001888 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc589dfd-ede2-4eff-b9df-1f530ceeabfb-kube-api-access-gh2dn" (OuterVolumeSpecName: "kube-api-access-gh2dn") pod "cc589dfd-ede2-4eff-b9df-1f530ceeabfb" (UID: "cc589dfd-ede2-4eff-b9df-1f530ceeabfb"). 
InnerVolumeSpecName "kube-api-access-gh2dn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:17:33 crc kubenswrapper[4840]: I0311 09:17:33.002124 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "cc589dfd-ede2-4eff-b9df-1f530ceeabfb" (UID: "cc589dfd-ede2-4eff-b9df-1f530ceeabfb"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 11 09:17:33 crc kubenswrapper[4840]: I0311 09:17:33.002517 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc589dfd-ede2-4eff-b9df-1f530ceeabfb-scripts" (OuterVolumeSpecName: "scripts") pod "cc589dfd-ede2-4eff-b9df-1f530ceeabfb" (UID: "cc589dfd-ede2-4eff-b9df-1f530ceeabfb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:17:33 crc kubenswrapper[4840]: I0311 09:17:33.009065 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed273fa9-3890-4c96-ac5f-ac17d2b2dad9-kube-api-access-x8b85" (OuterVolumeSpecName: "kube-api-access-x8b85") pod "ed273fa9-3890-4c96-ac5f-ac17d2b2dad9" (UID: "ed273fa9-3890-4c96-ac5f-ac17d2b2dad9"). InnerVolumeSpecName "kube-api-access-x8b85". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:17:33 crc kubenswrapper[4840]: I0311 09:17:33.016538 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc589dfd-ede2-4eff-b9df-1f530ceeabfb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cc589dfd-ede2-4eff-b9df-1f530ceeabfb" (UID: "cc589dfd-ede2-4eff-b9df-1f530ceeabfb"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:17:33 crc kubenswrapper[4840]: I0311 09:17:33.038271 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed273fa9-3890-4c96-ac5f-ac17d2b2dad9-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ed273fa9-3890-4c96-ac5f-ac17d2b2dad9" (UID: "ed273fa9-3890-4c96-ac5f-ac17d2b2dad9"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:17:33 crc kubenswrapper[4840]: I0311 09:17:33.041112 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc589dfd-ede2-4eff-b9df-1f530ceeabfb-config-data" (OuterVolumeSpecName: "config-data") pod "cc589dfd-ede2-4eff-b9df-1f530ceeabfb" (UID: "cc589dfd-ede2-4eff-b9df-1f530ceeabfb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:17:33 crc kubenswrapper[4840]: I0311 09:17:33.054053 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed273fa9-3890-4c96-ac5f-ac17d2b2dad9-config" (OuterVolumeSpecName: "config") pod "ed273fa9-3890-4c96-ac5f-ac17d2b2dad9" (UID: "ed273fa9-3890-4c96-ac5f-ac17d2b2dad9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:17:33 crc kubenswrapper[4840]: I0311 09:17:33.071202 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed273fa9-3890-4c96-ac5f-ac17d2b2dad9-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ed273fa9-3890-4c96-ac5f-ac17d2b2dad9" (UID: "ed273fa9-3890-4c96-ac5f-ac17d2b2dad9"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:17:33 crc kubenswrapper[4840]: I0311 09:17:33.081818 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed273fa9-3890-4c96-ac5f-ac17d2b2dad9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ed273fa9-3890-4c96-ac5f-ac17d2b2dad9" (UID: "ed273fa9-3890-4c96-ac5f-ac17d2b2dad9"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:17:33 crc kubenswrapper[4840]: I0311 09:17:33.087971 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/03be0af0-186c-4c90-91d5-9895f6d87534-logs\") pod \"03be0af0-186c-4c90-91d5-9895f6d87534\" (UID: \"03be0af0-186c-4c90-91d5-9895f6d87534\") " Mar 11 09:17:33 crc kubenswrapper[4840]: I0311 09:17:33.088063 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03be0af0-186c-4c90-91d5-9895f6d87534-config-data\") pod \"03be0af0-186c-4c90-91d5-9895f6d87534\" (UID: \"03be0af0-186c-4c90-91d5-9895f6d87534\") " Mar 11 09:17:33 crc kubenswrapper[4840]: I0311 09:17:33.088143 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03be0af0-186c-4c90-91d5-9895f6d87534-combined-ca-bundle\") pod \"03be0af0-186c-4c90-91d5-9895f6d87534\" (UID: \"03be0af0-186c-4c90-91d5-9895f6d87534\") " Mar 11 09:17:33 crc kubenswrapper[4840]: I0311 09:17:33.088185 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/03be0af0-186c-4c90-91d5-9895f6d87534-scripts\") pod \"03be0af0-186c-4c90-91d5-9895f6d87534\" (UID: \"03be0af0-186c-4c90-91d5-9895f6d87534\") " Mar 11 09:17:33 crc kubenswrapper[4840]: I0311 09:17:33.088331 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-7dnzx\" (UniqueName: \"kubernetes.io/projected/03be0af0-186c-4c90-91d5-9895f6d87534-kube-api-access-7dnzx\") pod \"03be0af0-186c-4c90-91d5-9895f6d87534\" (UID: \"03be0af0-186c-4c90-91d5-9895f6d87534\") " Mar 11 09:17:33 crc kubenswrapper[4840]: I0311 09:17:33.088379 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"03be0af0-186c-4c90-91d5-9895f6d87534\" (UID: \"03be0af0-186c-4c90-91d5-9895f6d87534\") " Mar 11 09:17:33 crc kubenswrapper[4840]: I0311 09:17:33.088429 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/03be0af0-186c-4c90-91d5-9895f6d87534-httpd-run\") pod \"03be0af0-186c-4c90-91d5-9895f6d87534\" (UID: \"03be0af0-186c-4c90-91d5-9895f6d87534\") " Mar 11 09:17:33 crc kubenswrapper[4840]: I0311 09:17:33.088725 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03be0af0-186c-4c90-91d5-9895f6d87534-logs" (OuterVolumeSpecName: "logs") pod "03be0af0-186c-4c90-91d5-9895f6d87534" (UID: "03be0af0-186c-4c90-91d5-9895f6d87534"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:17:33 crc kubenswrapper[4840]: I0311 09:17:33.089585 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03be0af0-186c-4c90-91d5-9895f6d87534-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "03be0af0-186c-4c90-91d5-9895f6d87534" (UID: "03be0af0-186c-4c90-91d5-9895f6d87534"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:17:33 crc kubenswrapper[4840]: I0311 09:17:33.090526 4840 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc589dfd-ede2-4eff-b9df-1f530ceeabfb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 09:17:33 crc kubenswrapper[4840]: I0311 09:17:33.090564 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gh2dn\" (UniqueName: \"kubernetes.io/projected/cc589dfd-ede2-4eff-b9df-1f530ceeabfb-kube-api-access-gh2dn\") on node \"crc\" DevicePath \"\"" Mar 11 09:17:33 crc kubenswrapper[4840]: I0311 09:17:33.090578 4840 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cc589dfd-ede2-4eff-b9df-1f530ceeabfb-logs\") on node \"crc\" DevicePath \"\"" Mar 11 09:17:33 crc kubenswrapper[4840]: I0311 09:17:33.090591 4840 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/03be0af0-186c-4c90-91d5-9895f6d87534-logs\") on node \"crc\" DevicePath \"\"" Mar 11 09:17:33 crc kubenswrapper[4840]: I0311 09:17:33.090603 4840 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ed273fa9-3890-4c96-ac5f-ac17d2b2dad9-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 11 09:17:33 crc kubenswrapper[4840]: I0311 09:17:33.090614 4840 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc589dfd-ede2-4eff-b9df-1f530ceeabfb-scripts\") on node \"crc\" DevicePath \"\"" Mar 11 09:17:33 crc kubenswrapper[4840]: I0311 09:17:33.090644 4840 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Mar 11 09:17:33 crc kubenswrapper[4840]: I0311 09:17:33.090656 4840 reconciler_common.go:293] "Volume detached for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc589dfd-ede2-4eff-b9df-1f530ceeabfb-config-data\") on node \"crc\" DevicePath \"\"" Mar 11 09:17:33 crc kubenswrapper[4840]: I0311 09:17:33.090666 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x8b85\" (UniqueName: \"kubernetes.io/projected/ed273fa9-3890-4c96-ac5f-ac17d2b2dad9-kube-api-access-x8b85\") on node \"crc\" DevicePath \"\"" Mar 11 09:17:33 crc kubenswrapper[4840]: I0311 09:17:33.090677 4840 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cc589dfd-ede2-4eff-b9df-1f530ceeabfb-httpd-run\") on node \"crc\" DevicePath \"\"" Mar 11 09:17:33 crc kubenswrapper[4840]: I0311 09:17:33.090688 4840 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ed273fa9-3890-4c96-ac5f-ac17d2b2dad9-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 11 09:17:33 crc kubenswrapper[4840]: I0311 09:17:33.090697 4840 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ed273fa9-3890-4c96-ac5f-ac17d2b2dad9-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 11 09:17:33 crc kubenswrapper[4840]: I0311 09:17:33.090706 4840 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed273fa9-3890-4c96-ac5f-ac17d2b2dad9-config\") on node \"crc\" DevicePath \"\"" Mar 11 09:17:33 crc kubenswrapper[4840]: I0311 09:17:33.096992 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "glance") pod "03be0af0-186c-4c90-91d5-9895f6d87534" (UID: "03be0af0-186c-4c90-91d5-9895f6d87534"). InnerVolumeSpecName "local-storage05-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 11 09:17:33 crc kubenswrapper[4840]: I0311 09:17:33.098032 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03be0af0-186c-4c90-91d5-9895f6d87534-kube-api-access-7dnzx" (OuterVolumeSpecName: "kube-api-access-7dnzx") pod "03be0af0-186c-4c90-91d5-9895f6d87534" (UID: "03be0af0-186c-4c90-91d5-9895f6d87534"). InnerVolumeSpecName "kube-api-access-7dnzx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:17:33 crc kubenswrapper[4840]: I0311 09:17:33.101962 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03be0af0-186c-4c90-91d5-9895f6d87534-scripts" (OuterVolumeSpecName: "scripts") pod "03be0af0-186c-4c90-91d5-9895f6d87534" (UID: "03be0af0-186c-4c90-91d5-9895f6d87534"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:17:33 crc kubenswrapper[4840]: I0311 09:17:33.118462 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03be0af0-186c-4c90-91d5-9895f6d87534-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "03be0af0-186c-4c90-91d5-9895f6d87534" (UID: "03be0af0-186c-4c90-91d5-9895f6d87534"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:17:33 crc kubenswrapper[4840]: I0311 09:17:33.120994 4840 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Mar 11 09:17:33 crc kubenswrapper[4840]: I0311 09:17:33.136534 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03be0af0-186c-4c90-91d5-9895f6d87534-config-data" (OuterVolumeSpecName: "config-data") pod "03be0af0-186c-4c90-91d5-9895f6d87534" (UID: "03be0af0-186c-4c90-91d5-9895f6d87534"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:17:33 crc kubenswrapper[4840]: I0311 09:17:33.193007 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7dnzx\" (UniqueName: \"kubernetes.io/projected/03be0af0-186c-4c90-91d5-9895f6d87534-kube-api-access-7dnzx\") on node \"crc\" DevicePath \"\"" Mar 11 09:17:33 crc kubenswrapper[4840]: I0311 09:17:33.193065 4840 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Mar 11 09:17:33 crc kubenswrapper[4840]: I0311 09:17:33.193078 4840 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/03be0af0-186c-4c90-91d5-9895f6d87534-httpd-run\") on node \"crc\" DevicePath \"\"" Mar 11 09:17:33 crc kubenswrapper[4840]: I0311 09:17:33.193087 4840 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03be0af0-186c-4c90-91d5-9895f6d87534-config-data\") on node \"crc\" DevicePath \"\"" Mar 11 09:17:33 crc kubenswrapper[4840]: I0311 09:17:33.193097 4840 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Mar 11 09:17:33 crc kubenswrapper[4840]: I0311 09:17:33.193109 4840 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03be0af0-186c-4c90-91d5-9895f6d87534-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 09:17:33 crc kubenswrapper[4840]: I0311 09:17:33.193125 4840 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/03be0af0-186c-4c90-91d5-9895f6d87534-scripts\") on node \"crc\" DevicePath \"\"" Mar 11 09:17:33 crc kubenswrapper[4840]: I0311 09:17:33.213529 4840 operation_generator.go:917] UnmountDevice succeeded for 
volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Mar 11 09:17:33 crc kubenswrapper[4840]: I0311 09:17:33.295059 4840 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Mar 11 09:17:33 crc kubenswrapper[4840]: I0311 09:17:33.739422 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"03be0af0-186c-4c90-91d5-9895f6d87534","Type":"ContainerDied","Data":"02cb91a6f3b13d173ddab7a0d07129849408fbc7d8b808375cb0660834b6dcdd"} Mar 11 09:17:33 crc kubenswrapper[4840]: I0311 09:17:33.739492 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 11 09:17:33 crc kubenswrapper[4840]: I0311 09:17:33.739541 4840 scope.go:117] "RemoveContainer" containerID="2784ee8b7cd4db78078c936947f2fd2782696aab173f8f098b21886da085a55e" Mar 11 09:17:33 crc kubenswrapper[4840]: I0311 09:17:33.744495 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 11 09:17:33 crc kubenswrapper[4840]: I0311 09:17:33.744510 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"cc589dfd-ede2-4eff-b9df-1f530ceeabfb","Type":"ContainerDied","Data":"3c1d80befcc4ce8a0191ee3669ddbe7efc015da6f50911b61720b14825dade0a"} Mar 11 09:17:33 crc kubenswrapper[4840]: I0311 09:17:33.751134 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f1c7d7f4-dc60-4703-b6c3-6cd626db11af","Type":"ContainerStarted","Data":"dd4d20030dddae38ba6950210cd3d087c5fd1596090ecd0351b94eeaa971448c"} Mar 11 09:17:33 crc kubenswrapper[4840]: I0311 09:17:33.754644 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f7dd995-5qwn2" event={"ID":"ed273fa9-3890-4c96-ac5f-ac17d2b2dad9","Type":"ContainerDied","Data":"5abe8789a9e548cafba146d220fac104fb52ff361797f05980a09326015575b1"} Mar 11 09:17:33 crc kubenswrapper[4840]: I0311 09:17:33.754671 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f7dd995-5qwn2" Mar 11 09:17:33 crc kubenswrapper[4840]: E0311 09:17:33.758975 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api@sha256:2c52c0f4b4baa15796eb284522adff7fa9e5c85a2d77c2e47ef4afdf8e4a7c7d\\\"\"" pod="openstack/barbican-db-sync-r5wvc" podUID="eb9a61ad-a8fb-4968-b32a-ff5756add27b" Mar 11 09:17:33 crc kubenswrapper[4840]: I0311 09:17:33.833613 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 11 09:17:33 crc kubenswrapper[4840]: I0311 09:17:33.847548 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 11 09:17:33 crc kubenswrapper[4840]: I0311 09:17:33.858730 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f7dd995-5qwn2"] Mar 11 09:17:33 crc kubenswrapper[4840]: I0311 09:17:33.869614 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f7dd995-5qwn2"] Mar 11 09:17:33 crc kubenswrapper[4840]: I0311 09:17:33.888923 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 11 09:17:33 crc kubenswrapper[4840]: I0311 09:17:33.899485 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Mar 11 09:17:33 crc kubenswrapper[4840]: E0311 09:17:33.899928 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc589dfd-ede2-4eff-b9df-1f530ceeabfb" containerName="glance-httpd" Mar 11 09:17:33 crc kubenswrapper[4840]: I0311 09:17:33.899949 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc589dfd-ede2-4eff-b9df-1f530ceeabfb" containerName="glance-httpd" Mar 11 09:17:33 crc kubenswrapper[4840]: E0311 09:17:33.899967 4840 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="ed273fa9-3890-4c96-ac5f-ac17d2b2dad9" containerName="init" Mar 11 09:17:33 crc kubenswrapper[4840]: I0311 09:17:33.899973 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed273fa9-3890-4c96-ac5f-ac17d2b2dad9" containerName="init" Mar 11 09:17:33 crc kubenswrapper[4840]: E0311 09:17:33.899985 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed273fa9-3890-4c96-ac5f-ac17d2b2dad9" containerName="dnsmasq-dns" Mar 11 09:17:33 crc kubenswrapper[4840]: I0311 09:17:33.899992 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed273fa9-3890-4c96-ac5f-ac17d2b2dad9" containerName="dnsmasq-dns" Mar 11 09:17:33 crc kubenswrapper[4840]: E0311 09:17:33.900005 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc589dfd-ede2-4eff-b9df-1f530ceeabfb" containerName="glance-log" Mar 11 09:17:33 crc kubenswrapper[4840]: I0311 09:17:33.900011 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc589dfd-ede2-4eff-b9df-1f530ceeabfb" containerName="glance-log" Mar 11 09:17:33 crc kubenswrapper[4840]: E0311 09:17:33.900026 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03be0af0-186c-4c90-91d5-9895f6d87534" containerName="glance-log" Mar 11 09:17:33 crc kubenswrapper[4840]: I0311 09:17:33.900033 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="03be0af0-186c-4c90-91d5-9895f6d87534" containerName="glance-log" Mar 11 09:17:33 crc kubenswrapper[4840]: E0311 09:17:33.900048 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03be0af0-186c-4c90-91d5-9895f6d87534" containerName="glance-httpd" Mar 11 09:17:33 crc kubenswrapper[4840]: I0311 09:17:33.900054 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="03be0af0-186c-4c90-91d5-9895f6d87534" containerName="glance-httpd" Mar 11 09:17:33 crc kubenswrapper[4840]: I0311 09:17:33.900199 4840 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="cc589dfd-ede2-4eff-b9df-1f530ceeabfb" containerName="glance-httpd" Mar 11 09:17:33 crc kubenswrapper[4840]: I0311 09:17:33.900217 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed273fa9-3890-4c96-ac5f-ac17d2b2dad9" containerName="dnsmasq-dns" Mar 11 09:17:33 crc kubenswrapper[4840]: I0311 09:17:33.900228 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc589dfd-ede2-4eff-b9df-1f530ceeabfb" containerName="glance-log" Mar 11 09:17:33 crc kubenswrapper[4840]: I0311 09:17:33.900240 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="03be0af0-186c-4c90-91d5-9895f6d87534" containerName="glance-httpd" Mar 11 09:17:33 crc kubenswrapper[4840]: I0311 09:17:33.900249 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="03be0af0-186c-4c90-91d5-9895f6d87534" containerName="glance-log" Mar 11 09:17:33 crc kubenswrapper[4840]: I0311 09:17:33.901254 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 11 09:17:33 crc kubenswrapper[4840]: I0311 09:17:33.910335 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Mar 11 09:17:33 crc kubenswrapper[4840]: I0311 09:17:33.910594 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Mar 11 09:17:33 crc kubenswrapper[4840]: I0311 09:17:33.910842 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-8l29s" Mar 11 09:17:33 crc kubenswrapper[4840]: I0311 09:17:33.910991 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Mar 11 09:17:33 crc kubenswrapper[4840]: I0311 09:17:33.914624 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 11 09:17:33 crc kubenswrapper[4840]: I0311 09:17:33.927511 4840 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 11 09:17:33 crc kubenswrapper[4840]: I0311 09:17:33.940607 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 11 09:17:33 crc kubenswrapper[4840]: I0311 09:17:33.944105 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 11 09:17:33 crc kubenswrapper[4840]: I0311 09:17:33.946794 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Mar 11 09:17:33 crc kubenswrapper[4840]: I0311 09:17:33.947131 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Mar 11 09:17:33 crc kubenswrapper[4840]: I0311 09:17:33.950373 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 11 09:17:34 crc kubenswrapper[4840]: I0311 09:17:34.015148 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzsbb\" (UniqueName: \"kubernetes.io/projected/8ea45272-7dc6-4227-ba52-e506fd81c0b4-kube-api-access-tzsbb\") pod \"glance-default-external-api-0\" (UID: \"8ea45272-7dc6-4227-ba52-e506fd81c0b4\") " pod="openstack/glance-default-external-api-0" Mar 11 09:17:34 crc kubenswrapper[4840]: I0311 09:17:34.015204 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"8ea45272-7dc6-4227-ba52-e506fd81c0b4\") " pod="openstack/glance-default-external-api-0" Mar 11 09:17:34 crc kubenswrapper[4840]: I0311 09:17:34.015230 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/8ea45272-7dc6-4227-ba52-e506fd81c0b4-config-data\") pod \"glance-default-external-api-0\" (UID: \"8ea45272-7dc6-4227-ba52-e506fd81c0b4\") " pod="openstack/glance-default-external-api-0" Mar 11 09:17:34 crc kubenswrapper[4840]: I0311 09:17:34.015287 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ea45272-7dc6-4227-ba52-e506fd81c0b4-logs\") pod \"glance-default-external-api-0\" (UID: \"8ea45272-7dc6-4227-ba52-e506fd81c0b4\") " pod="openstack/glance-default-external-api-0" Mar 11 09:17:34 crc kubenswrapper[4840]: I0311 09:17:34.015327 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ea45272-7dc6-4227-ba52-e506fd81c0b4-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"8ea45272-7dc6-4227-ba52-e506fd81c0b4\") " pod="openstack/glance-default-external-api-0" Mar 11 09:17:34 crc kubenswrapper[4840]: I0311 09:17:34.015720 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ea45272-7dc6-4227-ba52-e506fd81c0b4-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"8ea45272-7dc6-4227-ba52-e506fd81c0b4\") " pod="openstack/glance-default-external-api-0" Mar 11 09:17:34 crc kubenswrapper[4840]: I0311 09:17:34.015859 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ea45272-7dc6-4227-ba52-e506fd81c0b4-scripts\") pod \"glance-default-external-api-0\" (UID: \"8ea45272-7dc6-4227-ba52-e506fd81c0b4\") " pod="openstack/glance-default-external-api-0" Mar 11 09:17:34 crc kubenswrapper[4840]: I0311 09:17:34.016046 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" 
(UniqueName: \"kubernetes.io/empty-dir/8ea45272-7dc6-4227-ba52-e506fd81c0b4-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"8ea45272-7dc6-4227-ba52-e506fd81c0b4\") " pod="openstack/glance-default-external-api-0" Mar 11 09:17:34 crc kubenswrapper[4840]: I0311 09:17:34.072720 4840 scope.go:117] "RemoveContainer" containerID="7bdb76b58979f6c59c045a8bb37ccd8a2f733bde7988ef6c779293c4330d3d38" Mar 11 09:17:34 crc kubenswrapper[4840]: I0311 09:17:34.074891 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03be0af0-186c-4c90-91d5-9895f6d87534" path="/var/lib/kubelet/pods/03be0af0-186c-4c90-91d5-9895f6d87534/volumes" Mar 11 09:17:34 crc kubenswrapper[4840]: I0311 09:17:34.075860 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc589dfd-ede2-4eff-b9df-1f530ceeabfb" path="/var/lib/kubelet/pods/cc589dfd-ede2-4eff-b9df-1f530ceeabfb/volumes" Mar 11 09:17:34 crc kubenswrapper[4840]: I0311 09:17:34.076953 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed273fa9-3890-4c96-ac5f-ac17d2b2dad9" path="/var/lib/kubelet/pods/ed273fa9-3890-4c96-ac5f-ac17d2b2dad9/volumes" Mar 11 09:17:34 crc kubenswrapper[4840]: E0311 09:17:34.120577 4840 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:44ed1ca84e17bd0f004cfbdc3c0827d767daba52abb8e83e076bfd0e6c02f838" Mar 11 09:17:34 crc kubenswrapper[4840]: E0311 09:17:34.122239 4840 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:44ed1ca84e17bd0f004cfbdc3c0827d767daba52abb8e83e076bfd0e6c02f838,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gvtvx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin
:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-k7tgg_openstack(adf5efde-cb19-4870-9c8e-e7e139523238): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 11 09:17:34 crc kubenswrapper[4840]: I0311 09:17:34.122421 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7zkj\" (UniqueName: \"kubernetes.io/projected/b3ad2408-3dda-4009-a898-5f2618fc18cf-kube-api-access-b7zkj\") pod \"glance-default-internal-api-0\" (UID: \"b3ad2408-3dda-4009-a898-5f2618fc18cf\") " pod="openstack/glance-default-internal-api-0" Mar 11 09:17:34 crc kubenswrapper[4840]: I0311 09:17:34.122506 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ea45272-7dc6-4227-ba52-e506fd81c0b4-scripts\") pod \"glance-default-external-api-0\" (UID: \"8ea45272-7dc6-4227-ba52-e506fd81c0b4\") " pod="openstack/glance-default-external-api-0" Mar 11 09:17:34 crc kubenswrapper[4840]: I0311 09:17:34.122552 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3ad2408-3dda-4009-a898-5f2618fc18cf-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b3ad2408-3dda-4009-a898-5f2618fc18cf\") " pod="openstack/glance-default-internal-api-0" Mar 11 09:17:34 crc kubenswrapper[4840]: I0311 09:17:34.122594 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3ad2408-3dda-4009-a898-5f2618fc18cf-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b3ad2408-3dda-4009-a898-5f2618fc18cf\") " 
pod="openstack/glance-default-internal-api-0" Mar 11 09:17:34 crc kubenswrapper[4840]: I0311 09:17:34.122675 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8ea45272-7dc6-4227-ba52-e506fd81c0b4-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"8ea45272-7dc6-4227-ba52-e506fd81c0b4\") " pod="openstack/glance-default-external-api-0" Mar 11 09:17:34 crc kubenswrapper[4840]: I0311 09:17:34.122722 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzsbb\" (UniqueName: \"kubernetes.io/projected/8ea45272-7dc6-4227-ba52-e506fd81c0b4-kube-api-access-tzsbb\") pod \"glance-default-external-api-0\" (UID: \"8ea45272-7dc6-4227-ba52-e506fd81c0b4\") " pod="openstack/glance-default-external-api-0" Mar 11 09:17:34 crc kubenswrapper[4840]: I0311 09:17:34.122744 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b3ad2408-3dda-4009-a898-5f2618fc18cf-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b3ad2408-3dda-4009-a898-5f2618fc18cf\") " pod="openstack/glance-default-internal-api-0" Mar 11 09:17:34 crc kubenswrapper[4840]: I0311 09:17:34.122775 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"b3ad2408-3dda-4009-a898-5f2618fc18cf\") " pod="openstack/glance-default-internal-api-0" Mar 11 09:17:34 crc kubenswrapper[4840]: I0311 09:17:34.122797 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"8ea45272-7dc6-4227-ba52-e506fd81c0b4\") " pod="openstack/glance-default-external-api-0" Mar 11 
09:17:34 crc kubenswrapper[4840]: I0311 09:17:34.122816 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ea45272-7dc6-4227-ba52-e506fd81c0b4-config-data\") pod \"glance-default-external-api-0\" (UID: \"8ea45272-7dc6-4227-ba52-e506fd81c0b4\") " pod="openstack/glance-default-external-api-0" Mar 11 09:17:34 crc kubenswrapper[4840]: I0311 09:17:34.122882 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3ad2408-3dda-4009-a898-5f2618fc18cf-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b3ad2408-3dda-4009-a898-5f2618fc18cf\") " pod="openstack/glance-default-internal-api-0" Mar 11 09:17:34 crc kubenswrapper[4840]: I0311 09:17:34.122907 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ea45272-7dc6-4227-ba52-e506fd81c0b4-logs\") pod \"glance-default-external-api-0\" (UID: \"8ea45272-7dc6-4227-ba52-e506fd81c0b4\") " pod="openstack/glance-default-external-api-0" Mar 11 09:17:34 crc kubenswrapper[4840]: I0311 09:17:34.122947 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ea45272-7dc6-4227-ba52-e506fd81c0b4-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"8ea45272-7dc6-4227-ba52-e506fd81c0b4\") " pod="openstack/glance-default-external-api-0" Mar 11 09:17:34 crc kubenswrapper[4840]: I0311 09:17:34.122968 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3ad2408-3dda-4009-a898-5f2618fc18cf-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b3ad2408-3dda-4009-a898-5f2618fc18cf\") " pod="openstack/glance-default-internal-api-0" Mar 11 09:17:34 crc kubenswrapper[4840]: I0311 
09:17:34.122993 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b3ad2408-3dda-4009-a898-5f2618fc18cf-logs\") pod \"glance-default-internal-api-0\" (UID: \"b3ad2408-3dda-4009-a898-5f2618fc18cf\") " pod="openstack/glance-default-internal-api-0" Mar 11 09:17:34 crc kubenswrapper[4840]: I0311 09:17:34.123024 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ea45272-7dc6-4227-ba52-e506fd81c0b4-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"8ea45272-7dc6-4227-ba52-e506fd81c0b4\") " pod="openstack/glance-default-external-api-0" Mar 11 09:17:34 crc kubenswrapper[4840]: I0311 09:17:34.127629 4840 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"8ea45272-7dc6-4227-ba52-e506fd81c0b4\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-external-api-0" Mar 11 09:17:34 crc kubenswrapper[4840]: E0311 09:17:34.132323 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-k7tgg" podUID="adf5efde-cb19-4870-9c8e-e7e139523238" Mar 11 09:17:34 crc kubenswrapper[4840]: I0311 09:17:34.132799 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8ea45272-7dc6-4227-ba52-e506fd81c0b4-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"8ea45272-7dc6-4227-ba52-e506fd81c0b4\") " pod="openstack/glance-default-external-api-0" Mar 11 09:17:34 crc kubenswrapper[4840]: I0311 09:17:34.136856 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ea45272-7dc6-4227-ba52-e506fd81c0b4-logs\") pod \"glance-default-external-api-0\" (UID: \"8ea45272-7dc6-4227-ba52-e506fd81c0b4\") " pod="openstack/glance-default-external-api-0" Mar 11 09:17:34 crc kubenswrapper[4840]: I0311 09:17:34.142215 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ea45272-7dc6-4227-ba52-e506fd81c0b4-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"8ea45272-7dc6-4227-ba52-e506fd81c0b4\") " pod="openstack/glance-default-external-api-0" Mar 11 09:17:34 crc kubenswrapper[4840]: I0311 09:17:34.150090 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ea45272-7dc6-4227-ba52-e506fd81c0b4-scripts\") pod \"glance-default-external-api-0\" (UID: \"8ea45272-7dc6-4227-ba52-e506fd81c0b4\") " pod="openstack/glance-default-external-api-0" Mar 11 09:17:34 crc kubenswrapper[4840]: I0311 09:17:34.151489 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ea45272-7dc6-4227-ba52-e506fd81c0b4-config-data\") pod \"glance-default-external-api-0\" (UID: \"8ea45272-7dc6-4227-ba52-e506fd81c0b4\") " pod="openstack/glance-default-external-api-0" Mar 11 09:17:34 crc kubenswrapper[4840]: I0311 09:17:34.152718 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzsbb\" (UniqueName: \"kubernetes.io/projected/8ea45272-7dc6-4227-ba52-e506fd81c0b4-kube-api-access-tzsbb\") pod \"glance-default-external-api-0\" (UID: \"8ea45272-7dc6-4227-ba52-e506fd81c0b4\") " pod="openstack/glance-default-external-api-0" Mar 11 09:17:34 crc kubenswrapper[4840]: I0311 09:17:34.170722 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod 
\"glance-default-external-api-0\" (UID: \"8ea45272-7dc6-4227-ba52-e506fd81c0b4\") " pod="openstack/glance-default-external-api-0" Mar 11 09:17:34 crc kubenswrapper[4840]: I0311 09:17:34.172350 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ea45272-7dc6-4227-ba52-e506fd81c0b4-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"8ea45272-7dc6-4227-ba52-e506fd81c0b4\") " pod="openstack/glance-default-external-api-0" Mar 11 09:17:34 crc kubenswrapper[4840]: I0311 09:17:34.224631 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b3ad2408-3dda-4009-a898-5f2618fc18cf-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b3ad2408-3dda-4009-a898-5f2618fc18cf\") " pod="openstack/glance-default-internal-api-0" Mar 11 09:17:34 crc kubenswrapper[4840]: I0311 09:17:34.225141 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"b3ad2408-3dda-4009-a898-5f2618fc18cf\") " pod="openstack/glance-default-internal-api-0" Mar 11 09:17:34 crc kubenswrapper[4840]: I0311 09:17:34.225237 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3ad2408-3dda-4009-a898-5f2618fc18cf-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b3ad2408-3dda-4009-a898-5f2618fc18cf\") " pod="openstack/glance-default-internal-api-0" Mar 11 09:17:34 crc kubenswrapper[4840]: I0311 09:17:34.225298 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3ad2408-3dda-4009-a898-5f2618fc18cf-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b3ad2408-3dda-4009-a898-5f2618fc18cf\") " 
pod="openstack/glance-default-internal-api-0" Mar 11 09:17:34 crc kubenswrapper[4840]: I0311 09:17:34.225364 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b3ad2408-3dda-4009-a898-5f2618fc18cf-logs\") pod \"glance-default-internal-api-0\" (UID: \"b3ad2408-3dda-4009-a898-5f2618fc18cf\") " pod="openstack/glance-default-internal-api-0" Mar 11 09:17:34 crc kubenswrapper[4840]: I0311 09:17:34.225427 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7zkj\" (UniqueName: \"kubernetes.io/projected/b3ad2408-3dda-4009-a898-5f2618fc18cf-kube-api-access-b7zkj\") pod \"glance-default-internal-api-0\" (UID: \"b3ad2408-3dda-4009-a898-5f2618fc18cf\") " pod="openstack/glance-default-internal-api-0" Mar 11 09:17:34 crc kubenswrapper[4840]: I0311 09:17:34.225540 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3ad2408-3dda-4009-a898-5f2618fc18cf-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b3ad2408-3dda-4009-a898-5f2618fc18cf\") " pod="openstack/glance-default-internal-api-0" Mar 11 09:17:34 crc kubenswrapper[4840]: I0311 09:17:34.225587 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3ad2408-3dda-4009-a898-5f2618fc18cf-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b3ad2408-3dda-4009-a898-5f2618fc18cf\") " pod="openstack/glance-default-internal-api-0" Mar 11 09:17:34 crc kubenswrapper[4840]: I0311 09:17:34.226458 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b3ad2408-3dda-4009-a898-5f2618fc18cf-logs\") pod \"glance-default-internal-api-0\" (UID: \"b3ad2408-3dda-4009-a898-5f2618fc18cf\") " pod="openstack/glance-default-internal-api-0" Mar 11 09:17:34 crc 
kubenswrapper[4840]: I0311 09:17:34.226847 4840 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"b3ad2408-3dda-4009-a898-5f2618fc18cf\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-internal-api-0" Mar 11 09:17:34 crc kubenswrapper[4840]: I0311 09:17:34.231227 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b3ad2408-3dda-4009-a898-5f2618fc18cf-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b3ad2408-3dda-4009-a898-5f2618fc18cf\") " pod="openstack/glance-default-internal-api-0" Mar 11 09:17:34 crc kubenswrapper[4840]: I0311 09:17:34.250553 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3ad2408-3dda-4009-a898-5f2618fc18cf-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b3ad2408-3dda-4009-a898-5f2618fc18cf\") " pod="openstack/glance-default-internal-api-0" Mar 11 09:17:34 crc kubenswrapper[4840]: I0311 09:17:34.250621 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3ad2408-3dda-4009-a898-5f2618fc18cf-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b3ad2408-3dda-4009-a898-5f2618fc18cf\") " pod="openstack/glance-default-internal-api-0" Mar 11 09:17:34 crc kubenswrapper[4840]: I0311 09:17:34.251506 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3ad2408-3dda-4009-a898-5f2618fc18cf-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b3ad2408-3dda-4009-a898-5f2618fc18cf\") " pod="openstack/glance-default-internal-api-0" Mar 11 09:17:34 crc kubenswrapper[4840]: I0311 09:17:34.252834 4840 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3ad2408-3dda-4009-a898-5f2618fc18cf-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b3ad2408-3dda-4009-a898-5f2618fc18cf\") " pod="openstack/glance-default-internal-api-0" Mar 11 09:17:34 crc kubenswrapper[4840]: I0311 09:17:34.257020 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 11 09:17:34 crc kubenswrapper[4840]: I0311 09:17:34.257733 4840 scope.go:117] "RemoveContainer" containerID="636f6abc77dd1f3ce24ed033b717b5ac63c48352718bc3f1258e5e024fd36fb4" Mar 11 09:17:34 crc kubenswrapper[4840]: I0311 09:17:34.268566 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7zkj\" (UniqueName: \"kubernetes.io/projected/b3ad2408-3dda-4009-a898-5f2618fc18cf-kube-api-access-b7zkj\") pod \"glance-default-internal-api-0\" (UID: \"b3ad2408-3dda-4009-a898-5f2618fc18cf\") " pod="openstack/glance-default-internal-api-0" Mar 11 09:17:34 crc kubenswrapper[4840]: I0311 09:17:34.272699 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"b3ad2408-3dda-4009-a898-5f2618fc18cf\") " pod="openstack/glance-default-internal-api-0" Mar 11 09:17:34 crc kubenswrapper[4840]: I0311 09:17:34.321122 4840 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-675f7dd995-5qwn2" podUID="ed273fa9-3890-4c96-ac5f-ac17d2b2dad9" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.119:5353: i/o timeout" Mar 11 09:17:34 crc kubenswrapper[4840]: I0311 09:17:34.338782 4840 scope.go:117] "RemoveContainer" containerID="b189cbaf0d57dfc40acde85e58dca7c846b253a117e7b61c6343c88f00c64c74" Mar 11 09:17:34 crc kubenswrapper[4840]: I0311 09:17:34.368526 4840 scope.go:117] 
"RemoveContainer" containerID="03ec2c771246b57617ea48edda4ac7074f5ae66716e89aabd682da671e7f2772" Mar 11 09:17:34 crc kubenswrapper[4840]: I0311 09:17:34.396037 4840 scope.go:117] "RemoveContainer" containerID="bcf97b20ebe322efcd63d649c6350dbc476441008719578f91fb01bcd6286035" Mar 11 09:17:34 crc kubenswrapper[4840]: I0311 09:17:34.566026 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 11 09:17:34 crc kubenswrapper[4840]: I0311 09:17:34.622043 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-jlpsf"] Mar 11 09:17:34 crc kubenswrapper[4840]: I0311 09:17:34.785681 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"02a9854a-9978-4414-972d-37c0b579b03b","Type":"ContainerStarted","Data":"9188e168bdbca78a271d83dc12ba7a2d0a3e57c143bfede6dc8fa4f26e41a614"} Mar 11 09:17:34 crc kubenswrapper[4840]: I0311 09:17:34.788506 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-f7z67" event={"ID":"573b362e-582b-43f0-afae-c038cf95f625","Type":"ContainerStarted","Data":"43c4acb8183ccb5e90836bfc3e4bcd3459d9f4424c2deb8ad2b687db74f8be98"} Mar 11 09:17:34 crc kubenswrapper[4840]: I0311 09:17:34.794928 4840 generic.go:334] "Generic (PLEG): container finished" podID="68dbceea-1597-4236-b2d6-ca071bf86d5a" containerID="b4a7c49eb1ab96ca872e6f2ffbdf1839b577c44a108a40de7c18a2cb34f4e9f7" exitCode=0 Mar 11 09:17:34 crc kubenswrapper[4840]: I0311 09:17:34.794985 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-dghqs" event={"ID":"68dbceea-1597-4236-b2d6-ca071bf86d5a","Type":"ContainerDied","Data":"b4a7c49eb1ab96ca872e6f2ffbdf1839b577c44a108a40de7c18a2cb34f4e9f7"} Mar 11 09:17:34 crc kubenswrapper[4840]: I0311 09:17:34.796244 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-jlpsf" 
event={"ID":"4d2cbae7-4ece-49e8-b85e-30db29d6c172","Type":"ContainerStarted","Data":"1e881be16226d81f76acd158735451aaf9d7074d679384038a3240d165e70133"} Mar 11 09:17:34 crc kubenswrapper[4840]: I0311 09:17:34.801284 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f1c7d7f4-dc60-4703-b6c3-6cd626db11af","Type":"ContainerStarted","Data":"905763d5f4df2194269413da5305c2cade78aefa23c40527af321dc5de2fb39b"} Mar 11 09:17:34 crc kubenswrapper[4840]: I0311 09:17:34.801311 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f1c7d7f4-dc60-4703-b6c3-6cd626db11af","Type":"ContainerStarted","Data":"9feba492b523c8dcc3c35b13e0ce09e4fe030b4986a39633f2601ebb6e23baa2"} Mar 11 09:17:34 crc kubenswrapper[4840]: E0311 09:17:34.810531 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:44ed1ca84e17bd0f004cfbdc3c0827d767daba52abb8e83e076bfd0e6c02f838\\\"\"" pod="openstack/cinder-db-sync-k7tgg" podUID="adf5efde-cb19-4870-9c8e-e7e139523238" Mar 11 09:17:34 crc kubenswrapper[4840]: I0311 09:17:34.822756 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-f7z67" podStartSLOduration=2.9333895979999998 podStartE2EDuration="23.822736546s" podCreationTimestamp="2026-03-11 09:17:11 +0000 UTC" firstStartedPulling="2026-03-11 09:17:13.163062924 +0000 UTC m=+1231.828732739" lastFinishedPulling="2026-03-11 09:17:34.052409872 +0000 UTC m=+1252.718079687" observedRunningTime="2026-03-11 09:17:34.814792066 +0000 UTC m=+1253.480461881" watchObservedRunningTime="2026-03-11 09:17:34.822736546 +0000 UTC m=+1253.488406361" Mar 11 09:17:34 crc kubenswrapper[4840]: I0311 09:17:34.953359 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 
11 09:17:35 crc kubenswrapper[4840]: I0311 09:17:35.198336 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 11 09:17:35 crc kubenswrapper[4840]: W0311 09:17:35.215790 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb3ad2408_3dda_4009_a898_5f2618fc18cf.slice/crio-cae0ee9cc9a39f3f6b139b839d6bb08a2817d9b18835e9df953610610da27737 WatchSource:0}: Error finding container cae0ee9cc9a39f3f6b139b839d6bb08a2817d9b18835e9df953610610da27737: Status 404 returned error can't find the container with id cae0ee9cc9a39f3f6b139b839d6bb08a2817d9b18835e9df953610610da27737 Mar 11 09:17:35 crc kubenswrapper[4840]: I0311 09:17:35.823772 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-jlpsf" event={"ID":"4d2cbae7-4ece-49e8-b85e-30db29d6c172","Type":"ContainerStarted","Data":"c928b472f2542eafc13cd171a09a9a135f9ed04cc8a7a9eb07a835da8146c648"} Mar 11 09:17:35 crc kubenswrapper[4840]: I0311 09:17:35.833432 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f1c7d7f4-dc60-4703-b6c3-6cd626db11af","Type":"ContainerStarted","Data":"d1ae76c881c421b02784a6adef0131cd437bfae62ec031f6665865845df2bf74"} Mar 11 09:17:35 crc kubenswrapper[4840]: I0311 09:17:35.838573 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8ea45272-7dc6-4227-ba52-e506fd81c0b4","Type":"ContainerStarted","Data":"12da96256bafcbc710e6afeaa00bdcd0372baf66eb1a6d2439ecdbd9e02d7b40"} Mar 11 09:17:35 crc kubenswrapper[4840]: I0311 09:17:35.838654 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8ea45272-7dc6-4227-ba52-e506fd81c0b4","Type":"ContainerStarted","Data":"641f0434d44e5d56a4e2ce90c3d1d2824a770c4a5206ef5c0c4746e61d3a77b8"} Mar 11 09:17:35 crc kubenswrapper[4840]: I0311 
09:17:35.846776 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-jlpsf" podStartSLOduration=13.846755129 podStartE2EDuration="13.846755129s" podCreationTimestamp="2026-03-11 09:17:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:17:35.843758394 +0000 UTC m=+1254.509428219" watchObservedRunningTime="2026-03-11 09:17:35.846755129 +0000 UTC m=+1254.512424944" Mar 11 09:17:35 crc kubenswrapper[4840]: I0311 09:17:35.848595 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b3ad2408-3dda-4009-a898-5f2618fc18cf","Type":"ContainerStarted","Data":"cae0ee9cc9a39f3f6b139b839d6bb08a2817d9b18835e9df953610610da27737"} Mar 11 09:17:35 crc kubenswrapper[4840]: I0311 09:17:35.882363 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=57.789694068 podStartE2EDuration="1m6.882342034s" podCreationTimestamp="2026-03-11 09:16:29 +0000 UTC" firstStartedPulling="2026-03-11 09:17:06.796659411 +0000 UTC m=+1225.462329226" lastFinishedPulling="2026-03-11 09:17:15.889307367 +0000 UTC m=+1234.554977192" observedRunningTime="2026-03-11 09:17:35.881161405 +0000 UTC m=+1254.546831230" watchObservedRunningTime="2026-03-11 09:17:35.882342034 +0000 UTC m=+1254.548011849" Mar 11 09:17:36 crc kubenswrapper[4840]: I0311 09:17:36.196704 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-ccd7c9f8f-rvxvk"] Mar 11 09:17:36 crc kubenswrapper[4840]: I0311 09:17:36.202590 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-ccd7c9f8f-rvxvk" Mar 11 09:17:36 crc kubenswrapper[4840]: I0311 09:17:36.208600 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Mar 11 09:17:36 crc kubenswrapper[4840]: I0311 09:17:36.296037 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-ccd7c9f8f-rvxvk"] Mar 11 09:17:36 crc kubenswrapper[4840]: I0311 09:17:36.302701 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eddf4c27-da2d-482e-8525-4c43594defc7-dns-swift-storage-0\") pod \"dnsmasq-dns-ccd7c9f8f-rvxvk\" (UID: \"eddf4c27-da2d-482e-8525-4c43594defc7\") " pod="openstack/dnsmasq-dns-ccd7c9f8f-rvxvk" Mar 11 09:17:36 crc kubenswrapper[4840]: I0311 09:17:36.303161 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eddf4c27-da2d-482e-8525-4c43594defc7-ovsdbserver-sb\") pod \"dnsmasq-dns-ccd7c9f8f-rvxvk\" (UID: \"eddf4c27-da2d-482e-8525-4c43594defc7\") " pod="openstack/dnsmasq-dns-ccd7c9f8f-rvxvk" Mar 11 09:17:36 crc kubenswrapper[4840]: I0311 09:17:36.303354 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gkqz\" (UniqueName: \"kubernetes.io/projected/eddf4c27-da2d-482e-8525-4c43594defc7-kube-api-access-4gkqz\") pod \"dnsmasq-dns-ccd7c9f8f-rvxvk\" (UID: \"eddf4c27-da2d-482e-8525-4c43594defc7\") " pod="openstack/dnsmasq-dns-ccd7c9f8f-rvxvk" Mar 11 09:17:36 crc kubenswrapper[4840]: I0311 09:17:36.303614 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eddf4c27-da2d-482e-8525-4c43594defc7-dns-svc\") pod \"dnsmasq-dns-ccd7c9f8f-rvxvk\" (UID: \"eddf4c27-da2d-482e-8525-4c43594defc7\") " 
pod="openstack/dnsmasq-dns-ccd7c9f8f-rvxvk" Mar 11 09:17:36 crc kubenswrapper[4840]: I0311 09:17:36.303818 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eddf4c27-da2d-482e-8525-4c43594defc7-ovsdbserver-nb\") pod \"dnsmasq-dns-ccd7c9f8f-rvxvk\" (UID: \"eddf4c27-da2d-482e-8525-4c43594defc7\") " pod="openstack/dnsmasq-dns-ccd7c9f8f-rvxvk" Mar 11 09:17:36 crc kubenswrapper[4840]: I0311 09:17:36.303977 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eddf4c27-da2d-482e-8525-4c43594defc7-config\") pod \"dnsmasq-dns-ccd7c9f8f-rvxvk\" (UID: \"eddf4c27-da2d-482e-8525-4c43594defc7\") " pod="openstack/dnsmasq-dns-ccd7c9f8f-rvxvk" Mar 11 09:17:36 crc kubenswrapper[4840]: I0311 09:17:36.320023 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-dghqs" Mar 11 09:17:36 crc kubenswrapper[4840]: I0311 09:17:36.405518 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qlnft\" (UniqueName: \"kubernetes.io/projected/68dbceea-1597-4236-b2d6-ca071bf86d5a-kube-api-access-qlnft\") pod \"68dbceea-1597-4236-b2d6-ca071bf86d5a\" (UID: \"68dbceea-1597-4236-b2d6-ca071bf86d5a\") " Mar 11 09:17:36 crc kubenswrapper[4840]: I0311 09:17:36.405754 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68dbceea-1597-4236-b2d6-ca071bf86d5a-combined-ca-bundle\") pod \"68dbceea-1597-4236-b2d6-ca071bf86d5a\" (UID: \"68dbceea-1597-4236-b2d6-ca071bf86d5a\") " Mar 11 09:17:36 crc kubenswrapper[4840]: I0311 09:17:36.405798 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/68dbceea-1597-4236-b2d6-ca071bf86d5a-config\") pod 
\"68dbceea-1597-4236-b2d6-ca071bf86d5a\" (UID: \"68dbceea-1597-4236-b2d6-ca071bf86d5a\") " Mar 11 09:17:36 crc kubenswrapper[4840]: I0311 09:17:36.406339 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eddf4c27-da2d-482e-8525-4c43594defc7-ovsdbserver-nb\") pod \"dnsmasq-dns-ccd7c9f8f-rvxvk\" (UID: \"eddf4c27-da2d-482e-8525-4c43594defc7\") " pod="openstack/dnsmasq-dns-ccd7c9f8f-rvxvk" Mar 11 09:17:36 crc kubenswrapper[4840]: I0311 09:17:36.406414 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eddf4c27-da2d-482e-8525-4c43594defc7-config\") pod \"dnsmasq-dns-ccd7c9f8f-rvxvk\" (UID: \"eddf4c27-da2d-482e-8525-4c43594defc7\") " pod="openstack/dnsmasq-dns-ccd7c9f8f-rvxvk" Mar 11 09:17:36 crc kubenswrapper[4840]: I0311 09:17:36.406571 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eddf4c27-da2d-482e-8525-4c43594defc7-dns-swift-storage-0\") pod \"dnsmasq-dns-ccd7c9f8f-rvxvk\" (UID: \"eddf4c27-da2d-482e-8525-4c43594defc7\") " pod="openstack/dnsmasq-dns-ccd7c9f8f-rvxvk" Mar 11 09:17:36 crc kubenswrapper[4840]: I0311 09:17:36.406600 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eddf4c27-da2d-482e-8525-4c43594defc7-ovsdbserver-sb\") pod \"dnsmasq-dns-ccd7c9f8f-rvxvk\" (UID: \"eddf4c27-da2d-482e-8525-4c43594defc7\") " pod="openstack/dnsmasq-dns-ccd7c9f8f-rvxvk" Mar 11 09:17:36 crc kubenswrapper[4840]: I0311 09:17:36.406651 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4gkqz\" (UniqueName: \"kubernetes.io/projected/eddf4c27-da2d-482e-8525-4c43594defc7-kube-api-access-4gkqz\") pod \"dnsmasq-dns-ccd7c9f8f-rvxvk\" (UID: \"eddf4c27-da2d-482e-8525-4c43594defc7\") " 
pod="openstack/dnsmasq-dns-ccd7c9f8f-rvxvk" Mar 11 09:17:36 crc kubenswrapper[4840]: I0311 09:17:36.406717 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eddf4c27-da2d-482e-8525-4c43594defc7-dns-svc\") pod \"dnsmasq-dns-ccd7c9f8f-rvxvk\" (UID: \"eddf4c27-da2d-482e-8525-4c43594defc7\") " pod="openstack/dnsmasq-dns-ccd7c9f8f-rvxvk" Mar 11 09:17:36 crc kubenswrapper[4840]: I0311 09:17:36.407692 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eddf4c27-da2d-482e-8525-4c43594defc7-dns-svc\") pod \"dnsmasq-dns-ccd7c9f8f-rvxvk\" (UID: \"eddf4c27-da2d-482e-8525-4c43594defc7\") " pod="openstack/dnsmasq-dns-ccd7c9f8f-rvxvk" Mar 11 09:17:36 crc kubenswrapper[4840]: I0311 09:17:36.412824 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eddf4c27-da2d-482e-8525-4c43594defc7-ovsdbserver-sb\") pod \"dnsmasq-dns-ccd7c9f8f-rvxvk\" (UID: \"eddf4c27-da2d-482e-8525-4c43594defc7\") " pod="openstack/dnsmasq-dns-ccd7c9f8f-rvxvk" Mar 11 09:17:36 crc kubenswrapper[4840]: I0311 09:17:36.413510 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eddf4c27-da2d-482e-8525-4c43594defc7-dns-swift-storage-0\") pod \"dnsmasq-dns-ccd7c9f8f-rvxvk\" (UID: \"eddf4c27-da2d-482e-8525-4c43594defc7\") " pod="openstack/dnsmasq-dns-ccd7c9f8f-rvxvk" Mar 11 09:17:36 crc kubenswrapper[4840]: I0311 09:17:36.416692 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eddf4c27-da2d-482e-8525-4c43594defc7-ovsdbserver-nb\") pod \"dnsmasq-dns-ccd7c9f8f-rvxvk\" (UID: \"eddf4c27-da2d-482e-8525-4c43594defc7\") " pod="openstack/dnsmasq-dns-ccd7c9f8f-rvxvk" Mar 11 09:17:36 crc kubenswrapper[4840]: I0311 09:17:36.419226 4840 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eddf4c27-da2d-482e-8525-4c43594defc7-config\") pod \"dnsmasq-dns-ccd7c9f8f-rvxvk\" (UID: \"eddf4c27-da2d-482e-8525-4c43594defc7\") " pod="openstack/dnsmasq-dns-ccd7c9f8f-rvxvk" Mar 11 09:17:36 crc kubenswrapper[4840]: I0311 09:17:36.419901 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68dbceea-1597-4236-b2d6-ca071bf86d5a-kube-api-access-qlnft" (OuterVolumeSpecName: "kube-api-access-qlnft") pod "68dbceea-1597-4236-b2d6-ca071bf86d5a" (UID: "68dbceea-1597-4236-b2d6-ca071bf86d5a"). InnerVolumeSpecName "kube-api-access-qlnft". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:17:36 crc kubenswrapper[4840]: I0311 09:17:36.441254 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gkqz\" (UniqueName: \"kubernetes.io/projected/eddf4c27-da2d-482e-8525-4c43594defc7-kube-api-access-4gkqz\") pod \"dnsmasq-dns-ccd7c9f8f-rvxvk\" (UID: \"eddf4c27-da2d-482e-8525-4c43594defc7\") " pod="openstack/dnsmasq-dns-ccd7c9f8f-rvxvk" Mar 11 09:17:36 crc kubenswrapper[4840]: I0311 09:17:36.452560 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68dbceea-1597-4236-b2d6-ca071bf86d5a-config" (OuterVolumeSpecName: "config") pod "68dbceea-1597-4236-b2d6-ca071bf86d5a" (UID: "68dbceea-1597-4236-b2d6-ca071bf86d5a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:17:36 crc kubenswrapper[4840]: I0311 09:17:36.455332 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68dbceea-1597-4236-b2d6-ca071bf86d5a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "68dbceea-1597-4236-b2d6-ca071bf86d5a" (UID: "68dbceea-1597-4236-b2d6-ca071bf86d5a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:17:36 crc kubenswrapper[4840]: I0311 09:17:36.509596 4840 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68dbceea-1597-4236-b2d6-ca071bf86d5a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 09:17:36 crc kubenswrapper[4840]: I0311 09:17:36.510388 4840 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/68dbceea-1597-4236-b2d6-ca071bf86d5a-config\") on node \"crc\" DevicePath \"\"" Mar 11 09:17:36 crc kubenswrapper[4840]: I0311 09:17:36.510490 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qlnft\" (UniqueName: \"kubernetes.io/projected/68dbceea-1597-4236-b2d6-ca071bf86d5a-kube-api-access-qlnft\") on node \"crc\" DevicePath \"\"" Mar 11 09:17:36 crc kubenswrapper[4840]: I0311 09:17:36.644763 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-ccd7c9f8f-rvxvk" Mar 11 09:17:36 crc kubenswrapper[4840]: I0311 09:17:36.880190 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"02a9854a-9978-4414-972d-37c0b579b03b","Type":"ContainerStarted","Data":"06059cb8a2046225ba5bd791cd76894815640d133945e7292e8f7f4aa3ee6dd2"} Mar 11 09:17:36 crc kubenswrapper[4840]: I0311 09:17:36.884630 4840 generic.go:334] "Generic (PLEG): container finished" podID="573b362e-582b-43f0-afae-c038cf95f625" containerID="43c4acb8183ccb5e90836bfc3e4bcd3459d9f4424c2deb8ad2b687db74f8be98" exitCode=0 Mar 11 09:17:36 crc kubenswrapper[4840]: I0311 09:17:36.884709 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-f7z67" event={"ID":"573b362e-582b-43f0-afae-c038cf95f625","Type":"ContainerDied","Data":"43c4acb8183ccb5e90836bfc3e4bcd3459d9f4424c2deb8ad2b687db74f8be98"} Mar 11 09:17:36 crc kubenswrapper[4840]: I0311 09:17:36.890212 4840 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b3ad2408-3dda-4009-a898-5f2618fc18cf","Type":"ContainerStarted","Data":"0f3c80a9ff890a01d385011299ef95918b4a997ceb457e6a46517cb25523ebe7"} Mar 11 09:17:36 crc kubenswrapper[4840]: I0311 09:17:36.904289 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-dghqs" event={"ID":"68dbceea-1597-4236-b2d6-ca071bf86d5a","Type":"ContainerDied","Data":"554ad8c77383acfd1421868f5d5b96e6290ba042f9822e64fa192433fce3a8c2"} Mar 11 09:17:36 crc kubenswrapper[4840]: I0311 09:17:36.904335 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="554ad8c77383acfd1421868f5d5b96e6290ba042f9822e64fa192433fce3a8c2" Mar 11 09:17:36 crc kubenswrapper[4840]: I0311 09:17:36.905783 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-dghqs" Mar 11 09:17:37 crc kubenswrapper[4840]: I0311 09:17:37.093163 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-ccd7c9f8f-rvxvk"] Mar 11 09:17:37 crc kubenswrapper[4840]: I0311 09:17:37.142766 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7859c7799c-2ksc5"] Mar 11 09:17:37 crc kubenswrapper[4840]: E0311 09:17:37.143489 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68dbceea-1597-4236-b2d6-ca071bf86d5a" containerName="neutron-db-sync" Mar 11 09:17:37 crc kubenswrapper[4840]: I0311 09:17:37.144301 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="68dbceea-1597-4236-b2d6-ca071bf86d5a" containerName="neutron-db-sync" Mar 11 09:17:37 crc kubenswrapper[4840]: I0311 09:17:37.144597 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="68dbceea-1597-4236-b2d6-ca071bf86d5a" containerName="neutron-db-sync" Mar 11 09:17:37 crc kubenswrapper[4840]: I0311 09:17:37.145738 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7859c7799c-2ksc5" Mar 11 09:17:37 crc kubenswrapper[4840]: I0311 09:17:37.164650 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7859c7799c-2ksc5"] Mar 11 09:17:37 crc kubenswrapper[4840]: I0311 09:17:37.192033 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-ccd7c9f8f-rvxvk"] Mar 11 09:17:37 crc kubenswrapper[4840]: I0311 09:17:37.230769 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dfe1c60c-ed7c-4a0a-a85e-82261146409d-config\") pod \"dnsmasq-dns-7859c7799c-2ksc5\" (UID: \"dfe1c60c-ed7c-4a0a-a85e-82261146409d\") " pod="openstack/dnsmasq-dns-7859c7799c-2ksc5" Mar 11 09:17:37 crc kubenswrapper[4840]: I0311 09:17:37.235599 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dfe1c60c-ed7c-4a0a-a85e-82261146409d-ovsdbserver-nb\") pod \"dnsmasq-dns-7859c7799c-2ksc5\" (UID: \"dfe1c60c-ed7c-4a0a-a85e-82261146409d\") " pod="openstack/dnsmasq-dns-7859c7799c-2ksc5" Mar 11 09:17:37 crc kubenswrapper[4840]: I0311 09:17:37.235694 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dfe1c60c-ed7c-4a0a-a85e-82261146409d-ovsdbserver-sb\") pod \"dnsmasq-dns-7859c7799c-2ksc5\" (UID: \"dfe1c60c-ed7c-4a0a-a85e-82261146409d\") " pod="openstack/dnsmasq-dns-7859c7799c-2ksc5" Mar 11 09:17:37 crc kubenswrapper[4840]: I0311 09:17:37.235833 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dfe1c60c-ed7c-4a0a-a85e-82261146409d-dns-svc\") pod \"dnsmasq-dns-7859c7799c-2ksc5\" (UID: \"dfe1c60c-ed7c-4a0a-a85e-82261146409d\") " pod="openstack/dnsmasq-dns-7859c7799c-2ksc5" 
Mar 11 09:17:37 crc kubenswrapper[4840]: I0311 09:17:37.235911 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjg28\" (UniqueName: \"kubernetes.io/projected/dfe1c60c-ed7c-4a0a-a85e-82261146409d-kube-api-access-fjg28\") pod \"dnsmasq-dns-7859c7799c-2ksc5\" (UID: \"dfe1c60c-ed7c-4a0a-a85e-82261146409d\") " pod="openstack/dnsmasq-dns-7859c7799c-2ksc5" Mar 11 09:17:37 crc kubenswrapper[4840]: I0311 09:17:37.236102 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dfe1c60c-ed7c-4a0a-a85e-82261146409d-dns-swift-storage-0\") pod \"dnsmasq-dns-7859c7799c-2ksc5\" (UID: \"dfe1c60c-ed7c-4a0a-a85e-82261146409d\") " pod="openstack/dnsmasq-dns-7859c7799c-2ksc5" Mar 11 09:17:37 crc kubenswrapper[4840]: I0311 09:17:37.335713 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-9dc6d5c86-nszp2"] Mar 11 09:17:37 crc kubenswrapper[4840]: I0311 09:17:37.338427 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dfe1c60c-ed7c-4a0a-a85e-82261146409d-dns-swift-storage-0\") pod \"dnsmasq-dns-7859c7799c-2ksc5\" (UID: \"dfe1c60c-ed7c-4a0a-a85e-82261146409d\") " pod="openstack/dnsmasq-dns-7859c7799c-2ksc5" Mar 11 09:17:37 crc kubenswrapper[4840]: I0311 09:17:37.338632 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dfe1c60c-ed7c-4a0a-a85e-82261146409d-config\") pod \"dnsmasq-dns-7859c7799c-2ksc5\" (UID: \"dfe1c60c-ed7c-4a0a-a85e-82261146409d\") " pod="openstack/dnsmasq-dns-7859c7799c-2ksc5" Mar 11 09:17:37 crc kubenswrapper[4840]: I0311 09:17:37.338657 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/dfe1c60c-ed7c-4a0a-a85e-82261146409d-ovsdbserver-nb\") pod \"dnsmasq-dns-7859c7799c-2ksc5\" (UID: \"dfe1c60c-ed7c-4a0a-a85e-82261146409d\") " pod="openstack/dnsmasq-dns-7859c7799c-2ksc5" Mar 11 09:17:37 crc kubenswrapper[4840]: I0311 09:17:37.338678 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dfe1c60c-ed7c-4a0a-a85e-82261146409d-ovsdbserver-sb\") pod \"dnsmasq-dns-7859c7799c-2ksc5\" (UID: \"dfe1c60c-ed7c-4a0a-a85e-82261146409d\") " pod="openstack/dnsmasq-dns-7859c7799c-2ksc5" Mar 11 09:17:37 crc kubenswrapper[4840]: I0311 09:17:37.338712 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dfe1c60c-ed7c-4a0a-a85e-82261146409d-dns-svc\") pod \"dnsmasq-dns-7859c7799c-2ksc5\" (UID: \"dfe1c60c-ed7c-4a0a-a85e-82261146409d\") " pod="openstack/dnsmasq-dns-7859c7799c-2ksc5" Mar 11 09:17:37 crc kubenswrapper[4840]: I0311 09:17:37.338733 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjg28\" (UniqueName: \"kubernetes.io/projected/dfe1c60c-ed7c-4a0a-a85e-82261146409d-kube-api-access-fjg28\") pod \"dnsmasq-dns-7859c7799c-2ksc5\" (UID: \"dfe1c60c-ed7c-4a0a-a85e-82261146409d\") " pod="openstack/dnsmasq-dns-7859c7799c-2ksc5" Mar 11 09:17:37 crc kubenswrapper[4840]: I0311 09:17:37.339865 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dfe1c60c-ed7c-4a0a-a85e-82261146409d-dns-swift-storage-0\") pod \"dnsmasq-dns-7859c7799c-2ksc5\" (UID: \"dfe1c60c-ed7c-4a0a-a85e-82261146409d\") " pod="openstack/dnsmasq-dns-7859c7799c-2ksc5" Mar 11 09:17:37 crc kubenswrapper[4840]: I0311 09:17:37.340407 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/dfe1c60c-ed7c-4a0a-a85e-82261146409d-config\") pod \"dnsmasq-dns-7859c7799c-2ksc5\" (UID: \"dfe1c60c-ed7c-4a0a-a85e-82261146409d\") " pod="openstack/dnsmasq-dns-7859c7799c-2ksc5" Mar 11 09:17:37 crc kubenswrapper[4840]: I0311 09:17:37.341182 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dfe1c60c-ed7c-4a0a-a85e-82261146409d-ovsdbserver-sb\") pod \"dnsmasq-dns-7859c7799c-2ksc5\" (UID: \"dfe1c60c-ed7c-4a0a-a85e-82261146409d\") " pod="openstack/dnsmasq-dns-7859c7799c-2ksc5" Mar 11 09:17:37 crc kubenswrapper[4840]: I0311 09:17:37.342304 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dfe1c60c-ed7c-4a0a-a85e-82261146409d-ovsdbserver-nb\") pod \"dnsmasq-dns-7859c7799c-2ksc5\" (UID: \"dfe1c60c-ed7c-4a0a-a85e-82261146409d\") " pod="openstack/dnsmasq-dns-7859c7799c-2ksc5" Mar 11 09:17:37 crc kubenswrapper[4840]: I0311 09:17:37.342950 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dfe1c60c-ed7c-4a0a-a85e-82261146409d-dns-svc\") pod \"dnsmasq-dns-7859c7799c-2ksc5\" (UID: \"dfe1c60c-ed7c-4a0a-a85e-82261146409d\") " pod="openstack/dnsmasq-dns-7859c7799c-2ksc5" Mar 11 09:17:37 crc kubenswrapper[4840]: I0311 09:17:37.357695 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-9dc6d5c86-nszp2"] Mar 11 09:17:37 crc kubenswrapper[4840]: I0311 09:17:37.357843 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-9dc6d5c86-nszp2" Mar 11 09:17:37 crc kubenswrapper[4840]: I0311 09:17:37.374969 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjg28\" (UniqueName: \"kubernetes.io/projected/dfe1c60c-ed7c-4a0a-a85e-82261146409d-kube-api-access-fjg28\") pod \"dnsmasq-dns-7859c7799c-2ksc5\" (UID: \"dfe1c60c-ed7c-4a0a-a85e-82261146409d\") " pod="openstack/dnsmasq-dns-7859c7799c-2ksc5" Mar 11 09:17:37 crc kubenswrapper[4840]: I0311 09:17:37.375164 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Mar 11 09:17:37 crc kubenswrapper[4840]: I0311 09:17:37.375515 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-vpz57" Mar 11 09:17:37 crc kubenswrapper[4840]: I0311 09:17:37.375627 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Mar 11 09:17:37 crc kubenswrapper[4840]: I0311 09:17:37.375731 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Mar 11 09:17:37 crc kubenswrapper[4840]: I0311 09:17:37.440275 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e3d8e86-db0a-40d7-bfc2-47253da00ec7-combined-ca-bundle\") pod \"neutron-9dc6d5c86-nszp2\" (UID: \"5e3d8e86-db0a-40d7-bfc2-47253da00ec7\") " pod="openstack/neutron-9dc6d5c86-nszp2" Mar 11 09:17:37 crc kubenswrapper[4840]: I0311 09:17:37.440314 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e3d8e86-db0a-40d7-bfc2-47253da00ec7-ovndb-tls-certs\") pod \"neutron-9dc6d5c86-nszp2\" (UID: \"5e3d8e86-db0a-40d7-bfc2-47253da00ec7\") " pod="openstack/neutron-9dc6d5c86-nszp2" Mar 11 09:17:37 crc kubenswrapper[4840]: I0311 09:17:37.440346 4840 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5e3d8e86-db0a-40d7-bfc2-47253da00ec7-httpd-config\") pod \"neutron-9dc6d5c86-nszp2\" (UID: \"5e3d8e86-db0a-40d7-bfc2-47253da00ec7\") " pod="openstack/neutron-9dc6d5c86-nszp2" Mar 11 09:17:37 crc kubenswrapper[4840]: I0311 09:17:37.440377 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2l7xm\" (UniqueName: \"kubernetes.io/projected/5e3d8e86-db0a-40d7-bfc2-47253da00ec7-kube-api-access-2l7xm\") pod \"neutron-9dc6d5c86-nszp2\" (UID: \"5e3d8e86-db0a-40d7-bfc2-47253da00ec7\") " pod="openstack/neutron-9dc6d5c86-nszp2" Mar 11 09:17:37 crc kubenswrapper[4840]: I0311 09:17:37.440456 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5e3d8e86-db0a-40d7-bfc2-47253da00ec7-config\") pod \"neutron-9dc6d5c86-nszp2\" (UID: \"5e3d8e86-db0a-40d7-bfc2-47253da00ec7\") " pod="openstack/neutron-9dc6d5c86-nszp2" Mar 11 09:17:37 crc kubenswrapper[4840]: I0311 09:17:37.487337 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7859c7799c-2ksc5" Mar 11 09:17:37 crc kubenswrapper[4840]: I0311 09:17:37.542814 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5e3d8e86-db0a-40d7-bfc2-47253da00ec7-config\") pod \"neutron-9dc6d5c86-nszp2\" (UID: \"5e3d8e86-db0a-40d7-bfc2-47253da00ec7\") " pod="openstack/neutron-9dc6d5c86-nszp2" Mar 11 09:17:37 crc kubenswrapper[4840]: I0311 09:17:37.542932 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e3d8e86-db0a-40d7-bfc2-47253da00ec7-combined-ca-bundle\") pod \"neutron-9dc6d5c86-nszp2\" (UID: \"5e3d8e86-db0a-40d7-bfc2-47253da00ec7\") " pod="openstack/neutron-9dc6d5c86-nszp2" Mar 11 09:17:37 crc kubenswrapper[4840]: I0311 09:17:37.542962 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e3d8e86-db0a-40d7-bfc2-47253da00ec7-ovndb-tls-certs\") pod \"neutron-9dc6d5c86-nszp2\" (UID: \"5e3d8e86-db0a-40d7-bfc2-47253da00ec7\") " pod="openstack/neutron-9dc6d5c86-nszp2" Mar 11 09:17:37 crc kubenswrapper[4840]: I0311 09:17:37.543002 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5e3d8e86-db0a-40d7-bfc2-47253da00ec7-httpd-config\") pod \"neutron-9dc6d5c86-nszp2\" (UID: \"5e3d8e86-db0a-40d7-bfc2-47253da00ec7\") " pod="openstack/neutron-9dc6d5c86-nszp2" Mar 11 09:17:37 crc kubenswrapper[4840]: I0311 09:17:37.543046 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2l7xm\" (UniqueName: \"kubernetes.io/projected/5e3d8e86-db0a-40d7-bfc2-47253da00ec7-kube-api-access-2l7xm\") pod \"neutron-9dc6d5c86-nszp2\" (UID: \"5e3d8e86-db0a-40d7-bfc2-47253da00ec7\") " pod="openstack/neutron-9dc6d5c86-nszp2" Mar 11 09:17:37 crc kubenswrapper[4840]: 
I0311 09:17:37.553518 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e3d8e86-db0a-40d7-bfc2-47253da00ec7-ovndb-tls-certs\") pod \"neutron-9dc6d5c86-nszp2\" (UID: \"5e3d8e86-db0a-40d7-bfc2-47253da00ec7\") " pod="openstack/neutron-9dc6d5c86-nszp2" Mar 11 09:17:37 crc kubenswrapper[4840]: I0311 09:17:37.555956 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e3d8e86-db0a-40d7-bfc2-47253da00ec7-combined-ca-bundle\") pod \"neutron-9dc6d5c86-nszp2\" (UID: \"5e3d8e86-db0a-40d7-bfc2-47253da00ec7\") " pod="openstack/neutron-9dc6d5c86-nszp2" Mar 11 09:17:37 crc kubenswrapper[4840]: I0311 09:17:37.556178 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/5e3d8e86-db0a-40d7-bfc2-47253da00ec7-config\") pod \"neutron-9dc6d5c86-nszp2\" (UID: \"5e3d8e86-db0a-40d7-bfc2-47253da00ec7\") " pod="openstack/neutron-9dc6d5c86-nszp2" Mar 11 09:17:37 crc kubenswrapper[4840]: I0311 09:17:37.557179 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5e3d8e86-db0a-40d7-bfc2-47253da00ec7-httpd-config\") pod \"neutron-9dc6d5c86-nszp2\" (UID: \"5e3d8e86-db0a-40d7-bfc2-47253da00ec7\") " pod="openstack/neutron-9dc6d5c86-nszp2" Mar 11 09:17:37 crc kubenswrapper[4840]: I0311 09:17:37.568636 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2l7xm\" (UniqueName: \"kubernetes.io/projected/5e3d8e86-db0a-40d7-bfc2-47253da00ec7-kube-api-access-2l7xm\") pod \"neutron-9dc6d5c86-nszp2\" (UID: \"5e3d8e86-db0a-40d7-bfc2-47253da00ec7\") " pod="openstack/neutron-9dc6d5c86-nszp2" Mar 11 09:17:37 crc kubenswrapper[4840]: I0311 09:17:37.725031 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-9dc6d5c86-nszp2" Mar 11 09:17:37 crc kubenswrapper[4840]: I0311 09:17:37.935577 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8ea45272-7dc6-4227-ba52-e506fd81c0b4","Type":"ContainerStarted","Data":"a4dd624ea8792381b07e9ab369608d7d8ed8a0f685ccd5a7af208b7afb64530d"} Mar 11 09:17:37 crc kubenswrapper[4840]: I0311 09:17:37.939250 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b3ad2408-3dda-4009-a898-5f2618fc18cf","Type":"ContainerStarted","Data":"58052d3369d8502a608a76a3fac8089afd975a97605d8aede537f5084de1526b"} Mar 11 09:17:37 crc kubenswrapper[4840]: I0311 09:17:37.941601 4840 generic.go:334] "Generic (PLEG): container finished" podID="eddf4c27-da2d-482e-8525-4c43594defc7" containerID="a985e620cbaed31f44e64d5bdea3e0496a71d53c8c6e8badb3a08abe4bbd5ffd" exitCode=0 Mar 11 09:17:37 crc kubenswrapper[4840]: I0311 09:17:37.941713 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-ccd7c9f8f-rvxvk" event={"ID":"eddf4c27-da2d-482e-8525-4c43594defc7","Type":"ContainerDied","Data":"a985e620cbaed31f44e64d5bdea3e0496a71d53c8c6e8badb3a08abe4bbd5ffd"} Mar 11 09:17:37 crc kubenswrapper[4840]: I0311 09:17:37.941757 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-ccd7c9f8f-rvxvk" event={"ID":"eddf4c27-da2d-482e-8525-4c43594defc7","Type":"ContainerStarted","Data":"e58923ff18d6fd659c502ea47c0286bf2ec5552743e2046af17c079b7079b9e4"} Mar 11 09:17:37 crc kubenswrapper[4840]: I0311 09:17:37.972982 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.972962942 podStartE2EDuration="4.972962942s" podCreationTimestamp="2026-03-11 09:17:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-03-11 09:17:37.963091294 +0000 UTC m=+1256.628761109" watchObservedRunningTime="2026-03-11 09:17:37.972962942 +0000 UTC m=+1256.638632757" Mar 11 09:17:38 crc kubenswrapper[4840]: I0311 09:17:38.016816 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.016785234 podStartE2EDuration="5.016785234s" podCreationTimestamp="2026-03-11 09:17:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:17:38.006218349 +0000 UTC m=+1256.671888174" watchObservedRunningTime="2026-03-11 09:17:38.016785234 +0000 UTC m=+1256.682455049" Mar 11 09:17:38 crc kubenswrapper[4840]: I0311 09:17:38.044399 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7859c7799c-2ksc5"] Mar 11 09:17:38 crc kubenswrapper[4840]: I0311 09:17:38.392172 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-9dc6d5c86-nszp2"] Mar 11 09:17:38 crc kubenswrapper[4840]: W0311 09:17:38.458849 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5e3d8e86_db0a_40d7_bfc2_47253da00ec7.slice/crio-7c00c25e651728f75ee2f78f5a18804b4417a369f94146852075105195868065 WatchSource:0}: Error finding container 7c00c25e651728f75ee2f78f5a18804b4417a369f94146852075105195868065: Status 404 returned error can't find the container with id 7c00c25e651728f75ee2f78f5a18804b4417a369f94146852075105195868065 Mar 11 09:17:38 crc kubenswrapper[4840]: I0311 09:17:38.470239 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-f7z67" Mar 11 09:17:38 crc kubenswrapper[4840]: I0311 09:17:38.499224 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-ccd7c9f8f-rvxvk" Mar 11 09:17:38 crc kubenswrapper[4840]: I0311 09:17:38.565320 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/573b362e-582b-43f0-afae-c038cf95f625-scripts\") pod \"573b362e-582b-43f0-afae-c038cf95f625\" (UID: \"573b362e-582b-43f0-afae-c038cf95f625\") " Mar 11 09:17:38 crc kubenswrapper[4840]: I0311 09:17:38.565426 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eddf4c27-da2d-482e-8525-4c43594defc7-config\") pod \"eddf4c27-da2d-482e-8525-4c43594defc7\" (UID: \"eddf4c27-da2d-482e-8525-4c43594defc7\") " Mar 11 09:17:38 crc kubenswrapper[4840]: I0311 09:17:38.565453 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eddf4c27-da2d-482e-8525-4c43594defc7-ovsdbserver-nb\") pod \"eddf4c27-da2d-482e-8525-4c43594defc7\" (UID: \"eddf4c27-da2d-482e-8525-4c43594defc7\") " Mar 11 09:17:38 crc kubenswrapper[4840]: I0311 09:17:38.565500 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eddf4c27-da2d-482e-8525-4c43594defc7-dns-svc\") pod \"eddf4c27-da2d-482e-8525-4c43594defc7\" (UID: \"eddf4c27-da2d-482e-8525-4c43594defc7\") " Mar 11 09:17:38 crc kubenswrapper[4840]: I0311 09:17:38.565601 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eddf4c27-da2d-482e-8525-4c43594defc7-ovsdbserver-sb\") pod \"eddf4c27-da2d-482e-8525-4c43594defc7\" (UID: \"eddf4c27-da2d-482e-8525-4c43594defc7\") " Mar 11 09:17:38 crc kubenswrapper[4840]: I0311 09:17:38.565631 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/573b362e-582b-43f0-afae-c038cf95f625-combined-ca-bundle\") pod \"573b362e-582b-43f0-afae-c038cf95f625\" (UID: \"573b362e-582b-43f0-afae-c038cf95f625\") " Mar 11 09:17:38 crc kubenswrapper[4840]: I0311 09:17:38.565698 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eddf4c27-da2d-482e-8525-4c43594defc7-dns-swift-storage-0\") pod \"eddf4c27-da2d-482e-8525-4c43594defc7\" (UID: \"eddf4c27-da2d-482e-8525-4c43594defc7\") " Mar 11 09:17:38 crc kubenswrapper[4840]: I0311 09:17:38.566043 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4gkqz\" (UniqueName: \"kubernetes.io/projected/eddf4c27-da2d-482e-8525-4c43594defc7-kube-api-access-4gkqz\") pod \"eddf4c27-da2d-482e-8525-4c43594defc7\" (UID: \"eddf4c27-da2d-482e-8525-4c43594defc7\") " Mar 11 09:17:38 crc kubenswrapper[4840]: I0311 09:17:38.566077 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fnbvw\" (UniqueName: \"kubernetes.io/projected/573b362e-582b-43f0-afae-c038cf95f625-kube-api-access-fnbvw\") pod \"573b362e-582b-43f0-afae-c038cf95f625\" (UID: \"573b362e-582b-43f0-afae-c038cf95f625\") " Mar 11 09:17:38 crc kubenswrapper[4840]: I0311 09:17:38.566106 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/573b362e-582b-43f0-afae-c038cf95f625-config-data\") pod \"573b362e-582b-43f0-afae-c038cf95f625\" (UID: \"573b362e-582b-43f0-afae-c038cf95f625\") " Mar 11 09:17:38 crc kubenswrapper[4840]: I0311 09:17:38.566124 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/573b362e-582b-43f0-afae-c038cf95f625-logs\") pod \"573b362e-582b-43f0-afae-c038cf95f625\" (UID: \"573b362e-582b-43f0-afae-c038cf95f625\") " Mar 11 09:17:38 crc kubenswrapper[4840]: 
I0311 09:17:38.566938 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/573b362e-582b-43f0-afae-c038cf95f625-logs" (OuterVolumeSpecName: "logs") pod "573b362e-582b-43f0-afae-c038cf95f625" (UID: "573b362e-582b-43f0-afae-c038cf95f625"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:17:38 crc kubenswrapper[4840]: I0311 09:17:38.570147 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eddf4c27-da2d-482e-8525-4c43594defc7-kube-api-access-4gkqz" (OuterVolumeSpecName: "kube-api-access-4gkqz") pod "eddf4c27-da2d-482e-8525-4c43594defc7" (UID: "eddf4c27-da2d-482e-8525-4c43594defc7"). InnerVolumeSpecName "kube-api-access-4gkqz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:17:38 crc kubenswrapper[4840]: I0311 09:17:38.570748 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/573b362e-582b-43f0-afae-c038cf95f625-kube-api-access-fnbvw" (OuterVolumeSpecName: "kube-api-access-fnbvw") pod "573b362e-582b-43f0-afae-c038cf95f625" (UID: "573b362e-582b-43f0-afae-c038cf95f625"). InnerVolumeSpecName "kube-api-access-fnbvw". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:17:38 crc kubenswrapper[4840]: I0311 09:17:38.572187 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/573b362e-582b-43f0-afae-c038cf95f625-scripts" (OuterVolumeSpecName: "scripts") pod "573b362e-582b-43f0-afae-c038cf95f625" (UID: "573b362e-582b-43f0-afae-c038cf95f625"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:17:38 crc kubenswrapper[4840]: I0311 09:17:38.595377 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eddf4c27-da2d-482e-8525-4c43594defc7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "eddf4c27-da2d-482e-8525-4c43594defc7" (UID: "eddf4c27-da2d-482e-8525-4c43594defc7"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:17:38 crc kubenswrapper[4840]: I0311 09:17:38.597558 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eddf4c27-da2d-482e-8525-4c43594defc7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "eddf4c27-da2d-482e-8525-4c43594defc7" (UID: "eddf4c27-da2d-482e-8525-4c43594defc7"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:17:38 crc kubenswrapper[4840]: I0311 09:17:38.597592 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eddf4c27-da2d-482e-8525-4c43594defc7-config" (OuterVolumeSpecName: "config") pod "eddf4c27-da2d-482e-8525-4c43594defc7" (UID: "eddf4c27-da2d-482e-8525-4c43594defc7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:17:38 crc kubenswrapper[4840]: I0311 09:17:38.598288 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/573b362e-582b-43f0-afae-c038cf95f625-config-data" (OuterVolumeSpecName: "config-data") pod "573b362e-582b-43f0-afae-c038cf95f625" (UID: "573b362e-582b-43f0-afae-c038cf95f625"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:17:38 crc kubenswrapper[4840]: I0311 09:17:38.599542 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eddf4c27-da2d-482e-8525-4c43594defc7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "eddf4c27-da2d-482e-8525-4c43594defc7" (UID: "eddf4c27-da2d-482e-8525-4c43594defc7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:17:38 crc kubenswrapper[4840]: I0311 09:17:38.600781 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/573b362e-582b-43f0-afae-c038cf95f625-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "573b362e-582b-43f0-afae-c038cf95f625" (UID: "573b362e-582b-43f0-afae-c038cf95f625"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:17:38 crc kubenswrapper[4840]: I0311 09:17:38.601247 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eddf4c27-da2d-482e-8525-4c43594defc7-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "eddf4c27-da2d-482e-8525-4c43594defc7" (UID: "eddf4c27-da2d-482e-8525-4c43594defc7"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:17:38 crc kubenswrapper[4840]: I0311 09:17:38.668831 4840 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eddf4c27-da2d-482e-8525-4c43594defc7-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 11 09:17:38 crc kubenswrapper[4840]: I0311 09:17:38.668865 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4gkqz\" (UniqueName: \"kubernetes.io/projected/eddf4c27-da2d-482e-8525-4c43594defc7-kube-api-access-4gkqz\") on node \"crc\" DevicePath \"\"" Mar 11 09:17:38 crc kubenswrapper[4840]: I0311 09:17:38.668876 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fnbvw\" (UniqueName: \"kubernetes.io/projected/573b362e-582b-43f0-afae-c038cf95f625-kube-api-access-fnbvw\") on node \"crc\" DevicePath \"\"" Mar 11 09:17:38 crc kubenswrapper[4840]: I0311 09:17:38.668886 4840 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/573b362e-582b-43f0-afae-c038cf95f625-config-data\") on node \"crc\" DevicePath \"\"" Mar 11 09:17:38 crc kubenswrapper[4840]: I0311 09:17:38.668895 4840 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/573b362e-582b-43f0-afae-c038cf95f625-logs\") on node \"crc\" DevicePath \"\"" Mar 11 09:17:38 crc kubenswrapper[4840]: I0311 09:17:38.668903 4840 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/573b362e-582b-43f0-afae-c038cf95f625-scripts\") on node \"crc\" DevicePath \"\"" Mar 11 09:17:38 crc kubenswrapper[4840]: I0311 09:17:38.668912 4840 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eddf4c27-da2d-482e-8525-4c43594defc7-config\") on node \"crc\" DevicePath \"\"" Mar 11 09:17:38 crc kubenswrapper[4840]: I0311 09:17:38.668921 4840 
reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eddf4c27-da2d-482e-8525-4c43594defc7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 11 09:17:38 crc kubenswrapper[4840]: I0311 09:17:38.668930 4840 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eddf4c27-da2d-482e-8525-4c43594defc7-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 11 09:17:38 crc kubenswrapper[4840]: I0311 09:17:38.668937 4840 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eddf4c27-da2d-482e-8525-4c43594defc7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 11 09:17:38 crc kubenswrapper[4840]: I0311 09:17:38.668946 4840 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/573b362e-582b-43f0-afae-c038cf95f625-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 09:17:38 crc kubenswrapper[4840]: I0311 09:17:38.955718 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-f7z67" event={"ID":"573b362e-582b-43f0-afae-c038cf95f625","Type":"ContainerDied","Data":"963dc77ee7668becb5b7fc2217ec39d066f33e2b0f510e5804d226bd63bc8ed6"} Mar 11 09:17:38 crc kubenswrapper[4840]: I0311 09:17:38.955776 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="963dc77ee7668becb5b7fc2217ec39d066f33e2b0f510e5804d226bd63bc8ed6" Mar 11 09:17:38 crc kubenswrapper[4840]: I0311 09:17:38.955780 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-f7z67" Mar 11 09:17:38 crc kubenswrapper[4840]: I0311 09:17:38.958988 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-ccd7c9f8f-rvxvk" event={"ID":"eddf4c27-da2d-482e-8525-4c43594defc7","Type":"ContainerDied","Data":"e58923ff18d6fd659c502ea47c0286bf2ec5552743e2046af17c079b7079b9e4"} Mar 11 09:17:38 crc kubenswrapper[4840]: I0311 09:17:38.959011 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-ccd7c9f8f-rvxvk" Mar 11 09:17:38 crc kubenswrapper[4840]: I0311 09:17:38.959059 4840 scope.go:117] "RemoveContainer" containerID="a985e620cbaed31f44e64d5bdea3e0496a71d53c8c6e8badb3a08abe4bbd5ffd" Mar 11 09:17:38 crc kubenswrapper[4840]: I0311 09:17:38.963118 4840 generic.go:334] "Generic (PLEG): container finished" podID="dfe1c60c-ed7c-4a0a-a85e-82261146409d" containerID="00a2e783db3d806c9ff2d6b8406e19b84ed9d4280233512b98ebce497eb3d170" exitCode=0 Mar 11 09:17:38 crc kubenswrapper[4840]: I0311 09:17:38.963181 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7859c7799c-2ksc5" event={"ID":"dfe1c60c-ed7c-4a0a-a85e-82261146409d","Type":"ContainerDied","Data":"00a2e783db3d806c9ff2d6b8406e19b84ed9d4280233512b98ebce497eb3d170"} Mar 11 09:17:38 crc kubenswrapper[4840]: I0311 09:17:38.963227 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7859c7799c-2ksc5" event={"ID":"dfe1c60c-ed7c-4a0a-a85e-82261146409d","Type":"ContainerStarted","Data":"427015fc66421f7d2372e9d9640e6873336ad2d1d5fcf966a0f6277b980f499a"} Mar 11 09:17:38 crc kubenswrapper[4840]: I0311 09:17:38.971936 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-9dc6d5c86-nszp2" event={"ID":"5e3d8e86-db0a-40d7-bfc2-47253da00ec7","Type":"ContainerStarted","Data":"1ecbc4daf8b0a4f8ae2ccee73e5aa09fda6b5019b1f4313630e50f513c537cb4"} Mar 11 09:17:38 crc kubenswrapper[4840]: I0311 09:17:38.972021 
4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-9dc6d5c86-nszp2" event={"ID":"5e3d8e86-db0a-40d7-bfc2-47253da00ec7","Type":"ContainerStarted","Data":"7c00c25e651728f75ee2f78f5a18804b4417a369f94146852075105195868065"} Mar 11 09:17:38 crc kubenswrapper[4840]: I0311 09:17:38.979735 4840 generic.go:334] "Generic (PLEG): container finished" podID="4d2cbae7-4ece-49e8-b85e-30db29d6c172" containerID="c928b472f2542eafc13cd171a09a9a135f9ed04cc8a7a9eb07a835da8146c648" exitCode=0 Mar 11 09:17:38 crc kubenswrapper[4840]: I0311 09:17:38.980327 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-jlpsf" event={"ID":"4d2cbae7-4ece-49e8-b85e-30db29d6c172","Type":"ContainerDied","Data":"c928b472f2542eafc13cd171a09a9a135f9ed04cc8a7a9eb07a835da8146c648"} Mar 11 09:17:39 crc kubenswrapper[4840]: I0311 09:17:39.050485 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-75cbbc8dcd-sc25j"] Mar 11 09:17:39 crc kubenswrapper[4840]: E0311 09:17:39.050913 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eddf4c27-da2d-482e-8525-4c43594defc7" containerName="init" Mar 11 09:17:39 crc kubenswrapper[4840]: I0311 09:17:39.050930 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="eddf4c27-da2d-482e-8525-4c43594defc7" containerName="init" Mar 11 09:17:39 crc kubenswrapper[4840]: E0311 09:17:39.050957 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="573b362e-582b-43f0-afae-c038cf95f625" containerName="placement-db-sync" Mar 11 09:17:39 crc kubenswrapper[4840]: I0311 09:17:39.050964 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="573b362e-582b-43f0-afae-c038cf95f625" containerName="placement-db-sync" Mar 11 09:17:39 crc kubenswrapper[4840]: I0311 09:17:39.051126 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="573b362e-582b-43f0-afae-c038cf95f625" containerName="placement-db-sync" Mar 11 09:17:39 crc kubenswrapper[4840]: I0311 
09:17:39.051167 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="eddf4c27-da2d-482e-8525-4c43594defc7" containerName="init" Mar 11 09:17:39 crc kubenswrapper[4840]: I0311 09:17:39.052292 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-75cbbc8dcd-sc25j" Mar 11 09:17:39 crc kubenswrapper[4840]: I0311 09:17:39.058390 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Mar 11 09:17:39 crc kubenswrapper[4840]: I0311 09:17:39.058696 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Mar 11 09:17:39 crc kubenswrapper[4840]: I0311 09:17:39.058861 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Mar 11 09:17:39 crc kubenswrapper[4840]: I0311 09:17:39.058928 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Mar 11 09:17:39 crc kubenswrapper[4840]: I0311 09:17:39.059015 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-4q5fm" Mar 11 09:17:39 crc kubenswrapper[4840]: I0311 09:17:39.079906 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-75cbbc8dcd-sc25j"] Mar 11 09:17:39 crc kubenswrapper[4840]: I0311 09:17:39.182220 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fccf048e-d3b5-4e8d-a940-ec306fe071a0-config-data\") pod \"placement-75cbbc8dcd-sc25j\" (UID: \"fccf048e-d3b5-4e8d-a940-ec306fe071a0\") " pod="openstack/placement-75cbbc8dcd-sc25j" Mar 11 09:17:39 crc kubenswrapper[4840]: I0311 09:17:39.182265 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fccf048e-d3b5-4e8d-a940-ec306fe071a0-logs\") pod 
\"placement-75cbbc8dcd-sc25j\" (UID: \"fccf048e-d3b5-4e8d-a940-ec306fe071a0\") " pod="openstack/placement-75cbbc8dcd-sc25j" Mar 11 09:17:39 crc kubenswrapper[4840]: I0311 09:17:39.182301 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fccf048e-d3b5-4e8d-a940-ec306fe071a0-internal-tls-certs\") pod \"placement-75cbbc8dcd-sc25j\" (UID: \"fccf048e-d3b5-4e8d-a940-ec306fe071a0\") " pod="openstack/placement-75cbbc8dcd-sc25j" Mar 11 09:17:39 crc kubenswrapper[4840]: I0311 09:17:39.182358 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fccf048e-d3b5-4e8d-a940-ec306fe071a0-scripts\") pod \"placement-75cbbc8dcd-sc25j\" (UID: \"fccf048e-d3b5-4e8d-a940-ec306fe071a0\") " pod="openstack/placement-75cbbc8dcd-sc25j" Mar 11 09:17:39 crc kubenswrapper[4840]: I0311 09:17:39.182391 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tb5xd\" (UniqueName: \"kubernetes.io/projected/fccf048e-d3b5-4e8d-a940-ec306fe071a0-kube-api-access-tb5xd\") pod \"placement-75cbbc8dcd-sc25j\" (UID: \"fccf048e-d3b5-4e8d-a940-ec306fe071a0\") " pod="openstack/placement-75cbbc8dcd-sc25j" Mar 11 09:17:39 crc kubenswrapper[4840]: I0311 09:17:39.182437 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fccf048e-d3b5-4e8d-a940-ec306fe071a0-combined-ca-bundle\") pod \"placement-75cbbc8dcd-sc25j\" (UID: \"fccf048e-d3b5-4e8d-a940-ec306fe071a0\") " pod="openstack/placement-75cbbc8dcd-sc25j" Mar 11 09:17:39 crc kubenswrapper[4840]: I0311 09:17:39.182454 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/fccf048e-d3b5-4e8d-a940-ec306fe071a0-public-tls-certs\") pod \"placement-75cbbc8dcd-sc25j\" (UID: \"fccf048e-d3b5-4e8d-a940-ec306fe071a0\") " pod="openstack/placement-75cbbc8dcd-sc25j" Mar 11 09:17:39 crc kubenswrapper[4840]: I0311 09:17:39.223892 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-ccd7c9f8f-rvxvk"] Mar 11 09:17:39 crc kubenswrapper[4840]: I0311 09:17:39.236939 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-ccd7c9f8f-rvxvk"] Mar 11 09:17:39 crc kubenswrapper[4840]: I0311 09:17:39.283800 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fccf048e-d3b5-4e8d-a940-ec306fe071a0-scripts\") pod \"placement-75cbbc8dcd-sc25j\" (UID: \"fccf048e-d3b5-4e8d-a940-ec306fe071a0\") " pod="openstack/placement-75cbbc8dcd-sc25j" Mar 11 09:17:39 crc kubenswrapper[4840]: I0311 09:17:39.283865 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tb5xd\" (UniqueName: \"kubernetes.io/projected/fccf048e-d3b5-4e8d-a940-ec306fe071a0-kube-api-access-tb5xd\") pod \"placement-75cbbc8dcd-sc25j\" (UID: \"fccf048e-d3b5-4e8d-a940-ec306fe071a0\") " pod="openstack/placement-75cbbc8dcd-sc25j" Mar 11 09:17:39 crc kubenswrapper[4840]: I0311 09:17:39.283911 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fccf048e-d3b5-4e8d-a940-ec306fe071a0-combined-ca-bundle\") pod \"placement-75cbbc8dcd-sc25j\" (UID: \"fccf048e-d3b5-4e8d-a940-ec306fe071a0\") " pod="openstack/placement-75cbbc8dcd-sc25j" Mar 11 09:17:39 crc kubenswrapper[4840]: I0311 09:17:39.283932 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fccf048e-d3b5-4e8d-a940-ec306fe071a0-public-tls-certs\") pod \"placement-75cbbc8dcd-sc25j\" (UID: 
\"fccf048e-d3b5-4e8d-a940-ec306fe071a0\") " pod="openstack/placement-75cbbc8dcd-sc25j" Mar 11 09:17:39 crc kubenswrapper[4840]: I0311 09:17:39.283991 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fccf048e-d3b5-4e8d-a940-ec306fe071a0-config-data\") pod \"placement-75cbbc8dcd-sc25j\" (UID: \"fccf048e-d3b5-4e8d-a940-ec306fe071a0\") " pod="openstack/placement-75cbbc8dcd-sc25j" Mar 11 09:17:39 crc kubenswrapper[4840]: I0311 09:17:39.284008 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fccf048e-d3b5-4e8d-a940-ec306fe071a0-logs\") pod \"placement-75cbbc8dcd-sc25j\" (UID: \"fccf048e-d3b5-4e8d-a940-ec306fe071a0\") " pod="openstack/placement-75cbbc8dcd-sc25j" Mar 11 09:17:39 crc kubenswrapper[4840]: I0311 09:17:39.284037 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fccf048e-d3b5-4e8d-a940-ec306fe071a0-internal-tls-certs\") pod \"placement-75cbbc8dcd-sc25j\" (UID: \"fccf048e-d3b5-4e8d-a940-ec306fe071a0\") " pod="openstack/placement-75cbbc8dcd-sc25j" Mar 11 09:17:39 crc kubenswrapper[4840]: I0311 09:17:39.287705 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fccf048e-d3b5-4e8d-a940-ec306fe071a0-logs\") pod \"placement-75cbbc8dcd-sc25j\" (UID: \"fccf048e-d3b5-4e8d-a940-ec306fe071a0\") " pod="openstack/placement-75cbbc8dcd-sc25j" Mar 11 09:17:39 crc kubenswrapper[4840]: I0311 09:17:39.288273 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fccf048e-d3b5-4e8d-a940-ec306fe071a0-internal-tls-certs\") pod \"placement-75cbbc8dcd-sc25j\" (UID: \"fccf048e-d3b5-4e8d-a940-ec306fe071a0\") " pod="openstack/placement-75cbbc8dcd-sc25j" Mar 11 09:17:39 crc kubenswrapper[4840]: 
I0311 09:17:39.289196 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fccf048e-d3b5-4e8d-a940-ec306fe071a0-public-tls-certs\") pod \"placement-75cbbc8dcd-sc25j\" (UID: \"fccf048e-d3b5-4e8d-a940-ec306fe071a0\") " pod="openstack/placement-75cbbc8dcd-sc25j" Mar 11 09:17:39 crc kubenswrapper[4840]: I0311 09:17:39.290638 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fccf048e-d3b5-4e8d-a940-ec306fe071a0-scripts\") pod \"placement-75cbbc8dcd-sc25j\" (UID: \"fccf048e-d3b5-4e8d-a940-ec306fe071a0\") " pod="openstack/placement-75cbbc8dcd-sc25j" Mar 11 09:17:39 crc kubenswrapper[4840]: I0311 09:17:39.291845 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fccf048e-d3b5-4e8d-a940-ec306fe071a0-combined-ca-bundle\") pod \"placement-75cbbc8dcd-sc25j\" (UID: \"fccf048e-d3b5-4e8d-a940-ec306fe071a0\") " pod="openstack/placement-75cbbc8dcd-sc25j" Mar 11 09:17:39 crc kubenswrapper[4840]: I0311 09:17:39.297868 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fccf048e-d3b5-4e8d-a940-ec306fe071a0-config-data\") pod \"placement-75cbbc8dcd-sc25j\" (UID: \"fccf048e-d3b5-4e8d-a940-ec306fe071a0\") " pod="openstack/placement-75cbbc8dcd-sc25j" Mar 11 09:17:39 crc kubenswrapper[4840]: I0311 09:17:39.308144 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tb5xd\" (UniqueName: \"kubernetes.io/projected/fccf048e-d3b5-4e8d-a940-ec306fe071a0-kube-api-access-tb5xd\") pod \"placement-75cbbc8dcd-sc25j\" (UID: \"fccf048e-d3b5-4e8d-a940-ec306fe071a0\") " pod="openstack/placement-75cbbc8dcd-sc25j" Mar 11 09:17:39 crc kubenswrapper[4840]: I0311 09:17:39.506165 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-75cbbc8dcd-sc25j" Mar 11 09:17:39 crc kubenswrapper[4840]: I0311 09:17:39.709026 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-656d6c4465-srbrx"] Mar 11 09:17:39 crc kubenswrapper[4840]: I0311 09:17:39.711505 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-656d6c4465-srbrx" Mar 11 09:17:39 crc kubenswrapper[4840]: I0311 09:17:39.718630 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Mar 11 09:17:39 crc kubenswrapper[4840]: I0311 09:17:39.718769 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Mar 11 09:17:39 crc kubenswrapper[4840]: I0311 09:17:39.748387 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-656d6c4465-srbrx"] Mar 11 09:17:39 crc kubenswrapper[4840]: I0311 09:17:39.792569 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c84bd84-b44d-4d4a-a6ee-0b85e349bc69-public-tls-certs\") pod \"neutron-656d6c4465-srbrx\" (UID: \"9c84bd84-b44d-4d4a-a6ee-0b85e349bc69\") " pod="openstack/neutron-656d6c4465-srbrx" Mar 11 09:17:39 crc kubenswrapper[4840]: I0311 09:17:39.792969 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/9c84bd84-b44d-4d4a-a6ee-0b85e349bc69-httpd-config\") pod \"neutron-656d6c4465-srbrx\" (UID: \"9c84bd84-b44d-4d4a-a6ee-0b85e349bc69\") " pod="openstack/neutron-656d6c4465-srbrx" Mar 11 09:17:39 crc kubenswrapper[4840]: I0311 09:17:39.792999 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c84bd84-b44d-4d4a-a6ee-0b85e349bc69-ovndb-tls-certs\") pod 
\"neutron-656d6c4465-srbrx\" (UID: \"9c84bd84-b44d-4d4a-a6ee-0b85e349bc69\") " pod="openstack/neutron-656d6c4465-srbrx" Mar 11 09:17:39 crc kubenswrapper[4840]: I0311 09:17:39.793030 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c84bd84-b44d-4d4a-a6ee-0b85e349bc69-combined-ca-bundle\") pod \"neutron-656d6c4465-srbrx\" (UID: \"9c84bd84-b44d-4d4a-a6ee-0b85e349bc69\") " pod="openstack/neutron-656d6c4465-srbrx" Mar 11 09:17:39 crc kubenswrapper[4840]: I0311 09:17:39.793081 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c84bd84-b44d-4d4a-a6ee-0b85e349bc69-internal-tls-certs\") pod \"neutron-656d6c4465-srbrx\" (UID: \"9c84bd84-b44d-4d4a-a6ee-0b85e349bc69\") " pod="openstack/neutron-656d6c4465-srbrx" Mar 11 09:17:39 crc kubenswrapper[4840]: I0311 09:17:39.793162 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9c84bd84-b44d-4d4a-a6ee-0b85e349bc69-config\") pod \"neutron-656d6c4465-srbrx\" (UID: \"9c84bd84-b44d-4d4a-a6ee-0b85e349bc69\") " pod="openstack/neutron-656d6c4465-srbrx" Mar 11 09:17:39 crc kubenswrapper[4840]: I0311 09:17:39.793203 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dtdx\" (UniqueName: \"kubernetes.io/projected/9c84bd84-b44d-4d4a-a6ee-0b85e349bc69-kube-api-access-9dtdx\") pod \"neutron-656d6c4465-srbrx\" (UID: \"9c84bd84-b44d-4d4a-a6ee-0b85e349bc69\") " pod="openstack/neutron-656d6c4465-srbrx" Mar 11 09:17:39 crc kubenswrapper[4840]: I0311 09:17:39.895857 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c84bd84-b44d-4d4a-a6ee-0b85e349bc69-public-tls-certs\") pod 
\"neutron-656d6c4465-srbrx\" (UID: \"9c84bd84-b44d-4d4a-a6ee-0b85e349bc69\") " pod="openstack/neutron-656d6c4465-srbrx" Mar 11 09:17:39 crc kubenswrapper[4840]: I0311 09:17:39.895963 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/9c84bd84-b44d-4d4a-a6ee-0b85e349bc69-httpd-config\") pod \"neutron-656d6c4465-srbrx\" (UID: \"9c84bd84-b44d-4d4a-a6ee-0b85e349bc69\") " pod="openstack/neutron-656d6c4465-srbrx" Mar 11 09:17:39 crc kubenswrapper[4840]: I0311 09:17:39.895985 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c84bd84-b44d-4d4a-a6ee-0b85e349bc69-ovndb-tls-certs\") pod \"neutron-656d6c4465-srbrx\" (UID: \"9c84bd84-b44d-4d4a-a6ee-0b85e349bc69\") " pod="openstack/neutron-656d6c4465-srbrx" Mar 11 09:17:39 crc kubenswrapper[4840]: I0311 09:17:39.896014 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c84bd84-b44d-4d4a-a6ee-0b85e349bc69-combined-ca-bundle\") pod \"neutron-656d6c4465-srbrx\" (UID: \"9c84bd84-b44d-4d4a-a6ee-0b85e349bc69\") " pod="openstack/neutron-656d6c4465-srbrx" Mar 11 09:17:39 crc kubenswrapper[4840]: I0311 09:17:39.896147 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c84bd84-b44d-4d4a-a6ee-0b85e349bc69-internal-tls-certs\") pod \"neutron-656d6c4465-srbrx\" (UID: \"9c84bd84-b44d-4d4a-a6ee-0b85e349bc69\") " pod="openstack/neutron-656d6c4465-srbrx" Mar 11 09:17:39 crc kubenswrapper[4840]: I0311 09:17:39.896275 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9c84bd84-b44d-4d4a-a6ee-0b85e349bc69-config\") pod \"neutron-656d6c4465-srbrx\" (UID: \"9c84bd84-b44d-4d4a-a6ee-0b85e349bc69\") " 
pod="openstack/neutron-656d6c4465-srbrx" Mar 11 09:17:39 crc kubenswrapper[4840]: I0311 09:17:39.896301 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dtdx\" (UniqueName: \"kubernetes.io/projected/9c84bd84-b44d-4d4a-a6ee-0b85e349bc69-kube-api-access-9dtdx\") pod \"neutron-656d6c4465-srbrx\" (UID: \"9c84bd84-b44d-4d4a-a6ee-0b85e349bc69\") " pod="openstack/neutron-656d6c4465-srbrx" Mar 11 09:17:39 crc kubenswrapper[4840]: I0311 09:17:39.903831 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c84bd84-b44d-4d4a-a6ee-0b85e349bc69-ovndb-tls-certs\") pod \"neutron-656d6c4465-srbrx\" (UID: \"9c84bd84-b44d-4d4a-a6ee-0b85e349bc69\") " pod="openstack/neutron-656d6c4465-srbrx" Mar 11 09:17:39 crc kubenswrapper[4840]: I0311 09:17:39.903905 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c84bd84-b44d-4d4a-a6ee-0b85e349bc69-public-tls-certs\") pod \"neutron-656d6c4465-srbrx\" (UID: \"9c84bd84-b44d-4d4a-a6ee-0b85e349bc69\") " pod="openstack/neutron-656d6c4465-srbrx" Mar 11 09:17:39 crc kubenswrapper[4840]: I0311 09:17:39.904840 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/9c84bd84-b44d-4d4a-a6ee-0b85e349bc69-config\") pod \"neutron-656d6c4465-srbrx\" (UID: \"9c84bd84-b44d-4d4a-a6ee-0b85e349bc69\") " pod="openstack/neutron-656d6c4465-srbrx" Mar 11 09:17:39 crc kubenswrapper[4840]: I0311 09:17:39.904916 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/9c84bd84-b44d-4d4a-a6ee-0b85e349bc69-httpd-config\") pod \"neutron-656d6c4465-srbrx\" (UID: \"9c84bd84-b44d-4d4a-a6ee-0b85e349bc69\") " pod="openstack/neutron-656d6c4465-srbrx" Mar 11 09:17:39 crc kubenswrapper[4840]: I0311 09:17:39.905588 4840 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c84bd84-b44d-4d4a-a6ee-0b85e349bc69-internal-tls-certs\") pod \"neutron-656d6c4465-srbrx\" (UID: \"9c84bd84-b44d-4d4a-a6ee-0b85e349bc69\") " pod="openstack/neutron-656d6c4465-srbrx" Mar 11 09:17:39 crc kubenswrapper[4840]: I0311 09:17:39.906527 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c84bd84-b44d-4d4a-a6ee-0b85e349bc69-combined-ca-bundle\") pod \"neutron-656d6c4465-srbrx\" (UID: \"9c84bd84-b44d-4d4a-a6ee-0b85e349bc69\") " pod="openstack/neutron-656d6c4465-srbrx" Mar 11 09:17:39 crc kubenswrapper[4840]: I0311 09:17:39.912209 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dtdx\" (UniqueName: \"kubernetes.io/projected/9c84bd84-b44d-4d4a-a6ee-0b85e349bc69-kube-api-access-9dtdx\") pod \"neutron-656d6c4465-srbrx\" (UID: \"9c84bd84-b44d-4d4a-a6ee-0b85e349bc69\") " pod="openstack/neutron-656d6c4465-srbrx" Mar 11 09:17:39 crc kubenswrapper[4840]: I0311 09:17:39.992253 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7859c7799c-2ksc5" event={"ID":"dfe1c60c-ed7c-4a0a-a85e-82261146409d","Type":"ContainerStarted","Data":"79e1c66334965e63af538892b3410f5e2ff8b5aabb04319e2e2c1653a24b1496"} Mar 11 09:17:39 crc kubenswrapper[4840]: I0311 09:17:39.992386 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7859c7799c-2ksc5" Mar 11 09:17:39 crc kubenswrapper[4840]: I0311 09:17:39.995417 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-9dc6d5c86-nszp2" event={"ID":"5e3d8e86-db0a-40d7-bfc2-47253da00ec7","Type":"ContainerStarted","Data":"9a4896b371fb249686089f6c512e0572e3d7f61f6c18b54d57a46d90437bb7a1"} Mar 11 09:17:39 crc kubenswrapper[4840]: I0311 09:17:39.995546 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/neutron-9dc6d5c86-nszp2" Mar 11 09:17:40 crc kubenswrapper[4840]: I0311 09:17:40.029527 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7859c7799c-2ksc5" podStartSLOduration=3.029505734 podStartE2EDuration="3.029505734s" podCreationTimestamp="2026-03-11 09:17:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:17:40.018261741 +0000 UTC m=+1258.683931576" watchObservedRunningTime="2026-03-11 09:17:40.029505734 +0000 UTC m=+1258.695175549" Mar 11 09:17:40 crc kubenswrapper[4840]: I0311 09:17:40.042669 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-9dc6d5c86-nszp2" podStartSLOduration=3.042642475 podStartE2EDuration="3.042642475s" podCreationTimestamp="2026-03-11 09:17:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:17:40.037855694 +0000 UTC m=+1258.703525529" watchObservedRunningTime="2026-03-11 09:17:40.042642475 +0000 UTC m=+1258.708312290" Mar 11 09:17:40 crc kubenswrapper[4840]: I0311 09:17:40.053334 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-656d6c4465-srbrx" Mar 11 09:17:40 crc kubenswrapper[4840]: I0311 09:17:40.078105 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eddf4c27-da2d-482e-8525-4c43594defc7" path="/var/lib/kubelet/pods/eddf4c27-da2d-482e-8525-4c43594defc7/volumes" Mar 11 09:17:40 crc kubenswrapper[4840]: I0311 09:17:40.078694 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-75cbbc8dcd-sc25j"] Mar 11 09:17:40 crc kubenswrapper[4840]: I0311 09:17:40.368810 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-jlpsf" Mar 11 09:17:40 crc kubenswrapper[4840]: I0311 09:17:40.407980 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d2cbae7-4ece-49e8-b85e-30db29d6c172-combined-ca-bundle\") pod \"4d2cbae7-4ece-49e8-b85e-30db29d6c172\" (UID: \"4d2cbae7-4ece-49e8-b85e-30db29d6c172\") " Mar 11 09:17:40 crc kubenswrapper[4840]: I0311 09:17:40.408023 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4d2cbae7-4ece-49e8-b85e-30db29d6c172-credential-keys\") pod \"4d2cbae7-4ece-49e8-b85e-30db29d6c172\" (UID: \"4d2cbae7-4ece-49e8-b85e-30db29d6c172\") " Mar 11 09:17:40 crc kubenswrapper[4840]: I0311 09:17:40.408041 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4d2cbae7-4ece-49e8-b85e-30db29d6c172-fernet-keys\") pod \"4d2cbae7-4ece-49e8-b85e-30db29d6c172\" (UID: \"4d2cbae7-4ece-49e8-b85e-30db29d6c172\") " Mar 11 09:17:40 crc kubenswrapper[4840]: I0311 09:17:40.408119 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7xl5q\" (UniqueName: \"kubernetes.io/projected/4d2cbae7-4ece-49e8-b85e-30db29d6c172-kube-api-access-7xl5q\") pod \"4d2cbae7-4ece-49e8-b85e-30db29d6c172\" (UID: \"4d2cbae7-4ece-49e8-b85e-30db29d6c172\") " Mar 11 09:17:40 crc kubenswrapper[4840]: I0311 09:17:40.408141 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d2cbae7-4ece-49e8-b85e-30db29d6c172-config-data\") pod \"4d2cbae7-4ece-49e8-b85e-30db29d6c172\" (UID: \"4d2cbae7-4ece-49e8-b85e-30db29d6c172\") " Mar 11 09:17:40 crc kubenswrapper[4840]: I0311 09:17:40.408187 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/4d2cbae7-4ece-49e8-b85e-30db29d6c172-scripts\") pod \"4d2cbae7-4ece-49e8-b85e-30db29d6c172\" (UID: \"4d2cbae7-4ece-49e8-b85e-30db29d6c172\") " Mar 11 09:17:40 crc kubenswrapper[4840]: I0311 09:17:40.415695 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d2cbae7-4ece-49e8-b85e-30db29d6c172-scripts" (OuterVolumeSpecName: "scripts") pod "4d2cbae7-4ece-49e8-b85e-30db29d6c172" (UID: "4d2cbae7-4ece-49e8-b85e-30db29d6c172"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:17:40 crc kubenswrapper[4840]: I0311 09:17:40.416778 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d2cbae7-4ece-49e8-b85e-30db29d6c172-kube-api-access-7xl5q" (OuterVolumeSpecName: "kube-api-access-7xl5q") pod "4d2cbae7-4ece-49e8-b85e-30db29d6c172" (UID: "4d2cbae7-4ece-49e8-b85e-30db29d6c172"). InnerVolumeSpecName "kube-api-access-7xl5q". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:17:40 crc kubenswrapper[4840]: I0311 09:17:40.417605 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d2cbae7-4ece-49e8-b85e-30db29d6c172-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "4d2cbae7-4ece-49e8-b85e-30db29d6c172" (UID: "4d2cbae7-4ece-49e8-b85e-30db29d6c172"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:17:40 crc kubenswrapper[4840]: I0311 09:17:40.420747 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d2cbae7-4ece-49e8-b85e-30db29d6c172-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "4d2cbae7-4ece-49e8-b85e-30db29d6c172" (UID: "4d2cbae7-4ece-49e8-b85e-30db29d6c172"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:17:40 crc kubenswrapper[4840]: I0311 09:17:40.440845 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d2cbae7-4ece-49e8-b85e-30db29d6c172-config-data" (OuterVolumeSpecName: "config-data") pod "4d2cbae7-4ece-49e8-b85e-30db29d6c172" (UID: "4d2cbae7-4ece-49e8-b85e-30db29d6c172"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:17:40 crc kubenswrapper[4840]: I0311 09:17:40.459085 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d2cbae7-4ece-49e8-b85e-30db29d6c172-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4d2cbae7-4ece-49e8-b85e-30db29d6c172" (UID: "4d2cbae7-4ece-49e8-b85e-30db29d6c172"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:17:40 crc kubenswrapper[4840]: I0311 09:17:40.509917 4840 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d2cbae7-4ece-49e8-b85e-30db29d6c172-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 09:17:40 crc kubenswrapper[4840]: I0311 09:17:40.509963 4840 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4d2cbae7-4ece-49e8-b85e-30db29d6c172-credential-keys\") on node \"crc\" DevicePath \"\"" Mar 11 09:17:40 crc kubenswrapper[4840]: I0311 09:17:40.509973 4840 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4d2cbae7-4ece-49e8-b85e-30db29d6c172-fernet-keys\") on node \"crc\" DevicePath \"\"" Mar 11 09:17:40 crc kubenswrapper[4840]: I0311 09:17:40.509983 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7xl5q\" (UniqueName: \"kubernetes.io/projected/4d2cbae7-4ece-49e8-b85e-30db29d6c172-kube-api-access-7xl5q\") on node \"crc\" 
DevicePath \"\"" Mar 11 09:17:40 crc kubenswrapper[4840]: I0311 09:17:40.509995 4840 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d2cbae7-4ece-49e8-b85e-30db29d6c172-config-data\") on node \"crc\" DevicePath \"\"" Mar 11 09:17:40 crc kubenswrapper[4840]: I0311 09:17:40.510010 4840 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d2cbae7-4ece-49e8-b85e-30db29d6c172-scripts\") on node \"crc\" DevicePath \"\"" Mar 11 09:17:40 crc kubenswrapper[4840]: I0311 09:17:40.782189 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-656d6c4465-srbrx"] Mar 11 09:17:40 crc kubenswrapper[4840]: W0311 09:17:40.791301 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9c84bd84_b44d_4d4a_a6ee_0b85e349bc69.slice/crio-ba33e48969beb305fce56209396ea4df724febced0cc98ad88ab4387b68e228f WatchSource:0}: Error finding container ba33e48969beb305fce56209396ea4df724febced0cc98ad88ab4387b68e228f: Status 404 returned error can't find the container with id ba33e48969beb305fce56209396ea4df724febced0cc98ad88ab4387b68e228f Mar 11 09:17:41 crc kubenswrapper[4840]: I0311 09:17:41.012709 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-656d6c4465-srbrx" event={"ID":"9c84bd84-b44d-4d4a-a6ee-0b85e349bc69","Type":"ContainerStarted","Data":"ba33e48969beb305fce56209396ea4df724febced0cc98ad88ab4387b68e228f"} Mar 11 09:17:41 crc kubenswrapper[4840]: I0311 09:17:41.015979 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-75cbbc8dcd-sc25j" event={"ID":"fccf048e-d3b5-4e8d-a940-ec306fe071a0","Type":"ContainerStarted","Data":"191fc6b9a70ef6981941d87ed503a96b3604cf4bb58e3fb543512122c2c8cfe6"} Mar 11 09:17:41 crc kubenswrapper[4840]: I0311 09:17:41.016029 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/placement-75cbbc8dcd-sc25j" event={"ID":"fccf048e-d3b5-4e8d-a940-ec306fe071a0","Type":"ContainerStarted","Data":"7fcf04b257aa6ca622ecd00debacd364f42e749d25c27411e1e49c6c6d470dfe"} Mar 11 09:17:41 crc kubenswrapper[4840]: I0311 09:17:41.016040 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-75cbbc8dcd-sc25j" event={"ID":"fccf048e-d3b5-4e8d-a940-ec306fe071a0","Type":"ContainerStarted","Data":"279ec998b5c3d891654bcaaf1b5077282d98422d63225443a41a25a7c71ea3ce"} Mar 11 09:17:41 crc kubenswrapper[4840]: I0311 09:17:41.016393 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-75cbbc8dcd-sc25j" Mar 11 09:17:41 crc kubenswrapper[4840]: I0311 09:17:41.019765 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-jlpsf" event={"ID":"4d2cbae7-4ece-49e8-b85e-30db29d6c172","Type":"ContainerDied","Data":"1e881be16226d81f76acd158735451aaf9d7074d679384038a3240d165e70133"} Mar 11 09:17:41 crc kubenswrapper[4840]: I0311 09:17:41.019795 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-jlpsf" Mar 11 09:17:41 crc kubenswrapper[4840]: I0311 09:17:41.019818 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1e881be16226d81f76acd158735451aaf9d7074d679384038a3240d165e70133" Mar 11 09:17:41 crc kubenswrapper[4840]: I0311 09:17:41.052897 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-75cbbc8dcd-sc25j" podStartSLOduration=2.052878671 podStartE2EDuration="2.052878671s" podCreationTimestamp="2026-03-11 09:17:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:17:41.046974242 +0000 UTC m=+1259.712644057" watchObservedRunningTime="2026-03-11 09:17:41.052878671 +0000 UTC m=+1259.718548486" Mar 11 09:17:41 crc kubenswrapper[4840]: I0311 09:17:41.184530 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bcbd67b5c-tnrsf"] Mar 11 09:17:41 crc kubenswrapper[4840]: E0311 09:17:41.185017 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d2cbae7-4ece-49e8-b85e-30db29d6c172" containerName="keystone-bootstrap" Mar 11 09:17:41 crc kubenswrapper[4840]: I0311 09:17:41.185033 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d2cbae7-4ece-49e8-b85e-30db29d6c172" containerName="keystone-bootstrap" Mar 11 09:17:41 crc kubenswrapper[4840]: I0311 09:17:41.185206 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d2cbae7-4ece-49e8-b85e-30db29d6c172" containerName="keystone-bootstrap" Mar 11 09:17:41 crc kubenswrapper[4840]: I0311 09:17:41.185866 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bcbd67b5c-tnrsf" Mar 11 09:17:41 crc kubenswrapper[4840]: I0311 09:17:41.190894 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Mar 11 09:17:41 crc kubenswrapper[4840]: I0311 09:17:41.190894 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Mar 11 09:17:41 crc kubenswrapper[4840]: I0311 09:17:41.191005 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Mar 11 09:17:41 crc kubenswrapper[4840]: I0311 09:17:41.191230 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-gtqwm" Mar 11 09:17:41 crc kubenswrapper[4840]: I0311 09:17:41.191367 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Mar 11 09:17:41 crc kubenswrapper[4840]: I0311 09:17:41.191485 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Mar 11 09:17:41 crc kubenswrapper[4840]: I0311 09:17:41.198618 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bcbd67b5c-tnrsf"] Mar 11 09:17:41 crc kubenswrapper[4840]: I0311 09:17:41.330700 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f86e8b7a-656b-423e-8cf0-6d1025486c46-scripts\") pod \"keystone-bcbd67b5c-tnrsf\" (UID: \"f86e8b7a-656b-423e-8cf0-6d1025486c46\") " pod="openstack/keystone-bcbd67b5c-tnrsf" Mar 11 09:17:41 crc kubenswrapper[4840]: I0311 09:17:41.330761 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f86e8b7a-656b-423e-8cf0-6d1025486c46-fernet-keys\") pod \"keystone-bcbd67b5c-tnrsf\" (UID: \"f86e8b7a-656b-423e-8cf0-6d1025486c46\") " pod="openstack/keystone-bcbd67b5c-tnrsf" 
Mar 11 09:17:41 crc kubenswrapper[4840]: I0311 09:17:41.330780 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f86e8b7a-656b-423e-8cf0-6d1025486c46-combined-ca-bundle\") pod \"keystone-bcbd67b5c-tnrsf\" (UID: \"f86e8b7a-656b-423e-8cf0-6d1025486c46\") " pod="openstack/keystone-bcbd67b5c-tnrsf" Mar 11 09:17:41 crc kubenswrapper[4840]: I0311 09:17:41.330801 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5glxg\" (UniqueName: \"kubernetes.io/projected/f86e8b7a-656b-423e-8cf0-6d1025486c46-kube-api-access-5glxg\") pod \"keystone-bcbd67b5c-tnrsf\" (UID: \"f86e8b7a-656b-423e-8cf0-6d1025486c46\") " pod="openstack/keystone-bcbd67b5c-tnrsf" Mar 11 09:17:41 crc kubenswrapper[4840]: I0311 09:17:41.330849 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f86e8b7a-656b-423e-8cf0-6d1025486c46-credential-keys\") pod \"keystone-bcbd67b5c-tnrsf\" (UID: \"f86e8b7a-656b-423e-8cf0-6d1025486c46\") " pod="openstack/keystone-bcbd67b5c-tnrsf" Mar 11 09:17:41 crc kubenswrapper[4840]: I0311 09:17:41.330883 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f86e8b7a-656b-423e-8cf0-6d1025486c46-config-data\") pod \"keystone-bcbd67b5c-tnrsf\" (UID: \"f86e8b7a-656b-423e-8cf0-6d1025486c46\") " pod="openstack/keystone-bcbd67b5c-tnrsf" Mar 11 09:17:41 crc kubenswrapper[4840]: I0311 09:17:41.330926 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f86e8b7a-656b-423e-8cf0-6d1025486c46-internal-tls-certs\") pod \"keystone-bcbd67b5c-tnrsf\" (UID: \"f86e8b7a-656b-423e-8cf0-6d1025486c46\") " 
pod="openstack/keystone-bcbd67b5c-tnrsf" Mar 11 09:17:41 crc kubenswrapper[4840]: I0311 09:17:41.330967 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f86e8b7a-656b-423e-8cf0-6d1025486c46-public-tls-certs\") pod \"keystone-bcbd67b5c-tnrsf\" (UID: \"f86e8b7a-656b-423e-8cf0-6d1025486c46\") " pod="openstack/keystone-bcbd67b5c-tnrsf" Mar 11 09:17:41 crc kubenswrapper[4840]: I0311 09:17:41.433194 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f86e8b7a-656b-423e-8cf0-6d1025486c46-scripts\") pod \"keystone-bcbd67b5c-tnrsf\" (UID: \"f86e8b7a-656b-423e-8cf0-6d1025486c46\") " pod="openstack/keystone-bcbd67b5c-tnrsf" Mar 11 09:17:41 crc kubenswrapper[4840]: I0311 09:17:41.433619 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f86e8b7a-656b-423e-8cf0-6d1025486c46-fernet-keys\") pod \"keystone-bcbd67b5c-tnrsf\" (UID: \"f86e8b7a-656b-423e-8cf0-6d1025486c46\") " pod="openstack/keystone-bcbd67b5c-tnrsf" Mar 11 09:17:41 crc kubenswrapper[4840]: I0311 09:17:41.433653 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f86e8b7a-656b-423e-8cf0-6d1025486c46-combined-ca-bundle\") pod \"keystone-bcbd67b5c-tnrsf\" (UID: \"f86e8b7a-656b-423e-8cf0-6d1025486c46\") " pod="openstack/keystone-bcbd67b5c-tnrsf" Mar 11 09:17:41 crc kubenswrapper[4840]: I0311 09:17:41.433671 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5glxg\" (UniqueName: \"kubernetes.io/projected/f86e8b7a-656b-423e-8cf0-6d1025486c46-kube-api-access-5glxg\") pod \"keystone-bcbd67b5c-tnrsf\" (UID: \"f86e8b7a-656b-423e-8cf0-6d1025486c46\") " pod="openstack/keystone-bcbd67b5c-tnrsf" Mar 11 09:17:41 crc kubenswrapper[4840]: 
I0311 09:17:41.433757 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f86e8b7a-656b-423e-8cf0-6d1025486c46-credential-keys\") pod \"keystone-bcbd67b5c-tnrsf\" (UID: \"f86e8b7a-656b-423e-8cf0-6d1025486c46\") " pod="openstack/keystone-bcbd67b5c-tnrsf" Mar 11 09:17:41 crc kubenswrapper[4840]: I0311 09:17:41.434351 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f86e8b7a-656b-423e-8cf0-6d1025486c46-config-data\") pod \"keystone-bcbd67b5c-tnrsf\" (UID: \"f86e8b7a-656b-423e-8cf0-6d1025486c46\") " pod="openstack/keystone-bcbd67b5c-tnrsf" Mar 11 09:17:41 crc kubenswrapper[4840]: I0311 09:17:41.434576 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f86e8b7a-656b-423e-8cf0-6d1025486c46-internal-tls-certs\") pod \"keystone-bcbd67b5c-tnrsf\" (UID: \"f86e8b7a-656b-423e-8cf0-6d1025486c46\") " pod="openstack/keystone-bcbd67b5c-tnrsf" Mar 11 09:17:41 crc kubenswrapper[4840]: I0311 09:17:41.434660 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f86e8b7a-656b-423e-8cf0-6d1025486c46-public-tls-certs\") pod \"keystone-bcbd67b5c-tnrsf\" (UID: \"f86e8b7a-656b-423e-8cf0-6d1025486c46\") " pod="openstack/keystone-bcbd67b5c-tnrsf" Mar 11 09:17:41 crc kubenswrapper[4840]: I0311 09:17:41.438062 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f86e8b7a-656b-423e-8cf0-6d1025486c46-combined-ca-bundle\") pod \"keystone-bcbd67b5c-tnrsf\" (UID: \"f86e8b7a-656b-423e-8cf0-6d1025486c46\") " pod="openstack/keystone-bcbd67b5c-tnrsf" Mar 11 09:17:41 crc kubenswrapper[4840]: I0311 09:17:41.440360 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f86e8b7a-656b-423e-8cf0-6d1025486c46-fernet-keys\") pod \"keystone-bcbd67b5c-tnrsf\" (UID: \"f86e8b7a-656b-423e-8cf0-6d1025486c46\") " pod="openstack/keystone-bcbd67b5c-tnrsf" Mar 11 09:17:41 crc kubenswrapper[4840]: I0311 09:17:41.440908 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f86e8b7a-656b-423e-8cf0-6d1025486c46-scripts\") pod \"keystone-bcbd67b5c-tnrsf\" (UID: \"f86e8b7a-656b-423e-8cf0-6d1025486c46\") " pod="openstack/keystone-bcbd67b5c-tnrsf" Mar 11 09:17:41 crc kubenswrapper[4840]: I0311 09:17:41.440236 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f86e8b7a-656b-423e-8cf0-6d1025486c46-internal-tls-certs\") pod \"keystone-bcbd67b5c-tnrsf\" (UID: \"f86e8b7a-656b-423e-8cf0-6d1025486c46\") " pod="openstack/keystone-bcbd67b5c-tnrsf" Mar 11 09:17:41 crc kubenswrapper[4840]: I0311 09:17:41.442775 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f86e8b7a-656b-423e-8cf0-6d1025486c46-config-data\") pod \"keystone-bcbd67b5c-tnrsf\" (UID: \"f86e8b7a-656b-423e-8cf0-6d1025486c46\") " pod="openstack/keystone-bcbd67b5c-tnrsf" Mar 11 09:17:41 crc kubenswrapper[4840]: I0311 09:17:41.443348 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f86e8b7a-656b-423e-8cf0-6d1025486c46-public-tls-certs\") pod \"keystone-bcbd67b5c-tnrsf\" (UID: \"f86e8b7a-656b-423e-8cf0-6d1025486c46\") " pod="openstack/keystone-bcbd67b5c-tnrsf" Mar 11 09:17:41 crc kubenswrapper[4840]: I0311 09:17:41.452877 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f86e8b7a-656b-423e-8cf0-6d1025486c46-credential-keys\") pod \"keystone-bcbd67b5c-tnrsf\" (UID: 
\"f86e8b7a-656b-423e-8cf0-6d1025486c46\") " pod="openstack/keystone-bcbd67b5c-tnrsf" Mar 11 09:17:41 crc kubenswrapper[4840]: I0311 09:17:41.455775 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5glxg\" (UniqueName: \"kubernetes.io/projected/f86e8b7a-656b-423e-8cf0-6d1025486c46-kube-api-access-5glxg\") pod \"keystone-bcbd67b5c-tnrsf\" (UID: \"f86e8b7a-656b-423e-8cf0-6d1025486c46\") " pod="openstack/keystone-bcbd67b5c-tnrsf" Mar 11 09:17:41 crc kubenswrapper[4840]: I0311 09:17:41.516360 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bcbd67b5c-tnrsf" Mar 11 09:17:41 crc kubenswrapper[4840]: I0311 09:17:41.924380 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-5bc9bbdff8-zczrp"] Mar 11 09:17:41 crc kubenswrapper[4840]: I0311 09:17:41.926191 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5bc9bbdff8-zczrp" Mar 11 09:17:41 crc kubenswrapper[4840]: I0311 09:17:41.948419 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5bc9bbdff8-zczrp"] Mar 11 09:17:42 crc kubenswrapper[4840]: I0311 09:17:42.036905 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-656d6c4465-srbrx" event={"ID":"9c84bd84-b44d-4d4a-a6ee-0b85e349bc69","Type":"ContainerStarted","Data":"058efdd9153e199b2781bb56a2799093fa38fbf8d86f0fa7912fbc7f671e203f"} Mar 11 09:17:42 crc kubenswrapper[4840]: I0311 09:17:42.037043 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-75cbbc8dcd-sc25j" Mar 11 09:17:42 crc kubenswrapper[4840]: I0311 09:17:42.049478 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f415e24c-207c-4dc7-b68c-14180ac09391-config-data\") pod \"placement-5bc9bbdff8-zczrp\" (UID: \"f415e24c-207c-4dc7-b68c-14180ac09391\") " 
pod="openstack/placement-5bc9bbdff8-zczrp" Mar 11 09:17:42 crc kubenswrapper[4840]: I0311 09:17:42.050051 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f415e24c-207c-4dc7-b68c-14180ac09391-combined-ca-bundle\") pod \"placement-5bc9bbdff8-zczrp\" (UID: \"f415e24c-207c-4dc7-b68c-14180ac09391\") " pod="openstack/placement-5bc9bbdff8-zczrp" Mar 11 09:17:42 crc kubenswrapper[4840]: I0311 09:17:42.050097 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f415e24c-207c-4dc7-b68c-14180ac09391-public-tls-certs\") pod \"placement-5bc9bbdff8-zczrp\" (UID: \"f415e24c-207c-4dc7-b68c-14180ac09391\") " pod="openstack/placement-5bc9bbdff8-zczrp" Mar 11 09:17:42 crc kubenswrapper[4840]: I0311 09:17:42.050334 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f415e24c-207c-4dc7-b68c-14180ac09391-logs\") pod \"placement-5bc9bbdff8-zczrp\" (UID: \"f415e24c-207c-4dc7-b68c-14180ac09391\") " pod="openstack/placement-5bc9bbdff8-zczrp" Mar 11 09:17:42 crc kubenswrapper[4840]: I0311 09:17:42.050358 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7g6fx\" (UniqueName: \"kubernetes.io/projected/f415e24c-207c-4dc7-b68c-14180ac09391-kube-api-access-7g6fx\") pod \"placement-5bc9bbdff8-zczrp\" (UID: \"f415e24c-207c-4dc7-b68c-14180ac09391\") " pod="openstack/placement-5bc9bbdff8-zczrp" Mar 11 09:17:42 crc kubenswrapper[4840]: I0311 09:17:42.050415 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f415e24c-207c-4dc7-b68c-14180ac09391-scripts\") pod \"placement-5bc9bbdff8-zczrp\" (UID: 
\"f415e24c-207c-4dc7-b68c-14180ac09391\") " pod="openstack/placement-5bc9bbdff8-zczrp" Mar 11 09:17:42 crc kubenswrapper[4840]: I0311 09:17:42.050462 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f415e24c-207c-4dc7-b68c-14180ac09391-internal-tls-certs\") pod \"placement-5bc9bbdff8-zczrp\" (UID: \"f415e24c-207c-4dc7-b68c-14180ac09391\") " pod="openstack/placement-5bc9bbdff8-zczrp" Mar 11 09:17:42 crc kubenswrapper[4840]: I0311 09:17:42.152534 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f415e24c-207c-4dc7-b68c-14180ac09391-scripts\") pod \"placement-5bc9bbdff8-zczrp\" (UID: \"f415e24c-207c-4dc7-b68c-14180ac09391\") " pod="openstack/placement-5bc9bbdff8-zczrp" Mar 11 09:17:42 crc kubenswrapper[4840]: I0311 09:17:42.153287 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f415e24c-207c-4dc7-b68c-14180ac09391-internal-tls-certs\") pod \"placement-5bc9bbdff8-zczrp\" (UID: \"f415e24c-207c-4dc7-b68c-14180ac09391\") " pod="openstack/placement-5bc9bbdff8-zczrp" Mar 11 09:17:42 crc kubenswrapper[4840]: I0311 09:17:42.154291 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f415e24c-207c-4dc7-b68c-14180ac09391-config-data\") pod \"placement-5bc9bbdff8-zczrp\" (UID: \"f415e24c-207c-4dc7-b68c-14180ac09391\") " pod="openstack/placement-5bc9bbdff8-zczrp" Mar 11 09:17:42 crc kubenswrapper[4840]: I0311 09:17:42.155338 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f415e24c-207c-4dc7-b68c-14180ac09391-combined-ca-bundle\") pod \"placement-5bc9bbdff8-zczrp\" (UID: \"f415e24c-207c-4dc7-b68c-14180ac09391\") " 
pod="openstack/placement-5bc9bbdff8-zczrp" Mar 11 09:17:42 crc kubenswrapper[4840]: I0311 09:17:42.155379 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f415e24c-207c-4dc7-b68c-14180ac09391-public-tls-certs\") pod \"placement-5bc9bbdff8-zczrp\" (UID: \"f415e24c-207c-4dc7-b68c-14180ac09391\") " pod="openstack/placement-5bc9bbdff8-zczrp" Mar 11 09:17:42 crc kubenswrapper[4840]: I0311 09:17:42.155789 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f415e24c-207c-4dc7-b68c-14180ac09391-logs\") pod \"placement-5bc9bbdff8-zczrp\" (UID: \"f415e24c-207c-4dc7-b68c-14180ac09391\") " pod="openstack/placement-5bc9bbdff8-zczrp" Mar 11 09:17:42 crc kubenswrapper[4840]: I0311 09:17:42.155822 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7g6fx\" (UniqueName: \"kubernetes.io/projected/f415e24c-207c-4dc7-b68c-14180ac09391-kube-api-access-7g6fx\") pod \"placement-5bc9bbdff8-zczrp\" (UID: \"f415e24c-207c-4dc7-b68c-14180ac09391\") " pod="openstack/placement-5bc9bbdff8-zczrp" Mar 11 09:17:42 crc kubenswrapper[4840]: I0311 09:17:42.157196 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f415e24c-207c-4dc7-b68c-14180ac09391-logs\") pod \"placement-5bc9bbdff8-zczrp\" (UID: \"f415e24c-207c-4dc7-b68c-14180ac09391\") " pod="openstack/placement-5bc9bbdff8-zczrp" Mar 11 09:17:42 crc kubenswrapper[4840]: I0311 09:17:42.157597 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f415e24c-207c-4dc7-b68c-14180ac09391-internal-tls-certs\") pod \"placement-5bc9bbdff8-zczrp\" (UID: \"f415e24c-207c-4dc7-b68c-14180ac09391\") " pod="openstack/placement-5bc9bbdff8-zczrp" Mar 11 09:17:42 crc kubenswrapper[4840]: I0311 09:17:42.159013 4840 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f415e24c-207c-4dc7-b68c-14180ac09391-scripts\") pod \"placement-5bc9bbdff8-zczrp\" (UID: \"f415e24c-207c-4dc7-b68c-14180ac09391\") " pod="openstack/placement-5bc9bbdff8-zczrp" Mar 11 09:17:42 crc kubenswrapper[4840]: I0311 09:17:42.161604 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f415e24c-207c-4dc7-b68c-14180ac09391-config-data\") pod \"placement-5bc9bbdff8-zczrp\" (UID: \"f415e24c-207c-4dc7-b68c-14180ac09391\") " pod="openstack/placement-5bc9bbdff8-zczrp" Mar 11 09:17:42 crc kubenswrapper[4840]: I0311 09:17:42.173222 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f415e24c-207c-4dc7-b68c-14180ac09391-combined-ca-bundle\") pod \"placement-5bc9bbdff8-zczrp\" (UID: \"f415e24c-207c-4dc7-b68c-14180ac09391\") " pod="openstack/placement-5bc9bbdff8-zczrp" Mar 11 09:17:42 crc kubenswrapper[4840]: I0311 09:17:42.173707 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f415e24c-207c-4dc7-b68c-14180ac09391-public-tls-certs\") pod \"placement-5bc9bbdff8-zczrp\" (UID: \"f415e24c-207c-4dc7-b68c-14180ac09391\") " pod="openstack/placement-5bc9bbdff8-zczrp" Mar 11 09:17:42 crc kubenswrapper[4840]: I0311 09:17:42.179072 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7g6fx\" (UniqueName: \"kubernetes.io/projected/f415e24c-207c-4dc7-b68c-14180ac09391-kube-api-access-7g6fx\") pod \"placement-5bc9bbdff8-zczrp\" (UID: \"f415e24c-207c-4dc7-b68c-14180ac09391\") " pod="openstack/placement-5bc9bbdff8-zczrp" Mar 11 09:17:42 crc kubenswrapper[4840]: I0311 09:17:42.276240 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-5bc9bbdff8-zczrp" Mar 11 09:17:44 crc kubenswrapper[4840]: I0311 09:17:44.258658 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Mar 11 09:17:44 crc kubenswrapper[4840]: I0311 09:17:44.259064 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Mar 11 09:17:44 crc kubenswrapper[4840]: I0311 09:17:44.300873 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Mar 11 09:17:44 crc kubenswrapper[4840]: I0311 09:17:44.311584 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Mar 11 09:17:44 crc kubenswrapper[4840]: I0311 09:17:44.566835 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Mar 11 09:17:44 crc kubenswrapper[4840]: I0311 09:17:44.566884 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Mar 11 09:17:44 crc kubenswrapper[4840]: I0311 09:17:44.636821 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Mar 11 09:17:44 crc kubenswrapper[4840]: I0311 09:17:44.648623 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Mar 11 09:17:45 crc kubenswrapper[4840]: I0311 09:17:45.067512 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"02a9854a-9978-4414-972d-37c0b579b03b","Type":"ContainerStarted","Data":"8195ff056506e76b1783f1fea390441b636fd4acf55fad53349f25d7bc0caf3b"} Mar 11 09:17:45 crc kubenswrapper[4840]: I0311 09:17:45.076977 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-656d6c4465-srbrx" 
event={"ID":"9c84bd84-b44d-4d4a-a6ee-0b85e349bc69","Type":"ContainerStarted","Data":"820aa259a8b7e35284681d77e3f29d049e162d8ecbf5d0f603d802b8267103a7"} Mar 11 09:17:45 crc kubenswrapper[4840]: I0311 09:17:45.077033 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Mar 11 09:17:45 crc kubenswrapper[4840]: I0311 09:17:45.080969 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Mar 11 09:17:45 crc kubenswrapper[4840]: I0311 09:17:45.081005 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-656d6c4465-srbrx" Mar 11 09:17:45 crc kubenswrapper[4840]: I0311 09:17:45.081017 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Mar 11 09:17:45 crc kubenswrapper[4840]: I0311 09:17:45.081208 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Mar 11 09:17:45 crc kubenswrapper[4840]: I0311 09:17:45.149408 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-656d6c4465-srbrx" podStartSLOduration=6.149386077 podStartE2EDuration="6.149386077s" podCreationTimestamp="2026-03-11 09:17:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:17:45.1121619 +0000 UTC m=+1263.777831735" watchObservedRunningTime="2026-03-11 09:17:45.149386077 +0000 UTC m=+1263.815055892" Mar 11 09:17:45 crc kubenswrapper[4840]: I0311 09:17:45.154542 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bcbd67b5c-tnrsf"] Mar 11 09:17:45 crc kubenswrapper[4840]: I0311 09:17:45.239576 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5bc9bbdff8-zczrp"] Mar 11 09:17:45 crc kubenswrapper[4840]: W0311 09:17:45.243938 4840 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf415e24c_207c_4dc7_b68c_14180ac09391.slice/crio-402a0d6c12cf3081f162b44a7bac1199ca1b57a8d2f74b430f973ab11ba90cd7 WatchSource:0}: Error finding container 402a0d6c12cf3081f162b44a7bac1199ca1b57a8d2f74b430f973ab11ba90cd7: Status 404 returned error can't find the container with id 402a0d6c12cf3081f162b44a7bac1199ca1b57a8d2f74b430f973ab11ba90cd7 Mar 11 09:17:46 crc kubenswrapper[4840]: I0311 09:17:46.100564 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5bc9bbdff8-zczrp" event={"ID":"f415e24c-207c-4dc7-b68c-14180ac09391","Type":"ContainerStarted","Data":"9c0b5e241b0b51a690dd9f346219c8bee2d0c8e7039ad56b281f6ca277463616"} Mar 11 09:17:46 crc kubenswrapper[4840]: I0311 09:17:46.101010 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5bc9bbdff8-zczrp" event={"ID":"f415e24c-207c-4dc7-b68c-14180ac09391","Type":"ContainerStarted","Data":"b9fdfd8bf27758ee9513c698619fd2a58d3ef811e44b476641b6d47eafc6a701"} Mar 11 09:17:46 crc kubenswrapper[4840]: I0311 09:17:46.101028 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5bc9bbdff8-zczrp" event={"ID":"f415e24c-207c-4dc7-b68c-14180ac09391","Type":"ContainerStarted","Data":"402a0d6c12cf3081f162b44a7bac1199ca1b57a8d2f74b430f973ab11ba90cd7"} Mar 11 09:17:46 crc kubenswrapper[4840]: I0311 09:17:46.102726 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5bc9bbdff8-zczrp" Mar 11 09:17:46 crc kubenswrapper[4840]: I0311 09:17:46.102760 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5bc9bbdff8-zczrp" Mar 11 09:17:46 crc kubenswrapper[4840]: I0311 09:17:46.113004 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bcbd67b5c-tnrsf" 
event={"ID":"f86e8b7a-656b-423e-8cf0-6d1025486c46","Type":"ContainerStarted","Data":"4eb33fb06f0c44c9373d79d5c966ac7a03ec43ee8c7e1b0b5af3343cd05a504e"} Mar 11 09:17:46 crc kubenswrapper[4840]: I0311 09:17:46.113060 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-bcbd67b5c-tnrsf" Mar 11 09:17:46 crc kubenswrapper[4840]: I0311 09:17:46.113074 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bcbd67b5c-tnrsf" event={"ID":"f86e8b7a-656b-423e-8cf0-6d1025486c46","Type":"ContainerStarted","Data":"f3bebdc55940246604327029b281fd265e4c4fc1e22413c991f75f60a392c2c0"} Mar 11 09:17:46 crc kubenswrapper[4840]: I0311 09:17:46.137857 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-5bc9bbdff8-zczrp" podStartSLOduration=5.137830426 podStartE2EDuration="5.137830426s" podCreationTimestamp="2026-03-11 09:17:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:17:46.124791598 +0000 UTC m=+1264.790461413" watchObservedRunningTime="2026-03-11 09:17:46.137830426 +0000 UTC m=+1264.803500241" Mar 11 09:17:46 crc kubenswrapper[4840]: I0311 09:17:46.156262 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bcbd67b5c-tnrsf" podStartSLOduration=5.156239799 podStartE2EDuration="5.156239799s" podCreationTimestamp="2026-03-11 09:17:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:17:46.152936336 +0000 UTC m=+1264.818606151" watchObservedRunningTime="2026-03-11 09:17:46.156239799 +0000 UTC m=+1264.821909624" Mar 11 09:17:47 crc kubenswrapper[4840]: I0311 09:17:47.123477 4840 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 11 09:17:47 crc kubenswrapper[4840]: I0311 09:17:47.123865 4840 
prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 11 09:17:47 crc kubenswrapper[4840]: I0311 09:17:47.124907 4840 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 11 09:17:47 crc kubenswrapper[4840]: I0311 09:17:47.124918 4840 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 11 09:17:47 crc kubenswrapper[4840]: I0311 09:17:47.489613 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7859c7799c-2ksc5" Mar 11 09:17:47 crc kubenswrapper[4840]: I0311 09:17:47.510551 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Mar 11 09:17:47 crc kubenswrapper[4840]: I0311 09:17:47.519252 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Mar 11 09:17:47 crc kubenswrapper[4840]: I0311 09:17:47.571831 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Mar 11 09:17:47 crc kubenswrapper[4840]: I0311 09:17:47.600534 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74b48649cc-qqk8g"] Mar 11 09:17:47 crc kubenswrapper[4840]: I0311 09:17:47.600815 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-74b48649cc-qqk8g" podUID="1bcd01ea-4ad0-4aba-8acd-c509e2e5225d" containerName="dnsmasq-dns" containerID="cri-o://912b29bc7933f8813ea8a407e5436ed074c596d88ce388b752d3bede3b50533d" gracePeriod=10 Mar 11 09:17:47 crc kubenswrapper[4840]: I0311 09:17:47.934062 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Mar 11 09:17:48 crc kubenswrapper[4840]: I0311 09:17:48.177993 4840 generic.go:334] "Generic (PLEG): container finished" podID="1bcd01ea-4ad0-4aba-8acd-c509e2e5225d" 
containerID="912b29bc7933f8813ea8a407e5436ed074c596d88ce388b752d3bede3b50533d" exitCode=0 Mar 11 09:17:48 crc kubenswrapper[4840]: I0311 09:17:48.178095 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74b48649cc-qqk8g" event={"ID":"1bcd01ea-4ad0-4aba-8acd-c509e2e5225d","Type":"ContainerDied","Data":"912b29bc7933f8813ea8a407e5436ed074c596d88ce388b752d3bede3b50533d"} Mar 11 09:17:48 crc kubenswrapper[4840]: I0311 09:17:48.182220 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-r5wvc" event={"ID":"eb9a61ad-a8fb-4968-b32a-ff5756add27b","Type":"ContainerStarted","Data":"db1b4e68139a5c264cc99c225a5d17869429e390074a77b4d5a954cfda2abd92"} Mar 11 09:17:48 crc kubenswrapper[4840]: I0311 09:17:48.221079 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-r5wvc" podStartSLOduration=2.715505287 podStartE2EDuration="37.221060508s" podCreationTimestamp="2026-03-11 09:17:11 +0000 UTC" firstStartedPulling="2026-03-11 09:17:13.162736936 +0000 UTC m=+1231.828406751" lastFinishedPulling="2026-03-11 09:17:47.668292157 +0000 UTC m=+1266.333961972" observedRunningTime="2026-03-11 09:17:48.217656832 +0000 UTC m=+1266.883326647" watchObservedRunningTime="2026-03-11 09:17:48.221060508 +0000 UTC m=+1266.886730323" Mar 11 09:17:48 crc kubenswrapper[4840]: I0311 09:17:48.326271 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74b48649cc-qqk8g" Mar 11 09:17:48 crc kubenswrapper[4840]: I0311 09:17:48.422118 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dcl9t\" (UniqueName: \"kubernetes.io/projected/1bcd01ea-4ad0-4aba-8acd-c509e2e5225d-kube-api-access-dcl9t\") pod \"1bcd01ea-4ad0-4aba-8acd-c509e2e5225d\" (UID: \"1bcd01ea-4ad0-4aba-8acd-c509e2e5225d\") " Mar 11 09:17:48 crc kubenswrapper[4840]: I0311 09:17:48.422172 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1bcd01ea-4ad0-4aba-8acd-c509e2e5225d-ovsdbserver-nb\") pod \"1bcd01ea-4ad0-4aba-8acd-c509e2e5225d\" (UID: \"1bcd01ea-4ad0-4aba-8acd-c509e2e5225d\") " Mar 11 09:17:48 crc kubenswrapper[4840]: I0311 09:17:48.422261 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1bcd01ea-4ad0-4aba-8acd-c509e2e5225d-dns-svc\") pod \"1bcd01ea-4ad0-4aba-8acd-c509e2e5225d\" (UID: \"1bcd01ea-4ad0-4aba-8acd-c509e2e5225d\") " Mar 11 09:17:48 crc kubenswrapper[4840]: I0311 09:17:48.422307 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1bcd01ea-4ad0-4aba-8acd-c509e2e5225d-ovsdbserver-sb\") pod \"1bcd01ea-4ad0-4aba-8acd-c509e2e5225d\" (UID: \"1bcd01ea-4ad0-4aba-8acd-c509e2e5225d\") " Mar 11 09:17:48 crc kubenswrapper[4840]: I0311 09:17:48.422420 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bcd01ea-4ad0-4aba-8acd-c509e2e5225d-config\") pod \"1bcd01ea-4ad0-4aba-8acd-c509e2e5225d\" (UID: \"1bcd01ea-4ad0-4aba-8acd-c509e2e5225d\") " Mar 11 09:17:48 crc kubenswrapper[4840]: I0311 09:17:48.430300 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/1bcd01ea-4ad0-4aba-8acd-c509e2e5225d-kube-api-access-dcl9t" (OuterVolumeSpecName: "kube-api-access-dcl9t") pod "1bcd01ea-4ad0-4aba-8acd-c509e2e5225d" (UID: "1bcd01ea-4ad0-4aba-8acd-c509e2e5225d"). InnerVolumeSpecName "kube-api-access-dcl9t". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:17:48 crc kubenswrapper[4840]: I0311 09:17:48.484659 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bcd01ea-4ad0-4aba-8acd-c509e2e5225d-config" (OuterVolumeSpecName: "config") pod "1bcd01ea-4ad0-4aba-8acd-c509e2e5225d" (UID: "1bcd01ea-4ad0-4aba-8acd-c509e2e5225d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:17:48 crc kubenswrapper[4840]: I0311 09:17:48.484705 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bcd01ea-4ad0-4aba-8acd-c509e2e5225d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1bcd01ea-4ad0-4aba-8acd-c509e2e5225d" (UID: "1bcd01ea-4ad0-4aba-8acd-c509e2e5225d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:17:48 crc kubenswrapper[4840]: I0311 09:17:48.494819 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bcd01ea-4ad0-4aba-8acd-c509e2e5225d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1bcd01ea-4ad0-4aba-8acd-c509e2e5225d" (UID: "1bcd01ea-4ad0-4aba-8acd-c509e2e5225d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:17:48 crc kubenswrapper[4840]: I0311 09:17:48.503380 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bcd01ea-4ad0-4aba-8acd-c509e2e5225d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1bcd01ea-4ad0-4aba-8acd-c509e2e5225d" (UID: "1bcd01ea-4ad0-4aba-8acd-c509e2e5225d"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:17:48 crc kubenswrapper[4840]: I0311 09:17:48.531001 4840 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1bcd01ea-4ad0-4aba-8acd-c509e2e5225d-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 11 09:17:48 crc kubenswrapper[4840]: I0311 09:17:48.531056 4840 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1bcd01ea-4ad0-4aba-8acd-c509e2e5225d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 11 09:17:48 crc kubenswrapper[4840]: I0311 09:17:48.531072 4840 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bcd01ea-4ad0-4aba-8acd-c509e2e5225d-config\") on node \"crc\" DevicePath \"\"" Mar 11 09:17:48 crc kubenswrapper[4840]: I0311 09:17:48.531086 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dcl9t\" (UniqueName: \"kubernetes.io/projected/1bcd01ea-4ad0-4aba-8acd-c509e2e5225d-kube-api-access-dcl9t\") on node \"crc\" DevicePath \"\"" Mar 11 09:17:48 crc kubenswrapper[4840]: I0311 09:17:48.531099 4840 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1bcd01ea-4ad0-4aba-8acd-c509e2e5225d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 11 09:17:49 crc kubenswrapper[4840]: I0311 09:17:49.197877 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74b48649cc-qqk8g" event={"ID":"1bcd01ea-4ad0-4aba-8acd-c509e2e5225d","Type":"ContainerDied","Data":"113d4b742dd8ae19b28f146f0585ccc5cc9c6536af5db33c3900fb0851a5daf1"} Mar 11 09:17:49 crc kubenswrapper[4840]: I0311 09:17:49.198425 4840 scope.go:117] "RemoveContainer" containerID="912b29bc7933f8813ea8a407e5436ed074c596d88ce388b752d3bede3b50533d" Mar 11 09:17:49 crc kubenswrapper[4840]: I0311 09:17:49.198207 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74b48649cc-qqk8g"
Mar 11 09:17:49 crc kubenswrapper[4840]: I0311 09:17:49.263545 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74b48649cc-qqk8g"]
Mar 11 09:17:49 crc kubenswrapper[4840]: I0311 09:17:49.267730 4840 scope.go:117] "RemoveContainer" containerID="930fdc17eea09a6c237d3c90b6dd35e9890981d6f7813e5ca2fd41393daedef2"
Mar 11 09:17:49 crc kubenswrapper[4840]: I0311 09:17:49.275984 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-74b48649cc-qqk8g"]
Mar 11 09:17:50 crc kubenswrapper[4840]: I0311 09:17:50.073831 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bcd01ea-4ad0-4aba-8acd-c509e2e5225d" path="/var/lib/kubelet/pods/1bcd01ea-4ad0-4aba-8acd-c509e2e5225d/volumes"
Mar 11 09:17:51 crc kubenswrapper[4840]: I0311 09:17:51.222281 4840 generic.go:334] "Generic (PLEG): container finished" podID="eb9a61ad-a8fb-4968-b32a-ff5756add27b" containerID="db1b4e68139a5c264cc99c225a5d17869429e390074a77b4d5a954cfda2abd92" exitCode=0
Mar 11 09:17:51 crc kubenswrapper[4840]: I0311 09:17:51.222396 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-r5wvc" event={"ID":"eb9a61ad-a8fb-4968-b32a-ff5756add27b","Type":"ContainerDied","Data":"db1b4e68139a5c264cc99c225a5d17869429e390074a77b4d5a954cfda2abd92"}
Mar 11 09:17:52 crc kubenswrapper[4840]: I0311 09:17:52.808233 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-r5wvc"
Mar 11 09:17:52 crc kubenswrapper[4840]: I0311 09:17:52.924862 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2jgnb\" (UniqueName: \"kubernetes.io/projected/eb9a61ad-a8fb-4968-b32a-ff5756add27b-kube-api-access-2jgnb\") pod \"eb9a61ad-a8fb-4968-b32a-ff5756add27b\" (UID: \"eb9a61ad-a8fb-4968-b32a-ff5756add27b\") "
Mar 11 09:17:52 crc kubenswrapper[4840]: I0311 09:17:52.924927 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/eb9a61ad-a8fb-4968-b32a-ff5756add27b-db-sync-config-data\") pod \"eb9a61ad-a8fb-4968-b32a-ff5756add27b\" (UID: \"eb9a61ad-a8fb-4968-b32a-ff5756add27b\") "
Mar 11 09:17:52 crc kubenswrapper[4840]: I0311 09:17:52.925021 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb9a61ad-a8fb-4968-b32a-ff5756add27b-combined-ca-bundle\") pod \"eb9a61ad-a8fb-4968-b32a-ff5756add27b\" (UID: \"eb9a61ad-a8fb-4968-b32a-ff5756add27b\") "
Mar 11 09:17:52 crc kubenswrapper[4840]: I0311 09:17:52.932604 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb9a61ad-a8fb-4968-b32a-ff5756add27b-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "eb9a61ad-a8fb-4968-b32a-ff5756add27b" (UID: "eb9a61ad-a8fb-4968-b32a-ff5756add27b"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 11 09:17:52 crc kubenswrapper[4840]: I0311 09:17:52.946107 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb9a61ad-a8fb-4968-b32a-ff5756add27b-kube-api-access-2jgnb" (OuterVolumeSpecName: "kube-api-access-2jgnb") pod "eb9a61ad-a8fb-4968-b32a-ff5756add27b" (UID: "eb9a61ad-a8fb-4968-b32a-ff5756add27b"). InnerVolumeSpecName "kube-api-access-2jgnb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 11 09:17:52 crc kubenswrapper[4840]: I0311 09:17:52.958961 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb9a61ad-a8fb-4968-b32a-ff5756add27b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eb9a61ad-a8fb-4968-b32a-ff5756add27b" (UID: "eb9a61ad-a8fb-4968-b32a-ff5756add27b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 11 09:17:53 crc kubenswrapper[4840]: I0311 09:17:53.027127 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2jgnb\" (UniqueName: \"kubernetes.io/projected/eb9a61ad-a8fb-4968-b32a-ff5756add27b-kube-api-access-2jgnb\") on node \"crc\" DevicePath \"\""
Mar 11 09:17:53 crc kubenswrapper[4840]: I0311 09:17:53.027164 4840 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/eb9a61ad-a8fb-4968-b32a-ff5756add27b-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Mar 11 09:17:53 crc kubenswrapper[4840]: I0311 09:17:53.027175 4840 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb9a61ad-a8fb-4968-b32a-ff5756add27b-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 11 09:17:53 crc kubenswrapper[4840]: I0311 09:17:53.245600 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-r5wvc" event={"ID":"eb9a61ad-a8fb-4968-b32a-ff5756add27b","Type":"ContainerDied","Data":"458533b995fb6adc2119952be162e4cfec45297d75bb9ff07b988c0fb474e5a9"}
Mar 11 09:17:53 crc kubenswrapper[4840]: I0311 09:17:53.245639 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="458533b995fb6adc2119952be162e4cfec45297d75bb9ff07b988c0fb474e5a9"
Mar 11 09:17:53 crc kubenswrapper[4840]: I0311 09:17:53.245694 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-r5wvc"
Mar 11 09:17:53 crc kubenswrapper[4840]: I0311 09:17:53.506142 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-db9f7b9c-cdnm7"]
Mar 11 09:17:53 crc kubenswrapper[4840]: E0311 09:17:53.506948 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb9a61ad-a8fb-4968-b32a-ff5756add27b" containerName="barbican-db-sync"
Mar 11 09:17:53 crc kubenswrapper[4840]: I0311 09:17:53.506967 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb9a61ad-a8fb-4968-b32a-ff5756add27b" containerName="barbican-db-sync"
Mar 11 09:17:53 crc kubenswrapper[4840]: E0311 09:17:53.506982 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bcd01ea-4ad0-4aba-8acd-c509e2e5225d" containerName="dnsmasq-dns"
Mar 11 09:17:53 crc kubenswrapper[4840]: I0311 09:17:53.506990 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bcd01ea-4ad0-4aba-8acd-c509e2e5225d" containerName="dnsmasq-dns"
Mar 11 09:17:53 crc kubenswrapper[4840]: E0311 09:17:53.507018 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bcd01ea-4ad0-4aba-8acd-c509e2e5225d" containerName="init"
Mar 11 09:17:53 crc kubenswrapper[4840]: I0311 09:17:53.507024 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bcd01ea-4ad0-4aba-8acd-c509e2e5225d" containerName="init"
Mar 11 09:17:53 crc kubenswrapper[4840]: I0311 09:17:53.507203 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb9a61ad-a8fb-4968-b32a-ff5756add27b" containerName="barbican-db-sync"
Mar 11 09:17:53 crc kubenswrapper[4840]: I0311 09:17:53.507226 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="1bcd01ea-4ad0-4aba-8acd-c509e2e5225d" containerName="dnsmasq-dns"
Mar 11 09:17:53 crc kubenswrapper[4840]: I0311 09:17:53.508285 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-db9f7b9c-cdnm7"
Mar 11 09:17:53 crc kubenswrapper[4840]: I0311 09:17:53.512539 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data"
Mar 11 09:17:53 crc kubenswrapper[4840]: I0311 09:17:53.512804 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-4knzv"
Mar 11 09:17:53 crc kubenswrapper[4840]: I0311 09:17:53.513015 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data"
Mar 11 09:17:53 crc kubenswrapper[4840]: I0311 09:17:53.524941 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-84c78b97c8-frfs9"]
Mar 11 09:17:53 crc kubenswrapper[4840]: I0311 09:17:53.528491 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-84c78b97c8-frfs9"
Mar 11 09:17:53 crc kubenswrapper[4840]: I0311 09:17:53.533137 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data"
Mar 11 09:17:53 crc kubenswrapper[4840]: I0311 09:17:53.535598 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-84c78b97c8-frfs9"]
Mar 11 09:17:53 crc kubenswrapper[4840]: I0311 09:17:53.553591 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-db9f7b9c-cdnm7"]
Mar 11 09:17:53 crc kubenswrapper[4840]: I0311 09:17:53.639310 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d7acca3e-61e4-495b-adbf-36c435b4a7d2-config-data-custom\") pod \"barbican-keystone-listener-84c78b97c8-frfs9\" (UID: \"d7acca3e-61e4-495b-adbf-36c435b4a7d2\") " pod="openstack/barbican-keystone-listener-84c78b97c8-frfs9"
Mar 11 09:17:53 crc kubenswrapper[4840]: I0311 09:17:53.639367 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0ab14b76-8d70-44f2-b986-b3d600c73b60-config-data-custom\") pod \"barbican-worker-db9f7b9c-cdnm7\" (UID: \"0ab14b76-8d70-44f2-b986-b3d600c73b60\") " pod="openstack/barbican-worker-db9f7b9c-cdnm7"
Mar 11 09:17:53 crc kubenswrapper[4840]: I0311 09:17:53.639406 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7acca3e-61e4-495b-adbf-36c435b4a7d2-config-data\") pod \"barbican-keystone-listener-84c78b97c8-frfs9\" (UID: \"d7acca3e-61e4-495b-adbf-36c435b4a7d2\") " pod="openstack/barbican-keystone-listener-84c78b97c8-frfs9"
Mar 11 09:17:53 crc kubenswrapper[4840]: I0311 09:17:53.639433 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hb29\" (UniqueName: \"kubernetes.io/projected/d7acca3e-61e4-495b-adbf-36c435b4a7d2-kube-api-access-6hb29\") pod \"barbican-keystone-listener-84c78b97c8-frfs9\" (UID: \"d7acca3e-61e4-495b-adbf-36c435b4a7d2\") " pod="openstack/barbican-keystone-listener-84c78b97c8-frfs9"
Mar 11 09:17:53 crc kubenswrapper[4840]: I0311 09:17:53.639549 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d7acca3e-61e4-495b-adbf-36c435b4a7d2-logs\") pod \"barbican-keystone-listener-84c78b97c8-frfs9\" (UID: \"d7acca3e-61e4-495b-adbf-36c435b4a7d2\") " pod="openstack/barbican-keystone-listener-84c78b97c8-frfs9"
Mar 11 09:17:53 crc kubenswrapper[4840]: I0311 09:17:53.639567 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0ab14b76-8d70-44f2-b986-b3d600c73b60-logs\") pod \"barbican-worker-db9f7b9c-cdnm7\" (UID: \"0ab14b76-8d70-44f2-b986-b3d600c73b60\") " pod="openstack/barbican-worker-db9f7b9c-cdnm7"
Mar 11 09:17:53 crc kubenswrapper[4840]: I0311 09:17:53.639593 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ab14b76-8d70-44f2-b986-b3d600c73b60-combined-ca-bundle\") pod \"barbican-worker-db9f7b9c-cdnm7\" (UID: \"0ab14b76-8d70-44f2-b986-b3d600c73b60\") " pod="openstack/barbican-worker-db9f7b9c-cdnm7"
Mar 11 09:17:53 crc kubenswrapper[4840]: I0311 09:17:53.639635 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7acca3e-61e4-495b-adbf-36c435b4a7d2-combined-ca-bundle\") pod \"barbican-keystone-listener-84c78b97c8-frfs9\" (UID: \"d7acca3e-61e4-495b-adbf-36c435b4a7d2\") " pod="openstack/barbican-keystone-listener-84c78b97c8-frfs9"
Mar 11 09:17:53 crc kubenswrapper[4840]: I0311 09:17:53.639671 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ab14b76-8d70-44f2-b986-b3d600c73b60-config-data\") pod \"barbican-worker-db9f7b9c-cdnm7\" (UID: \"0ab14b76-8d70-44f2-b986-b3d600c73b60\") " pod="openstack/barbican-worker-db9f7b9c-cdnm7"
Mar 11 09:17:53 crc kubenswrapper[4840]: I0311 09:17:53.639693 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jphpq\" (UniqueName: \"kubernetes.io/projected/0ab14b76-8d70-44f2-b986-b3d600c73b60-kube-api-access-jphpq\") pod \"barbican-worker-db9f7b9c-cdnm7\" (UID: \"0ab14b76-8d70-44f2-b986-b3d600c73b60\") " pod="openstack/barbican-worker-db9f7b9c-cdnm7"
Mar 11 09:17:53 crc kubenswrapper[4840]: I0311 09:17:53.666739 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8449d68f4f-bbl7j"]
Mar 11 09:17:53 crc kubenswrapper[4840]: I0311 09:17:53.671153 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8449d68f4f-bbl7j"
Mar 11 09:17:53 crc kubenswrapper[4840]: I0311 09:17:53.698178 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8449d68f4f-bbl7j"]
Mar 11 09:17:53 crc kubenswrapper[4840]: I0311 09:17:53.741444 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1fd9aab6-5204-4f1d-9c99-36b158699e16-dns-svc\") pod \"dnsmasq-dns-8449d68f4f-bbl7j\" (UID: \"1fd9aab6-5204-4f1d-9c99-36b158699e16\") " pod="openstack/dnsmasq-dns-8449d68f4f-bbl7j"
Mar 11 09:17:53 crc kubenswrapper[4840]: I0311 09:17:53.741696 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7acca3e-61e4-495b-adbf-36c435b4a7d2-combined-ca-bundle\") pod \"barbican-keystone-listener-84c78b97c8-frfs9\" (UID: \"d7acca3e-61e4-495b-adbf-36c435b4a7d2\") " pod="openstack/barbican-keystone-listener-84c78b97c8-frfs9"
Mar 11 09:17:53 crc kubenswrapper[4840]: I0311 09:17:53.741787 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1fd9aab6-5204-4f1d-9c99-36b158699e16-ovsdbserver-sb\") pod \"dnsmasq-dns-8449d68f4f-bbl7j\" (UID: \"1fd9aab6-5204-4f1d-9c99-36b158699e16\") " pod="openstack/dnsmasq-dns-8449d68f4f-bbl7j"
Mar 11 09:17:53 crc kubenswrapper[4840]: I0311 09:17:53.741881 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ab14b76-8d70-44f2-b986-b3d600c73b60-config-data\") pod \"barbican-worker-db9f7b9c-cdnm7\" (UID: \"0ab14b76-8d70-44f2-b986-b3d600c73b60\") " pod="openstack/barbican-worker-db9f7b9c-cdnm7"
Mar 11 09:17:53 crc kubenswrapper[4840]: I0311 09:17:53.741968 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jphpq\" (UniqueName: \"kubernetes.io/projected/0ab14b76-8d70-44f2-b986-b3d600c73b60-kube-api-access-jphpq\") pod \"barbican-worker-db9f7b9c-cdnm7\" (UID: \"0ab14b76-8d70-44f2-b986-b3d600c73b60\") " pod="openstack/barbican-worker-db9f7b9c-cdnm7"
Mar 11 09:17:53 crc kubenswrapper[4840]: I0311 09:17:53.742079 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fd9aab6-5204-4f1d-9c99-36b158699e16-config\") pod \"dnsmasq-dns-8449d68f4f-bbl7j\" (UID: \"1fd9aab6-5204-4f1d-9c99-36b158699e16\") " pod="openstack/dnsmasq-dns-8449d68f4f-bbl7j"
Mar 11 09:17:53 crc kubenswrapper[4840]: I0311 09:17:53.742147 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1fd9aab6-5204-4f1d-9c99-36b158699e16-dns-swift-storage-0\") pod \"dnsmasq-dns-8449d68f4f-bbl7j\" (UID: \"1fd9aab6-5204-4f1d-9c99-36b158699e16\") " pod="openstack/dnsmasq-dns-8449d68f4f-bbl7j"
Mar 11 09:17:53 crc kubenswrapper[4840]: I0311 09:17:53.742231 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d7acca3e-61e4-495b-adbf-36c435b4a7d2-config-data-custom\") pod \"barbican-keystone-listener-84c78b97c8-frfs9\" (UID: \"d7acca3e-61e4-495b-adbf-36c435b4a7d2\") " pod="openstack/barbican-keystone-listener-84c78b97c8-frfs9"
Mar 11 09:17:53 crc kubenswrapper[4840]: I0311 09:17:53.742297 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1fd9aab6-5204-4f1d-9c99-36b158699e16-ovsdbserver-nb\") pod \"dnsmasq-dns-8449d68f4f-bbl7j\" (UID: \"1fd9aab6-5204-4f1d-9c99-36b158699e16\") " pod="openstack/dnsmasq-dns-8449d68f4f-bbl7j"
Mar 11 09:17:53 crc kubenswrapper[4840]: I0311 09:17:53.742367 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0ab14b76-8d70-44f2-b986-b3d600c73b60-config-data-custom\") pod \"barbican-worker-db9f7b9c-cdnm7\" (UID: \"0ab14b76-8d70-44f2-b986-b3d600c73b60\") " pod="openstack/barbican-worker-db9f7b9c-cdnm7"
Mar 11 09:17:53 crc kubenswrapper[4840]: I0311 09:17:53.742462 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7acca3e-61e4-495b-adbf-36c435b4a7d2-config-data\") pod \"barbican-keystone-listener-84c78b97c8-frfs9\" (UID: \"d7acca3e-61e4-495b-adbf-36c435b4a7d2\") " pod="openstack/barbican-keystone-listener-84c78b97c8-frfs9"
Mar 11 09:17:53 crc kubenswrapper[4840]: I0311 09:17:53.742557 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6hb29\" (UniqueName: \"kubernetes.io/projected/d7acca3e-61e4-495b-adbf-36c435b4a7d2-kube-api-access-6hb29\") pod \"barbican-keystone-listener-84c78b97c8-frfs9\" (UID: \"d7acca3e-61e4-495b-adbf-36c435b4a7d2\") " pod="openstack/barbican-keystone-listener-84c78b97c8-frfs9"
Mar 11 09:17:53 crc kubenswrapper[4840]: I0311 09:17:53.742704 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d7acca3e-61e4-495b-adbf-36c435b4a7d2-logs\") pod \"barbican-keystone-listener-84c78b97c8-frfs9\" (UID: \"d7acca3e-61e4-495b-adbf-36c435b4a7d2\") " pod="openstack/barbican-keystone-listener-84c78b97c8-frfs9"
Mar 11 09:17:53 crc kubenswrapper[4840]: I0311 09:17:53.743813 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0ab14b76-8d70-44f2-b986-b3d600c73b60-logs\") pod \"barbican-worker-db9f7b9c-cdnm7\" (UID: \"0ab14b76-8d70-44f2-b986-b3d600c73b60\") " pod="openstack/barbican-worker-db9f7b9c-cdnm7"
Mar 11 09:17:53 crc kubenswrapper[4840]: I0311 09:17:53.744403 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ab14b76-8d70-44f2-b986-b3d600c73b60-combined-ca-bundle\") pod \"barbican-worker-db9f7b9c-cdnm7\" (UID: \"0ab14b76-8d70-44f2-b986-b3d600c73b60\") " pod="openstack/barbican-worker-db9f7b9c-cdnm7"
Mar 11 09:17:53 crc kubenswrapper[4840]: I0311 09:17:53.744577 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsqkz\" (UniqueName: \"kubernetes.io/projected/1fd9aab6-5204-4f1d-9c99-36b158699e16-kube-api-access-hsqkz\") pod \"dnsmasq-dns-8449d68f4f-bbl7j\" (UID: \"1fd9aab6-5204-4f1d-9c99-36b158699e16\") " pod="openstack/dnsmasq-dns-8449d68f4f-bbl7j"
Mar 11 09:17:53 crc kubenswrapper[4840]: I0311 09:17:53.743261 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d7acca3e-61e4-495b-adbf-36c435b4a7d2-logs\") pod \"barbican-keystone-listener-84c78b97c8-frfs9\" (UID: \"d7acca3e-61e4-495b-adbf-36c435b4a7d2\") " pod="openstack/barbican-keystone-listener-84c78b97c8-frfs9"
Mar 11 09:17:53 crc kubenswrapper[4840]: I0311 09:17:53.744351 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0ab14b76-8d70-44f2-b986-b3d600c73b60-logs\") pod \"barbican-worker-db9f7b9c-cdnm7\" (UID: \"0ab14b76-8d70-44f2-b986-b3d600c73b60\") " pod="openstack/barbican-worker-db9f7b9c-cdnm7"
Mar 11 09:17:53 crc kubenswrapper[4840]: I0311 09:17:53.748757 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ab14b76-8d70-44f2-b986-b3d600c73b60-config-data\") pod \"barbican-worker-db9f7b9c-cdnm7\" (UID: \"0ab14b76-8d70-44f2-b986-b3d600c73b60\") " pod="openstack/barbican-worker-db9f7b9c-cdnm7"
Mar 11 09:17:53 crc kubenswrapper[4840]: I0311 09:17:53.749091 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d7acca3e-61e4-495b-adbf-36c435b4a7d2-config-data-custom\") pod \"barbican-keystone-listener-84c78b97c8-frfs9\" (UID: \"d7acca3e-61e4-495b-adbf-36c435b4a7d2\") " pod="openstack/barbican-keystone-listener-84c78b97c8-frfs9"
Mar 11 09:17:53 crc kubenswrapper[4840]: I0311 09:17:53.749759 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7acca3e-61e4-495b-adbf-36c435b4a7d2-combined-ca-bundle\") pod \"barbican-keystone-listener-84c78b97c8-frfs9\" (UID: \"d7acca3e-61e4-495b-adbf-36c435b4a7d2\") " pod="openstack/barbican-keystone-listener-84c78b97c8-frfs9"
Mar 11 09:17:53 crc kubenswrapper[4840]: I0311 09:17:53.755774 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7acca3e-61e4-495b-adbf-36c435b4a7d2-config-data\") pod \"barbican-keystone-listener-84c78b97c8-frfs9\" (UID: \"d7acca3e-61e4-495b-adbf-36c435b4a7d2\") " pod="openstack/barbican-keystone-listener-84c78b97c8-frfs9"
Mar 11 09:17:53 crc kubenswrapper[4840]: I0311 09:17:53.759308 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0ab14b76-8d70-44f2-b986-b3d600c73b60-config-data-custom\") pod \"barbican-worker-db9f7b9c-cdnm7\" (UID: \"0ab14b76-8d70-44f2-b986-b3d600c73b60\") " pod="openstack/barbican-worker-db9f7b9c-cdnm7"
Mar 11 09:17:53 crc kubenswrapper[4840]: I0311 09:17:53.761441 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ab14b76-8d70-44f2-b986-b3d600c73b60-combined-ca-bundle\") pod \"barbican-worker-db9f7b9c-cdnm7\" (UID: \"0ab14b76-8d70-44f2-b986-b3d600c73b60\") " pod="openstack/barbican-worker-db9f7b9c-cdnm7"
Mar 11 09:17:53 crc kubenswrapper[4840]: I0311 09:17:53.762825 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jphpq\" (UniqueName: \"kubernetes.io/projected/0ab14b76-8d70-44f2-b986-b3d600c73b60-kube-api-access-jphpq\") pod \"barbican-worker-db9f7b9c-cdnm7\" (UID: \"0ab14b76-8d70-44f2-b986-b3d600c73b60\") " pod="openstack/barbican-worker-db9f7b9c-cdnm7"
Mar 11 09:17:53 crc kubenswrapper[4840]: I0311 09:17:53.771992 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6hb29\" (UniqueName: \"kubernetes.io/projected/d7acca3e-61e4-495b-adbf-36c435b4a7d2-kube-api-access-6hb29\") pod \"barbican-keystone-listener-84c78b97c8-frfs9\" (UID: \"d7acca3e-61e4-495b-adbf-36c435b4a7d2\") " pod="openstack/barbican-keystone-listener-84c78b97c8-frfs9"
Mar 11 09:17:53 crc kubenswrapper[4840]: I0311 09:17:53.838011 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-db9f7b9c-cdnm7"
Mar 11 09:17:53 crc kubenswrapper[4840]: I0311 09:17:53.847198 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hsqkz\" (UniqueName: \"kubernetes.io/projected/1fd9aab6-5204-4f1d-9c99-36b158699e16-kube-api-access-hsqkz\") pod \"dnsmasq-dns-8449d68f4f-bbl7j\" (UID: \"1fd9aab6-5204-4f1d-9c99-36b158699e16\") " pod="openstack/dnsmasq-dns-8449d68f4f-bbl7j"
Mar 11 09:17:53 crc kubenswrapper[4840]: I0311 09:17:53.847280 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1fd9aab6-5204-4f1d-9c99-36b158699e16-dns-svc\") pod \"dnsmasq-dns-8449d68f4f-bbl7j\" (UID: \"1fd9aab6-5204-4f1d-9c99-36b158699e16\") " pod="openstack/dnsmasq-dns-8449d68f4f-bbl7j"
Mar 11 09:17:53 crc kubenswrapper[4840]: I0311 09:17:53.847320 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1fd9aab6-5204-4f1d-9c99-36b158699e16-ovsdbserver-sb\") pod \"dnsmasq-dns-8449d68f4f-bbl7j\" (UID: \"1fd9aab6-5204-4f1d-9c99-36b158699e16\") " pod="openstack/dnsmasq-dns-8449d68f4f-bbl7j"
Mar 11 09:17:53 crc kubenswrapper[4840]: I0311 09:17:53.847549 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fd9aab6-5204-4f1d-9c99-36b158699e16-config\") pod \"dnsmasq-dns-8449d68f4f-bbl7j\" (UID: \"1fd9aab6-5204-4f1d-9c99-36b158699e16\") " pod="openstack/dnsmasq-dns-8449d68f4f-bbl7j"
Mar 11 09:17:53 crc kubenswrapper[4840]: I0311 09:17:53.847583 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1fd9aab6-5204-4f1d-9c99-36b158699e16-dns-swift-storage-0\") pod \"dnsmasq-dns-8449d68f4f-bbl7j\" (UID: \"1fd9aab6-5204-4f1d-9c99-36b158699e16\") " pod="openstack/dnsmasq-dns-8449d68f4f-bbl7j"
Mar 11 09:17:53 crc kubenswrapper[4840]: I0311 09:17:53.847608 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1fd9aab6-5204-4f1d-9c99-36b158699e16-ovsdbserver-nb\") pod \"dnsmasq-dns-8449d68f4f-bbl7j\" (UID: \"1fd9aab6-5204-4f1d-9c99-36b158699e16\") " pod="openstack/dnsmasq-dns-8449d68f4f-bbl7j"
Mar 11 09:17:53 crc kubenswrapper[4840]: I0311 09:17:53.848325 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1fd9aab6-5204-4f1d-9c99-36b158699e16-dns-svc\") pod \"dnsmasq-dns-8449d68f4f-bbl7j\" (UID: \"1fd9aab6-5204-4f1d-9c99-36b158699e16\") " pod="openstack/dnsmasq-dns-8449d68f4f-bbl7j"
Mar 11 09:17:53 crc kubenswrapper[4840]: I0311 09:17:53.848640 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fd9aab6-5204-4f1d-9c99-36b158699e16-config\") pod \"dnsmasq-dns-8449d68f4f-bbl7j\" (UID: \"1fd9aab6-5204-4f1d-9c99-36b158699e16\") " pod="openstack/dnsmasq-dns-8449d68f4f-bbl7j"
Mar 11 09:17:53 crc kubenswrapper[4840]: I0311 09:17:53.849212 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1fd9aab6-5204-4f1d-9c99-36b158699e16-dns-swift-storage-0\") pod \"dnsmasq-dns-8449d68f4f-bbl7j\" (UID: \"1fd9aab6-5204-4f1d-9c99-36b158699e16\") " pod="openstack/dnsmasq-dns-8449d68f4f-bbl7j"
Mar 11 09:17:53 crc kubenswrapper[4840]: I0311 09:17:53.855151 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1fd9aab6-5204-4f1d-9c99-36b158699e16-ovsdbserver-nb\") pod \"dnsmasq-dns-8449d68f4f-bbl7j\" (UID: \"1fd9aab6-5204-4f1d-9c99-36b158699e16\") " pod="openstack/dnsmasq-dns-8449d68f4f-bbl7j"
Mar 11 09:17:53 crc kubenswrapper[4840]: I0311 09:17:53.855345 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-84c78b97c8-frfs9"
Mar 11 09:17:53 crc kubenswrapper[4840]: I0311 09:17:53.857556 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1fd9aab6-5204-4f1d-9c99-36b158699e16-ovsdbserver-sb\") pod \"dnsmasq-dns-8449d68f4f-bbl7j\" (UID: \"1fd9aab6-5204-4f1d-9c99-36b158699e16\") " pod="openstack/dnsmasq-dns-8449d68f4f-bbl7j"
Mar 11 09:17:53 crc kubenswrapper[4840]: I0311 09:17:53.864387 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-774b7849d6-lz2xl"]
Mar 11 09:17:53 crc kubenswrapper[4840]: I0311 09:17:53.866674 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-774b7849d6-lz2xl"
Mar 11 09:17:53 crc kubenswrapper[4840]: I0311 09:17:53.871962 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data"
Mar 11 09:17:53 crc kubenswrapper[4840]: I0311 09:17:53.900184 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hsqkz\" (UniqueName: \"kubernetes.io/projected/1fd9aab6-5204-4f1d-9c99-36b158699e16-kube-api-access-hsqkz\") pod \"dnsmasq-dns-8449d68f4f-bbl7j\" (UID: \"1fd9aab6-5204-4f1d-9c99-36b158699e16\") " pod="openstack/dnsmasq-dns-8449d68f4f-bbl7j"
Mar 11 09:17:53 crc kubenswrapper[4840]: I0311 09:17:53.900401 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-774b7849d6-lz2xl"]
Mar 11 09:17:53 crc kubenswrapper[4840]: I0311 09:17:53.949900 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b47f430-5dea-4714-bf85-8d6a7d6d8388-combined-ca-bundle\") pod \"barbican-api-774b7849d6-lz2xl\" (UID: \"5b47f430-5dea-4714-bf85-8d6a7d6d8388\") " pod="openstack/barbican-api-774b7849d6-lz2xl"
Mar 11 09:17:53 crc kubenswrapper[4840]: I0311 09:17:53.949968 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5b47f430-5dea-4714-bf85-8d6a7d6d8388-logs\") pod \"barbican-api-774b7849d6-lz2xl\" (UID: \"5b47f430-5dea-4714-bf85-8d6a7d6d8388\") " pod="openstack/barbican-api-774b7849d6-lz2xl"
Mar 11 09:17:53 crc kubenswrapper[4840]: I0311 09:17:53.950342 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5b47f430-5dea-4714-bf85-8d6a7d6d8388-config-data-custom\") pod \"barbican-api-774b7849d6-lz2xl\" (UID: \"5b47f430-5dea-4714-bf85-8d6a7d6d8388\") " pod="openstack/barbican-api-774b7849d6-lz2xl"
Mar 11 09:17:53 crc kubenswrapper[4840]: I0311 09:17:53.950506 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmjmg\" (UniqueName: \"kubernetes.io/projected/5b47f430-5dea-4714-bf85-8d6a7d6d8388-kube-api-access-kmjmg\") pod \"barbican-api-774b7849d6-lz2xl\" (UID: \"5b47f430-5dea-4714-bf85-8d6a7d6d8388\") " pod="openstack/barbican-api-774b7849d6-lz2xl"
Mar 11 09:17:53 crc kubenswrapper[4840]: I0311 09:17:53.950547 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b47f430-5dea-4714-bf85-8d6a7d6d8388-config-data\") pod \"barbican-api-774b7849d6-lz2xl\" (UID: \"5b47f430-5dea-4714-bf85-8d6a7d6d8388\") " pod="openstack/barbican-api-774b7849d6-lz2xl"
Mar 11 09:17:54 crc kubenswrapper[4840]: I0311 09:17:54.005693 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8449d68f4f-bbl7j"
Mar 11 09:17:54 crc kubenswrapper[4840]: I0311 09:17:54.058390 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b47f430-5dea-4714-bf85-8d6a7d6d8388-combined-ca-bundle\") pod \"barbican-api-774b7849d6-lz2xl\" (UID: \"5b47f430-5dea-4714-bf85-8d6a7d6d8388\") " pod="openstack/barbican-api-774b7849d6-lz2xl"
Mar 11 09:17:54 crc kubenswrapper[4840]: I0311 09:17:54.058489 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5b47f430-5dea-4714-bf85-8d6a7d6d8388-logs\") pod \"barbican-api-774b7849d6-lz2xl\" (UID: \"5b47f430-5dea-4714-bf85-8d6a7d6d8388\") " pod="openstack/barbican-api-774b7849d6-lz2xl"
Mar 11 09:17:54 crc kubenswrapper[4840]: I0311 09:17:54.058650 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5b47f430-5dea-4714-bf85-8d6a7d6d8388-config-data-custom\") pod \"barbican-api-774b7849d6-lz2xl\" (UID: \"5b47f430-5dea-4714-bf85-8d6a7d6d8388\") " pod="openstack/barbican-api-774b7849d6-lz2xl"
Mar 11 09:17:54 crc kubenswrapper[4840]: I0311 09:17:54.058768 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmjmg\" (UniqueName: \"kubernetes.io/projected/5b47f430-5dea-4714-bf85-8d6a7d6d8388-kube-api-access-kmjmg\") pod \"barbican-api-774b7849d6-lz2xl\" (UID: \"5b47f430-5dea-4714-bf85-8d6a7d6d8388\") " pod="openstack/barbican-api-774b7849d6-lz2xl"
Mar 11 09:17:54 crc kubenswrapper[4840]: I0311 09:17:54.058814 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b47f430-5dea-4714-bf85-8d6a7d6d8388-config-data\") pod \"barbican-api-774b7849d6-lz2xl\" (UID: \"5b47f430-5dea-4714-bf85-8d6a7d6d8388\") " pod="openstack/barbican-api-774b7849d6-lz2xl"
Mar 11 09:17:54 crc kubenswrapper[4840]: I0311 09:17:54.060155 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5b47f430-5dea-4714-bf85-8d6a7d6d8388-logs\") pod \"barbican-api-774b7849d6-lz2xl\" (UID: \"5b47f430-5dea-4714-bf85-8d6a7d6d8388\") " pod="openstack/barbican-api-774b7849d6-lz2xl"
Mar 11 09:17:54 crc kubenswrapper[4840]: I0311 09:17:54.064456 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5b47f430-5dea-4714-bf85-8d6a7d6d8388-config-data-custom\") pod \"barbican-api-774b7849d6-lz2xl\" (UID: \"5b47f430-5dea-4714-bf85-8d6a7d6d8388\") " pod="openstack/barbican-api-774b7849d6-lz2xl"
Mar 11 09:17:54 crc kubenswrapper[4840]: I0311 09:17:54.066481 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b47f430-5dea-4714-bf85-8d6a7d6d8388-config-data\") pod \"barbican-api-774b7849d6-lz2xl\" (UID: \"5b47f430-5dea-4714-bf85-8d6a7d6d8388\") " pod="openstack/barbican-api-774b7849d6-lz2xl"
Mar 11 09:17:54 crc kubenswrapper[4840]: I0311 09:17:54.073037 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b47f430-5dea-4714-bf85-8d6a7d6d8388-combined-ca-bundle\") pod \"barbican-api-774b7849d6-lz2xl\" (UID: \"5b47f430-5dea-4714-bf85-8d6a7d6d8388\") " pod="openstack/barbican-api-774b7849d6-lz2xl"
Mar 11 09:17:54 crc kubenswrapper[4840]: I0311 09:17:54.076453 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmjmg\" (UniqueName: \"kubernetes.io/projected/5b47f430-5dea-4714-bf85-8d6a7d6d8388-kube-api-access-kmjmg\") pod \"barbican-api-774b7849d6-lz2xl\" (UID: \"5b47f430-5dea-4714-bf85-8d6a7d6d8388\") " pod="openstack/barbican-api-774b7849d6-lz2xl"
Mar 11 09:17:54 crc kubenswrapper[4840]: I0311 09:17:54.254111 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-774b7849d6-lz2xl"
Mar 11 09:17:54 crc kubenswrapper[4840]: I0311 09:17:54.915447 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-db9f7b9c-cdnm7"]
Mar 11 09:17:55 crc kubenswrapper[4840]: I0311 09:17:55.044197 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-84c78b97c8-frfs9"]
Mar 11 09:17:55 crc kubenswrapper[4840]: W0311 09:17:55.059032 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd7acca3e_61e4_495b_adbf_36c435b4a7d2.slice/crio-0d6a73c7be414144b5b39d108da60d4f4d294a3e15759f0745cc1e91bf77052c WatchSource:0}: Error finding container 0d6a73c7be414144b5b39d108da60d4f4d294a3e15759f0745cc1e91bf77052c: Status 404 returned error can't find the container with id 0d6a73c7be414144b5b39d108da60d4f4d294a3e15759f0745cc1e91bf77052c
Mar 11 09:17:55 crc kubenswrapper[4840]: I0311 09:17:55.062132 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-774b7849d6-lz2xl"]
Mar 11 09:17:55 crc kubenswrapper[4840]: I0311 09:17:55.072115 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8449d68f4f-bbl7j"]
Mar 11 09:17:55 crc kubenswrapper[4840]: W0311 09:17:55.094487 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1fd9aab6_5204_4f1d_9c99_36b158699e16.slice/crio-b4294f14b88183e458e1f34ee48b3f743077409b6bd045b8bf386352c05a8a90 WatchSource:0}: Error finding container b4294f14b88183e458e1f34ee48b3f743077409b6bd045b8bf386352c05a8a90: Status 404 returned error can't find the container with id b4294f14b88183e458e1f34ee48b3f743077409b6bd045b8bf386352c05a8a90
Mar 11 09:17:55 crc kubenswrapper[4840]: I0311 09:17:55.277062 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-774b7849d6-lz2xl" event={"ID":"5b47f430-5dea-4714-bf85-8d6a7d6d8388","Type":"ContainerStarted","Data":"a2c6cb9cbf16b21baf6834ae16f3852fb3159bbc951acda1768b074516a50682"}
Mar 11 09:17:55 crc kubenswrapper[4840]: I0311 09:17:55.281866 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"02a9854a-9978-4414-972d-37c0b579b03b","Type":"ContainerStarted","Data":"5a7becb22d302e48a71701b42b9fa17d4ff1c8ff1d45846d695e6fcc991eb75c"}
Mar 11 09:17:55 crc kubenswrapper[4840]: I0311 09:17:55.282120 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Mar 11 09:17:55 crc kubenswrapper[4840]: I0311 09:17:55.282033 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="02a9854a-9978-4414-972d-37c0b579b03b" containerName="ceilometer-notification-agent" containerID="cri-o://06059cb8a2046225ba5bd791cd76894815640d133945e7292e8f7f4aa3ee6dd2" gracePeriod=30
Mar 11 09:17:55 crc kubenswrapper[4840]: I0311 09:17:55.281979 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="02a9854a-9978-4414-972d-37c0b579b03b" containerName="ceilometer-central-agent" containerID="cri-o://9188e168bdbca78a271d83dc12ba7a2d0a3e57c143bfede6dc8fa4f26e41a614" gracePeriod=30
Mar 11 09:17:55 crc kubenswrapper[4840]: I0311 09:17:55.282036 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="02a9854a-9978-4414-972d-37c0b579b03b" containerName="proxy-httpd" containerID="cri-o://5a7becb22d302e48a71701b42b9fa17d4ff1c8ff1d45846d695e6fcc991eb75c" gracePeriod=30
Mar 11 09:17:55 crc kubenswrapper[4840]: I0311 09:17:55.282068 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="02a9854a-9978-4414-972d-37c0b579b03b" containerName="sg-core" containerID="cri-o://8195ff056506e76b1783f1fea390441b636fd4acf55fad53349f25d7bc0caf3b"
gracePeriod=30 Mar 11 09:17:55 crc kubenswrapper[4840]: I0311 09:17:55.285397 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8449d68f4f-bbl7j" event={"ID":"1fd9aab6-5204-4f1d-9c99-36b158699e16","Type":"ContainerStarted","Data":"b4294f14b88183e458e1f34ee48b3f743077409b6bd045b8bf386352c05a8a90"} Mar 11 09:17:55 crc kubenswrapper[4840]: I0311 09:17:55.297004 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-db9f7b9c-cdnm7" event={"ID":"0ab14b76-8d70-44f2-b986-b3d600c73b60","Type":"ContainerStarted","Data":"07de301082e0a2bae7b73eca53225371682f43e11308a0ffea360e54fd0fc0a5"} Mar 11 09:17:55 crc kubenswrapper[4840]: I0311 09:17:55.301242 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-84c78b97c8-frfs9" event={"ID":"d7acca3e-61e4-495b-adbf-36c435b4a7d2","Type":"ContainerStarted","Data":"0d6a73c7be414144b5b39d108da60d4f4d294a3e15759f0745cc1e91bf77052c"} Mar 11 09:17:55 crc kubenswrapper[4840]: I0311 09:17:55.305898 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.819748879 podStartE2EDuration="44.305872328s" podCreationTimestamp="2026-03-11 09:17:11 +0000 UTC" firstStartedPulling="2026-03-11 09:17:12.958200892 +0000 UTC m=+1231.623870717" lastFinishedPulling="2026-03-11 09:17:54.444324351 +0000 UTC m=+1273.109994166" observedRunningTime="2026-03-11 09:17:55.303361955 +0000 UTC m=+1273.969031770" watchObservedRunningTime="2026-03-11 09:17:55.305872328 +0000 UTC m=+1273.971542143" Mar 11 09:17:56 crc kubenswrapper[4840]: I0311 09:17:56.348699 4840 generic.go:334] "Generic (PLEG): container finished" podID="02a9854a-9978-4414-972d-37c0b579b03b" containerID="5a7becb22d302e48a71701b42b9fa17d4ff1c8ff1d45846d695e6fcc991eb75c" exitCode=0 Mar 11 09:17:56 crc kubenswrapper[4840]: I0311 09:17:56.349443 4840 generic.go:334] "Generic (PLEG): container finished" 
podID="02a9854a-9978-4414-972d-37c0b579b03b" containerID="8195ff056506e76b1783f1fea390441b636fd4acf55fad53349f25d7bc0caf3b" exitCode=2 Mar 11 09:17:56 crc kubenswrapper[4840]: I0311 09:17:56.349454 4840 generic.go:334] "Generic (PLEG): container finished" podID="02a9854a-9978-4414-972d-37c0b579b03b" containerID="9188e168bdbca78a271d83dc12ba7a2d0a3e57c143bfede6dc8fa4f26e41a614" exitCode=0 Mar 11 09:17:56 crc kubenswrapper[4840]: I0311 09:17:56.349612 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"02a9854a-9978-4414-972d-37c0b579b03b","Type":"ContainerDied","Data":"5a7becb22d302e48a71701b42b9fa17d4ff1c8ff1d45846d695e6fcc991eb75c"} Mar 11 09:17:56 crc kubenswrapper[4840]: I0311 09:17:56.349648 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"02a9854a-9978-4414-972d-37c0b579b03b","Type":"ContainerDied","Data":"8195ff056506e76b1783f1fea390441b636fd4acf55fad53349f25d7bc0caf3b"} Mar 11 09:17:56 crc kubenswrapper[4840]: I0311 09:17:56.349665 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"02a9854a-9978-4414-972d-37c0b579b03b","Type":"ContainerDied","Data":"9188e168bdbca78a271d83dc12ba7a2d0a3e57c143bfede6dc8fa4f26e41a614"} Mar 11 09:17:56 crc kubenswrapper[4840]: I0311 09:17:56.354594 4840 generic.go:334] "Generic (PLEG): container finished" podID="1fd9aab6-5204-4f1d-9c99-36b158699e16" containerID="65fc315784ea8f573b907740def2d54633d43c279ef6e6cc749e0e9040abeff1" exitCode=0 Mar 11 09:17:56 crc kubenswrapper[4840]: I0311 09:17:56.354670 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8449d68f4f-bbl7j" event={"ID":"1fd9aab6-5204-4f1d-9c99-36b158699e16","Type":"ContainerDied","Data":"65fc315784ea8f573b907740def2d54633d43c279ef6e6cc749e0e9040abeff1"} Mar 11 09:17:56 crc kubenswrapper[4840]: I0311 09:17:56.358553 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cinder-db-sync-k7tgg" event={"ID":"adf5efde-cb19-4870-9c8e-e7e139523238","Type":"ContainerStarted","Data":"3e6fbcf31ada0f36ed6f518ad3411ae695bfb221a54e277c0e2faccd63610af3"} Mar 11 09:17:56 crc kubenswrapper[4840]: I0311 09:17:56.362836 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-774b7849d6-lz2xl" event={"ID":"5b47f430-5dea-4714-bf85-8d6a7d6d8388","Type":"ContainerStarted","Data":"541f407730550efa0d1bb50aafd0a280133040f1dec1250d8312f5614a64fc5d"} Mar 11 09:17:56 crc kubenswrapper[4840]: I0311 09:17:56.362886 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-774b7849d6-lz2xl" event={"ID":"5b47f430-5dea-4714-bf85-8d6a7d6d8388","Type":"ContainerStarted","Data":"08df7213186b32f6a70402ce4911cf6e5519b68d4e6d4d7512dd363ce46eaea7"} Mar 11 09:17:56 crc kubenswrapper[4840]: I0311 09:17:56.363101 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-774b7849d6-lz2xl" Mar 11 09:17:56 crc kubenswrapper[4840]: I0311 09:17:56.364430 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-774b7849d6-lz2xl" Mar 11 09:17:56 crc kubenswrapper[4840]: I0311 09:17:56.406139 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-774b7849d6-lz2xl" podStartSLOduration=3.406118299 podStartE2EDuration="3.406118299s" podCreationTimestamp="2026-03-11 09:17:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:17:56.397765679 +0000 UTC m=+1275.063435504" watchObservedRunningTime="2026-03-11 09:17:56.406118299 +0000 UTC m=+1275.071788114" Mar 11 09:17:56 crc kubenswrapper[4840]: I0311 09:17:56.440146 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-k7tgg" podStartSLOduration=4.158130798 podStartE2EDuration="45.440121764s" 
podCreationTimestamp="2026-03-11 09:17:11 +0000 UTC" firstStartedPulling="2026-03-11 09:17:13.179811585 +0000 UTC m=+1231.845481400" lastFinishedPulling="2026-03-11 09:17:54.461802551 +0000 UTC m=+1273.127472366" observedRunningTime="2026-03-11 09:17:56.430341388 +0000 UTC m=+1275.096011203" watchObservedRunningTime="2026-03-11 09:17:56.440121764 +0000 UTC m=+1275.105791579" Mar 11 09:17:56 crc kubenswrapper[4840]: I0311 09:17:56.560201 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-7dcc97bc9b-l7z2g"] Mar 11 09:17:56 crc kubenswrapper[4840]: I0311 09:17:56.564373 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7dcc97bc9b-l7z2g" Mar 11 09:17:56 crc kubenswrapper[4840]: I0311 09:17:56.568180 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Mar 11 09:17:56 crc kubenswrapper[4840]: I0311 09:17:56.568239 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Mar 11 09:17:56 crc kubenswrapper[4840]: I0311 09:17:56.576538 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7dcc97bc9b-l7z2g"] Mar 11 09:17:56 crc kubenswrapper[4840]: I0311 09:17:56.717766 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rl84b\" (UniqueName: \"kubernetes.io/projected/15a50bea-c32e-4aed-8fd2-7289e1694f6e-kube-api-access-rl84b\") pod \"barbican-api-7dcc97bc9b-l7z2g\" (UID: \"15a50bea-c32e-4aed-8fd2-7289e1694f6e\") " pod="openstack/barbican-api-7dcc97bc9b-l7z2g" Mar 11 09:17:56 crc kubenswrapper[4840]: I0311 09:17:56.717818 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/15a50bea-c32e-4aed-8fd2-7289e1694f6e-internal-tls-certs\") pod \"barbican-api-7dcc97bc9b-l7z2g\" (UID: 
\"15a50bea-c32e-4aed-8fd2-7289e1694f6e\") " pod="openstack/barbican-api-7dcc97bc9b-l7z2g" Mar 11 09:17:56 crc kubenswrapper[4840]: I0311 09:17:56.717863 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/15a50bea-c32e-4aed-8fd2-7289e1694f6e-config-data-custom\") pod \"barbican-api-7dcc97bc9b-l7z2g\" (UID: \"15a50bea-c32e-4aed-8fd2-7289e1694f6e\") " pod="openstack/barbican-api-7dcc97bc9b-l7z2g" Mar 11 09:17:56 crc kubenswrapper[4840]: I0311 09:17:56.717892 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15a50bea-c32e-4aed-8fd2-7289e1694f6e-combined-ca-bundle\") pod \"barbican-api-7dcc97bc9b-l7z2g\" (UID: \"15a50bea-c32e-4aed-8fd2-7289e1694f6e\") " pod="openstack/barbican-api-7dcc97bc9b-l7z2g" Mar 11 09:17:56 crc kubenswrapper[4840]: I0311 09:17:56.717975 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/15a50bea-c32e-4aed-8fd2-7289e1694f6e-public-tls-certs\") pod \"barbican-api-7dcc97bc9b-l7z2g\" (UID: \"15a50bea-c32e-4aed-8fd2-7289e1694f6e\") " pod="openstack/barbican-api-7dcc97bc9b-l7z2g" Mar 11 09:17:56 crc kubenswrapper[4840]: I0311 09:17:56.718017 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15a50bea-c32e-4aed-8fd2-7289e1694f6e-config-data\") pod \"barbican-api-7dcc97bc9b-l7z2g\" (UID: \"15a50bea-c32e-4aed-8fd2-7289e1694f6e\") " pod="openstack/barbican-api-7dcc97bc9b-l7z2g" Mar 11 09:17:56 crc kubenswrapper[4840]: I0311 09:17:56.718062 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/15a50bea-c32e-4aed-8fd2-7289e1694f6e-logs\") pod 
\"barbican-api-7dcc97bc9b-l7z2g\" (UID: \"15a50bea-c32e-4aed-8fd2-7289e1694f6e\") " pod="openstack/barbican-api-7dcc97bc9b-l7z2g" Mar 11 09:17:56 crc kubenswrapper[4840]: I0311 09:17:56.821093 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/15a50bea-c32e-4aed-8fd2-7289e1694f6e-config-data-custom\") pod \"barbican-api-7dcc97bc9b-l7z2g\" (UID: \"15a50bea-c32e-4aed-8fd2-7289e1694f6e\") " pod="openstack/barbican-api-7dcc97bc9b-l7z2g" Mar 11 09:17:56 crc kubenswrapper[4840]: I0311 09:17:56.821143 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15a50bea-c32e-4aed-8fd2-7289e1694f6e-combined-ca-bundle\") pod \"barbican-api-7dcc97bc9b-l7z2g\" (UID: \"15a50bea-c32e-4aed-8fd2-7289e1694f6e\") " pod="openstack/barbican-api-7dcc97bc9b-l7z2g" Mar 11 09:17:56 crc kubenswrapper[4840]: I0311 09:17:56.821211 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/15a50bea-c32e-4aed-8fd2-7289e1694f6e-public-tls-certs\") pod \"barbican-api-7dcc97bc9b-l7z2g\" (UID: \"15a50bea-c32e-4aed-8fd2-7289e1694f6e\") " pod="openstack/barbican-api-7dcc97bc9b-l7z2g" Mar 11 09:17:56 crc kubenswrapper[4840]: I0311 09:17:56.821256 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15a50bea-c32e-4aed-8fd2-7289e1694f6e-config-data\") pod \"barbican-api-7dcc97bc9b-l7z2g\" (UID: \"15a50bea-c32e-4aed-8fd2-7289e1694f6e\") " pod="openstack/barbican-api-7dcc97bc9b-l7z2g" Mar 11 09:17:56 crc kubenswrapper[4840]: I0311 09:17:56.821300 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/15a50bea-c32e-4aed-8fd2-7289e1694f6e-logs\") pod \"barbican-api-7dcc97bc9b-l7z2g\" (UID: 
\"15a50bea-c32e-4aed-8fd2-7289e1694f6e\") " pod="openstack/barbican-api-7dcc97bc9b-l7z2g" Mar 11 09:17:56 crc kubenswrapper[4840]: I0311 09:17:56.821332 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rl84b\" (UniqueName: \"kubernetes.io/projected/15a50bea-c32e-4aed-8fd2-7289e1694f6e-kube-api-access-rl84b\") pod \"barbican-api-7dcc97bc9b-l7z2g\" (UID: \"15a50bea-c32e-4aed-8fd2-7289e1694f6e\") " pod="openstack/barbican-api-7dcc97bc9b-l7z2g" Mar 11 09:17:56 crc kubenswrapper[4840]: I0311 09:17:56.821360 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/15a50bea-c32e-4aed-8fd2-7289e1694f6e-internal-tls-certs\") pod \"barbican-api-7dcc97bc9b-l7z2g\" (UID: \"15a50bea-c32e-4aed-8fd2-7289e1694f6e\") " pod="openstack/barbican-api-7dcc97bc9b-l7z2g" Mar 11 09:17:56 crc kubenswrapper[4840]: I0311 09:17:56.822246 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/15a50bea-c32e-4aed-8fd2-7289e1694f6e-logs\") pod \"barbican-api-7dcc97bc9b-l7z2g\" (UID: \"15a50bea-c32e-4aed-8fd2-7289e1694f6e\") " pod="openstack/barbican-api-7dcc97bc9b-l7z2g" Mar 11 09:17:56 crc kubenswrapper[4840]: I0311 09:17:56.825409 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/15a50bea-c32e-4aed-8fd2-7289e1694f6e-public-tls-certs\") pod \"barbican-api-7dcc97bc9b-l7z2g\" (UID: \"15a50bea-c32e-4aed-8fd2-7289e1694f6e\") " pod="openstack/barbican-api-7dcc97bc9b-l7z2g" Mar 11 09:17:56 crc kubenswrapper[4840]: I0311 09:17:56.826195 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15a50bea-c32e-4aed-8fd2-7289e1694f6e-combined-ca-bundle\") pod \"barbican-api-7dcc97bc9b-l7z2g\" (UID: \"15a50bea-c32e-4aed-8fd2-7289e1694f6e\") " 
pod="openstack/barbican-api-7dcc97bc9b-l7z2g" Mar 11 09:17:56 crc kubenswrapper[4840]: I0311 09:17:56.826521 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/15a50bea-c32e-4aed-8fd2-7289e1694f6e-config-data-custom\") pod \"barbican-api-7dcc97bc9b-l7z2g\" (UID: \"15a50bea-c32e-4aed-8fd2-7289e1694f6e\") " pod="openstack/barbican-api-7dcc97bc9b-l7z2g" Mar 11 09:17:56 crc kubenswrapper[4840]: I0311 09:17:56.833708 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15a50bea-c32e-4aed-8fd2-7289e1694f6e-config-data\") pod \"barbican-api-7dcc97bc9b-l7z2g\" (UID: \"15a50bea-c32e-4aed-8fd2-7289e1694f6e\") " pod="openstack/barbican-api-7dcc97bc9b-l7z2g" Mar 11 09:17:56 crc kubenswrapper[4840]: I0311 09:17:56.834064 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/15a50bea-c32e-4aed-8fd2-7289e1694f6e-internal-tls-certs\") pod \"barbican-api-7dcc97bc9b-l7z2g\" (UID: \"15a50bea-c32e-4aed-8fd2-7289e1694f6e\") " pod="openstack/barbican-api-7dcc97bc9b-l7z2g" Mar 11 09:17:56 crc kubenswrapper[4840]: I0311 09:17:56.839770 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rl84b\" (UniqueName: \"kubernetes.io/projected/15a50bea-c32e-4aed-8fd2-7289e1694f6e-kube-api-access-rl84b\") pod \"barbican-api-7dcc97bc9b-l7z2g\" (UID: \"15a50bea-c32e-4aed-8fd2-7289e1694f6e\") " pod="openstack/barbican-api-7dcc97bc9b-l7z2g" Mar 11 09:17:56 crc kubenswrapper[4840]: I0311 09:17:56.894001 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7dcc97bc9b-l7z2g" Mar 11 09:17:57 crc kubenswrapper[4840]: I0311 09:17:57.377707 4840 generic.go:334] "Generic (PLEG): container finished" podID="02a9854a-9978-4414-972d-37c0b579b03b" containerID="06059cb8a2046225ba5bd791cd76894815640d133945e7292e8f7f4aa3ee6dd2" exitCode=0 Mar 11 09:17:57 crc kubenswrapper[4840]: I0311 09:17:57.379263 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"02a9854a-9978-4414-972d-37c0b579b03b","Type":"ContainerDied","Data":"06059cb8a2046225ba5bd791cd76894815640d133945e7292e8f7f4aa3ee6dd2"} Mar 11 09:17:57 crc kubenswrapper[4840]: I0311 09:17:57.590140 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 11 09:17:57 crc kubenswrapper[4840]: I0311 09:17:57.744195 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02a9854a-9978-4414-972d-37c0b579b03b-combined-ca-bundle\") pod \"02a9854a-9978-4414-972d-37c0b579b03b\" (UID: \"02a9854a-9978-4414-972d-37c0b579b03b\") " Mar 11 09:17:57 crc kubenswrapper[4840]: I0311 09:17:57.744255 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/02a9854a-9978-4414-972d-37c0b579b03b-scripts\") pod \"02a9854a-9978-4414-972d-37c0b579b03b\" (UID: \"02a9854a-9978-4414-972d-37c0b579b03b\") " Mar 11 09:17:57 crc kubenswrapper[4840]: I0311 09:17:57.744310 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkj7g\" (UniqueName: \"kubernetes.io/projected/02a9854a-9978-4414-972d-37c0b579b03b-kube-api-access-tkj7g\") pod \"02a9854a-9978-4414-972d-37c0b579b03b\" (UID: \"02a9854a-9978-4414-972d-37c0b579b03b\") " Mar 11 09:17:57 crc kubenswrapper[4840]: I0311 09:17:57.744342 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/02a9854a-9978-4414-972d-37c0b579b03b-run-httpd\") pod \"02a9854a-9978-4414-972d-37c0b579b03b\" (UID: \"02a9854a-9978-4414-972d-37c0b579b03b\") " Mar 11 09:17:57 crc kubenswrapper[4840]: I0311 09:17:57.744392 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02a9854a-9978-4414-972d-37c0b579b03b-config-data\") pod \"02a9854a-9978-4414-972d-37c0b579b03b\" (UID: \"02a9854a-9978-4414-972d-37c0b579b03b\") " Mar 11 09:17:57 crc kubenswrapper[4840]: I0311 09:17:57.744712 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/02a9854a-9978-4414-972d-37c0b579b03b-sg-core-conf-yaml\") pod \"02a9854a-9978-4414-972d-37c0b579b03b\" (UID: \"02a9854a-9978-4414-972d-37c0b579b03b\") " Mar 11 09:17:57 crc kubenswrapper[4840]: I0311 09:17:57.744803 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/02a9854a-9978-4414-972d-37c0b579b03b-log-httpd\") pod \"02a9854a-9978-4414-972d-37c0b579b03b\" (UID: \"02a9854a-9978-4414-972d-37c0b579b03b\") " Mar 11 09:17:57 crc kubenswrapper[4840]: I0311 09:17:57.745620 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02a9854a-9978-4414-972d-37c0b579b03b-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "02a9854a-9978-4414-972d-37c0b579b03b" (UID: "02a9854a-9978-4414-972d-37c0b579b03b"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:17:57 crc kubenswrapper[4840]: I0311 09:17:57.745771 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02a9854a-9978-4414-972d-37c0b579b03b-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "02a9854a-9978-4414-972d-37c0b579b03b" (UID: "02a9854a-9978-4414-972d-37c0b579b03b"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:17:57 crc kubenswrapper[4840]: I0311 09:17:57.749450 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02a9854a-9978-4414-972d-37c0b579b03b-scripts" (OuterVolumeSpecName: "scripts") pod "02a9854a-9978-4414-972d-37c0b579b03b" (UID: "02a9854a-9978-4414-972d-37c0b579b03b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:17:57 crc kubenswrapper[4840]: I0311 09:17:57.749576 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02a9854a-9978-4414-972d-37c0b579b03b-kube-api-access-tkj7g" (OuterVolumeSpecName: "kube-api-access-tkj7g") pod "02a9854a-9978-4414-972d-37c0b579b03b" (UID: "02a9854a-9978-4414-972d-37c0b579b03b"). InnerVolumeSpecName "kube-api-access-tkj7g". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:17:57 crc kubenswrapper[4840]: I0311 09:17:57.770075 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02a9854a-9978-4414-972d-37c0b579b03b-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "02a9854a-9978-4414-972d-37c0b579b03b" (UID: "02a9854a-9978-4414-972d-37c0b579b03b"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:17:57 crc kubenswrapper[4840]: I0311 09:17:57.846984 4840 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/02a9854a-9978-4414-972d-37c0b579b03b-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 11 09:17:57 crc kubenswrapper[4840]: I0311 09:17:57.847011 4840 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/02a9854a-9978-4414-972d-37c0b579b03b-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 11 09:17:57 crc kubenswrapper[4840]: I0311 09:17:57.847023 4840 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/02a9854a-9978-4414-972d-37c0b579b03b-scripts\") on node \"crc\" DevicePath \"\"" Mar 11 09:17:57 crc kubenswrapper[4840]: I0311 09:17:57.847034 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tkj7g\" (UniqueName: \"kubernetes.io/projected/02a9854a-9978-4414-972d-37c0b579b03b-kube-api-access-tkj7g\") on node \"crc\" DevicePath \"\"" Mar 11 09:17:57 crc kubenswrapper[4840]: I0311 09:17:57.847045 4840 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/02a9854a-9978-4414-972d-37c0b579b03b-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 11 09:17:57 crc kubenswrapper[4840]: I0311 09:17:57.847399 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02a9854a-9978-4414-972d-37c0b579b03b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "02a9854a-9978-4414-972d-37c0b579b03b" (UID: "02a9854a-9978-4414-972d-37c0b579b03b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:17:57 crc kubenswrapper[4840]: I0311 09:17:57.850054 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02a9854a-9978-4414-972d-37c0b579b03b-config-data" (OuterVolumeSpecName: "config-data") pod "02a9854a-9978-4414-972d-37c0b579b03b" (UID: "02a9854a-9978-4414-972d-37c0b579b03b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:17:57 crc kubenswrapper[4840]: I0311 09:17:57.852385 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7dcc97bc9b-l7z2g"] Mar 11 09:17:57 crc kubenswrapper[4840]: W0311 09:17:57.855892 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod15a50bea_c32e_4aed_8fd2_7289e1694f6e.slice/crio-0e08d48e7eb1b3838bce9bb753d5e0af02f3c8a1f8c6831a0528536d3fa724ad WatchSource:0}: Error finding container 0e08d48e7eb1b3838bce9bb753d5e0af02f3c8a1f8c6831a0528536d3fa724ad: Status 404 returned error can't find the container with id 0e08d48e7eb1b3838bce9bb753d5e0af02f3c8a1f8c6831a0528536d3fa724ad Mar 11 09:17:57 crc kubenswrapper[4840]: I0311 09:17:57.948929 4840 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02a9854a-9978-4414-972d-37c0b579b03b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 09:17:57 crc kubenswrapper[4840]: I0311 09:17:57.948958 4840 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02a9854a-9978-4414-972d-37c0b579b03b-config-data\") on node \"crc\" DevicePath \"\"" Mar 11 09:17:58 crc kubenswrapper[4840]: I0311 09:17:58.390243 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-84c78b97c8-frfs9" 
event={"ID":"d7acca3e-61e4-495b-adbf-36c435b4a7d2","Type":"ContainerStarted","Data":"7ee049965601ca58aa215725536bdf7c36fb49e30446ec4221604a56f4d368d1"} Mar 11 09:17:58 crc kubenswrapper[4840]: I0311 09:17:58.390476 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-84c78b97c8-frfs9" event={"ID":"d7acca3e-61e4-495b-adbf-36c435b4a7d2","Type":"ContainerStarted","Data":"33e49e14fd2afa823c560d68be5b5e4ad273e3e82868e5aba42b52ba56655a7e"} Mar 11 09:17:58 crc kubenswrapper[4840]: I0311 09:17:58.393358 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"02a9854a-9978-4414-972d-37c0b579b03b","Type":"ContainerDied","Data":"ee82f8bf6300d739d7f47e43679cfc6eb3d0f04478c9feb1e89fdac90b54a890"} Mar 11 09:17:58 crc kubenswrapper[4840]: I0311 09:17:58.393411 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 11 09:17:58 crc kubenswrapper[4840]: I0311 09:17:58.393430 4840 scope.go:117] "RemoveContainer" containerID="5a7becb22d302e48a71701b42b9fa17d4ff1c8ff1d45846d695e6fcc991eb75c" Mar 11 09:17:58 crc kubenswrapper[4840]: I0311 09:17:58.398077 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8449d68f4f-bbl7j" event={"ID":"1fd9aab6-5204-4f1d-9c99-36b158699e16","Type":"ContainerStarted","Data":"33afe53b124b9cef5893289cbebac257729d5d4e64d0f25f8a6c9f784b19e162"} Mar 11 09:17:58 crc kubenswrapper[4840]: I0311 09:17:58.398614 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8449d68f4f-bbl7j" Mar 11 09:17:58 crc kubenswrapper[4840]: I0311 09:17:58.402512 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-db9f7b9c-cdnm7" event={"ID":"0ab14b76-8d70-44f2-b986-b3d600c73b60","Type":"ContainerStarted","Data":"403197cd6e2f799a8e5d55b29a90f95a1d072287400393d0ce075725778b4fe2"} Mar 11 09:17:58 crc kubenswrapper[4840]: I0311 
09:17:58.402574 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-db9f7b9c-cdnm7" event={"ID":"0ab14b76-8d70-44f2-b986-b3d600c73b60","Type":"ContainerStarted","Data":"0a86a4d07fba011f02807e2ed6cc060d80c18aeab359b46ed4a060d9bc9539eb"} Mar 11 09:17:58 crc kubenswrapper[4840]: I0311 09:17:58.413059 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-84c78b97c8-frfs9" podStartSLOduration=3.21847417 podStartE2EDuration="5.413039093s" podCreationTimestamp="2026-03-11 09:17:53 +0000 UTC" firstStartedPulling="2026-03-11 09:17:55.073815622 +0000 UTC m=+1273.739485427" lastFinishedPulling="2026-03-11 09:17:57.268380535 +0000 UTC m=+1275.934050350" observedRunningTime="2026-03-11 09:17:58.409770581 +0000 UTC m=+1277.075440396" watchObservedRunningTime="2026-03-11 09:17:58.413039093 +0000 UTC m=+1277.078708918" Mar 11 09:17:58 crc kubenswrapper[4840]: I0311 09:17:58.421340 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7dcc97bc9b-l7z2g" event={"ID":"15a50bea-c32e-4aed-8fd2-7289e1694f6e","Type":"ContainerStarted","Data":"6915adfd19dd7500fc5f1e3d97669f01dd9e52bc0540e0cbfa6ee88e4556faeb"} Mar 11 09:17:58 crc kubenswrapper[4840]: I0311 09:17:58.423056 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7dcc97bc9b-l7z2g" event={"ID":"15a50bea-c32e-4aed-8fd2-7289e1694f6e","Type":"ContainerStarted","Data":"70a8af150a750503de4de737257241b8eae85fdd8f6880b4e54352c0730d7435"} Mar 11 09:17:58 crc kubenswrapper[4840]: I0311 09:17:58.423076 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7dcc97bc9b-l7z2g" event={"ID":"15a50bea-c32e-4aed-8fd2-7289e1694f6e","Type":"ContainerStarted","Data":"0e08d48e7eb1b3838bce9bb753d5e0af02f3c8a1f8c6831a0528536d3fa724ad"} Mar 11 09:17:58 crc kubenswrapper[4840]: I0311 09:17:58.423096 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/barbican-api-7dcc97bc9b-l7z2g" Mar 11 09:17:58 crc kubenswrapper[4840]: I0311 09:17:58.423111 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7dcc97bc9b-l7z2g" Mar 11 09:17:58 crc kubenswrapper[4840]: I0311 09:17:58.430082 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-db9f7b9c-cdnm7" podStartSLOduration=3.083825994 podStartE2EDuration="5.430063391s" podCreationTimestamp="2026-03-11 09:17:53 +0000 UTC" firstStartedPulling="2026-03-11 09:17:54.939073273 +0000 UTC m=+1273.604743078" lastFinishedPulling="2026-03-11 09:17:57.28531066 +0000 UTC m=+1275.950980475" observedRunningTime="2026-03-11 09:17:58.42962756 +0000 UTC m=+1277.095297365" watchObservedRunningTime="2026-03-11 09:17:58.430063391 +0000 UTC m=+1277.095733206" Mar 11 09:17:58 crc kubenswrapper[4840]: I0311 09:17:58.431770 4840 scope.go:117] "RemoveContainer" containerID="8195ff056506e76b1783f1fea390441b636fd4acf55fad53349f25d7bc0caf3b" Mar 11 09:17:58 crc kubenswrapper[4840]: I0311 09:17:58.487197 4840 scope.go:117] "RemoveContainer" containerID="06059cb8a2046225ba5bd791cd76894815640d133945e7292e8f7f4aa3ee6dd2" Mar 11 09:17:58 crc kubenswrapper[4840]: I0311 09:17:58.510527 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 11 09:17:58 crc kubenswrapper[4840]: I0311 09:17:58.541175 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 11 09:17:58 crc kubenswrapper[4840]: I0311 09:17:58.568422 4840 scope.go:117] "RemoveContainer" containerID="9188e168bdbca78a271d83dc12ba7a2d0a3e57c143bfede6dc8fa4f26e41a614" Mar 11 09:17:58 crc kubenswrapper[4840]: I0311 09:17:58.577857 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 11 09:17:58 crc kubenswrapper[4840]: E0311 09:17:58.578361 4840 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="02a9854a-9978-4414-972d-37c0b579b03b" containerName="ceilometer-notification-agent" Mar 11 09:17:58 crc kubenswrapper[4840]: I0311 09:17:58.578387 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="02a9854a-9978-4414-972d-37c0b579b03b" containerName="ceilometer-notification-agent" Mar 11 09:17:58 crc kubenswrapper[4840]: E0311 09:17:58.578438 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02a9854a-9978-4414-972d-37c0b579b03b" containerName="proxy-httpd" Mar 11 09:17:58 crc kubenswrapper[4840]: I0311 09:17:58.578449 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="02a9854a-9978-4414-972d-37c0b579b03b" containerName="proxy-httpd" Mar 11 09:17:58 crc kubenswrapper[4840]: E0311 09:17:58.578547 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02a9854a-9978-4414-972d-37c0b579b03b" containerName="ceilometer-central-agent" Mar 11 09:17:58 crc kubenswrapper[4840]: I0311 09:17:58.578559 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="02a9854a-9978-4414-972d-37c0b579b03b" containerName="ceilometer-central-agent" Mar 11 09:17:58 crc kubenswrapper[4840]: E0311 09:17:58.578579 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02a9854a-9978-4414-972d-37c0b579b03b" containerName="sg-core" Mar 11 09:17:58 crc kubenswrapper[4840]: I0311 09:17:58.578589 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="02a9854a-9978-4414-972d-37c0b579b03b" containerName="sg-core" Mar 11 09:17:58 crc kubenswrapper[4840]: I0311 09:17:58.578812 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="02a9854a-9978-4414-972d-37c0b579b03b" containerName="ceilometer-central-agent" Mar 11 09:17:58 crc kubenswrapper[4840]: I0311 09:17:58.578835 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="02a9854a-9978-4414-972d-37c0b579b03b" containerName="proxy-httpd" Mar 11 09:17:58 crc kubenswrapper[4840]: I0311 09:17:58.578860 4840 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="02a9854a-9978-4414-972d-37c0b579b03b" containerName="ceilometer-notification-agent" Mar 11 09:17:58 crc kubenswrapper[4840]: I0311 09:17:58.578873 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="02a9854a-9978-4414-972d-37c0b579b03b" containerName="sg-core" Mar 11 09:17:58 crc kubenswrapper[4840]: I0311 09:17:58.581275 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 11 09:17:58 crc kubenswrapper[4840]: I0311 09:17:58.587565 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 11 09:17:58 crc kubenswrapper[4840]: I0311 09:17:58.587578 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 11 09:17:58 crc kubenswrapper[4840]: I0311 09:17:58.600715 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8449d68f4f-bbl7j" podStartSLOduration=5.600691271 podStartE2EDuration="5.600691271s" podCreationTimestamp="2026-03-11 09:17:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:17:58.506622615 +0000 UTC m=+1277.172292430" watchObservedRunningTime="2026-03-11 09:17:58.600691271 +0000 UTC m=+1277.266361086" Mar 11 09:17:58 crc kubenswrapper[4840]: I0311 09:17:58.613651 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 11 09:17:58 crc kubenswrapper[4840]: I0311 09:17:58.618618 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-7dcc97bc9b-l7z2g" podStartSLOduration=2.618604312 podStartE2EDuration="2.618604312s" podCreationTimestamp="2026-03-11 09:17:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:17:58.540779324 +0000 
UTC m=+1277.206449149" watchObservedRunningTime="2026-03-11 09:17:58.618604312 +0000 UTC m=+1277.284274117" Mar 11 09:17:58 crc kubenswrapper[4840]: I0311 09:17:58.672965 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hshwj\" (UniqueName: \"kubernetes.io/projected/74300cf6-cc28-45d0-ae1a-4099210bcbf1-kube-api-access-hshwj\") pod \"ceilometer-0\" (UID: \"74300cf6-cc28-45d0-ae1a-4099210bcbf1\") " pod="openstack/ceilometer-0" Mar 11 09:17:58 crc kubenswrapper[4840]: I0311 09:17:58.673043 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74300cf6-cc28-45d0-ae1a-4099210bcbf1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"74300cf6-cc28-45d0-ae1a-4099210bcbf1\") " pod="openstack/ceilometer-0" Mar 11 09:17:58 crc kubenswrapper[4840]: I0311 09:17:58.673074 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74300cf6-cc28-45d0-ae1a-4099210bcbf1-config-data\") pod \"ceilometer-0\" (UID: \"74300cf6-cc28-45d0-ae1a-4099210bcbf1\") " pod="openstack/ceilometer-0" Mar 11 09:17:58 crc kubenswrapper[4840]: I0311 09:17:58.673093 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74300cf6-cc28-45d0-ae1a-4099210bcbf1-scripts\") pod \"ceilometer-0\" (UID: \"74300cf6-cc28-45d0-ae1a-4099210bcbf1\") " pod="openstack/ceilometer-0" Mar 11 09:17:58 crc kubenswrapper[4840]: I0311 09:17:58.673146 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/74300cf6-cc28-45d0-ae1a-4099210bcbf1-log-httpd\") pod \"ceilometer-0\" (UID: \"74300cf6-cc28-45d0-ae1a-4099210bcbf1\") " pod="openstack/ceilometer-0" Mar 11 09:17:58 crc 
kubenswrapper[4840]: I0311 09:17:58.673199 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/74300cf6-cc28-45d0-ae1a-4099210bcbf1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"74300cf6-cc28-45d0-ae1a-4099210bcbf1\") " pod="openstack/ceilometer-0" Mar 11 09:17:58 crc kubenswrapper[4840]: I0311 09:17:58.673228 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/74300cf6-cc28-45d0-ae1a-4099210bcbf1-run-httpd\") pod \"ceilometer-0\" (UID: \"74300cf6-cc28-45d0-ae1a-4099210bcbf1\") " pod="openstack/ceilometer-0" Mar 11 09:17:58 crc kubenswrapper[4840]: I0311 09:17:58.774826 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/74300cf6-cc28-45d0-ae1a-4099210bcbf1-log-httpd\") pod \"ceilometer-0\" (UID: \"74300cf6-cc28-45d0-ae1a-4099210bcbf1\") " pod="openstack/ceilometer-0" Mar 11 09:17:58 crc kubenswrapper[4840]: I0311 09:17:58.774931 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/74300cf6-cc28-45d0-ae1a-4099210bcbf1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"74300cf6-cc28-45d0-ae1a-4099210bcbf1\") " pod="openstack/ceilometer-0" Mar 11 09:17:58 crc kubenswrapper[4840]: I0311 09:17:58.774974 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/74300cf6-cc28-45d0-ae1a-4099210bcbf1-run-httpd\") pod \"ceilometer-0\" (UID: \"74300cf6-cc28-45d0-ae1a-4099210bcbf1\") " pod="openstack/ceilometer-0" Mar 11 09:17:58 crc kubenswrapper[4840]: I0311 09:17:58.775062 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hshwj\" (UniqueName: 
\"kubernetes.io/projected/74300cf6-cc28-45d0-ae1a-4099210bcbf1-kube-api-access-hshwj\") pod \"ceilometer-0\" (UID: \"74300cf6-cc28-45d0-ae1a-4099210bcbf1\") " pod="openstack/ceilometer-0" Mar 11 09:17:58 crc kubenswrapper[4840]: I0311 09:17:58.775882 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/74300cf6-cc28-45d0-ae1a-4099210bcbf1-log-httpd\") pod \"ceilometer-0\" (UID: \"74300cf6-cc28-45d0-ae1a-4099210bcbf1\") " pod="openstack/ceilometer-0" Mar 11 09:17:58 crc kubenswrapper[4840]: I0311 09:17:58.775887 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/74300cf6-cc28-45d0-ae1a-4099210bcbf1-run-httpd\") pod \"ceilometer-0\" (UID: \"74300cf6-cc28-45d0-ae1a-4099210bcbf1\") " pod="openstack/ceilometer-0" Mar 11 09:17:58 crc kubenswrapper[4840]: I0311 09:17:58.775407 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74300cf6-cc28-45d0-ae1a-4099210bcbf1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"74300cf6-cc28-45d0-ae1a-4099210bcbf1\") " pod="openstack/ceilometer-0" Mar 11 09:17:58 crc kubenswrapper[4840]: I0311 09:17:58.776549 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74300cf6-cc28-45d0-ae1a-4099210bcbf1-config-data\") pod \"ceilometer-0\" (UID: \"74300cf6-cc28-45d0-ae1a-4099210bcbf1\") " pod="openstack/ceilometer-0" Mar 11 09:17:58 crc kubenswrapper[4840]: I0311 09:17:58.776577 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74300cf6-cc28-45d0-ae1a-4099210bcbf1-scripts\") pod \"ceilometer-0\" (UID: \"74300cf6-cc28-45d0-ae1a-4099210bcbf1\") " pod="openstack/ceilometer-0" Mar 11 09:17:58 crc kubenswrapper[4840]: I0311 09:17:58.781276 4840 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74300cf6-cc28-45d0-ae1a-4099210bcbf1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"74300cf6-cc28-45d0-ae1a-4099210bcbf1\") " pod="openstack/ceilometer-0" Mar 11 09:17:58 crc kubenswrapper[4840]: I0311 09:17:58.782120 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/74300cf6-cc28-45d0-ae1a-4099210bcbf1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"74300cf6-cc28-45d0-ae1a-4099210bcbf1\") " pod="openstack/ceilometer-0" Mar 11 09:17:58 crc kubenswrapper[4840]: I0311 09:17:58.782721 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74300cf6-cc28-45d0-ae1a-4099210bcbf1-config-data\") pod \"ceilometer-0\" (UID: \"74300cf6-cc28-45d0-ae1a-4099210bcbf1\") " pod="openstack/ceilometer-0" Mar 11 09:17:58 crc kubenswrapper[4840]: I0311 09:17:58.782908 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74300cf6-cc28-45d0-ae1a-4099210bcbf1-scripts\") pod \"ceilometer-0\" (UID: \"74300cf6-cc28-45d0-ae1a-4099210bcbf1\") " pod="openstack/ceilometer-0" Mar 11 09:17:58 crc kubenswrapper[4840]: I0311 09:17:58.808201 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hshwj\" (UniqueName: \"kubernetes.io/projected/74300cf6-cc28-45d0-ae1a-4099210bcbf1-kube-api-access-hshwj\") pod \"ceilometer-0\" (UID: \"74300cf6-cc28-45d0-ae1a-4099210bcbf1\") " pod="openstack/ceilometer-0" Mar 11 09:17:58 crc kubenswrapper[4840]: I0311 09:17:58.905947 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 11 09:17:59 crc kubenswrapper[4840]: I0311 09:17:59.455838 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 11 09:18:00 crc kubenswrapper[4840]: I0311 09:18:00.071803 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02a9854a-9978-4414-972d-37c0b579b03b" path="/var/lib/kubelet/pods/02a9854a-9978-4414-972d-37c0b579b03b/volumes" Mar 11 09:18:00 crc kubenswrapper[4840]: I0311 09:18:00.150705 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29553678-xn9kq"] Mar 11 09:18:00 crc kubenswrapper[4840]: I0311 09:18:00.153602 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553678-xn9kq" Mar 11 09:18:00 crc kubenswrapper[4840]: I0311 09:18:00.156421 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-q6lwc" Mar 11 09:18:00 crc kubenswrapper[4840]: I0311 09:18:00.156787 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 11 09:18:00 crc kubenswrapper[4840]: I0311 09:18:00.160087 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 11 09:18:00 crc kubenswrapper[4840]: I0311 09:18:00.162542 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553678-xn9kq"] Mar 11 09:18:00 crc kubenswrapper[4840]: I0311 09:18:00.206686 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4hrb\" (UniqueName: \"kubernetes.io/projected/f05f155b-14bd-45d4-9d0d-dd92fd7a3eb0-kube-api-access-g4hrb\") pod \"auto-csr-approver-29553678-xn9kq\" (UID: \"f05f155b-14bd-45d4-9d0d-dd92fd7a3eb0\") " pod="openshift-infra/auto-csr-approver-29553678-xn9kq" Mar 11 09:18:00 crc kubenswrapper[4840]: 
I0311 09:18:00.309088 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4hrb\" (UniqueName: \"kubernetes.io/projected/f05f155b-14bd-45d4-9d0d-dd92fd7a3eb0-kube-api-access-g4hrb\") pod \"auto-csr-approver-29553678-xn9kq\" (UID: \"f05f155b-14bd-45d4-9d0d-dd92fd7a3eb0\") " pod="openshift-infra/auto-csr-approver-29553678-xn9kq" Mar 11 09:18:00 crc kubenswrapper[4840]: I0311 09:18:00.328294 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4hrb\" (UniqueName: \"kubernetes.io/projected/f05f155b-14bd-45d4-9d0d-dd92fd7a3eb0-kube-api-access-g4hrb\") pod \"auto-csr-approver-29553678-xn9kq\" (UID: \"f05f155b-14bd-45d4-9d0d-dd92fd7a3eb0\") " pod="openshift-infra/auto-csr-approver-29553678-xn9kq" Mar 11 09:18:00 crc kubenswrapper[4840]: I0311 09:18:00.438132 4840 generic.go:334] "Generic (PLEG): container finished" podID="adf5efde-cb19-4870-9c8e-e7e139523238" containerID="3e6fbcf31ada0f36ed6f518ad3411ae695bfb221a54e277c0e2faccd63610af3" exitCode=0 Mar 11 09:18:00 crc kubenswrapper[4840]: I0311 09:18:00.438195 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-k7tgg" event={"ID":"adf5efde-cb19-4870-9c8e-e7e139523238","Type":"ContainerDied","Data":"3e6fbcf31ada0f36ed6f518ad3411ae695bfb221a54e277c0e2faccd63610af3"} Mar 11 09:18:00 crc kubenswrapper[4840]: I0311 09:18:00.441233 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"74300cf6-cc28-45d0-ae1a-4099210bcbf1","Type":"ContainerStarted","Data":"9d63ed12400b7b22d7b3175f5ad574f240db1b75017314d820e4f1e64bf92d6c"} Mar 11 09:18:00 crc kubenswrapper[4840]: I0311 09:18:00.441278 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"74300cf6-cc28-45d0-ae1a-4099210bcbf1","Type":"ContainerStarted","Data":"64382546f1f80ccaf995d2a923e32932d83117965f0e0c19629e493e51181006"} Mar 11 09:18:00 crc kubenswrapper[4840]: I0311 
09:18:00.486393 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553678-xn9kq" Mar 11 09:18:00 crc kubenswrapper[4840]: I0311 09:18:00.980981 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553678-xn9kq"] Mar 11 09:18:01 crc kubenswrapper[4840]: I0311 09:18:01.458856 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553678-xn9kq" event={"ID":"f05f155b-14bd-45d4-9d0d-dd92fd7a3eb0","Type":"ContainerStarted","Data":"b76121dbd0998c057ede9c3821192d17a959490be532b770f436bc1c27e8a252"} Mar 11 09:18:01 crc kubenswrapper[4840]: I0311 09:18:01.466164 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"74300cf6-cc28-45d0-ae1a-4099210bcbf1","Type":"ContainerStarted","Data":"131b4c047b76b60be17cc4859f36de4b964899f98daee4b5778c7093e955d6ff"} Mar 11 09:18:01 crc kubenswrapper[4840]: I0311 09:18:01.773850 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-k7tgg" Mar 11 09:18:01 crc kubenswrapper[4840]: I0311 09:18:01.872898 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gvtvx\" (UniqueName: \"kubernetes.io/projected/adf5efde-cb19-4870-9c8e-e7e139523238-kube-api-access-gvtvx\") pod \"adf5efde-cb19-4870-9c8e-e7e139523238\" (UID: \"adf5efde-cb19-4870-9c8e-e7e139523238\") " Mar 11 09:18:01 crc kubenswrapper[4840]: I0311 09:18:01.872970 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/adf5efde-cb19-4870-9c8e-e7e139523238-config-data\") pod \"adf5efde-cb19-4870-9c8e-e7e139523238\" (UID: \"adf5efde-cb19-4870-9c8e-e7e139523238\") " Mar 11 09:18:01 crc kubenswrapper[4840]: I0311 09:18:01.873050 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/adf5efde-cb19-4870-9c8e-e7e139523238-scripts\") pod \"adf5efde-cb19-4870-9c8e-e7e139523238\" (UID: \"adf5efde-cb19-4870-9c8e-e7e139523238\") " Mar 11 09:18:01 crc kubenswrapper[4840]: I0311 09:18:01.873078 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/adf5efde-cb19-4870-9c8e-e7e139523238-db-sync-config-data\") pod \"adf5efde-cb19-4870-9c8e-e7e139523238\" (UID: \"adf5efde-cb19-4870-9c8e-e7e139523238\") " Mar 11 09:18:01 crc kubenswrapper[4840]: I0311 09:18:01.873173 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/adf5efde-cb19-4870-9c8e-e7e139523238-etc-machine-id\") pod \"adf5efde-cb19-4870-9c8e-e7e139523238\" (UID: \"adf5efde-cb19-4870-9c8e-e7e139523238\") " Mar 11 09:18:01 crc kubenswrapper[4840]: I0311 09:18:01.873239 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/adf5efde-cb19-4870-9c8e-e7e139523238-combined-ca-bundle\") pod \"adf5efde-cb19-4870-9c8e-e7e139523238\" (UID: \"adf5efde-cb19-4870-9c8e-e7e139523238\") " Mar 11 09:18:01 crc kubenswrapper[4840]: I0311 09:18:01.877938 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/adf5efde-cb19-4870-9c8e-e7e139523238-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "adf5efde-cb19-4870-9c8e-e7e139523238" (UID: "adf5efde-cb19-4870-9c8e-e7e139523238"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 11 09:18:01 crc kubenswrapper[4840]: I0311 09:18:01.878808 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/adf5efde-cb19-4870-9c8e-e7e139523238-kube-api-access-gvtvx" (OuterVolumeSpecName: "kube-api-access-gvtvx") pod "adf5efde-cb19-4870-9c8e-e7e139523238" (UID: "adf5efde-cb19-4870-9c8e-e7e139523238"). InnerVolumeSpecName "kube-api-access-gvtvx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:18:01 crc kubenswrapper[4840]: I0311 09:18:01.889675 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/adf5efde-cb19-4870-9c8e-e7e139523238-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "adf5efde-cb19-4870-9c8e-e7e139523238" (UID: "adf5efde-cb19-4870-9c8e-e7e139523238"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:18:01 crc kubenswrapper[4840]: I0311 09:18:01.889728 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/adf5efde-cb19-4870-9c8e-e7e139523238-scripts" (OuterVolumeSpecName: "scripts") pod "adf5efde-cb19-4870-9c8e-e7e139523238" (UID: "adf5efde-cb19-4870-9c8e-e7e139523238"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:18:01 crc kubenswrapper[4840]: I0311 09:18:01.901352 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/adf5efde-cb19-4870-9c8e-e7e139523238-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "adf5efde-cb19-4870-9c8e-e7e139523238" (UID: "adf5efde-cb19-4870-9c8e-e7e139523238"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:18:01 crc kubenswrapper[4840]: I0311 09:18:01.937596 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/adf5efde-cb19-4870-9c8e-e7e139523238-config-data" (OuterVolumeSpecName: "config-data") pod "adf5efde-cb19-4870-9c8e-e7e139523238" (UID: "adf5efde-cb19-4870-9c8e-e7e139523238"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:18:01 crc kubenswrapper[4840]: I0311 09:18:01.975483 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gvtvx\" (UniqueName: \"kubernetes.io/projected/adf5efde-cb19-4870-9c8e-e7e139523238-kube-api-access-gvtvx\") on node \"crc\" DevicePath \"\"" Mar 11 09:18:01 crc kubenswrapper[4840]: I0311 09:18:01.975818 4840 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/adf5efde-cb19-4870-9c8e-e7e139523238-config-data\") on node \"crc\" DevicePath \"\"" Mar 11 09:18:01 crc kubenswrapper[4840]: I0311 09:18:01.975831 4840 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/adf5efde-cb19-4870-9c8e-e7e139523238-scripts\") on node \"crc\" DevicePath \"\"" Mar 11 09:18:01 crc kubenswrapper[4840]: I0311 09:18:01.975841 4840 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/adf5efde-cb19-4870-9c8e-e7e139523238-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Mar 
11 09:18:01 crc kubenswrapper[4840]: I0311 09:18:01.975851 4840 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/adf5efde-cb19-4870-9c8e-e7e139523238-etc-machine-id\") on node \"crc\" DevicePath \"\"" Mar 11 09:18:01 crc kubenswrapper[4840]: I0311 09:18:01.975860 4840 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adf5efde-cb19-4870-9c8e-e7e139523238-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 09:18:02 crc kubenswrapper[4840]: I0311 09:18:02.490108 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"74300cf6-cc28-45d0-ae1a-4099210bcbf1","Type":"ContainerStarted","Data":"c426a86a3c4f9c6b20511a27db1b07c12494776e3cf3dd145b14972a6713291d"} Mar 11 09:18:02 crc kubenswrapper[4840]: I0311 09:18:02.492435 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-k7tgg" event={"ID":"adf5efde-cb19-4870-9c8e-e7e139523238","Type":"ContainerDied","Data":"f91e898c850f5653fa3835a7ec57434f65344b67721200e7cc0a02b60d136a97"} Mar 11 09:18:02 crc kubenswrapper[4840]: I0311 09:18:02.492563 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f91e898c850f5653fa3835a7ec57434f65344b67721200e7cc0a02b60d136a97" Mar 11 09:18:02 crc kubenswrapper[4840]: I0311 09:18:02.492633 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-k7tgg" Mar 11 09:18:02 crc kubenswrapper[4840]: I0311 09:18:02.766597 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Mar 11 09:18:02 crc kubenswrapper[4840]: E0311 09:18:02.767096 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adf5efde-cb19-4870-9c8e-e7e139523238" containerName="cinder-db-sync" Mar 11 09:18:02 crc kubenswrapper[4840]: I0311 09:18:02.767114 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="adf5efde-cb19-4870-9c8e-e7e139523238" containerName="cinder-db-sync" Mar 11 09:18:02 crc kubenswrapper[4840]: I0311 09:18:02.767318 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="adf5efde-cb19-4870-9c8e-e7e139523238" containerName="cinder-db-sync" Mar 11 09:18:02 crc kubenswrapper[4840]: I0311 09:18:02.778744 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 11 09:18:02 crc kubenswrapper[4840]: I0311 09:18:02.783118 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Mar 11 09:18:02 crc kubenswrapper[4840]: I0311 09:18:02.783423 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-7djjf" Mar 11 09:18:02 crc kubenswrapper[4840]: I0311 09:18:02.783600 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Mar 11 09:18:02 crc kubenswrapper[4840]: I0311 09:18:02.783686 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Mar 11 09:18:02 crc kubenswrapper[4840]: I0311 09:18:02.802299 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 11 09:18:02 crc kubenswrapper[4840]: I0311 09:18:02.879121 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8449d68f4f-bbl7j"] Mar 11 09:18:02 crc 
kubenswrapper[4840]: I0311 09:18:02.879400 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8449d68f4f-bbl7j" podUID="1fd9aab6-5204-4f1d-9c99-36b158699e16" containerName="dnsmasq-dns" containerID="cri-o://33afe53b124b9cef5893289cbebac257729d5d4e64d0f25f8a6c9f784b19e162" gracePeriod=10 Mar 11 09:18:02 crc kubenswrapper[4840]: I0311 09:18:02.882452 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8449d68f4f-bbl7j" Mar 11 09:18:02 crc kubenswrapper[4840]: I0311 09:18:02.897759 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/56c00f10-03a9-4308-98b1-e648d962a60f-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"56c00f10-03a9-4308-98b1-e648d962a60f\") " pod="openstack/cinder-scheduler-0" Mar 11 09:18:02 crc kubenswrapper[4840]: I0311 09:18:02.897890 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/56c00f10-03a9-4308-98b1-e648d962a60f-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"56c00f10-03a9-4308-98b1-e648d962a60f\") " pod="openstack/cinder-scheduler-0" Mar 11 09:18:02 crc kubenswrapper[4840]: I0311 09:18:02.897927 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56c00f10-03a9-4308-98b1-e648d962a60f-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"56c00f10-03a9-4308-98b1-e648d962a60f\") " pod="openstack/cinder-scheduler-0" Mar 11 09:18:02 crc kubenswrapper[4840]: I0311 09:18:02.897966 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56c00f10-03a9-4308-98b1-e648d962a60f-config-data\") pod \"cinder-scheduler-0\" (UID: 
\"56c00f10-03a9-4308-98b1-e648d962a60f\") " pod="openstack/cinder-scheduler-0" Mar 11 09:18:02 crc kubenswrapper[4840]: I0311 09:18:02.898037 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/56c00f10-03a9-4308-98b1-e648d962a60f-scripts\") pod \"cinder-scheduler-0\" (UID: \"56c00f10-03a9-4308-98b1-e648d962a60f\") " pod="openstack/cinder-scheduler-0" Mar 11 09:18:02 crc kubenswrapper[4840]: I0311 09:18:02.898088 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kpgw\" (UniqueName: \"kubernetes.io/projected/56c00f10-03a9-4308-98b1-e648d962a60f-kube-api-access-6kpgw\") pod \"cinder-scheduler-0\" (UID: \"56c00f10-03a9-4308-98b1-e648d962a60f\") " pod="openstack/cinder-scheduler-0" Mar 11 09:18:02 crc kubenswrapper[4840]: I0311 09:18:02.904488 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7b8fcc65cc-psbl7"] Mar 11 09:18:02 crc kubenswrapper[4840]: I0311 09:18:02.905989 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7b8fcc65cc-psbl7" Mar 11 09:18:02 crc kubenswrapper[4840]: I0311 09:18:02.981437 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7b8fcc65cc-psbl7"] Mar 11 09:18:03 crc kubenswrapper[4840]: I0311 09:18:03.003783 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/56c00f10-03a9-4308-98b1-e648d962a60f-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"56c00f10-03a9-4308-98b1-e648d962a60f\") " pod="openstack/cinder-scheduler-0" Mar 11 09:18:03 crc kubenswrapper[4840]: I0311 09:18:03.003834 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56c00f10-03a9-4308-98b1-e648d962a60f-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"56c00f10-03a9-4308-98b1-e648d962a60f\") " pod="openstack/cinder-scheduler-0" Mar 11 09:18:03 crc kubenswrapper[4840]: I0311 09:18:03.003880 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56c00f10-03a9-4308-98b1-e648d962a60f-config-data\") pod \"cinder-scheduler-0\" (UID: \"56c00f10-03a9-4308-98b1-e648d962a60f\") " pod="openstack/cinder-scheduler-0" Mar 11 09:18:03 crc kubenswrapper[4840]: I0311 09:18:03.003928 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4ab6bbb2-998d-4f33-831b-71f0e06d1b8b-dns-svc\") pod \"dnsmasq-dns-7b8fcc65cc-psbl7\" (UID: \"4ab6bbb2-998d-4f33-831b-71f0e06d1b8b\") " pod="openstack/dnsmasq-dns-7b8fcc65cc-psbl7" Mar 11 09:18:03 crc kubenswrapper[4840]: I0311 09:18:03.003966 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/56c00f10-03a9-4308-98b1-e648d962a60f-scripts\") pod \"cinder-scheduler-0\" 
(UID: \"56c00f10-03a9-4308-98b1-e648d962a60f\") " pod="openstack/cinder-scheduler-0" Mar 11 09:18:03 crc kubenswrapper[4840]: I0311 09:18:03.003999 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4ab6bbb2-998d-4f33-831b-71f0e06d1b8b-dns-swift-storage-0\") pod \"dnsmasq-dns-7b8fcc65cc-psbl7\" (UID: \"4ab6bbb2-998d-4f33-831b-71f0e06d1b8b\") " pod="openstack/dnsmasq-dns-7b8fcc65cc-psbl7" Mar 11 09:18:03 crc kubenswrapper[4840]: I0311 09:18:03.004025 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6kpgw\" (UniqueName: \"kubernetes.io/projected/56c00f10-03a9-4308-98b1-e648d962a60f-kube-api-access-6kpgw\") pod \"cinder-scheduler-0\" (UID: \"56c00f10-03a9-4308-98b1-e648d962a60f\") " pod="openstack/cinder-scheduler-0" Mar 11 09:18:03 crc kubenswrapper[4840]: I0311 09:18:03.004066 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/56c00f10-03a9-4308-98b1-e648d962a60f-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"56c00f10-03a9-4308-98b1-e648d962a60f\") " pod="openstack/cinder-scheduler-0" Mar 11 09:18:03 crc kubenswrapper[4840]: I0311 09:18:03.004100 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4ab6bbb2-998d-4f33-831b-71f0e06d1b8b-ovsdbserver-nb\") pod \"dnsmasq-dns-7b8fcc65cc-psbl7\" (UID: \"4ab6bbb2-998d-4f33-831b-71f0e06d1b8b\") " pod="openstack/dnsmasq-dns-7b8fcc65cc-psbl7" Mar 11 09:18:03 crc kubenswrapper[4840]: I0311 09:18:03.004124 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4ab6bbb2-998d-4f33-831b-71f0e06d1b8b-ovsdbserver-sb\") pod \"dnsmasq-dns-7b8fcc65cc-psbl7\" (UID: 
\"4ab6bbb2-998d-4f33-831b-71f0e06d1b8b\") " pod="openstack/dnsmasq-dns-7b8fcc65cc-psbl7" Mar 11 09:18:03 crc kubenswrapper[4840]: I0311 09:18:03.004155 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b294z\" (UniqueName: \"kubernetes.io/projected/4ab6bbb2-998d-4f33-831b-71f0e06d1b8b-kube-api-access-b294z\") pod \"dnsmasq-dns-7b8fcc65cc-psbl7\" (UID: \"4ab6bbb2-998d-4f33-831b-71f0e06d1b8b\") " pod="openstack/dnsmasq-dns-7b8fcc65cc-psbl7" Mar 11 09:18:03 crc kubenswrapper[4840]: I0311 09:18:03.004180 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ab6bbb2-998d-4f33-831b-71f0e06d1b8b-config\") pod \"dnsmasq-dns-7b8fcc65cc-psbl7\" (UID: \"4ab6bbb2-998d-4f33-831b-71f0e06d1b8b\") " pod="openstack/dnsmasq-dns-7b8fcc65cc-psbl7" Mar 11 09:18:03 crc kubenswrapper[4840]: I0311 09:18:03.004355 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/56c00f10-03a9-4308-98b1-e648d962a60f-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"56c00f10-03a9-4308-98b1-e648d962a60f\") " pod="openstack/cinder-scheduler-0" Mar 11 09:18:03 crc kubenswrapper[4840]: I0311 09:18:03.021851 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56c00f10-03a9-4308-98b1-e648d962a60f-config-data\") pod \"cinder-scheduler-0\" (UID: \"56c00f10-03a9-4308-98b1-e648d962a60f\") " pod="openstack/cinder-scheduler-0" Mar 11 09:18:03 crc kubenswrapper[4840]: I0311 09:18:03.024192 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56c00f10-03a9-4308-98b1-e648d962a60f-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"56c00f10-03a9-4308-98b1-e648d962a60f\") " pod="openstack/cinder-scheduler-0" Mar 11 09:18:03 
crc kubenswrapper[4840]: I0311 09:18:03.036581 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/56c00f10-03a9-4308-98b1-e648d962a60f-scripts\") pod \"cinder-scheduler-0\" (UID: \"56c00f10-03a9-4308-98b1-e648d962a60f\") " pod="openstack/cinder-scheduler-0" Mar 11 09:18:03 crc kubenswrapper[4840]: I0311 09:18:03.045100 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6kpgw\" (UniqueName: \"kubernetes.io/projected/56c00f10-03a9-4308-98b1-e648d962a60f-kube-api-access-6kpgw\") pod \"cinder-scheduler-0\" (UID: \"56c00f10-03a9-4308-98b1-e648d962a60f\") " pod="openstack/cinder-scheduler-0" Mar 11 09:18:03 crc kubenswrapper[4840]: I0311 09:18:03.048798 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/56c00f10-03a9-4308-98b1-e648d962a60f-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"56c00f10-03a9-4308-98b1-e648d962a60f\") " pod="openstack/cinder-scheduler-0" Mar 11 09:18:03 crc kubenswrapper[4840]: I0311 09:18:03.082575 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Mar 11 09:18:03 crc kubenswrapper[4840]: I0311 09:18:03.084781 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Mar 11 09:18:03 crc kubenswrapper[4840]: I0311 09:18:03.088870 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Mar 11 09:18:03 crc kubenswrapper[4840]: I0311 09:18:03.093888 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Mar 11 09:18:03 crc kubenswrapper[4840]: I0311 09:18:03.112589 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4ab6bbb2-998d-4f33-831b-71f0e06d1b8b-ovsdbserver-nb\") pod \"dnsmasq-dns-7b8fcc65cc-psbl7\" (UID: \"4ab6bbb2-998d-4f33-831b-71f0e06d1b8b\") " pod="openstack/dnsmasq-dns-7b8fcc65cc-psbl7" Mar 11 09:18:03 crc kubenswrapper[4840]: I0311 09:18:03.112842 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4ab6bbb2-998d-4f33-831b-71f0e06d1b8b-ovsdbserver-sb\") pod \"dnsmasq-dns-7b8fcc65cc-psbl7\" (UID: \"4ab6bbb2-998d-4f33-831b-71f0e06d1b8b\") " pod="openstack/dnsmasq-dns-7b8fcc65cc-psbl7" Mar 11 09:18:03 crc kubenswrapper[4840]: I0311 09:18:03.112968 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b294z\" (UniqueName: \"kubernetes.io/projected/4ab6bbb2-998d-4f33-831b-71f0e06d1b8b-kube-api-access-b294z\") pod \"dnsmasq-dns-7b8fcc65cc-psbl7\" (UID: \"4ab6bbb2-998d-4f33-831b-71f0e06d1b8b\") " pod="openstack/dnsmasq-dns-7b8fcc65cc-psbl7" Mar 11 09:18:03 crc kubenswrapper[4840]: I0311 09:18:03.113074 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ab6bbb2-998d-4f33-831b-71f0e06d1b8b-config\") pod \"dnsmasq-dns-7b8fcc65cc-psbl7\" (UID: \"4ab6bbb2-998d-4f33-831b-71f0e06d1b8b\") " pod="openstack/dnsmasq-dns-7b8fcc65cc-psbl7" Mar 11 09:18:03 crc kubenswrapper[4840]: I0311 09:18:03.113217 4840 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4ab6bbb2-998d-4f33-831b-71f0e06d1b8b-dns-svc\") pod \"dnsmasq-dns-7b8fcc65cc-psbl7\" (UID: \"4ab6bbb2-998d-4f33-831b-71f0e06d1b8b\") " pod="openstack/dnsmasq-dns-7b8fcc65cc-psbl7" Mar 11 09:18:03 crc kubenswrapper[4840]: I0311 09:18:03.113355 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4ab6bbb2-998d-4f33-831b-71f0e06d1b8b-dns-swift-storage-0\") pod \"dnsmasq-dns-7b8fcc65cc-psbl7\" (UID: \"4ab6bbb2-998d-4f33-831b-71f0e06d1b8b\") " pod="openstack/dnsmasq-dns-7b8fcc65cc-psbl7" Mar 11 09:18:03 crc kubenswrapper[4840]: I0311 09:18:03.113567 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4ab6bbb2-998d-4f33-831b-71f0e06d1b8b-ovsdbserver-nb\") pod \"dnsmasq-dns-7b8fcc65cc-psbl7\" (UID: \"4ab6bbb2-998d-4f33-831b-71f0e06d1b8b\") " pod="openstack/dnsmasq-dns-7b8fcc65cc-psbl7" Mar 11 09:18:03 crc kubenswrapper[4840]: I0311 09:18:03.113712 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4ab6bbb2-998d-4f33-831b-71f0e06d1b8b-ovsdbserver-sb\") pod \"dnsmasq-dns-7b8fcc65cc-psbl7\" (UID: \"4ab6bbb2-998d-4f33-831b-71f0e06d1b8b\") " pod="openstack/dnsmasq-dns-7b8fcc65cc-psbl7" Mar 11 09:18:03 crc kubenswrapper[4840]: I0311 09:18:03.112672 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 11 09:18:03 crc kubenswrapper[4840]: I0311 09:18:03.114354 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ab6bbb2-998d-4f33-831b-71f0e06d1b8b-config\") pod \"dnsmasq-dns-7b8fcc65cc-psbl7\" (UID: \"4ab6bbb2-998d-4f33-831b-71f0e06d1b8b\") " pod="openstack/dnsmasq-dns-7b8fcc65cc-psbl7" Mar 11 09:18:03 crc kubenswrapper[4840]: I0311 09:18:03.114926 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4ab6bbb2-998d-4f33-831b-71f0e06d1b8b-dns-svc\") pod \"dnsmasq-dns-7b8fcc65cc-psbl7\" (UID: \"4ab6bbb2-998d-4f33-831b-71f0e06d1b8b\") " pod="openstack/dnsmasq-dns-7b8fcc65cc-psbl7" Mar 11 09:18:03 crc kubenswrapper[4840]: I0311 09:18:03.115680 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4ab6bbb2-998d-4f33-831b-71f0e06d1b8b-dns-swift-storage-0\") pod \"dnsmasq-dns-7b8fcc65cc-psbl7\" (UID: \"4ab6bbb2-998d-4f33-831b-71f0e06d1b8b\") " pod="openstack/dnsmasq-dns-7b8fcc65cc-psbl7" Mar 11 09:18:03 crc kubenswrapper[4840]: I0311 09:18:03.137822 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b294z\" (UniqueName: \"kubernetes.io/projected/4ab6bbb2-998d-4f33-831b-71f0e06d1b8b-kube-api-access-b294z\") pod \"dnsmasq-dns-7b8fcc65cc-psbl7\" (UID: \"4ab6bbb2-998d-4f33-831b-71f0e06d1b8b\") " pod="openstack/dnsmasq-dns-7b8fcc65cc-psbl7" Mar 11 09:18:03 crc kubenswrapper[4840]: I0311 09:18:03.215761 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/163d1a95-e48e-4781-bdde-9d48741f0213-logs\") pod \"cinder-api-0\" (UID: \"163d1a95-e48e-4781-bdde-9d48741f0213\") " pod="openstack/cinder-api-0" Mar 11 09:18:03 crc kubenswrapper[4840]: I0311 09:18:03.215832 
4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/163d1a95-e48e-4781-bdde-9d48741f0213-config-data-custom\") pod \"cinder-api-0\" (UID: \"163d1a95-e48e-4781-bdde-9d48741f0213\") " pod="openstack/cinder-api-0" Mar 11 09:18:03 crc kubenswrapper[4840]: I0311 09:18:03.215890 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/163d1a95-e48e-4781-bdde-9d48741f0213-config-data\") pod \"cinder-api-0\" (UID: \"163d1a95-e48e-4781-bdde-9d48741f0213\") " pod="openstack/cinder-api-0" Mar 11 09:18:03 crc kubenswrapper[4840]: I0311 09:18:03.215925 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/163d1a95-e48e-4781-bdde-9d48741f0213-etc-machine-id\") pod \"cinder-api-0\" (UID: \"163d1a95-e48e-4781-bdde-9d48741f0213\") " pod="openstack/cinder-api-0" Mar 11 09:18:03 crc kubenswrapper[4840]: I0311 09:18:03.215961 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9jz7\" (UniqueName: \"kubernetes.io/projected/163d1a95-e48e-4781-bdde-9d48741f0213-kube-api-access-n9jz7\") pod \"cinder-api-0\" (UID: \"163d1a95-e48e-4781-bdde-9d48741f0213\") " pod="openstack/cinder-api-0" Mar 11 09:18:03 crc kubenswrapper[4840]: I0311 09:18:03.215996 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/163d1a95-e48e-4781-bdde-9d48741f0213-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"163d1a95-e48e-4781-bdde-9d48741f0213\") " pod="openstack/cinder-api-0" Mar 11 09:18:03 crc kubenswrapper[4840]: I0311 09:18:03.216059 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/163d1a95-e48e-4781-bdde-9d48741f0213-scripts\") pod \"cinder-api-0\" (UID: \"163d1a95-e48e-4781-bdde-9d48741f0213\") " pod="openstack/cinder-api-0" Mar 11 09:18:03 crc kubenswrapper[4840]: I0311 09:18:03.232445 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b8fcc65cc-psbl7" Mar 11 09:18:03 crc kubenswrapper[4840]: I0311 09:18:03.322865 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/163d1a95-e48e-4781-bdde-9d48741f0213-config-data-custom\") pod \"cinder-api-0\" (UID: \"163d1a95-e48e-4781-bdde-9d48741f0213\") " pod="openstack/cinder-api-0" Mar 11 09:18:03 crc kubenswrapper[4840]: I0311 09:18:03.323179 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/163d1a95-e48e-4781-bdde-9d48741f0213-config-data\") pod \"cinder-api-0\" (UID: \"163d1a95-e48e-4781-bdde-9d48741f0213\") " pod="openstack/cinder-api-0" Mar 11 09:18:03 crc kubenswrapper[4840]: I0311 09:18:03.323204 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/163d1a95-e48e-4781-bdde-9d48741f0213-etc-machine-id\") pod \"cinder-api-0\" (UID: \"163d1a95-e48e-4781-bdde-9d48741f0213\") " pod="openstack/cinder-api-0" Mar 11 09:18:03 crc kubenswrapper[4840]: I0311 09:18:03.323228 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9jz7\" (UniqueName: \"kubernetes.io/projected/163d1a95-e48e-4781-bdde-9d48741f0213-kube-api-access-n9jz7\") pod \"cinder-api-0\" (UID: \"163d1a95-e48e-4781-bdde-9d48741f0213\") " pod="openstack/cinder-api-0" Mar 11 09:18:03 crc kubenswrapper[4840]: I0311 09:18:03.323256 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/163d1a95-e48e-4781-bdde-9d48741f0213-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"163d1a95-e48e-4781-bdde-9d48741f0213\") " pod="openstack/cinder-api-0" Mar 11 09:18:03 crc kubenswrapper[4840]: I0311 09:18:03.323296 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/163d1a95-e48e-4781-bdde-9d48741f0213-scripts\") pod \"cinder-api-0\" (UID: \"163d1a95-e48e-4781-bdde-9d48741f0213\") " pod="openstack/cinder-api-0" Mar 11 09:18:03 crc kubenswrapper[4840]: I0311 09:18:03.323376 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/163d1a95-e48e-4781-bdde-9d48741f0213-logs\") pod \"cinder-api-0\" (UID: \"163d1a95-e48e-4781-bdde-9d48741f0213\") " pod="openstack/cinder-api-0" Mar 11 09:18:03 crc kubenswrapper[4840]: I0311 09:18:03.323798 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/163d1a95-e48e-4781-bdde-9d48741f0213-logs\") pod \"cinder-api-0\" (UID: \"163d1a95-e48e-4781-bdde-9d48741f0213\") " pod="openstack/cinder-api-0" Mar 11 09:18:03 crc kubenswrapper[4840]: I0311 09:18:03.327854 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/163d1a95-e48e-4781-bdde-9d48741f0213-etc-machine-id\") pod \"cinder-api-0\" (UID: \"163d1a95-e48e-4781-bdde-9d48741f0213\") " pod="openstack/cinder-api-0" Mar 11 09:18:03 crc kubenswrapper[4840]: I0311 09:18:03.331648 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/163d1a95-e48e-4781-bdde-9d48741f0213-config-data-custom\") pod \"cinder-api-0\" (UID: \"163d1a95-e48e-4781-bdde-9d48741f0213\") " pod="openstack/cinder-api-0" Mar 11 09:18:03 crc kubenswrapper[4840]: I0311 09:18:03.333799 4840 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/163d1a95-e48e-4781-bdde-9d48741f0213-scripts\") pod \"cinder-api-0\" (UID: \"163d1a95-e48e-4781-bdde-9d48741f0213\") " pod="openstack/cinder-api-0" Mar 11 09:18:03 crc kubenswrapper[4840]: I0311 09:18:03.336052 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/163d1a95-e48e-4781-bdde-9d48741f0213-config-data\") pod \"cinder-api-0\" (UID: \"163d1a95-e48e-4781-bdde-9d48741f0213\") " pod="openstack/cinder-api-0" Mar 11 09:18:03 crc kubenswrapper[4840]: I0311 09:18:03.340159 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/163d1a95-e48e-4781-bdde-9d48741f0213-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"163d1a95-e48e-4781-bdde-9d48741f0213\") " pod="openstack/cinder-api-0" Mar 11 09:18:03 crc kubenswrapper[4840]: I0311 09:18:03.349118 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9jz7\" (UniqueName: \"kubernetes.io/projected/163d1a95-e48e-4781-bdde-9d48741f0213-kube-api-access-n9jz7\") pod \"cinder-api-0\" (UID: \"163d1a95-e48e-4781-bdde-9d48741f0213\") " pod="openstack/cinder-api-0" Mar 11 09:18:03 crc kubenswrapper[4840]: I0311 09:18:03.439819 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Mar 11 09:18:03 crc kubenswrapper[4840]: I0311 09:18:03.518639 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553678-xn9kq" event={"ID":"f05f155b-14bd-45d4-9d0d-dd92fd7a3eb0","Type":"ContainerStarted","Data":"272158c7dca509954e755fb757e669d86d1ba331be7966e095475284418f966b"} Mar 11 09:18:03 crc kubenswrapper[4840]: I0311 09:18:03.705252 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 11 09:18:03 crc kubenswrapper[4840]: I0311 09:18:03.814951 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7b8fcc65cc-psbl7"] Mar 11 09:18:03 crc kubenswrapper[4840]: W0311 09:18:03.990477 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod163d1a95_e48e_4781_bdde_9d48741f0213.slice/crio-5964acba78ec9f73b8ee4d878bd5ecc149adf27f7165752cb9efddbb39a3232f WatchSource:0}: Error finding container 5964acba78ec9f73b8ee4d878bd5ecc149adf27f7165752cb9efddbb39a3232f: Status 404 returned error can't find the container with id 5964acba78ec9f73b8ee4d878bd5ecc149adf27f7165752cb9efddbb39a3232f Mar 11 09:18:03 crc kubenswrapper[4840]: I0311 09:18:03.998028 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Mar 11 09:18:04 crc kubenswrapper[4840]: I0311 09:18:04.006438 4840 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-8449d68f4f-bbl7j" podUID="1fd9aab6-5204-4f1d-9c99-36b158699e16" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.165:5353: connect: connection refused" Mar 11 09:18:04 crc kubenswrapper[4840]: I0311 09:18:04.540732 4840 generic.go:334] "Generic (PLEG): container finished" podID="1fd9aab6-5204-4f1d-9c99-36b158699e16" containerID="33afe53b124b9cef5893289cbebac257729d5d4e64d0f25f8a6c9f784b19e162" exitCode=0 Mar 11 09:18:04 crc 
kubenswrapper[4840]: I0311 09:18:04.540985 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8449d68f4f-bbl7j" event={"ID":"1fd9aab6-5204-4f1d-9c99-36b158699e16","Type":"ContainerDied","Data":"33afe53b124b9cef5893289cbebac257729d5d4e64d0f25f8a6c9f784b19e162"} Mar 11 09:18:04 crc kubenswrapper[4840]: I0311 09:18:04.550006 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"74300cf6-cc28-45d0-ae1a-4099210bcbf1","Type":"ContainerStarted","Data":"f24a2427d9c1e97f86c996783eeb38003294c9ce55e7512876fa072d844f2651"} Mar 11 09:18:04 crc kubenswrapper[4840]: I0311 09:18:04.550673 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 11 09:18:04 crc kubenswrapper[4840]: I0311 09:18:04.557335 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"163d1a95-e48e-4781-bdde-9d48741f0213","Type":"ContainerStarted","Data":"5964acba78ec9f73b8ee4d878bd5ecc149adf27f7165752cb9efddbb39a3232f"} Mar 11 09:18:04 crc kubenswrapper[4840]: I0311 09:18:04.564851 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"56c00f10-03a9-4308-98b1-e648d962a60f","Type":"ContainerStarted","Data":"a3c7ac698cdebabc8739ec2616b0964dd795e08f0e88aded4db552a62c19269a"} Mar 11 09:18:04 crc kubenswrapper[4840]: I0311 09:18:04.571795 4840 generic.go:334] "Generic (PLEG): container finished" podID="4ab6bbb2-998d-4f33-831b-71f0e06d1b8b" containerID="24f634bed346f88226d4efdf7e8364f489e5b583386d6617f0c0933893d176c6" exitCode=0 Mar 11 09:18:04 crc kubenswrapper[4840]: I0311 09:18:04.571873 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b8fcc65cc-psbl7" event={"ID":"4ab6bbb2-998d-4f33-831b-71f0e06d1b8b","Type":"ContainerDied","Data":"24f634bed346f88226d4efdf7e8364f489e5b583386d6617f0c0933893d176c6"} Mar 11 09:18:04 crc kubenswrapper[4840]: I0311 09:18:04.571901 4840 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b8fcc65cc-psbl7" event={"ID":"4ab6bbb2-998d-4f33-831b-71f0e06d1b8b","Type":"ContainerStarted","Data":"93a36fe2e07b42131c9be4238e93f248d40c69f158419ec3f3bdbf3e2e3b2e83"} Mar 11 09:18:04 crc kubenswrapper[4840]: I0311 09:18:04.580719 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.3581969320000002 podStartE2EDuration="6.580700167s" podCreationTimestamp="2026-03-11 09:17:58 +0000 UTC" firstStartedPulling="2026-03-11 09:17:59.461574122 +0000 UTC m=+1278.127243937" lastFinishedPulling="2026-03-11 09:18:03.684077357 +0000 UTC m=+1282.349747172" observedRunningTime="2026-03-11 09:18:04.57249651 +0000 UTC m=+1283.238166315" watchObservedRunningTime="2026-03-11 09:18:04.580700167 +0000 UTC m=+1283.246369982" Mar 11 09:18:04 crc kubenswrapper[4840]: I0311 09:18:04.584827 4840 generic.go:334] "Generic (PLEG): container finished" podID="f05f155b-14bd-45d4-9d0d-dd92fd7a3eb0" containerID="272158c7dca509954e755fb757e669d86d1ba331be7966e095475284418f966b" exitCode=0 Mar 11 09:18:04 crc kubenswrapper[4840]: I0311 09:18:04.584866 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553678-xn9kq" event={"ID":"f05f155b-14bd-45d4-9d0d-dd92fd7a3eb0","Type":"ContainerDied","Data":"272158c7dca509954e755fb757e669d86d1ba331be7966e095475284418f966b"} Mar 11 09:18:04 crc kubenswrapper[4840]: I0311 09:18:04.826329 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8449d68f4f-bbl7j" Mar 11 09:18:04 crc kubenswrapper[4840]: I0311 09:18:04.976449 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1fd9aab6-5204-4f1d-9c99-36b158699e16-ovsdbserver-nb\") pod \"1fd9aab6-5204-4f1d-9c99-36b158699e16\" (UID: \"1fd9aab6-5204-4f1d-9c99-36b158699e16\") " Mar 11 09:18:04 crc kubenswrapper[4840]: I0311 09:18:04.977059 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fd9aab6-5204-4f1d-9c99-36b158699e16-config\") pod \"1fd9aab6-5204-4f1d-9c99-36b158699e16\" (UID: \"1fd9aab6-5204-4f1d-9c99-36b158699e16\") " Mar 11 09:18:04 crc kubenswrapper[4840]: I0311 09:18:04.977146 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1fd9aab6-5204-4f1d-9c99-36b158699e16-dns-svc\") pod \"1fd9aab6-5204-4f1d-9c99-36b158699e16\" (UID: \"1fd9aab6-5204-4f1d-9c99-36b158699e16\") " Mar 11 09:18:04 crc kubenswrapper[4840]: I0311 09:18:04.977237 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1fd9aab6-5204-4f1d-9c99-36b158699e16-ovsdbserver-sb\") pod \"1fd9aab6-5204-4f1d-9c99-36b158699e16\" (UID: \"1fd9aab6-5204-4f1d-9c99-36b158699e16\") " Mar 11 09:18:04 crc kubenswrapper[4840]: I0311 09:18:04.977299 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1fd9aab6-5204-4f1d-9c99-36b158699e16-dns-swift-storage-0\") pod \"1fd9aab6-5204-4f1d-9c99-36b158699e16\" (UID: \"1fd9aab6-5204-4f1d-9c99-36b158699e16\") " Mar 11 09:18:04 crc kubenswrapper[4840]: I0311 09:18:04.977381 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hsqkz\" 
(UniqueName: \"kubernetes.io/projected/1fd9aab6-5204-4f1d-9c99-36b158699e16-kube-api-access-hsqkz\") pod \"1fd9aab6-5204-4f1d-9c99-36b158699e16\" (UID: \"1fd9aab6-5204-4f1d-9c99-36b158699e16\") " Mar 11 09:18:04 crc kubenswrapper[4840]: I0311 09:18:04.998690 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1fd9aab6-5204-4f1d-9c99-36b158699e16-kube-api-access-hsqkz" (OuterVolumeSpecName: "kube-api-access-hsqkz") pod "1fd9aab6-5204-4f1d-9c99-36b158699e16" (UID: "1fd9aab6-5204-4f1d-9c99-36b158699e16"). InnerVolumeSpecName "kube-api-access-hsqkz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:18:05 crc kubenswrapper[4840]: I0311 09:18:05.052279 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1fd9aab6-5204-4f1d-9c99-36b158699e16-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1fd9aab6-5204-4f1d-9c99-36b158699e16" (UID: "1fd9aab6-5204-4f1d-9c99-36b158699e16"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:18:05 crc kubenswrapper[4840]: I0311 09:18:05.060267 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1fd9aab6-5204-4f1d-9c99-36b158699e16-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "1fd9aab6-5204-4f1d-9c99-36b158699e16" (UID: "1fd9aab6-5204-4f1d-9c99-36b158699e16"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:18:05 crc kubenswrapper[4840]: I0311 09:18:05.076383 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1fd9aab6-5204-4f1d-9c99-36b158699e16-config" (OuterVolumeSpecName: "config") pod "1fd9aab6-5204-4f1d-9c99-36b158699e16" (UID: "1fd9aab6-5204-4f1d-9c99-36b158699e16"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:18:05 crc kubenswrapper[4840]: I0311 09:18:05.080995 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1fd9aab6-5204-4f1d-9c99-36b158699e16-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1fd9aab6-5204-4f1d-9c99-36b158699e16" (UID: "1fd9aab6-5204-4f1d-9c99-36b158699e16"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:18:05 crc kubenswrapper[4840]: I0311 09:18:05.085357 4840 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1fd9aab6-5204-4f1d-9c99-36b158699e16-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 11 09:18:05 crc kubenswrapper[4840]: I0311 09:18:05.085395 4840 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fd9aab6-5204-4f1d-9c99-36b158699e16-config\") on node \"crc\" DevicePath \"\"" Mar 11 09:18:05 crc kubenswrapper[4840]: I0311 09:18:05.085407 4840 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1fd9aab6-5204-4f1d-9c99-36b158699e16-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 11 09:18:05 crc kubenswrapper[4840]: I0311 09:18:05.085422 4840 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1fd9aab6-5204-4f1d-9c99-36b158699e16-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 11 09:18:05 crc kubenswrapper[4840]: I0311 09:18:05.085434 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hsqkz\" (UniqueName: \"kubernetes.io/projected/1fd9aab6-5204-4f1d-9c99-36b158699e16-kube-api-access-hsqkz\") on node \"crc\" DevicePath \"\"" Mar 11 09:18:05 crc kubenswrapper[4840]: I0311 09:18:05.088866 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/1fd9aab6-5204-4f1d-9c99-36b158699e16-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1fd9aab6-5204-4f1d-9c99-36b158699e16" (UID: "1fd9aab6-5204-4f1d-9c99-36b158699e16"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:18:05 crc kubenswrapper[4840]: I0311 09:18:05.187357 4840 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1fd9aab6-5204-4f1d-9c99-36b158699e16-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 11 09:18:05 crc kubenswrapper[4840]: I0311 09:18:05.398970 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Mar 11 09:18:05 crc kubenswrapper[4840]: I0311 09:18:05.663937 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8449d68f4f-bbl7j" event={"ID":"1fd9aab6-5204-4f1d-9c99-36b158699e16","Type":"ContainerDied","Data":"b4294f14b88183e458e1f34ee48b3f743077409b6bd045b8bf386352c05a8a90"} Mar 11 09:18:05 crc kubenswrapper[4840]: I0311 09:18:05.664195 4840 scope.go:117] "RemoveContainer" containerID="33afe53b124b9cef5893289cbebac257729d5d4e64d0f25f8a6c9f784b19e162" Mar 11 09:18:05 crc kubenswrapper[4840]: I0311 09:18:05.664394 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8449d68f4f-bbl7j" Mar 11 09:18:05 crc kubenswrapper[4840]: I0311 09:18:05.669086 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"163d1a95-e48e-4781-bdde-9d48741f0213","Type":"ContainerStarted","Data":"8b2b2ee94f02e4c91965819ff19ab228b357fae41649aef16658ee14d3ffc1d9"} Mar 11 09:18:05 crc kubenswrapper[4840]: I0311 09:18:05.674684 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b8fcc65cc-psbl7" event={"ID":"4ab6bbb2-998d-4f33-831b-71f0e06d1b8b","Type":"ContainerStarted","Data":"05a6df8771e5d1d9bd2b29e2e53f4bf1d24dbb28545aad896202cfeadea22534"} Mar 11 09:18:05 crc kubenswrapper[4840]: I0311 09:18:05.675566 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7b8fcc65cc-psbl7" Mar 11 09:18:05 crc kubenswrapper[4840]: I0311 09:18:05.723818 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7b8fcc65cc-psbl7" podStartSLOduration=3.7237975739999998 podStartE2EDuration="3.723797574s" podCreationTimestamp="2026-03-11 09:18:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:18:05.702897819 +0000 UTC m=+1284.368567644" watchObservedRunningTime="2026-03-11 09:18:05.723797574 +0000 UTC m=+1284.389467389" Mar 11 09:18:05 crc kubenswrapper[4840]: I0311 09:18:05.743882 4840 scope.go:117] "RemoveContainer" containerID="65fc315784ea8f573b907740def2d54633d43c279ef6e6cc749e0e9040abeff1" Mar 11 09:18:05 crc kubenswrapper[4840]: I0311 09:18:05.766775 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8449d68f4f-bbl7j"] Mar 11 09:18:05 crc kubenswrapper[4840]: I0311 09:18:05.778435 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8449d68f4f-bbl7j"] Mar 11 09:18:06 crc kubenswrapper[4840]: I0311 
09:18:06.101328 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1fd9aab6-5204-4f1d-9c99-36b158699e16" path="/var/lib/kubelet/pods/1fd9aab6-5204-4f1d-9c99-36b158699e16/volumes" Mar 11 09:18:06 crc kubenswrapper[4840]: I0311 09:18:06.128977 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553678-xn9kq" Mar 11 09:18:06 crc kubenswrapper[4840]: I0311 09:18:06.222387 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g4hrb\" (UniqueName: \"kubernetes.io/projected/f05f155b-14bd-45d4-9d0d-dd92fd7a3eb0-kube-api-access-g4hrb\") pod \"f05f155b-14bd-45d4-9d0d-dd92fd7a3eb0\" (UID: \"f05f155b-14bd-45d4-9d0d-dd92fd7a3eb0\") " Mar 11 09:18:06 crc kubenswrapper[4840]: I0311 09:18:06.254447 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f05f155b-14bd-45d4-9d0d-dd92fd7a3eb0-kube-api-access-g4hrb" (OuterVolumeSpecName: "kube-api-access-g4hrb") pod "f05f155b-14bd-45d4-9d0d-dd92fd7a3eb0" (UID: "f05f155b-14bd-45d4-9d0d-dd92fd7a3eb0"). InnerVolumeSpecName "kube-api-access-g4hrb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:18:06 crc kubenswrapper[4840]: I0311 09:18:06.332531 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g4hrb\" (UniqueName: \"kubernetes.io/projected/f05f155b-14bd-45d4-9d0d-dd92fd7a3eb0-kube-api-access-g4hrb\") on node \"crc\" DevicePath \"\"" Mar 11 09:18:06 crc kubenswrapper[4840]: I0311 09:18:06.602874 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-774b7849d6-lz2xl" Mar 11 09:18:06 crc kubenswrapper[4840]: I0311 09:18:06.709708 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"56c00f10-03a9-4308-98b1-e648d962a60f","Type":"ContainerStarted","Data":"aada553c7a57328e9ad3773280281a27544885d1e9c1300be8b3fe00dcf8d3e6"} Mar 11 09:18:06 crc kubenswrapper[4840]: I0311 09:18:06.738693 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553678-xn9kq" event={"ID":"f05f155b-14bd-45d4-9d0d-dd92fd7a3eb0","Type":"ContainerDied","Data":"b76121dbd0998c057ede9c3821192d17a959490be532b770f436bc1c27e8a252"} Mar 11 09:18:06 crc kubenswrapper[4840]: I0311 09:18:06.738733 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b76121dbd0998c057ede9c3821192d17a959490be532b770f436bc1c27e8a252" Mar 11 09:18:06 crc kubenswrapper[4840]: I0311 09:18:06.738833 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553678-xn9kq" Mar 11 09:18:06 crc kubenswrapper[4840]: I0311 09:18:06.750682 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="163d1a95-e48e-4781-bdde-9d48741f0213" containerName="cinder-api-log" containerID="cri-o://8b2b2ee94f02e4c91965819ff19ab228b357fae41649aef16658ee14d3ffc1d9" gracePeriod=30 Mar 11 09:18:06 crc kubenswrapper[4840]: I0311 09:18:06.751122 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="163d1a95-e48e-4781-bdde-9d48741f0213" containerName="cinder-api" containerID="cri-o://18332a9e3a8500c2684c92fac89d19d55c5bbb997962d31c8f43e7f357714025" gracePeriod=30 Mar 11 09:18:06 crc kubenswrapper[4840]: I0311 09:18:06.750704 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"163d1a95-e48e-4781-bdde-9d48741f0213","Type":"ContainerStarted","Data":"18332a9e3a8500c2684c92fac89d19d55c5bbb997962d31c8f43e7f357714025"} Mar 11 09:18:06 crc kubenswrapper[4840]: I0311 09:18:06.751258 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Mar 11 09:18:06 crc kubenswrapper[4840]: I0311 09:18:06.793270 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.793246761 podStartE2EDuration="3.793246761s" podCreationTimestamp="2026-03-11 09:18:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:18:06.779846894 +0000 UTC m=+1285.445516729" watchObservedRunningTime="2026-03-11 09:18:06.793246761 +0000 UTC m=+1285.458916586" Mar 11 09:18:07 crc kubenswrapper[4840]: I0311 09:18:07.044153 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-774b7849d6-lz2xl" Mar 11 09:18:07 crc 
kubenswrapper[4840]: I0311 09:18:07.301536 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29553672-mwv76"] Mar 11 09:18:07 crc kubenswrapper[4840]: I0311 09:18:07.321945 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29553672-mwv76"] Mar 11 09:18:07 crc kubenswrapper[4840]: I0311 09:18:07.753160 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-9dc6d5c86-nszp2" Mar 11 09:18:07 crc kubenswrapper[4840]: I0311 09:18:07.765550 4840 generic.go:334] "Generic (PLEG): container finished" podID="163d1a95-e48e-4781-bdde-9d48741f0213" containerID="8b2b2ee94f02e4c91965819ff19ab228b357fae41649aef16658ee14d3ffc1d9" exitCode=143 Mar 11 09:18:07 crc kubenswrapper[4840]: I0311 09:18:07.765939 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"163d1a95-e48e-4781-bdde-9d48741f0213","Type":"ContainerDied","Data":"8b2b2ee94f02e4c91965819ff19ab228b357fae41649aef16658ee14d3ffc1d9"} Mar 11 09:18:07 crc kubenswrapper[4840]: I0311 09:18:07.769403 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"56c00f10-03a9-4308-98b1-e648d962a60f","Type":"ContainerStarted","Data":"d04fdffc7603acb51ce67d4cd7e4ec8aa48d99e39dc1f5427b49e8cef88f407f"} Mar 11 09:18:07 crc kubenswrapper[4840]: I0311 09:18:07.834141 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.375239958 podStartE2EDuration="5.834117689s" podCreationTimestamp="2026-03-11 09:18:02 +0000 UTC" firstStartedPulling="2026-03-11 09:18:03.710687006 +0000 UTC m=+1282.376356821" lastFinishedPulling="2026-03-11 09:18:05.169564737 +0000 UTC m=+1283.835234552" observedRunningTime="2026-03-11 09:18:07.809050108 +0000 UTC m=+1286.474719923" watchObservedRunningTime="2026-03-11 09:18:07.834117689 +0000 UTC m=+1286.499787504" Mar 11 09:18:08 crc 
kubenswrapper[4840]: I0311 09:18:08.006076 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-656d6c4465-srbrx"] Mar 11 09:18:08 crc kubenswrapper[4840]: I0311 09:18:08.006406 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-656d6c4465-srbrx" podUID="9c84bd84-b44d-4d4a-a6ee-0b85e349bc69" containerName="neutron-api" containerID="cri-o://058efdd9153e199b2781bb56a2799093fa38fbf8d86f0fa7912fbc7f671e203f" gracePeriod=30 Mar 11 09:18:08 crc kubenswrapper[4840]: I0311 09:18:08.006622 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-656d6c4465-srbrx" podUID="9c84bd84-b44d-4d4a-a6ee-0b85e349bc69" containerName="neutron-httpd" containerID="cri-o://820aa259a8b7e35284681d77e3f29d049e162d8ecbf5d0f603d802b8267103a7" gracePeriod=30 Mar 11 09:18:08 crc kubenswrapper[4840]: I0311 09:18:08.031718 4840 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-656d6c4465-srbrx" podUID="9c84bd84-b44d-4d4a-a6ee-0b85e349bc69" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.160:9696/\": EOF" Mar 11 09:18:08 crc kubenswrapper[4840]: I0311 09:18:08.044517 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-58769b4545-5q2fv"] Mar 11 09:18:08 crc kubenswrapper[4840]: E0311 09:18:08.045198 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f05f155b-14bd-45d4-9d0d-dd92fd7a3eb0" containerName="oc" Mar 11 09:18:08 crc kubenswrapper[4840]: I0311 09:18:08.045272 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="f05f155b-14bd-45d4-9d0d-dd92fd7a3eb0" containerName="oc" Mar 11 09:18:08 crc kubenswrapper[4840]: E0311 09:18:08.045333 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fd9aab6-5204-4f1d-9c99-36b158699e16" containerName="dnsmasq-dns" Mar 11 09:18:08 crc kubenswrapper[4840]: I0311 09:18:08.045388 4840 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="1fd9aab6-5204-4f1d-9c99-36b158699e16" containerName="dnsmasq-dns" Mar 11 09:18:08 crc kubenswrapper[4840]: E0311 09:18:08.045443 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fd9aab6-5204-4f1d-9c99-36b158699e16" containerName="init" Mar 11 09:18:08 crc kubenswrapper[4840]: I0311 09:18:08.047565 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fd9aab6-5204-4f1d-9c99-36b158699e16" containerName="init" Mar 11 09:18:08 crc kubenswrapper[4840]: I0311 09:18:08.047966 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="1fd9aab6-5204-4f1d-9c99-36b158699e16" containerName="dnsmasq-dns" Mar 11 09:18:08 crc kubenswrapper[4840]: I0311 09:18:08.048104 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="f05f155b-14bd-45d4-9d0d-dd92fd7a3eb0" containerName="oc" Mar 11 09:18:08 crc kubenswrapper[4840]: I0311 09:18:08.049399 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-58769b4545-5q2fv" Mar 11 09:18:08 crc kubenswrapper[4840]: I0311 09:18:08.102065 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1cef34d8-a6f9-4ee3-b869-686486675141" path="/var/lib/kubelet/pods/1cef34d8-a6f9-4ee3-b869-686486675141/volumes" Mar 11 09:18:08 crc kubenswrapper[4840]: I0311 09:18:08.108223 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-58769b4545-5q2fv"] Mar 11 09:18:08 crc kubenswrapper[4840]: I0311 09:18:08.113033 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Mar 11 09:18:08 crc kubenswrapper[4840]: I0311 09:18:08.233673 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/550bab70-eacb-4c56-98fd-460c20f22dcc-config\") pod \"neutron-58769b4545-5q2fv\" (UID: \"550bab70-eacb-4c56-98fd-460c20f22dcc\") " pod="openstack/neutron-58769b4545-5q2fv" Mar 
11 09:18:08 crc kubenswrapper[4840]: I0311 09:18:08.234126 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/550bab70-eacb-4c56-98fd-460c20f22dcc-combined-ca-bundle\") pod \"neutron-58769b4545-5q2fv\" (UID: \"550bab70-eacb-4c56-98fd-460c20f22dcc\") " pod="openstack/neutron-58769b4545-5q2fv" Mar 11 09:18:08 crc kubenswrapper[4840]: I0311 09:18:08.234151 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/550bab70-eacb-4c56-98fd-460c20f22dcc-public-tls-certs\") pod \"neutron-58769b4545-5q2fv\" (UID: \"550bab70-eacb-4c56-98fd-460c20f22dcc\") " pod="openstack/neutron-58769b4545-5q2fv" Mar 11 09:18:08 crc kubenswrapper[4840]: I0311 09:18:08.234169 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/550bab70-eacb-4c56-98fd-460c20f22dcc-ovndb-tls-certs\") pod \"neutron-58769b4545-5q2fv\" (UID: \"550bab70-eacb-4c56-98fd-460c20f22dcc\") " pod="openstack/neutron-58769b4545-5q2fv" Mar 11 09:18:08 crc kubenswrapper[4840]: I0311 09:18:08.234185 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m25fw\" (UniqueName: \"kubernetes.io/projected/550bab70-eacb-4c56-98fd-460c20f22dcc-kube-api-access-m25fw\") pod \"neutron-58769b4545-5q2fv\" (UID: \"550bab70-eacb-4c56-98fd-460c20f22dcc\") " pod="openstack/neutron-58769b4545-5q2fv" Mar 11 09:18:08 crc kubenswrapper[4840]: I0311 09:18:08.234240 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/550bab70-eacb-4c56-98fd-460c20f22dcc-httpd-config\") pod \"neutron-58769b4545-5q2fv\" (UID: \"550bab70-eacb-4c56-98fd-460c20f22dcc\") " 
pod="openstack/neutron-58769b4545-5q2fv" Mar 11 09:18:08 crc kubenswrapper[4840]: I0311 09:18:08.234325 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/550bab70-eacb-4c56-98fd-460c20f22dcc-internal-tls-certs\") pod \"neutron-58769b4545-5q2fv\" (UID: \"550bab70-eacb-4c56-98fd-460c20f22dcc\") " pod="openstack/neutron-58769b4545-5q2fv" Mar 11 09:18:08 crc kubenswrapper[4840]: I0311 09:18:08.336055 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/550bab70-eacb-4c56-98fd-460c20f22dcc-config\") pod \"neutron-58769b4545-5q2fv\" (UID: \"550bab70-eacb-4c56-98fd-460c20f22dcc\") " pod="openstack/neutron-58769b4545-5q2fv" Mar 11 09:18:08 crc kubenswrapper[4840]: I0311 09:18:08.336158 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/550bab70-eacb-4c56-98fd-460c20f22dcc-combined-ca-bundle\") pod \"neutron-58769b4545-5q2fv\" (UID: \"550bab70-eacb-4c56-98fd-460c20f22dcc\") " pod="openstack/neutron-58769b4545-5q2fv" Mar 11 09:18:08 crc kubenswrapper[4840]: I0311 09:18:08.336185 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/550bab70-eacb-4c56-98fd-460c20f22dcc-public-tls-certs\") pod \"neutron-58769b4545-5q2fv\" (UID: \"550bab70-eacb-4c56-98fd-460c20f22dcc\") " pod="openstack/neutron-58769b4545-5q2fv" Mar 11 09:18:08 crc kubenswrapper[4840]: I0311 09:18:08.336205 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/550bab70-eacb-4c56-98fd-460c20f22dcc-ovndb-tls-certs\") pod \"neutron-58769b4545-5q2fv\" (UID: \"550bab70-eacb-4c56-98fd-460c20f22dcc\") " pod="openstack/neutron-58769b4545-5q2fv" Mar 11 09:18:08 crc kubenswrapper[4840]: 
I0311 09:18:08.336223 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m25fw\" (UniqueName: \"kubernetes.io/projected/550bab70-eacb-4c56-98fd-460c20f22dcc-kube-api-access-m25fw\") pod \"neutron-58769b4545-5q2fv\" (UID: \"550bab70-eacb-4c56-98fd-460c20f22dcc\") " pod="openstack/neutron-58769b4545-5q2fv" Mar 11 09:18:08 crc kubenswrapper[4840]: I0311 09:18:08.336271 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/550bab70-eacb-4c56-98fd-460c20f22dcc-httpd-config\") pod \"neutron-58769b4545-5q2fv\" (UID: \"550bab70-eacb-4c56-98fd-460c20f22dcc\") " pod="openstack/neutron-58769b4545-5q2fv" Mar 11 09:18:08 crc kubenswrapper[4840]: I0311 09:18:08.336357 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/550bab70-eacb-4c56-98fd-460c20f22dcc-internal-tls-certs\") pod \"neutron-58769b4545-5q2fv\" (UID: \"550bab70-eacb-4c56-98fd-460c20f22dcc\") " pod="openstack/neutron-58769b4545-5q2fv" Mar 11 09:18:08 crc kubenswrapper[4840]: I0311 09:18:08.345418 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/550bab70-eacb-4c56-98fd-460c20f22dcc-httpd-config\") pod \"neutron-58769b4545-5q2fv\" (UID: \"550bab70-eacb-4c56-98fd-460c20f22dcc\") " pod="openstack/neutron-58769b4545-5q2fv" Mar 11 09:18:08 crc kubenswrapper[4840]: I0311 09:18:08.350106 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/550bab70-eacb-4c56-98fd-460c20f22dcc-combined-ca-bundle\") pod \"neutron-58769b4545-5q2fv\" (UID: \"550bab70-eacb-4c56-98fd-460c20f22dcc\") " pod="openstack/neutron-58769b4545-5q2fv" Mar 11 09:18:08 crc kubenswrapper[4840]: I0311 09:18:08.355742 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/550bab70-eacb-4c56-98fd-460c20f22dcc-public-tls-certs\") pod \"neutron-58769b4545-5q2fv\" (UID: \"550bab70-eacb-4c56-98fd-460c20f22dcc\") " pod="openstack/neutron-58769b4545-5q2fv" Mar 11 09:18:08 crc kubenswrapper[4840]: I0311 09:18:08.357861 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/550bab70-eacb-4c56-98fd-460c20f22dcc-ovndb-tls-certs\") pod \"neutron-58769b4545-5q2fv\" (UID: \"550bab70-eacb-4c56-98fd-460c20f22dcc\") " pod="openstack/neutron-58769b4545-5q2fv" Mar 11 09:18:08 crc kubenswrapper[4840]: I0311 09:18:08.373646 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/550bab70-eacb-4c56-98fd-460c20f22dcc-config\") pod \"neutron-58769b4545-5q2fv\" (UID: \"550bab70-eacb-4c56-98fd-460c20f22dcc\") " pod="openstack/neutron-58769b4545-5q2fv" Mar 11 09:18:08 crc kubenswrapper[4840]: I0311 09:18:08.379457 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/550bab70-eacb-4c56-98fd-460c20f22dcc-internal-tls-certs\") pod \"neutron-58769b4545-5q2fv\" (UID: \"550bab70-eacb-4c56-98fd-460c20f22dcc\") " pod="openstack/neutron-58769b4545-5q2fv" Mar 11 09:18:08 crc kubenswrapper[4840]: I0311 09:18:08.385539 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m25fw\" (UniqueName: \"kubernetes.io/projected/550bab70-eacb-4c56-98fd-460c20f22dcc-kube-api-access-m25fw\") pod \"neutron-58769b4545-5q2fv\" (UID: \"550bab70-eacb-4c56-98fd-460c20f22dcc\") " pod="openstack/neutron-58769b4545-5q2fv" Mar 11 09:18:08 crc kubenswrapper[4840]: I0311 09:18:08.401484 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-58769b4545-5q2fv" Mar 11 09:18:08 crc kubenswrapper[4840]: I0311 09:18:08.814630 4840 generic.go:334] "Generic (PLEG): container finished" podID="9c84bd84-b44d-4d4a-a6ee-0b85e349bc69" containerID="820aa259a8b7e35284681d77e3f29d049e162d8ecbf5d0f603d802b8267103a7" exitCode=0 Mar 11 09:18:08 crc kubenswrapper[4840]: I0311 09:18:08.815581 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-656d6c4465-srbrx" event={"ID":"9c84bd84-b44d-4d4a-a6ee-0b85e349bc69","Type":"ContainerDied","Data":"820aa259a8b7e35284681d77e3f29d049e162d8ecbf5d0f603d802b8267103a7"} Mar 11 09:18:09 crc kubenswrapper[4840]: I0311 09:18:09.136882 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7dcc97bc9b-l7z2g" Mar 11 09:18:09 crc kubenswrapper[4840]: I0311 09:18:09.208634 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-58769b4545-5q2fv"] Mar 11 09:18:09 crc kubenswrapper[4840]: W0311 09:18:09.214372 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod550bab70_eacb_4c56_98fd_460c20f22dcc.slice/crio-df9306f884966228ea14fe38bf8378c7b4461529db12ece11c70d9a1214b3365 WatchSource:0}: Error finding container df9306f884966228ea14fe38bf8378c7b4461529db12ece11c70d9a1214b3365: Status 404 returned error can't find the container with id df9306f884966228ea14fe38bf8378c7b4461529db12ece11c70d9a1214b3365 Mar 11 09:18:09 crc kubenswrapper[4840]: I0311 09:18:09.401956 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7dcc97bc9b-l7z2g" Mar 11 09:18:09 crc kubenswrapper[4840]: I0311 09:18:09.490216 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-774b7849d6-lz2xl"] Mar 11 09:18:09 crc kubenswrapper[4840]: I0311 09:18:09.490504 4840 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/barbican-api-774b7849d6-lz2xl" podUID="5b47f430-5dea-4714-bf85-8d6a7d6d8388" containerName="barbican-api-log" containerID="cri-o://08df7213186b32f6a70402ce4911cf6e5519b68d4e6d4d7512dd363ce46eaea7" gracePeriod=30 Mar 11 09:18:09 crc kubenswrapper[4840]: I0311 09:18:09.490763 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-774b7849d6-lz2xl" podUID="5b47f430-5dea-4714-bf85-8d6a7d6d8388" containerName="barbican-api" containerID="cri-o://541f407730550efa0d1bb50aafd0a280133040f1dec1250d8312f5614a64fc5d" gracePeriod=30 Mar 11 09:18:09 crc kubenswrapper[4840]: I0311 09:18:09.837577 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-58769b4545-5q2fv" event={"ID":"550bab70-eacb-4c56-98fd-460c20f22dcc","Type":"ContainerStarted","Data":"990342c797ffb42187c174e55be3c0f82e87b7deb7f1882a35e7f82eeaf9dc84"} Mar 11 09:18:09 crc kubenswrapper[4840]: I0311 09:18:09.837941 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-58769b4545-5q2fv" event={"ID":"550bab70-eacb-4c56-98fd-460c20f22dcc","Type":"ContainerStarted","Data":"df9306f884966228ea14fe38bf8378c7b4461529db12ece11c70d9a1214b3365"} Mar 11 09:18:09 crc kubenswrapper[4840]: I0311 09:18:09.845681 4840 generic.go:334] "Generic (PLEG): container finished" podID="5b47f430-5dea-4714-bf85-8d6a7d6d8388" containerID="08df7213186b32f6a70402ce4911cf6e5519b68d4e6d4d7512dd363ce46eaea7" exitCode=143 Mar 11 09:18:09 crc kubenswrapper[4840]: I0311 09:18:09.845772 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-774b7849d6-lz2xl" event={"ID":"5b47f430-5dea-4714-bf85-8d6a7d6d8388","Type":"ContainerDied","Data":"08df7213186b32f6a70402ce4911cf6e5519b68d4e6d4d7512dd363ce46eaea7"} Mar 11 09:18:09 crc kubenswrapper[4840]: I0311 09:18:09.856621 4840 generic.go:334] "Generic (PLEG): container finished" podID="9c84bd84-b44d-4d4a-a6ee-0b85e349bc69" 
containerID="058efdd9153e199b2781bb56a2799093fa38fbf8d86f0fa7912fbc7f671e203f" exitCode=0 Mar 11 09:18:09 crc kubenswrapper[4840]: I0311 09:18:09.857213 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-656d6c4465-srbrx" event={"ID":"9c84bd84-b44d-4d4a-a6ee-0b85e349bc69","Type":"ContainerDied","Data":"058efdd9153e199b2781bb56a2799093fa38fbf8d86f0fa7912fbc7f671e203f"} Mar 11 09:18:09 crc kubenswrapper[4840]: I0311 09:18:09.857288 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-656d6c4465-srbrx" event={"ID":"9c84bd84-b44d-4d4a-a6ee-0b85e349bc69","Type":"ContainerDied","Data":"ba33e48969beb305fce56209396ea4df724febced0cc98ad88ab4387b68e228f"} Mar 11 09:18:09 crc kubenswrapper[4840]: I0311 09:18:09.857311 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ba33e48969beb305fce56209396ea4df724febced0cc98ad88ab4387b68e228f" Mar 11 09:18:09 crc kubenswrapper[4840]: I0311 09:18:09.882066 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-656d6c4465-srbrx" Mar 11 09:18:09 crc kubenswrapper[4840]: I0311 09:18:09.988044 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9dtdx\" (UniqueName: \"kubernetes.io/projected/9c84bd84-b44d-4d4a-a6ee-0b85e349bc69-kube-api-access-9dtdx\") pod \"9c84bd84-b44d-4d4a-a6ee-0b85e349bc69\" (UID: \"9c84bd84-b44d-4d4a-a6ee-0b85e349bc69\") " Mar 11 09:18:09 crc kubenswrapper[4840]: I0311 09:18:09.988108 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9c84bd84-b44d-4d4a-a6ee-0b85e349bc69-config\") pod \"9c84bd84-b44d-4d4a-a6ee-0b85e349bc69\" (UID: \"9c84bd84-b44d-4d4a-a6ee-0b85e349bc69\") " Mar 11 09:18:09 crc kubenswrapper[4840]: I0311 09:18:09.988164 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c84bd84-b44d-4d4a-a6ee-0b85e349bc69-ovndb-tls-certs\") pod \"9c84bd84-b44d-4d4a-a6ee-0b85e349bc69\" (UID: \"9c84bd84-b44d-4d4a-a6ee-0b85e349bc69\") " Mar 11 09:18:09 crc kubenswrapper[4840]: I0311 09:18:09.988488 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c84bd84-b44d-4d4a-a6ee-0b85e349bc69-combined-ca-bundle\") pod \"9c84bd84-b44d-4d4a-a6ee-0b85e349bc69\" (UID: \"9c84bd84-b44d-4d4a-a6ee-0b85e349bc69\") " Mar 11 09:18:09 crc kubenswrapper[4840]: I0311 09:18:09.988573 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c84bd84-b44d-4d4a-a6ee-0b85e349bc69-public-tls-certs\") pod \"9c84bd84-b44d-4d4a-a6ee-0b85e349bc69\" (UID: \"9c84bd84-b44d-4d4a-a6ee-0b85e349bc69\") " Mar 11 09:18:09 crc kubenswrapper[4840]: I0311 09:18:09.988617 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" 
(UniqueName: \"kubernetes.io/secret/9c84bd84-b44d-4d4a-a6ee-0b85e349bc69-httpd-config\") pod \"9c84bd84-b44d-4d4a-a6ee-0b85e349bc69\" (UID: \"9c84bd84-b44d-4d4a-a6ee-0b85e349bc69\") " Mar 11 09:18:09 crc kubenswrapper[4840]: I0311 09:18:09.988682 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c84bd84-b44d-4d4a-a6ee-0b85e349bc69-internal-tls-certs\") pod \"9c84bd84-b44d-4d4a-a6ee-0b85e349bc69\" (UID: \"9c84bd84-b44d-4d4a-a6ee-0b85e349bc69\") " Mar 11 09:18:10 crc kubenswrapper[4840]: I0311 09:18:10.000634 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c84bd84-b44d-4d4a-a6ee-0b85e349bc69-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "9c84bd84-b44d-4d4a-a6ee-0b85e349bc69" (UID: "9c84bd84-b44d-4d4a-a6ee-0b85e349bc69"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:18:10 crc kubenswrapper[4840]: I0311 09:18:10.005677 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c84bd84-b44d-4d4a-a6ee-0b85e349bc69-kube-api-access-9dtdx" (OuterVolumeSpecName: "kube-api-access-9dtdx") pod "9c84bd84-b44d-4d4a-a6ee-0b85e349bc69" (UID: "9c84bd84-b44d-4d4a-a6ee-0b85e349bc69"). InnerVolumeSpecName "kube-api-access-9dtdx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:18:10 crc kubenswrapper[4840]: I0311 09:18:10.094672 4840 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/9c84bd84-b44d-4d4a-a6ee-0b85e349bc69-httpd-config\") on node \"crc\" DevicePath \"\"" Mar 11 09:18:10 crc kubenswrapper[4840]: I0311 09:18:10.094739 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9dtdx\" (UniqueName: \"kubernetes.io/projected/9c84bd84-b44d-4d4a-a6ee-0b85e349bc69-kube-api-access-9dtdx\") on node \"crc\" DevicePath \"\"" Mar 11 09:18:10 crc kubenswrapper[4840]: I0311 09:18:10.107238 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c84bd84-b44d-4d4a-a6ee-0b85e349bc69-config" (OuterVolumeSpecName: "config") pod "9c84bd84-b44d-4d4a-a6ee-0b85e349bc69" (UID: "9c84bd84-b44d-4d4a-a6ee-0b85e349bc69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:18:10 crc kubenswrapper[4840]: I0311 09:18:10.117037 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c84bd84-b44d-4d4a-a6ee-0b85e349bc69-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "9c84bd84-b44d-4d4a-a6ee-0b85e349bc69" (UID: "9c84bd84-b44d-4d4a-a6ee-0b85e349bc69"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:18:10 crc kubenswrapper[4840]: I0311 09:18:10.149977 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c84bd84-b44d-4d4a-a6ee-0b85e349bc69-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "9c84bd84-b44d-4d4a-a6ee-0b85e349bc69" (UID: "9c84bd84-b44d-4d4a-a6ee-0b85e349bc69"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 11 09:18:10 crc kubenswrapper[4840]: I0311 09:18:10.155629 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c84bd84-b44d-4d4a-a6ee-0b85e349bc69-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9c84bd84-b44d-4d4a-a6ee-0b85e349bc69" (UID: "9c84bd84-b44d-4d4a-a6ee-0b85e349bc69"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 11 09:18:10 crc kubenswrapper[4840]: I0311 09:18:10.159607 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c84bd84-b44d-4d4a-a6ee-0b85e349bc69-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "9c84bd84-b44d-4d4a-a6ee-0b85e349bc69" (UID: "9c84bd84-b44d-4d4a-a6ee-0b85e349bc69"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 11 09:18:10 crc kubenswrapper[4840]: I0311 09:18:10.195617 4840 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c84bd84-b44d-4d4a-a6ee-0b85e349bc69-public-tls-certs\") on node \"crc\" DevicePath \"\""
Mar 11 09:18:10 crc kubenswrapper[4840]: I0311 09:18:10.195663 4840 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c84bd84-b44d-4d4a-a6ee-0b85e349bc69-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Mar 11 09:18:10 crc kubenswrapper[4840]: I0311 09:18:10.195676 4840 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/9c84bd84-b44d-4d4a-a6ee-0b85e349bc69-config\") on node \"crc\" DevicePath \"\""
Mar 11 09:18:10 crc kubenswrapper[4840]: I0311 09:18:10.195691 4840 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c84bd84-b44d-4d4a-a6ee-0b85e349bc69-ovndb-tls-certs\") on node \"crc\" DevicePath \"\""
Mar 11 09:18:10 crc kubenswrapper[4840]: I0311 09:18:10.195701 4840 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c84bd84-b44d-4d4a-a6ee-0b85e349bc69-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 11 09:18:10 crc kubenswrapper[4840]: I0311 09:18:10.868935 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-656d6c4465-srbrx"
Mar 11 09:18:10 crc kubenswrapper[4840]: I0311 09:18:10.870318 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-58769b4545-5q2fv" event={"ID":"550bab70-eacb-4c56-98fd-460c20f22dcc","Type":"ContainerStarted","Data":"de14b2ab95a92c30b8e51cfa0b41eeb1219f8858689935e5b0e0d5e56fbb4fc9"}
Mar 11 09:18:10 crc kubenswrapper[4840]: I0311 09:18:10.870701 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-58769b4545-5q2fv"
Mar 11 09:18:10 crc kubenswrapper[4840]: I0311 09:18:10.905861 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-58769b4545-5q2fv" podStartSLOduration=2.9058385810000003 podStartE2EDuration="2.905838581s" podCreationTimestamp="2026-03-11 09:18:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:18:10.903640686 +0000 UTC m=+1289.569310501" watchObservedRunningTime="2026-03-11 09:18:10.905838581 +0000 UTC m=+1289.571508396"
Mar 11 09:18:10 crc kubenswrapper[4840]: I0311 09:18:10.961793 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-656d6c4465-srbrx"]
Mar 11 09:18:10 crc kubenswrapper[4840]: I0311 09:18:10.973152 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-656d6c4465-srbrx"]
Mar 11 09:18:11 crc kubenswrapper[4840]: I0311 09:18:11.712966 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-75cbbc8dcd-sc25j"
Mar 11 09:18:11 crc kubenswrapper[4840]: I0311 09:18:11.726859 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-75cbbc8dcd-sc25j"
Mar 11 09:18:12 crc kubenswrapper[4840]: I0311 09:18:12.078554 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c84bd84-b44d-4d4a-a6ee-0b85e349bc69" path="/var/lib/kubelet/pods/9c84bd84-b44d-4d4a-a6ee-0b85e349bc69/volumes"
Mar 11 09:18:13 crc kubenswrapper[4840]: I0311 09:18:13.236890 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7b8fcc65cc-psbl7"
Mar 11 09:18:13 crc kubenswrapper[4840]: I0311 09:18:13.306786 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7859c7799c-2ksc5"]
Mar 11 09:18:13 crc kubenswrapper[4840]: I0311 09:18:13.307090 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7859c7799c-2ksc5" podUID="dfe1c60c-ed7c-4a0a-a85e-82261146409d" containerName="dnsmasq-dns" containerID="cri-o://79e1c66334965e63af538892b3410f5e2ff8b5aabb04319e2e2c1653a24b1496" gracePeriod=10
Mar 11 09:18:13 crc kubenswrapper[4840]: I0311 09:18:13.334210 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-bcbd67b5c-tnrsf"
Mar 11 09:18:13 crc kubenswrapper[4840]: I0311 09:18:13.493022 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0"
Mar 11 09:18:13 crc kubenswrapper[4840]: I0311 09:18:13.543138 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Mar 11 09:18:13 crc kubenswrapper[4840]: I0311 09:18:13.596296 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5bc9bbdff8-zczrp"
Mar 11 09:18:13 crc kubenswrapper[4840]: I0311 09:18:13.636425 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5bc9bbdff8-zczrp"
Mar 11 09:18:13 crc kubenswrapper[4840]: I0311 09:18:13.656130 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-774b7849d6-lz2xl"
Mar 11 09:18:13 crc kubenswrapper[4840]: I0311 09:18:13.706176 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kmjmg\" (UniqueName: \"kubernetes.io/projected/5b47f430-5dea-4714-bf85-8d6a7d6d8388-kube-api-access-kmjmg\") pod \"5b47f430-5dea-4714-bf85-8d6a7d6d8388\" (UID: \"5b47f430-5dea-4714-bf85-8d6a7d6d8388\") "
Mar 11 09:18:13 crc kubenswrapper[4840]: I0311 09:18:13.706299 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5b47f430-5dea-4714-bf85-8d6a7d6d8388-config-data-custom\") pod \"5b47f430-5dea-4714-bf85-8d6a7d6d8388\" (UID: \"5b47f430-5dea-4714-bf85-8d6a7d6d8388\") "
Mar 11 09:18:13 crc kubenswrapper[4840]: I0311 09:18:13.706388 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b47f430-5dea-4714-bf85-8d6a7d6d8388-combined-ca-bundle\") pod \"5b47f430-5dea-4714-bf85-8d6a7d6d8388\" (UID: \"5b47f430-5dea-4714-bf85-8d6a7d6d8388\") "
Mar 11 09:18:13 crc kubenswrapper[4840]: I0311 09:18:13.706421 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b47f430-5dea-4714-bf85-8d6a7d6d8388-config-data\") pod \"5b47f430-5dea-4714-bf85-8d6a7d6d8388\" (UID: \"5b47f430-5dea-4714-bf85-8d6a7d6d8388\") "
Mar 11 09:18:13 crc kubenswrapper[4840]: I0311 09:18:13.706444 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5b47f430-5dea-4714-bf85-8d6a7d6d8388-logs\") pod \"5b47f430-5dea-4714-bf85-8d6a7d6d8388\" (UID: \"5b47f430-5dea-4714-bf85-8d6a7d6d8388\") "
Mar 11 09:18:13 crc kubenswrapper[4840]: I0311 09:18:13.708713 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-75cbbc8dcd-sc25j"]
Mar 11 09:18:13 crc kubenswrapper[4840]: I0311 09:18:13.708980 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-75cbbc8dcd-sc25j" podUID="fccf048e-d3b5-4e8d-a940-ec306fe071a0" containerName="placement-log" containerID="cri-o://7fcf04b257aa6ca622ecd00debacd364f42e749d25c27411e1e49c6c6d470dfe" gracePeriod=30
Mar 11 09:18:13 crc kubenswrapper[4840]: I0311 09:18:13.709407 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-75cbbc8dcd-sc25j" podUID="fccf048e-d3b5-4e8d-a940-ec306fe071a0" containerName="placement-api" containerID="cri-o://191fc6b9a70ef6981941d87ed503a96b3604cf4bb58e3fb543512122c2c8cfe6" gracePeriod=30
Mar 11 09:18:13 crc kubenswrapper[4840]: I0311 09:18:13.712927 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b47f430-5dea-4714-bf85-8d6a7d6d8388-logs" (OuterVolumeSpecName: "logs") pod "5b47f430-5dea-4714-bf85-8d6a7d6d8388" (UID: "5b47f430-5dea-4714-bf85-8d6a7d6d8388"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 11 09:18:13 crc kubenswrapper[4840]: I0311 09:18:13.734641 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b47f430-5dea-4714-bf85-8d6a7d6d8388-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "5b47f430-5dea-4714-bf85-8d6a7d6d8388" (UID: "5b47f430-5dea-4714-bf85-8d6a7d6d8388"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 11 09:18:13 crc kubenswrapper[4840]: I0311 09:18:13.739379 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b47f430-5dea-4714-bf85-8d6a7d6d8388-kube-api-access-kmjmg" (OuterVolumeSpecName: "kube-api-access-kmjmg") pod "5b47f430-5dea-4714-bf85-8d6a7d6d8388" (UID: "5b47f430-5dea-4714-bf85-8d6a7d6d8388"). InnerVolumeSpecName "kube-api-access-kmjmg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 11 09:18:13 crc kubenswrapper[4840]: I0311 09:18:13.766674 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b47f430-5dea-4714-bf85-8d6a7d6d8388-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5b47f430-5dea-4714-bf85-8d6a7d6d8388" (UID: "5b47f430-5dea-4714-bf85-8d6a7d6d8388"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 11 09:18:13 crc kubenswrapper[4840]: I0311 09:18:13.798615 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b47f430-5dea-4714-bf85-8d6a7d6d8388-config-data" (OuterVolumeSpecName: "config-data") pod "5b47f430-5dea-4714-bf85-8d6a7d6d8388" (UID: "5b47f430-5dea-4714-bf85-8d6a7d6d8388"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 11 09:18:13 crc kubenswrapper[4840]: I0311 09:18:13.824042 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kmjmg\" (UniqueName: \"kubernetes.io/projected/5b47f430-5dea-4714-bf85-8d6a7d6d8388-kube-api-access-kmjmg\") on node \"crc\" DevicePath \"\""
Mar 11 09:18:13 crc kubenswrapper[4840]: I0311 09:18:13.824351 4840 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5b47f430-5dea-4714-bf85-8d6a7d6d8388-config-data-custom\") on node \"crc\" DevicePath \"\""
Mar 11 09:18:13 crc kubenswrapper[4840]: I0311 09:18:13.824413 4840 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b47f430-5dea-4714-bf85-8d6a7d6d8388-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 11 09:18:13 crc kubenswrapper[4840]: I0311 09:18:13.824503 4840 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b47f430-5dea-4714-bf85-8d6a7d6d8388-config-data\") on node \"crc\" DevicePath \"\""
Mar 11 09:18:13 crc kubenswrapper[4840]: I0311 09:18:13.824566 4840 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5b47f430-5dea-4714-bf85-8d6a7d6d8388-logs\") on node \"crc\" DevicePath \"\""
Mar 11 09:18:13 crc kubenswrapper[4840]: I0311 09:18:13.942487 4840 generic.go:334] "Generic (PLEG): container finished" podID="dfe1c60c-ed7c-4a0a-a85e-82261146409d" containerID="79e1c66334965e63af538892b3410f5e2ff8b5aabb04319e2e2c1653a24b1496" exitCode=0
Mar 11 09:18:13 crc kubenswrapper[4840]: I0311 09:18:13.942881 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7859c7799c-2ksc5" event={"ID":"dfe1c60c-ed7c-4a0a-a85e-82261146409d","Type":"ContainerDied","Data":"79e1c66334965e63af538892b3410f5e2ff8b5aabb04319e2e2c1653a24b1496"}
Mar 11 09:18:13 crc kubenswrapper[4840]: I0311 09:18:13.945248 4840 generic.go:334] "Generic (PLEG): container finished" podID="5b47f430-5dea-4714-bf85-8d6a7d6d8388" containerID="541f407730550efa0d1bb50aafd0a280133040f1dec1250d8312f5614a64fc5d" exitCode=0
Mar 11 09:18:13 crc kubenswrapper[4840]: I0311 09:18:13.945306 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-774b7849d6-lz2xl" event={"ID":"5b47f430-5dea-4714-bf85-8d6a7d6d8388","Type":"ContainerDied","Data":"541f407730550efa0d1bb50aafd0a280133040f1dec1250d8312f5614a64fc5d"}
Mar 11 09:18:13 crc kubenswrapper[4840]: I0311 09:18:13.945335 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-774b7849d6-lz2xl" event={"ID":"5b47f430-5dea-4714-bf85-8d6a7d6d8388","Type":"ContainerDied","Data":"a2c6cb9cbf16b21baf6834ae16f3852fb3159bbc951acda1768b074516a50682"}
Mar 11 09:18:13 crc kubenswrapper[4840]: I0311 09:18:13.945355 4840 scope.go:117] "RemoveContainer" containerID="541f407730550efa0d1bb50aafd0a280133040f1dec1250d8312f5614a64fc5d"
Mar 11 09:18:13 crc kubenswrapper[4840]: I0311 09:18:13.945546 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-774b7849d6-lz2xl"
Mar 11 09:18:13 crc kubenswrapper[4840]: I0311 09:18:13.956989 4840 generic.go:334] "Generic (PLEG): container finished" podID="fccf048e-d3b5-4e8d-a940-ec306fe071a0" containerID="7fcf04b257aa6ca622ecd00debacd364f42e749d25c27411e1e49c6c6d470dfe" exitCode=143
Mar 11 09:18:13 crc kubenswrapper[4840]: I0311 09:18:13.959242 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-75cbbc8dcd-sc25j" event={"ID":"fccf048e-d3b5-4e8d-a940-ec306fe071a0","Type":"ContainerDied","Data":"7fcf04b257aa6ca622ecd00debacd364f42e749d25c27411e1e49c6c6d470dfe"}
Mar 11 09:18:13 crc kubenswrapper[4840]: I0311 09:18:13.959490 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="56c00f10-03a9-4308-98b1-e648d962a60f" containerName="cinder-scheduler" containerID="cri-o://aada553c7a57328e9ad3773280281a27544885d1e9c1300be8b3fe00dcf8d3e6" gracePeriod=30
Mar 11 09:18:13 crc kubenswrapper[4840]: I0311 09:18:13.959680 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="56c00f10-03a9-4308-98b1-e648d962a60f" containerName="probe" containerID="cri-o://d04fdffc7603acb51ce67d4cd7e4ec8aa48d99e39dc1f5427b49e8cef88f407f" gracePeriod=30
Mar 11 09:18:13 crc kubenswrapper[4840]: I0311 09:18:13.982215 4840 scope.go:117] "RemoveContainer" containerID="08df7213186b32f6a70402ce4911cf6e5519b68d4e6d4d7512dd363ce46eaea7"
Mar 11 09:18:14 crc kubenswrapper[4840]: I0311 09:18:14.014117 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-774b7849d6-lz2xl"]
Mar 11 09:18:14 crc kubenswrapper[4840]: I0311 09:18:14.016646 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-774b7849d6-lz2xl"]
Mar 11 09:18:14 crc kubenswrapper[4840]: I0311 09:18:14.024704 4840 scope.go:117] "RemoveContainer" containerID="541f407730550efa0d1bb50aafd0a280133040f1dec1250d8312f5614a64fc5d"
Mar 11 09:18:14 crc kubenswrapper[4840]: E0311 09:18:14.038721 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"541f407730550efa0d1bb50aafd0a280133040f1dec1250d8312f5614a64fc5d\": container with ID starting with 541f407730550efa0d1bb50aafd0a280133040f1dec1250d8312f5614a64fc5d not found: ID does not exist" containerID="541f407730550efa0d1bb50aafd0a280133040f1dec1250d8312f5614a64fc5d"
Mar 11 09:18:14 crc kubenswrapper[4840]: I0311 09:18:14.038786 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"541f407730550efa0d1bb50aafd0a280133040f1dec1250d8312f5614a64fc5d"} err="failed to get container status \"541f407730550efa0d1bb50aafd0a280133040f1dec1250d8312f5614a64fc5d\": rpc error: code = NotFound desc = could not find container \"541f407730550efa0d1bb50aafd0a280133040f1dec1250d8312f5614a64fc5d\": container with ID starting with 541f407730550efa0d1bb50aafd0a280133040f1dec1250d8312f5614a64fc5d not found: ID does not exist"
Mar 11 09:18:14 crc kubenswrapper[4840]: I0311 09:18:14.038821 4840 scope.go:117] "RemoveContainer" containerID="08df7213186b32f6a70402ce4911cf6e5519b68d4e6d4d7512dd363ce46eaea7"
Mar 11 09:18:14 crc kubenswrapper[4840]: E0311 09:18:14.042070 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08df7213186b32f6a70402ce4911cf6e5519b68d4e6d4d7512dd363ce46eaea7\": container with ID starting with 08df7213186b32f6a70402ce4911cf6e5519b68d4e6d4d7512dd363ce46eaea7 not found: ID does not exist" containerID="08df7213186b32f6a70402ce4911cf6e5519b68d4e6d4d7512dd363ce46eaea7"
Mar 11 09:18:14 crc kubenswrapper[4840]: I0311 09:18:14.042168 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08df7213186b32f6a70402ce4911cf6e5519b68d4e6d4d7512dd363ce46eaea7"} err="failed to get container status \"08df7213186b32f6a70402ce4911cf6e5519b68d4e6d4d7512dd363ce46eaea7\": rpc error: code = NotFound desc = could not find container \"08df7213186b32f6a70402ce4911cf6e5519b68d4e6d4d7512dd363ce46eaea7\": container with ID starting with 08df7213186b32f6a70402ce4911cf6e5519b68d4e6d4d7512dd363ce46eaea7 not found: ID does not exist"
Mar 11 09:18:14 crc kubenswrapper[4840]: I0311 09:18:14.083426 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b47f430-5dea-4714-bf85-8d6a7d6d8388" path="/var/lib/kubelet/pods/5b47f430-5dea-4714-bf85-8d6a7d6d8388/volumes"
Mar 11 09:18:14 crc kubenswrapper[4840]: I0311 09:18:14.398606 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7859c7799c-2ksc5"
Mar 11 09:18:14 crc kubenswrapper[4840]: I0311 09:18:14.450837 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dfe1c60c-ed7c-4a0a-a85e-82261146409d-ovsdbserver-nb\") pod \"dfe1c60c-ed7c-4a0a-a85e-82261146409d\" (UID: \"dfe1c60c-ed7c-4a0a-a85e-82261146409d\") "
Mar 11 09:18:14 crc kubenswrapper[4840]: I0311 09:18:14.450938 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dfe1c60c-ed7c-4a0a-a85e-82261146409d-config\") pod \"dfe1c60c-ed7c-4a0a-a85e-82261146409d\" (UID: \"dfe1c60c-ed7c-4a0a-a85e-82261146409d\") "
Mar 11 09:18:14 crc kubenswrapper[4840]: I0311 09:18:14.451029 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fjg28\" (UniqueName: \"kubernetes.io/projected/dfe1c60c-ed7c-4a0a-a85e-82261146409d-kube-api-access-fjg28\") pod \"dfe1c60c-ed7c-4a0a-a85e-82261146409d\" (UID: \"dfe1c60c-ed7c-4a0a-a85e-82261146409d\") "
Mar 11 09:18:14 crc kubenswrapper[4840]: I0311 09:18:14.451075 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dfe1c60c-ed7c-4a0a-a85e-82261146409d-ovsdbserver-sb\") pod \"dfe1c60c-ed7c-4a0a-a85e-82261146409d\" (UID: \"dfe1c60c-ed7c-4a0a-a85e-82261146409d\") "
Mar 11 09:18:14 crc kubenswrapper[4840]: I0311 09:18:14.451165 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dfe1c60c-ed7c-4a0a-a85e-82261146409d-dns-svc\") pod \"dfe1c60c-ed7c-4a0a-a85e-82261146409d\" (UID: \"dfe1c60c-ed7c-4a0a-a85e-82261146409d\") "
Mar 11 09:18:14 crc kubenswrapper[4840]: I0311 09:18:14.451229 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dfe1c60c-ed7c-4a0a-a85e-82261146409d-dns-swift-storage-0\") pod \"dfe1c60c-ed7c-4a0a-a85e-82261146409d\" (UID: \"dfe1c60c-ed7c-4a0a-a85e-82261146409d\") "
Mar 11 09:18:14 crc kubenswrapper[4840]: I0311 09:18:14.502153 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dfe1c60c-ed7c-4a0a-a85e-82261146409d-kube-api-access-fjg28" (OuterVolumeSpecName: "kube-api-access-fjg28") pod "dfe1c60c-ed7c-4a0a-a85e-82261146409d" (UID: "dfe1c60c-ed7c-4a0a-a85e-82261146409d"). InnerVolumeSpecName "kube-api-access-fjg28". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 11 09:18:14 crc kubenswrapper[4840]: I0311 09:18:14.545558 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dfe1c60c-ed7c-4a0a-a85e-82261146409d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "dfe1c60c-ed7c-4a0a-a85e-82261146409d" (UID: "dfe1c60c-ed7c-4a0a-a85e-82261146409d"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 11 09:18:14 crc kubenswrapper[4840]: I0311 09:18:14.552121 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dfe1c60c-ed7c-4a0a-a85e-82261146409d-config" (OuterVolumeSpecName: "config") pod "dfe1c60c-ed7c-4a0a-a85e-82261146409d" (UID: "dfe1c60c-ed7c-4a0a-a85e-82261146409d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 11 09:18:14 crc kubenswrapper[4840]: I0311 09:18:14.554052 4840 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dfe1c60c-ed7c-4a0a-a85e-82261146409d-config\") on node \"crc\" DevicePath \"\""
Mar 11 09:18:14 crc kubenswrapper[4840]: I0311 09:18:14.554085 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fjg28\" (UniqueName: \"kubernetes.io/projected/dfe1c60c-ed7c-4a0a-a85e-82261146409d-kube-api-access-fjg28\") on node \"crc\" DevicePath \"\""
Mar 11 09:18:14 crc kubenswrapper[4840]: I0311 09:18:14.554096 4840 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dfe1c60c-ed7c-4a0a-a85e-82261146409d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Mar 11 09:18:14 crc kubenswrapper[4840]: I0311 09:18:14.579429 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dfe1c60c-ed7c-4a0a-a85e-82261146409d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "dfe1c60c-ed7c-4a0a-a85e-82261146409d" (UID: "dfe1c60c-ed7c-4a0a-a85e-82261146409d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 11 09:18:14 crc kubenswrapper[4840]: I0311 09:18:14.596947 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dfe1c60c-ed7c-4a0a-a85e-82261146409d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "dfe1c60c-ed7c-4a0a-a85e-82261146409d" (UID: "dfe1c60c-ed7c-4a0a-a85e-82261146409d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 11 09:18:14 crc kubenswrapper[4840]: I0311 09:18:14.605825 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dfe1c60c-ed7c-4a0a-a85e-82261146409d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "dfe1c60c-ed7c-4a0a-a85e-82261146409d" (UID: "dfe1c60c-ed7c-4a0a-a85e-82261146409d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 11 09:18:14 crc kubenswrapper[4840]: I0311 09:18:14.655948 4840 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dfe1c60c-ed7c-4a0a-a85e-82261146409d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Mar 11 09:18:14 crc kubenswrapper[4840]: I0311 09:18:14.655990 4840 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dfe1c60c-ed7c-4a0a-a85e-82261146409d-dns-svc\") on node \"crc\" DevicePath \"\""
Mar 11 09:18:14 crc kubenswrapper[4840]: I0311 09:18:14.655999 4840 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dfe1c60c-ed7c-4a0a-a85e-82261146409d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Mar 11 09:18:14 crc kubenswrapper[4840]: I0311 09:18:14.971307 4840 generic.go:334] "Generic (PLEG): container finished" podID="56c00f10-03a9-4308-98b1-e648d962a60f" containerID="d04fdffc7603acb51ce67d4cd7e4ec8aa48d99e39dc1f5427b49e8cef88f407f" exitCode=0
Mar 11 09:18:14 crc kubenswrapper[4840]: I0311 09:18:14.971396 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"56c00f10-03a9-4308-98b1-e648d962a60f","Type":"ContainerDied","Data":"d04fdffc7603acb51ce67d4cd7e4ec8aa48d99e39dc1f5427b49e8cef88f407f"}
Mar 11 09:18:14 crc kubenswrapper[4840]: I0311 09:18:14.975415 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7859c7799c-2ksc5" event={"ID":"dfe1c60c-ed7c-4a0a-a85e-82261146409d","Type":"ContainerDied","Data":"427015fc66421f7d2372e9d9640e6873336ad2d1d5fcf966a0f6277b980f499a"}
Mar 11 09:18:14 crc kubenswrapper[4840]: I0311 09:18:14.975490 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7859c7799c-2ksc5"
Mar 11 09:18:14 crc kubenswrapper[4840]: I0311 09:18:14.975496 4840 scope.go:117] "RemoveContainer" containerID="79e1c66334965e63af538892b3410f5e2ff8b5aabb04319e2e2c1653a24b1496"
Mar 11 09:18:14 crc kubenswrapper[4840]: I0311 09:18:14.999756 4840 scope.go:117] "RemoveContainer" containerID="00a2e783db3d806c9ff2d6b8406e19b84ed9d4280233512b98ebce497eb3d170"
Mar 11 09:18:15 crc kubenswrapper[4840]: I0311 09:18:15.028530 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7859c7799c-2ksc5"]
Mar 11 09:18:15 crc kubenswrapper[4840]: I0311 09:18:15.038103 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7859c7799c-2ksc5"]
Mar 11 09:18:16 crc kubenswrapper[4840]: I0311 09:18:16.071573 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dfe1c60c-ed7c-4a0a-a85e-82261146409d" path="/var/lib/kubelet/pods/dfe1c60c-ed7c-4a0a-a85e-82261146409d/volumes"
Mar 11 09:18:16 crc kubenswrapper[4840]: I0311 09:18:16.274643 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0"
Mar 11 09:18:17 crc kubenswrapper[4840]: I0311 09:18:17.011762 4840 generic.go:334] "Generic (PLEG): container finished" podID="56c00f10-03a9-4308-98b1-e648d962a60f" containerID="aada553c7a57328e9ad3773280281a27544885d1e9c1300be8b3fe00dcf8d3e6" exitCode=0
Mar 11 09:18:17 crc kubenswrapper[4840]: I0311 09:18:17.011862 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"56c00f10-03a9-4308-98b1-e648d962a60f","Type":"ContainerDied","Data":"aada553c7a57328e9ad3773280281a27544885d1e9c1300be8b3fe00dcf8d3e6"}
Mar 11 09:18:17 crc kubenswrapper[4840]: I0311 09:18:17.025046 4840 generic.go:334] "Generic (PLEG): container finished" podID="fccf048e-d3b5-4e8d-a940-ec306fe071a0" containerID="191fc6b9a70ef6981941d87ed503a96b3604cf4bb58e3fb543512122c2c8cfe6" exitCode=0
Mar 11 09:18:17 crc kubenswrapper[4840]: I0311 09:18:17.025092 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-75cbbc8dcd-sc25j" event={"ID":"fccf048e-d3b5-4e8d-a940-ec306fe071a0","Type":"ContainerDied","Data":"191fc6b9a70ef6981941d87ed503a96b3604cf4bb58e3fb543512122c2c8cfe6"}
Mar 11 09:18:17 crc kubenswrapper[4840]: I0311 09:18:17.409228 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-75cbbc8dcd-sc25j"
Mar 11 09:18:17 crc kubenswrapper[4840]: I0311 09:18:17.514276 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fccf048e-d3b5-4e8d-a940-ec306fe071a0-public-tls-certs\") pod \"fccf048e-d3b5-4e8d-a940-ec306fe071a0\" (UID: \"fccf048e-d3b5-4e8d-a940-ec306fe071a0\") "
Mar 11 09:18:17 crc kubenswrapper[4840]: I0311 09:18:17.514389 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fccf048e-d3b5-4e8d-a940-ec306fe071a0-internal-tls-certs\") pod \"fccf048e-d3b5-4e8d-a940-ec306fe071a0\" (UID: \"fccf048e-d3b5-4e8d-a940-ec306fe071a0\") "
Mar 11 09:18:17 crc kubenswrapper[4840]: I0311 09:18:17.514451 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fccf048e-d3b5-4e8d-a940-ec306fe071a0-combined-ca-bundle\") pod \"fccf048e-d3b5-4e8d-a940-ec306fe071a0\" (UID: \"fccf048e-d3b5-4e8d-a940-ec306fe071a0\") "
Mar 11 09:18:17 crc kubenswrapper[4840]: I0311 09:18:17.514585 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fccf048e-d3b5-4e8d-a940-ec306fe071a0-scripts\") pod \"fccf048e-d3b5-4e8d-a940-ec306fe071a0\" (UID: \"fccf048e-d3b5-4e8d-a940-ec306fe071a0\") "
Mar 11 09:18:17 crc kubenswrapper[4840]: I0311 09:18:17.514652 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fccf048e-d3b5-4e8d-a940-ec306fe071a0-config-data\") pod \"fccf048e-d3b5-4e8d-a940-ec306fe071a0\" (UID: \"fccf048e-d3b5-4e8d-a940-ec306fe071a0\") "
Mar 11 09:18:17 crc kubenswrapper[4840]: I0311 09:18:17.514801 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tb5xd\" (UniqueName: \"kubernetes.io/projected/fccf048e-d3b5-4e8d-a940-ec306fe071a0-kube-api-access-tb5xd\") pod \"fccf048e-d3b5-4e8d-a940-ec306fe071a0\" (UID: \"fccf048e-d3b5-4e8d-a940-ec306fe071a0\") "
Mar 11 09:18:17 crc kubenswrapper[4840]: I0311 09:18:17.514925 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fccf048e-d3b5-4e8d-a940-ec306fe071a0-logs\") pod \"fccf048e-d3b5-4e8d-a940-ec306fe071a0\" (UID: \"fccf048e-d3b5-4e8d-a940-ec306fe071a0\") "
Mar 11 09:18:17 crc kubenswrapper[4840]: I0311 09:18:17.518046 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fccf048e-d3b5-4e8d-a940-ec306fe071a0-logs" (OuterVolumeSpecName: "logs") pod "fccf048e-d3b5-4e8d-a940-ec306fe071a0" (UID: "fccf048e-d3b5-4e8d-a940-ec306fe071a0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 11 09:18:17 crc kubenswrapper[4840]: I0311 09:18:17.522740 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fccf048e-d3b5-4e8d-a940-ec306fe071a0-kube-api-access-tb5xd" (OuterVolumeSpecName: "kube-api-access-tb5xd") pod "fccf048e-d3b5-4e8d-a940-ec306fe071a0" (UID: "fccf048e-d3b5-4e8d-a940-ec306fe071a0"). InnerVolumeSpecName "kube-api-access-tb5xd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 11 09:18:17 crc kubenswrapper[4840]: I0311 09:18:17.522866 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fccf048e-d3b5-4e8d-a940-ec306fe071a0-scripts" (OuterVolumeSpecName: "scripts") pod "fccf048e-d3b5-4e8d-a940-ec306fe071a0" (UID: "fccf048e-d3b5-4e8d-a940-ec306fe071a0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 11 09:18:17 crc kubenswrapper[4840]: I0311 09:18:17.572771 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fccf048e-d3b5-4e8d-a940-ec306fe071a0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fccf048e-d3b5-4e8d-a940-ec306fe071a0" (UID: "fccf048e-d3b5-4e8d-a940-ec306fe071a0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 11 09:18:17 crc kubenswrapper[4840]: I0311 09:18:17.611207 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fccf048e-d3b5-4e8d-a940-ec306fe071a0-config-data" (OuterVolumeSpecName: "config-data") pod "fccf048e-d3b5-4e8d-a940-ec306fe071a0" (UID: "fccf048e-d3b5-4e8d-a940-ec306fe071a0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 11 09:18:17 crc kubenswrapper[4840]: I0311 09:18:17.617180 4840 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fccf048e-d3b5-4e8d-a940-ec306fe071a0-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 11 09:18:17 crc kubenswrapper[4840]: I0311 09:18:17.617216 4840 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fccf048e-d3b5-4e8d-a940-ec306fe071a0-scripts\") on node \"crc\" DevicePath \"\""
Mar 11 09:18:17 crc kubenswrapper[4840]: I0311 09:18:17.617230 4840 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fccf048e-d3b5-4e8d-a940-ec306fe071a0-config-data\") on node \"crc\" DevicePath \"\""
Mar 11 09:18:17 crc kubenswrapper[4840]: I0311 09:18:17.617240 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tb5xd\" (UniqueName: \"kubernetes.io/projected/fccf048e-d3b5-4e8d-a940-ec306fe071a0-kube-api-access-tb5xd\") on node \"crc\" DevicePath \"\""
Mar 11 09:18:17 crc kubenswrapper[4840]: I0311 09:18:17.617254 4840 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fccf048e-d3b5-4e8d-a940-ec306fe071a0-logs\") on node \"crc\" DevicePath \"\""
Mar 11 09:18:17 crc kubenswrapper[4840]: I0311 09:18:17.653797 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fccf048e-d3b5-4e8d-a940-ec306fe071a0-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "fccf048e-d3b5-4e8d-a940-ec306fe071a0" (UID: "fccf048e-d3b5-4e8d-a940-ec306fe071a0"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 11 09:18:17 crc kubenswrapper[4840]: I0311 09:18:17.672104 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fccf048e-d3b5-4e8d-a940-ec306fe071a0-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "fccf048e-d3b5-4e8d-a940-ec306fe071a0" (UID: "fccf048e-d3b5-4e8d-a940-ec306fe071a0"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 11 09:18:17 crc kubenswrapper[4840]: I0311 09:18:17.718707 4840 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fccf048e-d3b5-4e8d-a940-ec306fe071a0-public-tls-certs\") on node \"crc\" DevicePath \"\""
Mar 11 09:18:17 crc kubenswrapper[4840]: I0311 09:18:17.718767 4840 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fccf048e-d3b5-4e8d-a940-ec306fe071a0-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Mar 11 09:18:17 crc kubenswrapper[4840]: I0311 09:18:17.759528 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Mar 11 09:18:17 crc kubenswrapper[4840]: I0311 09:18:17.819691 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56c00f10-03a9-4308-98b1-e648d962a60f-combined-ca-bundle\") pod \"56c00f10-03a9-4308-98b1-e648d962a60f\" (UID: \"56c00f10-03a9-4308-98b1-e648d962a60f\") "
Mar 11 09:18:17 crc kubenswrapper[4840]: I0311 09:18:17.819850 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/56c00f10-03a9-4308-98b1-e648d962a60f-config-data-custom\") pod \"56c00f10-03a9-4308-98b1-e648d962a60f\" (UID: \"56c00f10-03a9-4308-98b1-e648d962a60f\") "
Mar 11 09:18:17 crc kubenswrapper[4840]: I0311 09:18:17.819879 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/56c00f10-03a9-4308-98b1-e648d962a60f-etc-machine-id\") pod \"56c00f10-03a9-4308-98b1-e648d962a60f\" (UID: \"56c00f10-03a9-4308-98b1-e648d962a60f\") "
Mar 11 09:18:17 crc kubenswrapper[4840]: I0311 09:18:17.819899 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6kpgw\" (UniqueName: \"kubernetes.io/projected/56c00f10-03a9-4308-98b1-e648d962a60f-kube-api-access-6kpgw\") pod \"56c00f10-03a9-4308-98b1-e648d962a60f\" (UID: \"56c00f10-03a9-4308-98b1-e648d962a60f\") "
Mar 11 09:18:17 crc kubenswrapper[4840]: I0311 09:18:17.819917 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/56c00f10-03a9-4308-98b1-e648d962a60f-scripts\") pod \"56c00f10-03a9-4308-98b1-e648d962a60f\" (UID: \"56c00f10-03a9-4308-98b1-e648d962a60f\") "
Mar 11 09:18:17 crc kubenswrapper[4840]: I0311 09:18:17.819981 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56c00f10-03a9-4308-98b1-e648d962a60f-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "56c00f10-03a9-4308-98b1-e648d962a60f" (UID: "56c00f10-03a9-4308-98b1-e648d962a60f"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 11 09:18:17 crc kubenswrapper[4840]: I0311 09:18:17.820000 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56c00f10-03a9-4308-98b1-e648d962a60f-config-data\") pod \"56c00f10-03a9-4308-98b1-e648d962a60f\" (UID: \"56c00f10-03a9-4308-98b1-e648d962a60f\") "
Mar 11 09:18:17 crc kubenswrapper[4840]: I0311 09:18:17.820357 4840 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/56c00f10-03a9-4308-98b1-e648d962a60f-etc-machine-id\") on node \"crc\" DevicePath \"\""
Mar 11 09:18:17 crc kubenswrapper[4840]: I0311 09:18:17.825655 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56c00f10-03a9-4308-98b1-e648d962a60f-scripts" (OuterVolumeSpecName: "scripts") pod "56c00f10-03a9-4308-98b1-e648d962a60f" (UID: "56c00f10-03a9-4308-98b1-e648d962a60f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 11 09:18:17 crc kubenswrapper[4840]: I0311 09:18:17.826015 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56c00f10-03a9-4308-98b1-e648d962a60f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "56c00f10-03a9-4308-98b1-e648d962a60f" (UID: "56c00f10-03a9-4308-98b1-e648d962a60f"). InnerVolumeSpecName "config-data-custom".
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:18:17 crc kubenswrapper[4840]: I0311 09:18:17.829290 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56c00f10-03a9-4308-98b1-e648d962a60f-kube-api-access-6kpgw" (OuterVolumeSpecName: "kube-api-access-6kpgw") pod "56c00f10-03a9-4308-98b1-e648d962a60f" (UID: "56c00f10-03a9-4308-98b1-e648d962a60f"). InnerVolumeSpecName "kube-api-access-6kpgw". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:18:17 crc kubenswrapper[4840]: I0311 09:18:17.874343 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56c00f10-03a9-4308-98b1-e648d962a60f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "56c00f10-03a9-4308-98b1-e648d962a60f" (UID: "56c00f10-03a9-4308-98b1-e648d962a60f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:18:17 crc kubenswrapper[4840]: I0311 09:18:17.910185 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56c00f10-03a9-4308-98b1-e648d962a60f-config-data" (OuterVolumeSpecName: "config-data") pod "56c00f10-03a9-4308-98b1-e648d962a60f" (UID: "56c00f10-03a9-4308-98b1-e648d962a60f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:18:17 crc kubenswrapper[4840]: I0311 09:18:17.922166 4840 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56c00f10-03a9-4308-98b1-e648d962a60f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 09:18:17 crc kubenswrapper[4840]: I0311 09:18:17.922199 4840 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/56c00f10-03a9-4308-98b1-e648d962a60f-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 11 09:18:17 crc kubenswrapper[4840]: I0311 09:18:17.922209 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6kpgw\" (UniqueName: \"kubernetes.io/projected/56c00f10-03a9-4308-98b1-e648d962a60f-kube-api-access-6kpgw\") on node \"crc\" DevicePath \"\"" Mar 11 09:18:17 crc kubenswrapper[4840]: I0311 09:18:17.922221 4840 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/56c00f10-03a9-4308-98b1-e648d962a60f-scripts\") on node \"crc\" DevicePath \"\"" Mar 11 09:18:17 crc kubenswrapper[4840]: I0311 09:18:17.922230 4840 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56c00f10-03a9-4308-98b1-e648d962a60f-config-data\") on node \"crc\" DevicePath \"\"" Mar 11 09:18:18 crc kubenswrapper[4840]: I0311 09:18:18.038279 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 11 09:18:18 crc kubenswrapper[4840]: I0311 09:18:18.038258 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"56c00f10-03a9-4308-98b1-e648d962a60f","Type":"ContainerDied","Data":"a3c7ac698cdebabc8739ec2616b0964dd795e08f0e88aded4db552a62c19269a"} Mar 11 09:18:18 crc kubenswrapper[4840]: I0311 09:18:18.038442 4840 scope.go:117] "RemoveContainer" containerID="d04fdffc7603acb51ce67d4cd7e4ec8aa48d99e39dc1f5427b49e8cef88f407f" Mar 11 09:18:18 crc kubenswrapper[4840]: I0311 09:18:18.042422 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-75cbbc8dcd-sc25j" event={"ID":"fccf048e-d3b5-4e8d-a940-ec306fe071a0","Type":"ContainerDied","Data":"279ec998b5c3d891654bcaaf1b5077282d98422d63225443a41a25a7c71ea3ce"} Mar 11 09:18:18 crc kubenswrapper[4840]: I0311 09:18:18.042525 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-75cbbc8dcd-sc25j" Mar 11 09:18:18 crc kubenswrapper[4840]: I0311 09:18:18.063988 4840 scope.go:117] "RemoveContainer" containerID="aada553c7a57328e9ad3773280281a27544885d1e9c1300be8b3fe00dcf8d3e6" Mar 11 09:18:18 crc kubenswrapper[4840]: I0311 09:18:18.086694 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-75cbbc8dcd-sc25j"] Mar 11 09:18:18 crc kubenswrapper[4840]: I0311 09:18:18.091307 4840 scope.go:117] "RemoveContainer" containerID="191fc6b9a70ef6981941d87ed503a96b3604cf4bb58e3fb543512122c2c8cfe6" Mar 11 09:18:18 crc kubenswrapper[4840]: I0311 09:18:18.095714 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-75cbbc8dcd-sc25j"] Mar 11 09:18:18 crc kubenswrapper[4840]: I0311 09:18:18.118745 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 11 09:18:18 crc kubenswrapper[4840]: I0311 09:18:18.126613 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/cinder-scheduler-0"] Mar 11 09:18:18 crc kubenswrapper[4840]: I0311 09:18:18.129242 4840 scope.go:117] "RemoveContainer" containerID="7fcf04b257aa6ca622ecd00debacd364f42e749d25c27411e1e49c6c6d470dfe" Mar 11 09:18:18 crc kubenswrapper[4840]: I0311 09:18:18.143239 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Mar 11 09:18:18 crc kubenswrapper[4840]: E0311 09:18:18.145295 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56c00f10-03a9-4308-98b1-e648d962a60f" containerName="probe" Mar 11 09:18:18 crc kubenswrapper[4840]: I0311 09:18:18.145320 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="56c00f10-03a9-4308-98b1-e648d962a60f" containerName="probe" Mar 11 09:18:18 crc kubenswrapper[4840]: E0311 09:18:18.145335 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56c00f10-03a9-4308-98b1-e648d962a60f" containerName="cinder-scheduler" Mar 11 09:18:18 crc kubenswrapper[4840]: I0311 09:18:18.145342 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="56c00f10-03a9-4308-98b1-e648d962a60f" containerName="cinder-scheduler" Mar 11 09:18:18 crc kubenswrapper[4840]: E0311 09:18:18.145352 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c84bd84-b44d-4d4a-a6ee-0b85e349bc69" containerName="neutron-api" Mar 11 09:18:18 crc kubenswrapper[4840]: I0311 09:18:18.145360 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c84bd84-b44d-4d4a-a6ee-0b85e349bc69" containerName="neutron-api" Mar 11 09:18:18 crc kubenswrapper[4840]: E0311 09:18:18.145373 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b47f430-5dea-4714-bf85-8d6a7d6d8388" containerName="barbican-api" Mar 11 09:18:18 crc kubenswrapper[4840]: I0311 09:18:18.145379 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b47f430-5dea-4714-bf85-8d6a7d6d8388" containerName="barbican-api" Mar 11 09:18:18 crc kubenswrapper[4840]: E0311 09:18:18.145395 4840 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c84bd84-b44d-4d4a-a6ee-0b85e349bc69" containerName="neutron-httpd" Mar 11 09:18:18 crc kubenswrapper[4840]: I0311 09:18:18.145402 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c84bd84-b44d-4d4a-a6ee-0b85e349bc69" containerName="neutron-httpd" Mar 11 09:18:18 crc kubenswrapper[4840]: E0311 09:18:18.145417 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fccf048e-d3b5-4e8d-a940-ec306fe071a0" containerName="placement-log" Mar 11 09:18:18 crc kubenswrapper[4840]: I0311 09:18:18.145425 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="fccf048e-d3b5-4e8d-a940-ec306fe071a0" containerName="placement-log" Mar 11 09:18:18 crc kubenswrapper[4840]: E0311 09:18:18.145437 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b47f430-5dea-4714-bf85-8d6a7d6d8388" containerName="barbican-api-log" Mar 11 09:18:18 crc kubenswrapper[4840]: I0311 09:18:18.145444 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b47f430-5dea-4714-bf85-8d6a7d6d8388" containerName="barbican-api-log" Mar 11 09:18:18 crc kubenswrapper[4840]: E0311 09:18:18.145480 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fccf048e-d3b5-4e8d-a940-ec306fe071a0" containerName="placement-api" Mar 11 09:18:18 crc kubenswrapper[4840]: I0311 09:18:18.145486 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="fccf048e-d3b5-4e8d-a940-ec306fe071a0" containerName="placement-api" Mar 11 09:18:18 crc kubenswrapper[4840]: E0311 09:18:18.145499 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfe1c60c-ed7c-4a0a-a85e-82261146409d" containerName="init" Mar 11 09:18:18 crc kubenswrapper[4840]: I0311 09:18:18.145504 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfe1c60c-ed7c-4a0a-a85e-82261146409d" containerName="init" Mar 11 09:18:18 crc kubenswrapper[4840]: E0311 09:18:18.145514 4840 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="dfe1c60c-ed7c-4a0a-a85e-82261146409d" containerName="dnsmasq-dns" Mar 11 09:18:18 crc kubenswrapper[4840]: I0311 09:18:18.145519 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfe1c60c-ed7c-4a0a-a85e-82261146409d" containerName="dnsmasq-dns" Mar 11 09:18:18 crc kubenswrapper[4840]: I0311 09:18:18.145689 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="56c00f10-03a9-4308-98b1-e648d962a60f" containerName="cinder-scheduler" Mar 11 09:18:18 crc kubenswrapper[4840]: I0311 09:18:18.145709 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c84bd84-b44d-4d4a-a6ee-0b85e349bc69" containerName="neutron-api" Mar 11 09:18:18 crc kubenswrapper[4840]: I0311 09:18:18.145719 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="56c00f10-03a9-4308-98b1-e648d962a60f" containerName="probe" Mar 11 09:18:18 crc kubenswrapper[4840]: I0311 09:18:18.145734 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b47f430-5dea-4714-bf85-8d6a7d6d8388" containerName="barbican-api-log" Mar 11 09:18:18 crc kubenswrapper[4840]: I0311 09:18:18.145743 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="fccf048e-d3b5-4e8d-a940-ec306fe071a0" containerName="placement-api" Mar 11 09:18:18 crc kubenswrapper[4840]: I0311 09:18:18.145752 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c84bd84-b44d-4d4a-a6ee-0b85e349bc69" containerName="neutron-httpd" Mar 11 09:18:18 crc kubenswrapper[4840]: I0311 09:18:18.145762 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="fccf048e-d3b5-4e8d-a940-ec306fe071a0" containerName="placement-log" Mar 11 09:18:18 crc kubenswrapper[4840]: I0311 09:18:18.145769 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b47f430-5dea-4714-bf85-8d6a7d6d8388" containerName="barbican-api" Mar 11 09:18:18 crc kubenswrapper[4840]: I0311 09:18:18.145780 4840 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="dfe1c60c-ed7c-4a0a-a85e-82261146409d" containerName="dnsmasq-dns" Mar 11 09:18:18 crc kubenswrapper[4840]: I0311 09:18:18.147442 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 11 09:18:18 crc kubenswrapper[4840]: I0311 09:18:18.155768 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Mar 11 09:18:18 crc kubenswrapper[4840]: I0311 09:18:18.157578 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 11 09:18:18 crc kubenswrapper[4840]: I0311 09:18:18.227496 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd8b15e6-0bb6-4d79-99aa-765ded51af1d-config-data\") pod \"cinder-scheduler-0\" (UID: \"fd8b15e6-0bb6-4d79-99aa-765ded51af1d\") " pod="openstack/cinder-scheduler-0" Mar 11 09:18:18 crc kubenswrapper[4840]: I0311 09:18:18.227652 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fd8b15e6-0bb6-4d79-99aa-765ded51af1d-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"fd8b15e6-0bb6-4d79-99aa-765ded51af1d\") " pod="openstack/cinder-scheduler-0" Mar 11 09:18:18 crc kubenswrapper[4840]: I0311 09:18:18.227733 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fd8b15e6-0bb6-4d79-99aa-765ded51af1d-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"fd8b15e6-0bb6-4d79-99aa-765ded51af1d\") " pod="openstack/cinder-scheduler-0" Mar 11 09:18:18 crc kubenswrapper[4840]: I0311 09:18:18.227756 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7f26f\" (UniqueName: 
\"kubernetes.io/projected/fd8b15e6-0bb6-4d79-99aa-765ded51af1d-kube-api-access-7f26f\") pod \"cinder-scheduler-0\" (UID: \"fd8b15e6-0bb6-4d79-99aa-765ded51af1d\") " pod="openstack/cinder-scheduler-0" Mar 11 09:18:18 crc kubenswrapper[4840]: I0311 09:18:18.227779 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd8b15e6-0bb6-4d79-99aa-765ded51af1d-scripts\") pod \"cinder-scheduler-0\" (UID: \"fd8b15e6-0bb6-4d79-99aa-765ded51af1d\") " pod="openstack/cinder-scheduler-0" Mar 11 09:18:18 crc kubenswrapper[4840]: I0311 09:18:18.227828 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd8b15e6-0bb6-4d79-99aa-765ded51af1d-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"fd8b15e6-0bb6-4d79-99aa-765ded51af1d\") " pod="openstack/cinder-scheduler-0" Mar 11 09:18:18 crc kubenswrapper[4840]: I0311 09:18:18.249692 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Mar 11 09:18:18 crc kubenswrapper[4840]: I0311 09:18:18.251320 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Mar 11 09:18:18 crc kubenswrapper[4840]: I0311 09:18:18.256401 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Mar 11 09:18:18 crc kubenswrapper[4840]: I0311 09:18:18.257360 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Mar 11 09:18:18 crc kubenswrapper[4840]: I0311 09:18:18.257672 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-6zkhw" Mar 11 09:18:18 crc kubenswrapper[4840]: I0311 09:18:18.257891 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Mar 11 09:18:18 crc kubenswrapper[4840]: I0311 09:18:18.329890 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7f26f\" (UniqueName: \"kubernetes.io/projected/fd8b15e6-0bb6-4d79-99aa-765ded51af1d-kube-api-access-7f26f\") pod \"cinder-scheduler-0\" (UID: \"fd8b15e6-0bb6-4d79-99aa-765ded51af1d\") " pod="openstack/cinder-scheduler-0" Mar 11 09:18:18 crc kubenswrapper[4840]: I0311 09:18:18.329962 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/794eb074-baf3-46dc-8f9d-8a92fc9240fd-combined-ca-bundle\") pod \"openstackclient\" (UID: \"794eb074-baf3-46dc-8f9d-8a92fc9240fd\") " pod="openstack/openstackclient" Mar 11 09:18:18 crc kubenswrapper[4840]: I0311 09:18:18.329991 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd8b15e6-0bb6-4d79-99aa-765ded51af1d-scripts\") pod \"cinder-scheduler-0\" (UID: \"fd8b15e6-0bb6-4d79-99aa-765ded51af1d\") " pod="openstack/cinder-scheduler-0" Mar 11 09:18:18 crc kubenswrapper[4840]: I0311 09:18:18.330181 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-7v6h4\" (UniqueName: \"kubernetes.io/projected/794eb074-baf3-46dc-8f9d-8a92fc9240fd-kube-api-access-7v6h4\") pod \"openstackclient\" (UID: \"794eb074-baf3-46dc-8f9d-8a92fc9240fd\") " pod="openstack/openstackclient" Mar 11 09:18:18 crc kubenswrapper[4840]: I0311 09:18:18.330245 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd8b15e6-0bb6-4d79-99aa-765ded51af1d-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"fd8b15e6-0bb6-4d79-99aa-765ded51af1d\") " pod="openstack/cinder-scheduler-0" Mar 11 09:18:18 crc kubenswrapper[4840]: I0311 09:18:18.330392 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd8b15e6-0bb6-4d79-99aa-765ded51af1d-config-data\") pod \"cinder-scheduler-0\" (UID: \"fd8b15e6-0bb6-4d79-99aa-765ded51af1d\") " pod="openstack/cinder-scheduler-0" Mar 11 09:18:18 crc kubenswrapper[4840]: I0311 09:18:18.330629 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fd8b15e6-0bb6-4d79-99aa-765ded51af1d-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"fd8b15e6-0bb6-4d79-99aa-765ded51af1d\") " pod="openstack/cinder-scheduler-0" Mar 11 09:18:18 crc kubenswrapper[4840]: I0311 09:18:18.330668 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/794eb074-baf3-46dc-8f9d-8a92fc9240fd-openstack-config\") pod \"openstackclient\" (UID: \"794eb074-baf3-46dc-8f9d-8a92fc9240fd\") " pod="openstack/openstackclient" Mar 11 09:18:18 crc kubenswrapper[4840]: I0311 09:18:18.330721 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: 
\"kubernetes.io/secret/794eb074-baf3-46dc-8f9d-8a92fc9240fd-openstack-config-secret\") pod \"openstackclient\" (UID: \"794eb074-baf3-46dc-8f9d-8a92fc9240fd\") " pod="openstack/openstackclient" Mar 11 09:18:18 crc kubenswrapper[4840]: I0311 09:18:18.330852 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fd8b15e6-0bb6-4d79-99aa-765ded51af1d-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"fd8b15e6-0bb6-4d79-99aa-765ded51af1d\") " pod="openstack/cinder-scheduler-0" Mar 11 09:18:18 crc kubenswrapper[4840]: I0311 09:18:18.330969 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fd8b15e6-0bb6-4d79-99aa-765ded51af1d-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"fd8b15e6-0bb6-4d79-99aa-765ded51af1d\") " pod="openstack/cinder-scheduler-0" Mar 11 09:18:18 crc kubenswrapper[4840]: I0311 09:18:18.334908 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd8b15e6-0bb6-4d79-99aa-765ded51af1d-scripts\") pod \"cinder-scheduler-0\" (UID: \"fd8b15e6-0bb6-4d79-99aa-765ded51af1d\") " pod="openstack/cinder-scheduler-0" Mar 11 09:18:18 crc kubenswrapper[4840]: I0311 09:18:18.335138 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd8b15e6-0bb6-4d79-99aa-765ded51af1d-config-data\") pod \"cinder-scheduler-0\" (UID: \"fd8b15e6-0bb6-4d79-99aa-765ded51af1d\") " pod="openstack/cinder-scheduler-0" Mar 11 09:18:18 crc kubenswrapper[4840]: I0311 09:18:18.335542 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd8b15e6-0bb6-4d79-99aa-765ded51af1d-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"fd8b15e6-0bb6-4d79-99aa-765ded51af1d\") " pod="openstack/cinder-scheduler-0" Mar 11 
09:18:18 crc kubenswrapper[4840]: I0311 09:18:18.336230 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fd8b15e6-0bb6-4d79-99aa-765ded51af1d-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"fd8b15e6-0bb6-4d79-99aa-765ded51af1d\") " pod="openstack/cinder-scheduler-0" Mar 11 09:18:18 crc kubenswrapper[4840]: I0311 09:18:18.351065 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7f26f\" (UniqueName: \"kubernetes.io/projected/fd8b15e6-0bb6-4d79-99aa-765ded51af1d-kube-api-access-7f26f\") pod \"cinder-scheduler-0\" (UID: \"fd8b15e6-0bb6-4d79-99aa-765ded51af1d\") " pod="openstack/cinder-scheduler-0" Mar 11 09:18:18 crc kubenswrapper[4840]: I0311 09:18:18.432491 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/794eb074-baf3-46dc-8f9d-8a92fc9240fd-openstack-config\") pod \"openstackclient\" (UID: \"794eb074-baf3-46dc-8f9d-8a92fc9240fd\") " pod="openstack/openstackclient" Mar 11 09:18:18 crc kubenswrapper[4840]: I0311 09:18:18.432558 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/794eb074-baf3-46dc-8f9d-8a92fc9240fd-openstack-config-secret\") pod \"openstackclient\" (UID: \"794eb074-baf3-46dc-8f9d-8a92fc9240fd\") " pod="openstack/openstackclient" Mar 11 09:18:18 crc kubenswrapper[4840]: I0311 09:18:18.432628 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/794eb074-baf3-46dc-8f9d-8a92fc9240fd-combined-ca-bundle\") pod \"openstackclient\" (UID: \"794eb074-baf3-46dc-8f9d-8a92fc9240fd\") " pod="openstack/openstackclient" Mar 11 09:18:18 crc kubenswrapper[4840]: I0311 09:18:18.432670 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-7v6h4\" (UniqueName: \"kubernetes.io/projected/794eb074-baf3-46dc-8f9d-8a92fc9240fd-kube-api-access-7v6h4\") pod \"openstackclient\" (UID: \"794eb074-baf3-46dc-8f9d-8a92fc9240fd\") " pod="openstack/openstackclient" Mar 11 09:18:18 crc kubenswrapper[4840]: I0311 09:18:18.434211 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/794eb074-baf3-46dc-8f9d-8a92fc9240fd-openstack-config\") pod \"openstackclient\" (UID: \"794eb074-baf3-46dc-8f9d-8a92fc9240fd\") " pod="openstack/openstackclient" Mar 11 09:18:18 crc kubenswrapper[4840]: I0311 09:18:18.437218 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/794eb074-baf3-46dc-8f9d-8a92fc9240fd-openstack-config-secret\") pod \"openstackclient\" (UID: \"794eb074-baf3-46dc-8f9d-8a92fc9240fd\") " pod="openstack/openstackclient" Mar 11 09:18:18 crc kubenswrapper[4840]: I0311 09:18:18.439981 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/794eb074-baf3-46dc-8f9d-8a92fc9240fd-combined-ca-bundle\") pod \"openstackclient\" (UID: \"794eb074-baf3-46dc-8f9d-8a92fc9240fd\") " pod="openstack/openstackclient" Mar 11 09:18:18 crc kubenswrapper[4840]: I0311 09:18:18.449793 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7v6h4\" (UniqueName: \"kubernetes.io/projected/794eb074-baf3-46dc-8f9d-8a92fc9240fd-kube-api-access-7v6h4\") pod \"openstackclient\" (UID: \"794eb074-baf3-46dc-8f9d-8a92fc9240fd\") " pod="openstack/openstackclient" Mar 11 09:18:18 crc kubenswrapper[4840]: I0311 09:18:18.506309 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 11 09:18:18 crc kubenswrapper[4840]: I0311 09:18:18.568935 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Mar 11 09:18:19 crc kubenswrapper[4840]: I0311 09:18:19.076905 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Mar 11 09:18:19 crc kubenswrapper[4840]: I0311 09:18:19.087684 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 11 09:18:19 crc kubenswrapper[4840]: W0311 09:18:19.101708 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod794eb074_baf3_46dc_8f9d_8a92fc9240fd.slice/crio-1b666050adae6c5d4d0aeee0e9b6bdbe6ec78fe2699783172e6c914b49458749 WatchSource:0}: Error finding container 1b666050adae6c5d4d0aeee0e9b6bdbe6ec78fe2699783172e6c914b49458749: Status 404 returned error can't find the container with id 1b666050adae6c5d4d0aeee0e9b6bdbe6ec78fe2699783172e6c914b49458749 Mar 11 09:18:19 crc kubenswrapper[4840]: W0311 09:18:19.102537 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfd8b15e6_0bb6_4d79_99aa_765ded51af1d.slice/crio-9e1ca780ae5076ba8ed6b9609e1dbd287e07c1fce69d80f6d061f0edda2c5b2d WatchSource:0}: Error finding container 9e1ca780ae5076ba8ed6b9609e1dbd287e07c1fce69d80f6d061f0edda2c5b2d: Status 404 returned error can't find the container with id 9e1ca780ae5076ba8ed6b9609e1dbd287e07c1fce69d80f6d061f0edda2c5b2d Mar 11 09:18:20 crc kubenswrapper[4840]: I0311 09:18:20.072172 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56c00f10-03a9-4308-98b1-e648d962a60f" path="/var/lib/kubelet/pods/56c00f10-03a9-4308-98b1-e648d962a60f/volumes" Mar 11 09:18:20 crc kubenswrapper[4840]: I0311 09:18:20.073376 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fccf048e-d3b5-4e8d-a940-ec306fe071a0" path="/var/lib/kubelet/pods/fccf048e-d3b5-4e8d-a940-ec306fe071a0/volumes" Mar 11 09:18:20 crc kubenswrapper[4840]: I0311 
09:18:20.082771 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"794eb074-baf3-46dc-8f9d-8a92fc9240fd","Type":"ContainerStarted","Data":"1b666050adae6c5d4d0aeee0e9b6bdbe6ec78fe2699783172e6c914b49458749"} Mar 11 09:18:20 crc kubenswrapper[4840]: I0311 09:18:20.100928 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"fd8b15e6-0bb6-4d79-99aa-765ded51af1d","Type":"ContainerStarted","Data":"7bb1063b1c325078ccba724db26480966f82fe2b0184c38551e103081dc02f90"} Mar 11 09:18:20 crc kubenswrapper[4840]: I0311 09:18:20.100983 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"fd8b15e6-0bb6-4d79-99aa-765ded51af1d","Type":"ContainerStarted","Data":"9e1ca780ae5076ba8ed6b9609e1dbd287e07c1fce69d80f6d061f0edda2c5b2d"} Mar 11 09:18:20 crc kubenswrapper[4840]: I0311 09:18:20.746089 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-97696577-2mh8q"] Mar 11 09:18:20 crc kubenswrapper[4840]: I0311 09:18:20.748362 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-97696577-2mh8q" Mar 11 09:18:20 crc kubenswrapper[4840]: I0311 09:18:20.757894 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Mar 11 09:18:20 crc kubenswrapper[4840]: I0311 09:18:20.758034 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Mar 11 09:18:20 crc kubenswrapper[4840]: I0311 09:18:20.758091 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Mar 11 09:18:20 crc kubenswrapper[4840]: I0311 09:18:20.759327 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-97696577-2mh8q"] Mar 11 09:18:20 crc kubenswrapper[4840]: I0311 09:18:20.903033 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f10cd3a6-0a55-4957-b861-678d9af3c338-log-httpd\") pod \"swift-proxy-97696577-2mh8q\" (UID: \"f10cd3a6-0a55-4957-b861-678d9af3c338\") " pod="openstack/swift-proxy-97696577-2mh8q" Mar 11 09:18:20 crc kubenswrapper[4840]: I0311 09:18:20.903078 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f10cd3a6-0a55-4957-b861-678d9af3c338-public-tls-certs\") pod \"swift-proxy-97696577-2mh8q\" (UID: \"f10cd3a6-0a55-4957-b861-678d9af3c338\") " pod="openstack/swift-proxy-97696577-2mh8q" Mar 11 09:18:20 crc kubenswrapper[4840]: I0311 09:18:20.903127 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f10cd3a6-0a55-4957-b861-678d9af3c338-etc-swift\") pod \"swift-proxy-97696577-2mh8q\" (UID: \"f10cd3a6-0a55-4957-b861-678d9af3c338\") " pod="openstack/swift-proxy-97696577-2mh8q" Mar 11 09:18:20 crc kubenswrapper[4840]: I0311 09:18:20.903154 4840 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f10cd3a6-0a55-4957-b861-678d9af3c338-combined-ca-bundle\") pod \"swift-proxy-97696577-2mh8q\" (UID: \"f10cd3a6-0a55-4957-b861-678d9af3c338\") " pod="openstack/swift-proxy-97696577-2mh8q" Mar 11 09:18:20 crc kubenswrapper[4840]: I0311 09:18:20.903255 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wf8r\" (UniqueName: \"kubernetes.io/projected/f10cd3a6-0a55-4957-b861-678d9af3c338-kube-api-access-9wf8r\") pod \"swift-proxy-97696577-2mh8q\" (UID: \"f10cd3a6-0a55-4957-b861-678d9af3c338\") " pod="openstack/swift-proxy-97696577-2mh8q" Mar 11 09:18:20 crc kubenswrapper[4840]: I0311 09:18:20.903278 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f10cd3a6-0a55-4957-b861-678d9af3c338-internal-tls-certs\") pod \"swift-proxy-97696577-2mh8q\" (UID: \"f10cd3a6-0a55-4957-b861-678d9af3c338\") " pod="openstack/swift-proxy-97696577-2mh8q" Mar 11 09:18:20 crc kubenswrapper[4840]: I0311 09:18:20.903321 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f10cd3a6-0a55-4957-b861-678d9af3c338-run-httpd\") pod \"swift-proxy-97696577-2mh8q\" (UID: \"f10cd3a6-0a55-4957-b861-678d9af3c338\") " pod="openstack/swift-proxy-97696577-2mh8q" Mar 11 09:18:20 crc kubenswrapper[4840]: I0311 09:18:20.903606 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f10cd3a6-0a55-4957-b861-678d9af3c338-config-data\") pod \"swift-proxy-97696577-2mh8q\" (UID: \"f10cd3a6-0a55-4957-b861-678d9af3c338\") " pod="openstack/swift-proxy-97696577-2mh8q" Mar 11 09:18:21 crc kubenswrapper[4840]: 
I0311 09:18:21.006070 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wf8r\" (UniqueName: \"kubernetes.io/projected/f10cd3a6-0a55-4957-b861-678d9af3c338-kube-api-access-9wf8r\") pod \"swift-proxy-97696577-2mh8q\" (UID: \"f10cd3a6-0a55-4957-b861-678d9af3c338\") " pod="openstack/swift-proxy-97696577-2mh8q" Mar 11 09:18:21 crc kubenswrapper[4840]: I0311 09:18:21.006598 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f10cd3a6-0a55-4957-b861-678d9af3c338-internal-tls-certs\") pod \"swift-proxy-97696577-2mh8q\" (UID: \"f10cd3a6-0a55-4957-b861-678d9af3c338\") " pod="openstack/swift-proxy-97696577-2mh8q" Mar 11 09:18:21 crc kubenswrapper[4840]: I0311 09:18:21.007882 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f10cd3a6-0a55-4957-b861-678d9af3c338-run-httpd\") pod \"swift-proxy-97696577-2mh8q\" (UID: \"f10cd3a6-0a55-4957-b861-678d9af3c338\") " pod="openstack/swift-proxy-97696577-2mh8q" Mar 11 09:18:21 crc kubenswrapper[4840]: I0311 09:18:21.007949 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f10cd3a6-0a55-4957-b861-678d9af3c338-config-data\") pod \"swift-proxy-97696577-2mh8q\" (UID: \"f10cd3a6-0a55-4957-b861-678d9af3c338\") " pod="openstack/swift-proxy-97696577-2mh8q" Mar 11 09:18:21 crc kubenswrapper[4840]: I0311 09:18:21.007986 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f10cd3a6-0a55-4957-b861-678d9af3c338-log-httpd\") pod \"swift-proxy-97696577-2mh8q\" (UID: \"f10cd3a6-0a55-4957-b861-678d9af3c338\") " pod="openstack/swift-proxy-97696577-2mh8q" Mar 11 09:18:21 crc kubenswrapper[4840]: I0311 09:18:21.008006 4840 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f10cd3a6-0a55-4957-b861-678d9af3c338-public-tls-certs\") pod \"swift-proxy-97696577-2mh8q\" (UID: \"f10cd3a6-0a55-4957-b861-678d9af3c338\") " pod="openstack/swift-proxy-97696577-2mh8q" Mar 11 09:18:21 crc kubenswrapper[4840]: I0311 09:18:21.008053 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f10cd3a6-0a55-4957-b861-678d9af3c338-etc-swift\") pod \"swift-proxy-97696577-2mh8q\" (UID: \"f10cd3a6-0a55-4957-b861-678d9af3c338\") " pod="openstack/swift-proxy-97696577-2mh8q" Mar 11 09:18:21 crc kubenswrapper[4840]: I0311 09:18:21.008084 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f10cd3a6-0a55-4957-b861-678d9af3c338-combined-ca-bundle\") pod \"swift-proxy-97696577-2mh8q\" (UID: \"f10cd3a6-0a55-4957-b861-678d9af3c338\") " pod="openstack/swift-proxy-97696577-2mh8q" Mar 11 09:18:21 crc kubenswrapper[4840]: I0311 09:18:21.009453 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f10cd3a6-0a55-4957-b861-678d9af3c338-log-httpd\") pod \"swift-proxy-97696577-2mh8q\" (UID: \"f10cd3a6-0a55-4957-b861-678d9af3c338\") " pod="openstack/swift-proxy-97696577-2mh8q" Mar 11 09:18:21 crc kubenswrapper[4840]: I0311 09:18:21.009868 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f10cd3a6-0a55-4957-b861-678d9af3c338-run-httpd\") pod \"swift-proxy-97696577-2mh8q\" (UID: \"f10cd3a6-0a55-4957-b861-678d9af3c338\") " pod="openstack/swift-proxy-97696577-2mh8q" Mar 11 09:18:21 crc kubenswrapper[4840]: I0311 09:18:21.013749 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/f10cd3a6-0a55-4957-b861-678d9af3c338-internal-tls-certs\") pod \"swift-proxy-97696577-2mh8q\" (UID: \"f10cd3a6-0a55-4957-b861-678d9af3c338\") " pod="openstack/swift-proxy-97696577-2mh8q" Mar 11 09:18:21 crc kubenswrapper[4840]: I0311 09:18:21.015129 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f10cd3a6-0a55-4957-b861-678d9af3c338-public-tls-certs\") pod \"swift-proxy-97696577-2mh8q\" (UID: \"f10cd3a6-0a55-4957-b861-678d9af3c338\") " pod="openstack/swift-proxy-97696577-2mh8q" Mar 11 09:18:21 crc kubenswrapper[4840]: I0311 09:18:21.015249 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f10cd3a6-0a55-4957-b861-678d9af3c338-combined-ca-bundle\") pod \"swift-proxy-97696577-2mh8q\" (UID: \"f10cd3a6-0a55-4957-b861-678d9af3c338\") " pod="openstack/swift-proxy-97696577-2mh8q" Mar 11 09:18:21 crc kubenswrapper[4840]: I0311 09:18:21.016079 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f10cd3a6-0a55-4957-b861-678d9af3c338-config-data\") pod \"swift-proxy-97696577-2mh8q\" (UID: \"f10cd3a6-0a55-4957-b861-678d9af3c338\") " pod="openstack/swift-proxy-97696577-2mh8q" Mar 11 09:18:21 crc kubenswrapper[4840]: I0311 09:18:21.017991 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f10cd3a6-0a55-4957-b861-678d9af3c338-etc-swift\") pod \"swift-proxy-97696577-2mh8q\" (UID: \"f10cd3a6-0a55-4957-b861-678d9af3c338\") " pod="openstack/swift-proxy-97696577-2mh8q" Mar 11 09:18:21 crc kubenswrapper[4840]: I0311 09:18:21.026572 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wf8r\" (UniqueName: \"kubernetes.io/projected/f10cd3a6-0a55-4957-b861-678d9af3c338-kube-api-access-9wf8r\") pod 
\"swift-proxy-97696577-2mh8q\" (UID: \"f10cd3a6-0a55-4957-b861-678d9af3c338\") " pod="openstack/swift-proxy-97696577-2mh8q" Mar 11 09:18:21 crc kubenswrapper[4840]: I0311 09:18:21.077251 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-97696577-2mh8q" Mar 11 09:18:21 crc kubenswrapper[4840]: I0311 09:18:21.124310 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"fd8b15e6-0bb6-4d79-99aa-765ded51af1d","Type":"ContainerStarted","Data":"8d27b219acdbf1ed0f6cb152b86c9b44e8ed6f976378d28441b06575714a48ed"} Mar 11 09:18:21 crc kubenswrapper[4840]: I0311 09:18:21.155028 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.155009643 podStartE2EDuration="3.155009643s" podCreationTimestamp="2026-03-11 09:18:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:18:21.145242427 +0000 UTC m=+1299.810912242" watchObservedRunningTime="2026-03-11 09:18:21.155009643 +0000 UTC m=+1299.820679458" Mar 11 09:18:21 crc kubenswrapper[4840]: I0311 09:18:21.677222 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-97696577-2mh8q"] Mar 11 09:18:22 crc kubenswrapper[4840]: I0311 09:18:22.143728 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-97696577-2mh8q" event={"ID":"f10cd3a6-0a55-4957-b861-678d9af3c338","Type":"ContainerStarted","Data":"7ec25a62f5da025330f39922a9766d18ca04cc7a4cbe3b4c61c31b0ef51d49f5"} Mar 11 09:18:22 crc kubenswrapper[4840]: I0311 09:18:22.144173 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-97696577-2mh8q" event={"ID":"f10cd3a6-0a55-4957-b861-678d9af3c338","Type":"ContainerStarted","Data":"b1c44fbc8505b3dbc12f02047d18c60d764862d674aa170ce783dcff52172d01"} Mar 11 09:18:22 crc kubenswrapper[4840]: 
I0311 09:18:22.266496 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 11 09:18:22 crc kubenswrapper[4840]: I0311 09:18:22.272065 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="74300cf6-cc28-45d0-ae1a-4099210bcbf1" containerName="ceilometer-central-agent" containerID="cri-o://9d63ed12400b7b22d7b3175f5ad574f240db1b75017314d820e4f1e64bf92d6c" gracePeriod=30 Mar 11 09:18:22 crc kubenswrapper[4840]: I0311 09:18:22.272085 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="74300cf6-cc28-45d0-ae1a-4099210bcbf1" containerName="proxy-httpd" containerID="cri-o://f24a2427d9c1e97f86c996783eeb38003294c9ce55e7512876fa072d844f2651" gracePeriod=30 Mar 11 09:18:22 crc kubenswrapper[4840]: I0311 09:18:22.272139 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="74300cf6-cc28-45d0-ae1a-4099210bcbf1" containerName="sg-core" containerID="cri-o://c426a86a3c4f9c6b20511a27db1b07c12494776e3cf3dd145b14972a6713291d" gracePeriod=30 Mar 11 09:18:22 crc kubenswrapper[4840]: I0311 09:18:22.272260 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="74300cf6-cc28-45d0-ae1a-4099210bcbf1" containerName="ceilometer-notification-agent" containerID="cri-o://131b4c047b76b60be17cc4859f36de4b964899f98daee4b5778c7093e955d6ff" gracePeriod=30 Mar 11 09:18:22 crc kubenswrapper[4840]: I0311 09:18:22.287054 4840 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="74300cf6-cc28-45d0-ae1a-4099210bcbf1" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.168:3000/\": read tcp 10.217.0.2:37088->10.217.0.168:3000: read: connection reset by peer" Mar 11 09:18:23 crc kubenswrapper[4840]: I0311 09:18:23.168362 4840 generic.go:334] "Generic (PLEG): container finished" 
podID="74300cf6-cc28-45d0-ae1a-4099210bcbf1" containerID="f24a2427d9c1e97f86c996783eeb38003294c9ce55e7512876fa072d844f2651" exitCode=0 Mar 11 09:18:23 crc kubenswrapper[4840]: I0311 09:18:23.168670 4840 generic.go:334] "Generic (PLEG): container finished" podID="74300cf6-cc28-45d0-ae1a-4099210bcbf1" containerID="c426a86a3c4f9c6b20511a27db1b07c12494776e3cf3dd145b14972a6713291d" exitCode=2 Mar 11 09:18:23 crc kubenswrapper[4840]: I0311 09:18:23.168679 4840 generic.go:334] "Generic (PLEG): container finished" podID="74300cf6-cc28-45d0-ae1a-4099210bcbf1" containerID="131b4c047b76b60be17cc4859f36de4b964899f98daee4b5778c7093e955d6ff" exitCode=0 Mar 11 09:18:23 crc kubenswrapper[4840]: I0311 09:18:23.168685 4840 generic.go:334] "Generic (PLEG): container finished" podID="74300cf6-cc28-45d0-ae1a-4099210bcbf1" containerID="9d63ed12400b7b22d7b3175f5ad574f240db1b75017314d820e4f1e64bf92d6c" exitCode=0 Mar 11 09:18:23 crc kubenswrapper[4840]: I0311 09:18:23.168721 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"74300cf6-cc28-45d0-ae1a-4099210bcbf1","Type":"ContainerDied","Data":"f24a2427d9c1e97f86c996783eeb38003294c9ce55e7512876fa072d844f2651"} Mar 11 09:18:23 crc kubenswrapper[4840]: I0311 09:18:23.168750 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"74300cf6-cc28-45d0-ae1a-4099210bcbf1","Type":"ContainerDied","Data":"c426a86a3c4f9c6b20511a27db1b07c12494776e3cf3dd145b14972a6713291d"} Mar 11 09:18:23 crc kubenswrapper[4840]: I0311 09:18:23.168760 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"74300cf6-cc28-45d0-ae1a-4099210bcbf1","Type":"ContainerDied","Data":"131b4c047b76b60be17cc4859f36de4b964899f98daee4b5778c7093e955d6ff"} Mar 11 09:18:23 crc kubenswrapper[4840]: I0311 09:18:23.168768 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"74300cf6-cc28-45d0-ae1a-4099210bcbf1","Type":"ContainerDied","Data":"9d63ed12400b7b22d7b3175f5ad574f240db1b75017314d820e4f1e64bf92d6c"} Mar 11 09:18:23 crc kubenswrapper[4840]: I0311 09:18:23.170886 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-97696577-2mh8q" event={"ID":"f10cd3a6-0a55-4957-b861-678d9af3c338","Type":"ContainerStarted","Data":"e59d16a9086fcb2441c89141dd63cbc53bf993e3952c1f25bf481d63fd3390fa"} Mar 11 09:18:23 crc kubenswrapper[4840]: I0311 09:18:23.172055 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-97696577-2mh8q" Mar 11 09:18:23 crc kubenswrapper[4840]: I0311 09:18:23.172083 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-97696577-2mh8q" Mar 11 09:18:23 crc kubenswrapper[4840]: I0311 09:18:23.196012 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-97696577-2mh8q" podStartSLOduration=3.195913801 podStartE2EDuration="3.195913801s" podCreationTimestamp="2026-03-11 09:18:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:18:23.187577491 +0000 UTC m=+1301.853247306" watchObservedRunningTime="2026-03-11 09:18:23.195913801 +0000 UTC m=+1301.861583616" Mar 11 09:18:23 crc kubenswrapper[4840]: I0311 09:18:23.231808 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 11 09:18:23 crc kubenswrapper[4840]: I0311 09:18:23.363307 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/74300cf6-cc28-45d0-ae1a-4099210bcbf1-log-httpd\") pod \"74300cf6-cc28-45d0-ae1a-4099210bcbf1\" (UID: \"74300cf6-cc28-45d0-ae1a-4099210bcbf1\") " Mar 11 09:18:23 crc kubenswrapper[4840]: I0311 09:18:23.363386 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74300cf6-cc28-45d0-ae1a-4099210bcbf1-combined-ca-bundle\") pod \"74300cf6-cc28-45d0-ae1a-4099210bcbf1\" (UID: \"74300cf6-cc28-45d0-ae1a-4099210bcbf1\") " Mar 11 09:18:23 crc kubenswrapper[4840]: I0311 09:18:23.363649 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hshwj\" (UniqueName: \"kubernetes.io/projected/74300cf6-cc28-45d0-ae1a-4099210bcbf1-kube-api-access-hshwj\") pod \"74300cf6-cc28-45d0-ae1a-4099210bcbf1\" (UID: \"74300cf6-cc28-45d0-ae1a-4099210bcbf1\") " Mar 11 09:18:23 crc kubenswrapper[4840]: I0311 09:18:23.363713 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74300cf6-cc28-45d0-ae1a-4099210bcbf1-config-data\") pod \"74300cf6-cc28-45d0-ae1a-4099210bcbf1\" (UID: \"74300cf6-cc28-45d0-ae1a-4099210bcbf1\") " Mar 11 09:18:23 crc kubenswrapper[4840]: I0311 09:18:23.363761 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74300cf6-cc28-45d0-ae1a-4099210bcbf1-scripts\") pod \"74300cf6-cc28-45d0-ae1a-4099210bcbf1\" (UID: \"74300cf6-cc28-45d0-ae1a-4099210bcbf1\") " Mar 11 09:18:23 crc kubenswrapper[4840]: I0311 09:18:23.363784 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/74300cf6-cc28-45d0-ae1a-4099210bcbf1-run-httpd\") pod \"74300cf6-cc28-45d0-ae1a-4099210bcbf1\" (UID: \"74300cf6-cc28-45d0-ae1a-4099210bcbf1\") " Mar 11 09:18:23 crc kubenswrapper[4840]: I0311 09:18:23.364398 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74300cf6-cc28-45d0-ae1a-4099210bcbf1-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "74300cf6-cc28-45d0-ae1a-4099210bcbf1" (UID: "74300cf6-cc28-45d0-ae1a-4099210bcbf1"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:18:23 crc kubenswrapper[4840]: I0311 09:18:23.364506 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74300cf6-cc28-45d0-ae1a-4099210bcbf1-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "74300cf6-cc28-45d0-ae1a-4099210bcbf1" (UID: "74300cf6-cc28-45d0-ae1a-4099210bcbf1"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:18:23 crc kubenswrapper[4840]: I0311 09:18:23.364544 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/74300cf6-cc28-45d0-ae1a-4099210bcbf1-sg-core-conf-yaml\") pod \"74300cf6-cc28-45d0-ae1a-4099210bcbf1\" (UID: \"74300cf6-cc28-45d0-ae1a-4099210bcbf1\") " Mar 11 09:18:23 crc kubenswrapper[4840]: I0311 09:18:23.365137 4840 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/74300cf6-cc28-45d0-ae1a-4099210bcbf1-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 11 09:18:23 crc kubenswrapper[4840]: I0311 09:18:23.365161 4840 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/74300cf6-cc28-45d0-ae1a-4099210bcbf1-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 11 09:18:23 crc kubenswrapper[4840]: I0311 09:18:23.369736 4840 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74300cf6-cc28-45d0-ae1a-4099210bcbf1-kube-api-access-hshwj" (OuterVolumeSpecName: "kube-api-access-hshwj") pod "74300cf6-cc28-45d0-ae1a-4099210bcbf1" (UID: "74300cf6-cc28-45d0-ae1a-4099210bcbf1"). InnerVolumeSpecName "kube-api-access-hshwj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:18:23 crc kubenswrapper[4840]: I0311 09:18:23.388652 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74300cf6-cc28-45d0-ae1a-4099210bcbf1-scripts" (OuterVolumeSpecName: "scripts") pod "74300cf6-cc28-45d0-ae1a-4099210bcbf1" (UID: "74300cf6-cc28-45d0-ae1a-4099210bcbf1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:18:23 crc kubenswrapper[4840]: I0311 09:18:23.406604 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74300cf6-cc28-45d0-ae1a-4099210bcbf1-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "74300cf6-cc28-45d0-ae1a-4099210bcbf1" (UID: "74300cf6-cc28-45d0-ae1a-4099210bcbf1"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:18:23 crc kubenswrapper[4840]: I0311 09:18:23.469758 4840 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/74300cf6-cc28-45d0-ae1a-4099210bcbf1-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 11 09:18:23 crc kubenswrapper[4840]: I0311 09:18:23.469800 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hshwj\" (UniqueName: \"kubernetes.io/projected/74300cf6-cc28-45d0-ae1a-4099210bcbf1-kube-api-access-hshwj\") on node \"crc\" DevicePath \"\"" Mar 11 09:18:23 crc kubenswrapper[4840]: I0311 09:18:23.469813 4840 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74300cf6-cc28-45d0-ae1a-4099210bcbf1-scripts\") on node \"crc\" DevicePath \"\"" Mar 11 09:18:23 crc kubenswrapper[4840]: I0311 09:18:23.474579 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74300cf6-cc28-45d0-ae1a-4099210bcbf1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "74300cf6-cc28-45d0-ae1a-4099210bcbf1" (UID: "74300cf6-cc28-45d0-ae1a-4099210bcbf1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:18:23 crc kubenswrapper[4840]: I0311 09:18:23.507699 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Mar 11 09:18:23 crc kubenswrapper[4840]: I0311 09:18:23.508672 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74300cf6-cc28-45d0-ae1a-4099210bcbf1-config-data" (OuterVolumeSpecName: "config-data") pod "74300cf6-cc28-45d0-ae1a-4099210bcbf1" (UID: "74300cf6-cc28-45d0-ae1a-4099210bcbf1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:18:23 crc kubenswrapper[4840]: I0311 09:18:23.572165 4840 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74300cf6-cc28-45d0-ae1a-4099210bcbf1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 09:18:23 crc kubenswrapper[4840]: I0311 09:18:23.572208 4840 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74300cf6-cc28-45d0-ae1a-4099210bcbf1-config-data\") on node \"crc\" DevicePath \"\"" Mar 11 09:18:24 crc kubenswrapper[4840]: I0311 09:18:24.186067 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"74300cf6-cc28-45d0-ae1a-4099210bcbf1","Type":"ContainerDied","Data":"64382546f1f80ccaf995d2a923e32932d83117965f0e0c19629e493e51181006"} Mar 11 09:18:24 crc kubenswrapper[4840]: I0311 09:18:24.186122 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 11 09:18:24 crc kubenswrapper[4840]: I0311 09:18:24.186379 4840 scope.go:117] "RemoveContainer" containerID="f24a2427d9c1e97f86c996783eeb38003294c9ce55e7512876fa072d844f2651" Mar 11 09:18:24 crc kubenswrapper[4840]: I0311 09:18:24.217753 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 11 09:18:24 crc kubenswrapper[4840]: I0311 09:18:24.228237 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 11 09:18:24 crc kubenswrapper[4840]: I0311 09:18:24.246423 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 11 09:18:24 crc kubenswrapper[4840]: E0311 09:18:24.246942 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74300cf6-cc28-45d0-ae1a-4099210bcbf1" containerName="proxy-httpd" Mar 11 09:18:24 crc kubenswrapper[4840]: I0311 09:18:24.246959 4840 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="74300cf6-cc28-45d0-ae1a-4099210bcbf1" containerName="proxy-httpd" Mar 11 09:18:24 crc kubenswrapper[4840]: E0311 09:18:24.246970 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74300cf6-cc28-45d0-ae1a-4099210bcbf1" containerName="ceilometer-notification-agent" Mar 11 09:18:24 crc kubenswrapper[4840]: I0311 09:18:24.246978 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="74300cf6-cc28-45d0-ae1a-4099210bcbf1" containerName="ceilometer-notification-agent" Mar 11 09:18:24 crc kubenswrapper[4840]: E0311 09:18:24.246987 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74300cf6-cc28-45d0-ae1a-4099210bcbf1" containerName="sg-core" Mar 11 09:18:24 crc kubenswrapper[4840]: I0311 09:18:24.246994 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="74300cf6-cc28-45d0-ae1a-4099210bcbf1" containerName="sg-core" Mar 11 09:18:24 crc kubenswrapper[4840]: E0311 09:18:24.247013 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74300cf6-cc28-45d0-ae1a-4099210bcbf1" containerName="ceilometer-central-agent" Mar 11 09:18:24 crc kubenswrapper[4840]: I0311 09:18:24.247019 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="74300cf6-cc28-45d0-ae1a-4099210bcbf1" containerName="ceilometer-central-agent" Mar 11 09:18:24 crc kubenswrapper[4840]: I0311 09:18:24.247216 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="74300cf6-cc28-45d0-ae1a-4099210bcbf1" containerName="sg-core" Mar 11 09:18:24 crc kubenswrapper[4840]: I0311 09:18:24.247235 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="74300cf6-cc28-45d0-ae1a-4099210bcbf1" containerName="ceilometer-notification-agent" Mar 11 09:18:24 crc kubenswrapper[4840]: I0311 09:18:24.247247 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="74300cf6-cc28-45d0-ae1a-4099210bcbf1" containerName="ceilometer-central-agent" Mar 11 09:18:24 crc kubenswrapper[4840]: I0311 09:18:24.247258 4840 
memory_manager.go:354] "RemoveStaleState removing state" podUID="74300cf6-cc28-45d0-ae1a-4099210bcbf1" containerName="proxy-httpd" Mar 11 09:18:24 crc kubenswrapper[4840]: I0311 09:18:24.248900 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 11 09:18:24 crc kubenswrapper[4840]: I0311 09:18:24.253964 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 11 09:18:24 crc kubenswrapper[4840]: I0311 09:18:24.254947 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 11 09:18:24 crc kubenswrapper[4840]: I0311 09:18:24.257198 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 11 09:18:24 crc kubenswrapper[4840]: I0311 09:18:24.304109 4840 scope.go:117] "RemoveContainer" containerID="c426a86a3c4f9c6b20511a27db1b07c12494776e3cf3dd145b14972a6713291d" Mar 11 09:18:24 crc kubenswrapper[4840]: I0311 09:18:24.337358 4840 scope.go:117] "RemoveContainer" containerID="131b4c047b76b60be17cc4859f36de4b964899f98daee4b5778c7093e955d6ff" Mar 11 09:18:24 crc kubenswrapper[4840]: I0311 09:18:24.366436 4840 scope.go:117] "RemoveContainer" containerID="9d63ed12400b7b22d7b3175f5ad574f240db1b75017314d820e4f1e64bf92d6c" Mar 11 09:18:24 crc kubenswrapper[4840]: I0311 09:18:24.386077 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20e05035-a26a-4828-8067-5406560cabe8-config-data\") pod \"ceilometer-0\" (UID: \"20e05035-a26a-4828-8067-5406560cabe8\") " pod="openstack/ceilometer-0" Mar 11 09:18:24 crc kubenswrapper[4840]: I0311 09:18:24.386271 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20e05035-a26a-4828-8067-5406560cabe8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"20e05035-a26a-4828-8067-5406560cabe8\") " pod="openstack/ceilometer-0" Mar 11 09:18:24 crc kubenswrapper[4840]: I0311 09:18:24.386442 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20e05035-a26a-4828-8067-5406560cabe8-scripts\") pod \"ceilometer-0\" (UID: \"20e05035-a26a-4828-8067-5406560cabe8\") " pod="openstack/ceilometer-0" Mar 11 09:18:24 crc kubenswrapper[4840]: I0311 09:18:24.386502 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20e05035-a26a-4828-8067-5406560cabe8-log-httpd\") pod \"ceilometer-0\" (UID: \"20e05035-a26a-4828-8067-5406560cabe8\") " pod="openstack/ceilometer-0" Mar 11 09:18:24 crc kubenswrapper[4840]: I0311 09:18:24.386527 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/20e05035-a26a-4828-8067-5406560cabe8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"20e05035-a26a-4828-8067-5406560cabe8\") " pod="openstack/ceilometer-0" Mar 11 09:18:24 crc kubenswrapper[4840]: I0311 09:18:24.386654 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20e05035-a26a-4828-8067-5406560cabe8-run-httpd\") pod \"ceilometer-0\" (UID: \"20e05035-a26a-4828-8067-5406560cabe8\") " pod="openstack/ceilometer-0" Mar 11 09:18:24 crc kubenswrapper[4840]: I0311 09:18:24.386788 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ldtr\" (UniqueName: \"kubernetes.io/projected/20e05035-a26a-4828-8067-5406560cabe8-kube-api-access-2ldtr\") pod \"ceilometer-0\" (UID: \"20e05035-a26a-4828-8067-5406560cabe8\") " pod="openstack/ceilometer-0" Mar 11 09:18:24 crc kubenswrapper[4840]: I0311 09:18:24.490033 
4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ldtr\" (UniqueName: \"kubernetes.io/projected/20e05035-a26a-4828-8067-5406560cabe8-kube-api-access-2ldtr\") pod \"ceilometer-0\" (UID: \"20e05035-a26a-4828-8067-5406560cabe8\") " pod="openstack/ceilometer-0" Mar 11 09:18:24 crc kubenswrapper[4840]: I0311 09:18:24.490088 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20e05035-a26a-4828-8067-5406560cabe8-config-data\") pod \"ceilometer-0\" (UID: \"20e05035-a26a-4828-8067-5406560cabe8\") " pod="openstack/ceilometer-0" Mar 11 09:18:24 crc kubenswrapper[4840]: I0311 09:18:24.490122 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20e05035-a26a-4828-8067-5406560cabe8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"20e05035-a26a-4828-8067-5406560cabe8\") " pod="openstack/ceilometer-0" Mar 11 09:18:24 crc kubenswrapper[4840]: I0311 09:18:24.490186 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20e05035-a26a-4828-8067-5406560cabe8-scripts\") pod \"ceilometer-0\" (UID: \"20e05035-a26a-4828-8067-5406560cabe8\") " pod="openstack/ceilometer-0" Mar 11 09:18:24 crc kubenswrapper[4840]: I0311 09:18:24.490430 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20e05035-a26a-4828-8067-5406560cabe8-log-httpd\") pod \"ceilometer-0\" (UID: \"20e05035-a26a-4828-8067-5406560cabe8\") " pod="openstack/ceilometer-0" Mar 11 09:18:24 crc kubenswrapper[4840]: I0311 09:18:24.490539 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/20e05035-a26a-4828-8067-5406560cabe8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"20e05035-a26a-4828-8067-5406560cabe8\") " pod="openstack/ceilometer-0" Mar 11 09:18:24 crc kubenswrapper[4840]: I0311 09:18:24.490885 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20e05035-a26a-4828-8067-5406560cabe8-run-httpd\") pod \"ceilometer-0\" (UID: \"20e05035-a26a-4828-8067-5406560cabe8\") " pod="openstack/ceilometer-0" Mar 11 09:18:24 crc kubenswrapper[4840]: I0311 09:18:24.490998 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20e05035-a26a-4828-8067-5406560cabe8-log-httpd\") pod \"ceilometer-0\" (UID: \"20e05035-a26a-4828-8067-5406560cabe8\") " pod="openstack/ceilometer-0" Mar 11 09:18:24 crc kubenswrapper[4840]: I0311 09:18:24.491200 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20e05035-a26a-4828-8067-5406560cabe8-run-httpd\") pod \"ceilometer-0\" (UID: \"20e05035-a26a-4828-8067-5406560cabe8\") " pod="openstack/ceilometer-0" Mar 11 09:18:24 crc kubenswrapper[4840]: I0311 09:18:24.499219 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20e05035-a26a-4828-8067-5406560cabe8-scripts\") pod \"ceilometer-0\" (UID: \"20e05035-a26a-4828-8067-5406560cabe8\") " pod="openstack/ceilometer-0" Mar 11 09:18:24 crc kubenswrapper[4840]: I0311 09:18:24.500650 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20e05035-a26a-4828-8067-5406560cabe8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"20e05035-a26a-4828-8067-5406560cabe8\") " pod="openstack/ceilometer-0" Mar 11 09:18:24 crc kubenswrapper[4840]: I0311 09:18:24.508809 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/20e05035-a26a-4828-8067-5406560cabe8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"20e05035-a26a-4828-8067-5406560cabe8\") " pod="openstack/ceilometer-0" Mar 11 09:18:24 crc kubenswrapper[4840]: I0311 09:18:24.513587 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20e05035-a26a-4828-8067-5406560cabe8-config-data\") pod \"ceilometer-0\" (UID: \"20e05035-a26a-4828-8067-5406560cabe8\") " pod="openstack/ceilometer-0" Mar 11 09:18:24 crc kubenswrapper[4840]: I0311 09:18:24.517158 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ldtr\" (UniqueName: \"kubernetes.io/projected/20e05035-a26a-4828-8067-5406560cabe8-kube-api-access-2ldtr\") pod \"ceilometer-0\" (UID: \"20e05035-a26a-4828-8067-5406560cabe8\") " pod="openstack/ceilometer-0" Mar 11 09:18:24 crc kubenswrapper[4840]: I0311 09:18:24.595090 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 11 09:18:24 crc kubenswrapper[4840]: I0311 09:18:24.603741 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 11 09:18:25 crc kubenswrapper[4840]: I0311 09:18:25.118830 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 11 09:18:26 crc kubenswrapper[4840]: I0311 09:18:26.076282 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74300cf6-cc28-45d0-ae1a-4099210bcbf1" path="/var/lib/kubelet/pods/74300cf6-cc28-45d0-ae1a-4099210bcbf1/volumes" Mar 11 09:18:26 crc kubenswrapper[4840]: I0311 09:18:26.093224 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-97696577-2mh8q" Mar 11 09:18:28 crc kubenswrapper[4840]: I0311 09:18:28.806913 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Mar 11 09:18:31 crc kubenswrapper[4840]: I0311 
09:18:31.083901 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-97696577-2mh8q" Mar 11 09:18:33 crc kubenswrapper[4840]: W0311 09:18:33.198979 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod20e05035_a26a_4828_8067_5406560cabe8.slice/crio-a106320a9e73e6245e440b5084460c035694bffe9c3d467fc2ac3a8c1661b12c WatchSource:0}: Error finding container a106320a9e73e6245e440b5084460c035694bffe9c3d467fc2ac3a8c1661b12c: Status 404 returned error can't find the container with id a106320a9e73e6245e440b5084460c035694bffe9c3d467fc2ac3a8c1661b12c Mar 11 09:18:33 crc kubenswrapper[4840]: I0311 09:18:33.320798 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20e05035-a26a-4828-8067-5406560cabe8","Type":"ContainerStarted","Data":"a106320a9e73e6245e440b5084460c035694bffe9c3d467fc2ac3a8c1661b12c"} Mar 11 09:18:34 crc kubenswrapper[4840]: I0311 09:18:34.331268 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20e05035-a26a-4828-8067-5406560cabe8","Type":"ContainerStarted","Data":"6fd91ebbfc8cbae432e74ef45f69b373f98dbc8c5489b6a909c0a88fc8e7d31e"} Mar 11 09:18:34 crc kubenswrapper[4840]: I0311 09:18:34.333296 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"794eb074-baf3-46dc-8f9d-8a92fc9240fd","Type":"ContainerStarted","Data":"037417d9a07e2291989661f88eef7dd45b3d6fb5f3c8e6af6e31e40973d67031"} Mar 11 09:18:34 crc kubenswrapper[4840]: I0311 09:18:34.356313 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.200753722 podStartE2EDuration="16.356288519s" podCreationTimestamp="2026-03-11 09:18:18 +0000 UTC" firstStartedPulling="2026-03-11 09:18:19.104523814 +0000 UTC m=+1297.770193619" lastFinishedPulling="2026-03-11 09:18:33.260058601 +0000 
UTC m=+1311.925728416" observedRunningTime="2026-03-11 09:18:34.348017931 +0000 UTC m=+1313.013687736" watchObservedRunningTime="2026-03-11 09:18:34.356288519 +0000 UTC m=+1313.021958334" Mar 11 09:18:35 crc kubenswrapper[4840]: I0311 09:18:35.343183 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20e05035-a26a-4828-8067-5406560cabe8","Type":"ContainerStarted","Data":"138f3c925f9fe88369bef687b6f1184806749d5859d5f0b52c284192e63f8c53"} Mar 11 09:18:36 crc kubenswrapper[4840]: I0311 09:18:36.355582 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20e05035-a26a-4828-8067-5406560cabe8","Type":"ContainerStarted","Data":"fb8d6e0d75c7e81975620f94f9139bab168e2150be79170daa6948fe6d4abb90"} Mar 11 09:18:37 crc kubenswrapper[4840]: I0311 09:18:37.296729 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Mar 11 09:18:37 crc kubenswrapper[4840]: I0311 09:18:37.375105 4840 generic.go:334] "Generic (PLEG): container finished" podID="163d1a95-e48e-4781-bdde-9d48741f0213" containerID="18332a9e3a8500c2684c92fac89d19d55c5bbb997962d31c8f43e7f357714025" exitCode=137 Mar 11 09:18:37 crc kubenswrapper[4840]: I0311 09:18:37.375223 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Mar 11 09:18:37 crc kubenswrapper[4840]: I0311 09:18:37.375267 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"163d1a95-e48e-4781-bdde-9d48741f0213","Type":"ContainerDied","Data":"18332a9e3a8500c2684c92fac89d19d55c5bbb997962d31c8f43e7f357714025"} Mar 11 09:18:37 crc kubenswrapper[4840]: I0311 09:18:37.375309 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"163d1a95-e48e-4781-bdde-9d48741f0213","Type":"ContainerDied","Data":"5964acba78ec9f73b8ee4d878bd5ecc149adf27f7165752cb9efddbb39a3232f"} Mar 11 09:18:37 crc kubenswrapper[4840]: I0311 09:18:37.375328 4840 scope.go:117] "RemoveContainer" containerID="18332a9e3a8500c2684c92fac89d19d55c5bbb997962d31c8f43e7f357714025" Mar 11 09:18:37 crc kubenswrapper[4840]: I0311 09:18:37.385388 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/163d1a95-e48e-4781-bdde-9d48741f0213-scripts\") pod \"163d1a95-e48e-4781-bdde-9d48741f0213\" (UID: \"163d1a95-e48e-4781-bdde-9d48741f0213\") " Mar 11 09:18:37 crc kubenswrapper[4840]: I0311 09:18:37.385826 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/163d1a95-e48e-4781-bdde-9d48741f0213-config-data-custom\") pod \"163d1a95-e48e-4781-bdde-9d48741f0213\" (UID: \"163d1a95-e48e-4781-bdde-9d48741f0213\") " Mar 11 09:18:37 crc kubenswrapper[4840]: I0311 09:18:37.385867 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n9jz7\" (UniqueName: \"kubernetes.io/projected/163d1a95-e48e-4781-bdde-9d48741f0213-kube-api-access-n9jz7\") pod \"163d1a95-e48e-4781-bdde-9d48741f0213\" (UID: \"163d1a95-e48e-4781-bdde-9d48741f0213\") " Mar 11 09:18:37 crc kubenswrapper[4840]: I0311 09:18:37.385919 4840 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/163d1a95-e48e-4781-bdde-9d48741f0213-logs\") pod \"163d1a95-e48e-4781-bdde-9d48741f0213\" (UID: \"163d1a95-e48e-4781-bdde-9d48741f0213\") " Mar 11 09:18:37 crc kubenswrapper[4840]: I0311 09:18:37.385961 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/163d1a95-e48e-4781-bdde-9d48741f0213-etc-machine-id\") pod \"163d1a95-e48e-4781-bdde-9d48741f0213\" (UID: \"163d1a95-e48e-4781-bdde-9d48741f0213\") " Mar 11 09:18:37 crc kubenswrapper[4840]: I0311 09:18:37.386049 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/163d1a95-e48e-4781-bdde-9d48741f0213-combined-ca-bundle\") pod \"163d1a95-e48e-4781-bdde-9d48741f0213\" (UID: \"163d1a95-e48e-4781-bdde-9d48741f0213\") " Mar 11 09:18:37 crc kubenswrapper[4840]: I0311 09:18:37.386131 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/163d1a95-e48e-4781-bdde-9d48741f0213-config-data\") pod \"163d1a95-e48e-4781-bdde-9d48741f0213\" (UID: \"163d1a95-e48e-4781-bdde-9d48741f0213\") " Mar 11 09:18:37 crc kubenswrapper[4840]: I0311 09:18:37.386609 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/163d1a95-e48e-4781-bdde-9d48741f0213-logs" (OuterVolumeSpecName: "logs") pod "163d1a95-e48e-4781-bdde-9d48741f0213" (UID: "163d1a95-e48e-4781-bdde-9d48741f0213"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:18:37 crc kubenswrapper[4840]: I0311 09:18:37.386894 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/163d1a95-e48e-4781-bdde-9d48741f0213-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "163d1a95-e48e-4781-bdde-9d48741f0213" (UID: "163d1a95-e48e-4781-bdde-9d48741f0213"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 11 09:18:37 crc kubenswrapper[4840]: I0311 09:18:37.390071 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/163d1a95-e48e-4781-bdde-9d48741f0213-scripts" (OuterVolumeSpecName: "scripts") pod "163d1a95-e48e-4781-bdde-9d48741f0213" (UID: "163d1a95-e48e-4781-bdde-9d48741f0213"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:18:37 crc kubenswrapper[4840]: I0311 09:18:37.394704 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/163d1a95-e48e-4781-bdde-9d48741f0213-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "163d1a95-e48e-4781-bdde-9d48741f0213" (UID: "163d1a95-e48e-4781-bdde-9d48741f0213"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:18:37 crc kubenswrapper[4840]: I0311 09:18:37.419869 4840 scope.go:117] "RemoveContainer" containerID="8b2b2ee94f02e4c91965819ff19ab228b357fae41649aef16658ee14d3ffc1d9" Mar 11 09:18:37 crc kubenswrapper[4840]: I0311 09:18:37.423862 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/163d1a95-e48e-4781-bdde-9d48741f0213-kube-api-access-n9jz7" (OuterVolumeSpecName: "kube-api-access-n9jz7") pod "163d1a95-e48e-4781-bdde-9d48741f0213" (UID: "163d1a95-e48e-4781-bdde-9d48741f0213"). InnerVolumeSpecName "kube-api-access-n9jz7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:18:37 crc kubenswrapper[4840]: I0311 09:18:37.445355 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/163d1a95-e48e-4781-bdde-9d48741f0213-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "163d1a95-e48e-4781-bdde-9d48741f0213" (UID: "163d1a95-e48e-4781-bdde-9d48741f0213"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:18:37 crc kubenswrapper[4840]: I0311 09:18:37.459791 4840 scope.go:117] "RemoveContainer" containerID="18332a9e3a8500c2684c92fac89d19d55c5bbb997962d31c8f43e7f357714025" Mar 11 09:18:37 crc kubenswrapper[4840]: E0311 09:18:37.460879 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18332a9e3a8500c2684c92fac89d19d55c5bbb997962d31c8f43e7f357714025\": container with ID starting with 18332a9e3a8500c2684c92fac89d19d55c5bbb997962d31c8f43e7f357714025 not found: ID does not exist" containerID="18332a9e3a8500c2684c92fac89d19d55c5bbb997962d31c8f43e7f357714025" Mar 11 09:18:37 crc kubenswrapper[4840]: I0311 09:18:37.460911 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18332a9e3a8500c2684c92fac89d19d55c5bbb997962d31c8f43e7f357714025"} err="failed to get container status \"18332a9e3a8500c2684c92fac89d19d55c5bbb997962d31c8f43e7f357714025\": rpc error: code = NotFound desc = could not find container \"18332a9e3a8500c2684c92fac89d19d55c5bbb997962d31c8f43e7f357714025\": container with ID starting with 18332a9e3a8500c2684c92fac89d19d55c5bbb997962d31c8f43e7f357714025 not found: ID does not exist" Mar 11 09:18:37 crc kubenswrapper[4840]: I0311 09:18:37.460933 4840 scope.go:117] "RemoveContainer" containerID="8b2b2ee94f02e4c91965819ff19ab228b357fae41649aef16658ee14d3ffc1d9" Mar 11 09:18:37 crc kubenswrapper[4840]: E0311 09:18:37.462579 4840 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b2b2ee94f02e4c91965819ff19ab228b357fae41649aef16658ee14d3ffc1d9\": container with ID starting with 8b2b2ee94f02e4c91965819ff19ab228b357fae41649aef16658ee14d3ffc1d9 not found: ID does not exist" containerID="8b2b2ee94f02e4c91965819ff19ab228b357fae41649aef16658ee14d3ffc1d9" Mar 11 09:18:37 crc kubenswrapper[4840]: I0311 09:18:37.462629 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b2b2ee94f02e4c91965819ff19ab228b357fae41649aef16658ee14d3ffc1d9"} err="failed to get container status \"8b2b2ee94f02e4c91965819ff19ab228b357fae41649aef16658ee14d3ffc1d9\": rpc error: code = NotFound desc = could not find container \"8b2b2ee94f02e4c91965819ff19ab228b357fae41649aef16658ee14d3ffc1d9\": container with ID starting with 8b2b2ee94f02e4c91965819ff19ab228b357fae41649aef16658ee14d3ffc1d9 not found: ID does not exist" Mar 11 09:18:37 crc kubenswrapper[4840]: I0311 09:18:37.463020 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/163d1a95-e48e-4781-bdde-9d48741f0213-config-data" (OuterVolumeSpecName: "config-data") pod "163d1a95-e48e-4781-bdde-9d48741f0213" (UID: "163d1a95-e48e-4781-bdde-9d48741f0213"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:18:37 crc kubenswrapper[4840]: I0311 09:18:37.489457 4840 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/163d1a95-e48e-4781-bdde-9d48741f0213-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 09:18:37 crc kubenswrapper[4840]: I0311 09:18:37.489522 4840 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/163d1a95-e48e-4781-bdde-9d48741f0213-config-data\") on node \"crc\" DevicePath \"\"" Mar 11 09:18:37 crc kubenswrapper[4840]: I0311 09:18:37.489531 4840 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/163d1a95-e48e-4781-bdde-9d48741f0213-scripts\") on node \"crc\" DevicePath \"\"" Mar 11 09:18:37 crc kubenswrapper[4840]: I0311 09:18:37.489539 4840 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/163d1a95-e48e-4781-bdde-9d48741f0213-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 11 09:18:37 crc kubenswrapper[4840]: I0311 09:18:37.489548 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n9jz7\" (UniqueName: \"kubernetes.io/projected/163d1a95-e48e-4781-bdde-9d48741f0213-kube-api-access-n9jz7\") on node \"crc\" DevicePath \"\"" Mar 11 09:18:37 crc kubenswrapper[4840]: I0311 09:18:37.489557 4840 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/163d1a95-e48e-4781-bdde-9d48741f0213-logs\") on node \"crc\" DevicePath \"\"" Mar 11 09:18:37 crc kubenswrapper[4840]: I0311 09:18:37.489565 4840 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/163d1a95-e48e-4781-bdde-9d48741f0213-etc-machine-id\") on node \"crc\" DevicePath \"\"" Mar 11 09:18:37 crc kubenswrapper[4840]: I0311 09:18:37.709081 4840 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Mar 11 09:18:37 crc kubenswrapper[4840]: I0311 09:18:37.755830 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Mar 11 09:18:37 crc kubenswrapper[4840]: I0311 09:18:37.780553 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Mar 11 09:18:37 crc kubenswrapper[4840]: E0311 09:18:37.781022 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="163d1a95-e48e-4781-bdde-9d48741f0213" containerName="cinder-api-log" Mar 11 09:18:37 crc kubenswrapper[4840]: I0311 09:18:37.781045 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="163d1a95-e48e-4781-bdde-9d48741f0213" containerName="cinder-api-log" Mar 11 09:18:37 crc kubenswrapper[4840]: E0311 09:18:37.781063 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="163d1a95-e48e-4781-bdde-9d48741f0213" containerName="cinder-api" Mar 11 09:18:37 crc kubenswrapper[4840]: I0311 09:18:37.781071 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="163d1a95-e48e-4781-bdde-9d48741f0213" containerName="cinder-api" Mar 11 09:18:37 crc kubenswrapper[4840]: I0311 09:18:37.781269 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="163d1a95-e48e-4781-bdde-9d48741f0213" containerName="cinder-api" Mar 11 09:18:37 crc kubenswrapper[4840]: I0311 09:18:37.781306 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="163d1a95-e48e-4781-bdde-9d48741f0213" containerName="cinder-api-log" Mar 11 09:18:37 crc kubenswrapper[4840]: I0311 09:18:37.782440 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Mar 11 09:18:37 crc kubenswrapper[4840]: I0311 09:18:37.791630 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Mar 11 09:18:37 crc kubenswrapper[4840]: I0311 09:18:37.791903 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Mar 11 09:18:37 crc kubenswrapper[4840]: I0311 09:18:37.792073 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Mar 11 09:18:37 crc kubenswrapper[4840]: I0311 09:18:37.814109 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Mar 11 09:18:37 crc kubenswrapper[4840]: I0311 09:18:37.895618 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/86c17dbf-d890-4de3-bf5d-29e0aea4d968-config-data-custom\") pod \"cinder-api-0\" (UID: \"86c17dbf-d890-4de3-bf5d-29e0aea4d968\") " pod="openstack/cinder-api-0" Mar 11 09:18:37 crc kubenswrapper[4840]: I0311 09:18:37.895698 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/86c17dbf-d890-4de3-bf5d-29e0aea4d968-etc-machine-id\") pod \"cinder-api-0\" (UID: \"86c17dbf-d890-4de3-bf5d-29e0aea4d968\") " pod="openstack/cinder-api-0" Mar 11 09:18:37 crc kubenswrapper[4840]: I0311 09:18:37.895735 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/86c17dbf-d890-4de3-bf5d-29e0aea4d968-public-tls-certs\") pod \"cinder-api-0\" (UID: \"86c17dbf-d890-4de3-bf5d-29e0aea4d968\") " pod="openstack/cinder-api-0" Mar 11 09:18:37 crc kubenswrapper[4840]: I0311 09:18:37.896032 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/86c17dbf-d890-4de3-bf5d-29e0aea4d968-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"86c17dbf-d890-4de3-bf5d-29e0aea4d968\") " pod="openstack/cinder-api-0" Mar 11 09:18:37 crc kubenswrapper[4840]: I0311 09:18:37.896098 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2vr8\" (UniqueName: \"kubernetes.io/projected/86c17dbf-d890-4de3-bf5d-29e0aea4d968-kube-api-access-l2vr8\") pod \"cinder-api-0\" (UID: \"86c17dbf-d890-4de3-bf5d-29e0aea4d968\") " pod="openstack/cinder-api-0" Mar 11 09:18:37 crc kubenswrapper[4840]: I0311 09:18:37.896216 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86c17dbf-d890-4de3-bf5d-29e0aea4d968-scripts\") pod \"cinder-api-0\" (UID: \"86c17dbf-d890-4de3-bf5d-29e0aea4d968\") " pod="openstack/cinder-api-0" Mar 11 09:18:37 crc kubenswrapper[4840]: I0311 09:18:37.896274 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86c17dbf-d890-4de3-bf5d-29e0aea4d968-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"86c17dbf-d890-4de3-bf5d-29e0aea4d968\") " pod="openstack/cinder-api-0" Mar 11 09:18:37 crc kubenswrapper[4840]: I0311 09:18:37.896493 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/86c17dbf-d890-4de3-bf5d-29e0aea4d968-logs\") pod \"cinder-api-0\" (UID: \"86c17dbf-d890-4de3-bf5d-29e0aea4d968\") " pod="openstack/cinder-api-0" Mar 11 09:18:37 crc kubenswrapper[4840]: I0311 09:18:37.896603 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86c17dbf-d890-4de3-bf5d-29e0aea4d968-config-data\") pod \"cinder-api-0\" 
(UID: \"86c17dbf-d890-4de3-bf5d-29e0aea4d968\") " pod="openstack/cinder-api-0" Mar 11 09:18:37 crc kubenswrapper[4840]: I0311 09:18:37.998982 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/86c17dbf-d890-4de3-bf5d-29e0aea4d968-logs\") pod \"cinder-api-0\" (UID: \"86c17dbf-d890-4de3-bf5d-29e0aea4d968\") " pod="openstack/cinder-api-0" Mar 11 09:18:37 crc kubenswrapper[4840]: I0311 09:18:37.999432 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/86c17dbf-d890-4de3-bf5d-29e0aea4d968-logs\") pod \"cinder-api-0\" (UID: \"86c17dbf-d890-4de3-bf5d-29e0aea4d968\") " pod="openstack/cinder-api-0" Mar 11 09:18:37 crc kubenswrapper[4840]: I0311 09:18:37.999549 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86c17dbf-d890-4de3-bf5d-29e0aea4d968-config-data\") pod \"cinder-api-0\" (UID: \"86c17dbf-d890-4de3-bf5d-29e0aea4d968\") " pod="openstack/cinder-api-0" Mar 11 09:18:38 crc kubenswrapper[4840]: I0311 09:18:38.000139 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/86c17dbf-d890-4de3-bf5d-29e0aea4d968-config-data-custom\") pod \"cinder-api-0\" (UID: \"86c17dbf-d890-4de3-bf5d-29e0aea4d968\") " pod="openstack/cinder-api-0" Mar 11 09:18:38 crc kubenswrapper[4840]: I0311 09:18:38.000214 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/86c17dbf-d890-4de3-bf5d-29e0aea4d968-etc-machine-id\") pod \"cinder-api-0\" (UID: \"86c17dbf-d890-4de3-bf5d-29e0aea4d968\") " pod="openstack/cinder-api-0" Mar 11 09:18:38 crc kubenswrapper[4840]: I0311 09:18:38.000264 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/86c17dbf-d890-4de3-bf5d-29e0aea4d968-public-tls-certs\") pod \"cinder-api-0\" (UID: \"86c17dbf-d890-4de3-bf5d-29e0aea4d968\") " pod="openstack/cinder-api-0" Mar 11 09:18:38 crc kubenswrapper[4840]: I0311 09:18:38.000335 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/86c17dbf-d890-4de3-bf5d-29e0aea4d968-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"86c17dbf-d890-4de3-bf5d-29e0aea4d968\") " pod="openstack/cinder-api-0" Mar 11 09:18:38 crc kubenswrapper[4840]: I0311 09:18:38.000412 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2vr8\" (UniqueName: \"kubernetes.io/projected/86c17dbf-d890-4de3-bf5d-29e0aea4d968-kube-api-access-l2vr8\") pod \"cinder-api-0\" (UID: \"86c17dbf-d890-4de3-bf5d-29e0aea4d968\") " pod="openstack/cinder-api-0" Mar 11 09:18:38 crc kubenswrapper[4840]: I0311 09:18:38.000494 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86c17dbf-d890-4de3-bf5d-29e0aea4d968-scripts\") pod \"cinder-api-0\" (UID: \"86c17dbf-d890-4de3-bf5d-29e0aea4d968\") " pod="openstack/cinder-api-0" Mar 11 09:18:38 crc kubenswrapper[4840]: I0311 09:18:38.000527 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86c17dbf-d890-4de3-bf5d-29e0aea4d968-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"86c17dbf-d890-4de3-bf5d-29e0aea4d968\") " pod="openstack/cinder-api-0" Mar 11 09:18:38 crc kubenswrapper[4840]: I0311 09:18:38.000351 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/86c17dbf-d890-4de3-bf5d-29e0aea4d968-etc-machine-id\") pod \"cinder-api-0\" (UID: \"86c17dbf-d890-4de3-bf5d-29e0aea4d968\") " pod="openstack/cinder-api-0" Mar 11 09:18:38 crc 
kubenswrapper[4840]: I0311 09:18:38.005309 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86c17dbf-d890-4de3-bf5d-29e0aea4d968-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"86c17dbf-d890-4de3-bf5d-29e0aea4d968\") " pod="openstack/cinder-api-0" Mar 11 09:18:38 crc kubenswrapper[4840]: I0311 09:18:38.005354 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/86c17dbf-d890-4de3-bf5d-29e0aea4d968-public-tls-certs\") pod \"cinder-api-0\" (UID: \"86c17dbf-d890-4de3-bf5d-29e0aea4d968\") " pod="openstack/cinder-api-0" Mar 11 09:18:38 crc kubenswrapper[4840]: I0311 09:18:38.005621 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/86c17dbf-d890-4de3-bf5d-29e0aea4d968-config-data-custom\") pod \"cinder-api-0\" (UID: \"86c17dbf-d890-4de3-bf5d-29e0aea4d968\") " pod="openstack/cinder-api-0" Mar 11 09:18:38 crc kubenswrapper[4840]: I0311 09:18:38.007075 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86c17dbf-d890-4de3-bf5d-29e0aea4d968-scripts\") pod \"cinder-api-0\" (UID: \"86c17dbf-d890-4de3-bf5d-29e0aea4d968\") " pod="openstack/cinder-api-0" Mar 11 09:18:38 crc kubenswrapper[4840]: I0311 09:18:38.008372 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86c17dbf-d890-4de3-bf5d-29e0aea4d968-config-data\") pod \"cinder-api-0\" (UID: \"86c17dbf-d890-4de3-bf5d-29e0aea4d968\") " pod="openstack/cinder-api-0" Mar 11 09:18:38 crc kubenswrapper[4840]: I0311 09:18:38.020015 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/86c17dbf-d890-4de3-bf5d-29e0aea4d968-internal-tls-certs\") pod \"cinder-api-0\" (UID: 
\"86c17dbf-d890-4de3-bf5d-29e0aea4d968\") " pod="openstack/cinder-api-0" Mar 11 09:18:38 crc kubenswrapper[4840]: I0311 09:18:38.022591 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2vr8\" (UniqueName: \"kubernetes.io/projected/86c17dbf-d890-4de3-bf5d-29e0aea4d968-kube-api-access-l2vr8\") pod \"cinder-api-0\" (UID: \"86c17dbf-d890-4de3-bf5d-29e0aea4d968\") " pod="openstack/cinder-api-0" Mar 11 09:18:38 crc kubenswrapper[4840]: I0311 09:18:38.071568 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="163d1a95-e48e-4781-bdde-9d48741f0213" path="/var/lib/kubelet/pods/163d1a95-e48e-4781-bdde-9d48741f0213/volumes" Mar 11 09:18:38 crc kubenswrapper[4840]: I0311 09:18:38.102998 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Mar 11 09:18:38 crc kubenswrapper[4840]: I0311 09:18:38.410935 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20e05035-a26a-4828-8067-5406560cabe8","Type":"ContainerStarted","Data":"59f266b51d6867709606041420e753652ef6708584b05b1ec13f9c7ff9e33c69"} Mar 11 09:18:38 crc kubenswrapper[4840]: I0311 09:18:38.411073 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="20e05035-a26a-4828-8067-5406560cabe8" containerName="ceilometer-central-agent" containerID="cri-o://6fd91ebbfc8cbae432e74ef45f69b373f98dbc8c5489b6a909c0a88fc8e7d31e" gracePeriod=30 Mar 11 09:18:38 crc kubenswrapper[4840]: I0311 09:18:38.411461 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 11 09:18:38 crc kubenswrapper[4840]: I0311 09:18:38.411763 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="20e05035-a26a-4828-8067-5406560cabe8" containerName="proxy-httpd" containerID="cri-o://59f266b51d6867709606041420e753652ef6708584b05b1ec13f9c7ff9e33c69" 
gracePeriod=30 Mar 11 09:18:38 crc kubenswrapper[4840]: I0311 09:18:38.411859 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="20e05035-a26a-4828-8067-5406560cabe8" containerName="sg-core" containerID="cri-o://fb8d6e0d75c7e81975620f94f9139bab168e2150be79170daa6948fe6d4abb90" gracePeriod=30 Mar 11 09:18:38 crc kubenswrapper[4840]: I0311 09:18:38.411929 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="20e05035-a26a-4828-8067-5406560cabe8" containerName="ceilometer-notification-agent" containerID="cri-o://138f3c925f9fe88369bef687b6f1184806749d5859d5f0b52c284192e63f8c53" gracePeriod=30 Mar 11 09:18:38 crc kubenswrapper[4840]: I0311 09:18:38.454869 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=10.537810703 podStartE2EDuration="14.454839595s" podCreationTimestamp="2026-03-11 09:18:24 +0000 UTC" firstStartedPulling="2026-03-11 09:18:33.201110989 +0000 UTC m=+1311.866780804" lastFinishedPulling="2026-03-11 09:18:37.118139881 +0000 UTC m=+1315.783809696" observedRunningTime="2026-03-11 09:18:38.435003746 +0000 UTC m=+1317.100673561" watchObservedRunningTime="2026-03-11 09:18:38.454839595 +0000 UTC m=+1317.120509410" Mar 11 09:18:38 crc kubenswrapper[4840]: I0311 09:18:38.467434 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-58769b4545-5q2fv" Mar 11 09:18:38 crc kubenswrapper[4840]: I0311 09:18:38.544818 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-9dc6d5c86-nszp2"] Mar 11 09:18:38 crc kubenswrapper[4840]: I0311 09:18:38.545135 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-9dc6d5c86-nszp2" podUID="5e3d8e86-db0a-40d7-bfc2-47253da00ec7" containerName="neutron-api" 
containerID="cri-o://1ecbc4daf8b0a4f8ae2ccee73e5aa09fda6b5019b1f4313630e50f513c537cb4" gracePeriod=30 Mar 11 09:18:38 crc kubenswrapper[4840]: I0311 09:18:38.545602 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-9dc6d5c86-nszp2" podUID="5e3d8e86-db0a-40d7-bfc2-47253da00ec7" containerName="neutron-httpd" containerID="cri-o://9a4896b371fb249686089f6c512e0572e3d7f61f6c18b54d57a46d90437bb7a1" gracePeriod=30 Mar 11 09:18:38 crc kubenswrapper[4840]: I0311 09:18:38.603200 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Mar 11 09:18:39 crc kubenswrapper[4840]: I0311 09:18:39.051743 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 11 09:18:39 crc kubenswrapper[4840]: I0311 09:18:39.052382 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="8ea45272-7dc6-4227-ba52-e506fd81c0b4" containerName="glance-log" containerID="cri-o://12da96256bafcbc710e6afeaa00bdcd0372baf66eb1a6d2439ecdbd9e02d7b40" gracePeriod=30 Mar 11 09:18:39 crc kubenswrapper[4840]: I0311 09:18:39.052827 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="8ea45272-7dc6-4227-ba52-e506fd81c0b4" containerName="glance-httpd" containerID="cri-o://a4dd624ea8792381b07e9ab369608d7d8ed8a0f685ccd5a7af208b7afb64530d" gracePeriod=30 Mar 11 09:18:39 crc kubenswrapper[4840]: I0311 09:18:39.450581 4840 generic.go:334] "Generic (PLEG): container finished" podID="20e05035-a26a-4828-8067-5406560cabe8" containerID="59f266b51d6867709606041420e753652ef6708584b05b1ec13f9c7ff9e33c69" exitCode=0 Mar 11 09:18:39 crc kubenswrapper[4840]: I0311 09:18:39.450819 4840 generic.go:334] "Generic (PLEG): container finished" podID="20e05035-a26a-4828-8067-5406560cabe8" containerID="fb8d6e0d75c7e81975620f94f9139bab168e2150be79170daa6948fe6d4abb90" 
exitCode=2 Mar 11 09:18:39 crc kubenswrapper[4840]: I0311 09:18:39.450827 4840 generic.go:334] "Generic (PLEG): container finished" podID="20e05035-a26a-4828-8067-5406560cabe8" containerID="138f3c925f9fe88369bef687b6f1184806749d5859d5f0b52c284192e63f8c53" exitCode=0 Mar 11 09:18:39 crc kubenswrapper[4840]: I0311 09:18:39.450863 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20e05035-a26a-4828-8067-5406560cabe8","Type":"ContainerDied","Data":"59f266b51d6867709606041420e753652ef6708584b05b1ec13f9c7ff9e33c69"} Mar 11 09:18:39 crc kubenswrapper[4840]: I0311 09:18:39.450891 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20e05035-a26a-4828-8067-5406560cabe8","Type":"ContainerDied","Data":"fb8d6e0d75c7e81975620f94f9139bab168e2150be79170daa6948fe6d4abb90"} Mar 11 09:18:39 crc kubenswrapper[4840]: I0311 09:18:39.450900 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20e05035-a26a-4828-8067-5406560cabe8","Type":"ContainerDied","Data":"138f3c925f9fe88369bef687b6f1184806749d5859d5f0b52c284192e63f8c53"} Mar 11 09:18:39 crc kubenswrapper[4840]: I0311 09:18:39.455920 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"86c17dbf-d890-4de3-bf5d-29e0aea4d968","Type":"ContainerStarted","Data":"de2608ad4638642b2ed36b2182d3eea8ca8dc51168010aa67653e3ba968f01af"} Mar 11 09:18:39 crc kubenswrapper[4840]: I0311 09:18:39.455962 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"86c17dbf-d890-4de3-bf5d-29e0aea4d968","Type":"ContainerStarted","Data":"84bbbbbba13debeb9cdd8757772784ee81cb2bdcff0f703a2bf3ea42f422a04c"} Mar 11 09:18:39 crc kubenswrapper[4840]: I0311 09:18:39.460390 4840 generic.go:334] "Generic (PLEG): container finished" podID="8ea45272-7dc6-4227-ba52-e506fd81c0b4" containerID="12da96256bafcbc710e6afeaa00bdcd0372baf66eb1a6d2439ecdbd9e02d7b40" 
exitCode=143 Mar 11 09:18:39 crc kubenswrapper[4840]: I0311 09:18:39.460483 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8ea45272-7dc6-4227-ba52-e506fd81c0b4","Type":"ContainerDied","Data":"12da96256bafcbc710e6afeaa00bdcd0372baf66eb1a6d2439ecdbd9e02d7b40"} Mar 11 09:18:39 crc kubenswrapper[4840]: I0311 09:18:39.478591 4840 generic.go:334] "Generic (PLEG): container finished" podID="5e3d8e86-db0a-40d7-bfc2-47253da00ec7" containerID="9a4896b371fb249686089f6c512e0572e3d7f61f6c18b54d57a46d90437bb7a1" exitCode=0 Mar 11 09:18:39 crc kubenswrapper[4840]: I0311 09:18:39.478644 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-9dc6d5c86-nszp2" event={"ID":"5e3d8e86-db0a-40d7-bfc2-47253da00ec7","Type":"ContainerDied","Data":"9a4896b371fb249686089f6c512e0572e3d7f61f6c18b54d57a46d90437bb7a1"} Mar 11 09:18:40 crc kubenswrapper[4840]: I0311 09:18:40.490829 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"86c17dbf-d890-4de3-bf5d-29e0aea4d968","Type":"ContainerStarted","Data":"97908e7657b276e714bdd7983d1b6b792bc1fff3b99535e851314e2428338b75"} Mar 11 09:18:40 crc kubenswrapper[4840]: I0311 09:18:40.491291 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Mar 11 09:18:40 crc kubenswrapper[4840]: I0311 09:18:40.523314 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.5232914859999998 podStartE2EDuration="3.523291486s" podCreationTimestamp="2026-03-11 09:18:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:18:40.510786582 +0000 UTC m=+1319.176456387" watchObservedRunningTime="2026-03-11 09:18:40.523291486 +0000 UTC m=+1319.188961301" Mar 11 09:18:41 crc kubenswrapper[4840]: I0311 09:18:41.496945 4840 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 11 09:18:41 crc kubenswrapper[4840]: I0311 09:18:41.497569 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="b3ad2408-3dda-4009-a898-5f2618fc18cf" containerName="glance-log" containerID="cri-o://0f3c80a9ff890a01d385011299ef95918b4a997ceb457e6a46517cb25523ebe7" gracePeriod=30 Mar 11 09:18:41 crc kubenswrapper[4840]: I0311 09:18:41.498052 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="b3ad2408-3dda-4009-a898-5f2618fc18cf" containerName="glance-httpd" containerID="cri-o://58052d3369d8502a608a76a3fac8089afd975a97605d8aede537f5084de1526b" gracePeriod=30 Mar 11 09:18:41 crc kubenswrapper[4840]: I0311 09:18:41.513334 4840 generic.go:334] "Generic (PLEG): container finished" podID="20e05035-a26a-4828-8067-5406560cabe8" containerID="6fd91ebbfc8cbae432e74ef45f69b373f98dbc8c5489b6a909c0a88fc8e7d31e" exitCode=0 Mar 11 09:18:41 crc kubenswrapper[4840]: I0311 09:18:41.513409 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20e05035-a26a-4828-8067-5406560cabe8","Type":"ContainerDied","Data":"6fd91ebbfc8cbae432e74ef45f69b373f98dbc8c5489b6a909c0a88fc8e7d31e"} Mar 11 09:18:41 crc kubenswrapper[4840]: I0311 09:18:41.516149 4840 generic.go:334] "Generic (PLEG): container finished" podID="5e3d8e86-db0a-40d7-bfc2-47253da00ec7" containerID="1ecbc4daf8b0a4f8ae2ccee73e5aa09fda6b5019b1f4313630e50f513c537cb4" exitCode=0 Mar 11 09:18:41 crc kubenswrapper[4840]: I0311 09:18:41.516625 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-9dc6d5c86-nszp2" event={"ID":"5e3d8e86-db0a-40d7-bfc2-47253da00ec7","Type":"ContainerDied","Data":"1ecbc4daf8b0a4f8ae2ccee73e5aa09fda6b5019b1f4313630e50f513c537cb4"} Mar 11 09:18:41 crc kubenswrapper[4840]: I0311 09:18:41.854317 4840 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 11 09:18:41 crc kubenswrapper[4840]: I0311 09:18:41.857247 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-9dc6d5c86-nszp2" Mar 11 09:18:41 crc kubenswrapper[4840]: I0311 09:18:41.990437 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20e05035-a26a-4828-8067-5406560cabe8-run-httpd\") pod \"20e05035-a26a-4828-8067-5406560cabe8\" (UID: \"20e05035-a26a-4828-8067-5406560cabe8\") " Mar 11 09:18:41 crc kubenswrapper[4840]: I0311 09:18:41.990642 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e3d8e86-db0a-40d7-bfc2-47253da00ec7-combined-ca-bundle\") pod \"5e3d8e86-db0a-40d7-bfc2-47253da00ec7\" (UID: \"5e3d8e86-db0a-40d7-bfc2-47253da00ec7\") " Mar 11 09:18:41 crc kubenswrapper[4840]: I0311 09:18:41.990680 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20e05035-a26a-4828-8067-5406560cabe8-scripts\") pod \"20e05035-a26a-4828-8067-5406560cabe8\" (UID: \"20e05035-a26a-4828-8067-5406560cabe8\") " Mar 11 09:18:41 crc kubenswrapper[4840]: I0311 09:18:41.990742 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e3d8e86-db0a-40d7-bfc2-47253da00ec7-ovndb-tls-certs\") pod \"5e3d8e86-db0a-40d7-bfc2-47253da00ec7\" (UID: \"5e3d8e86-db0a-40d7-bfc2-47253da00ec7\") " Mar 11 09:18:41 crc kubenswrapper[4840]: I0311 09:18:41.990789 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2l7xm\" (UniqueName: \"kubernetes.io/projected/5e3d8e86-db0a-40d7-bfc2-47253da00ec7-kube-api-access-2l7xm\") pod \"5e3d8e86-db0a-40d7-bfc2-47253da00ec7\" (UID: 
\"5e3d8e86-db0a-40d7-bfc2-47253da00ec7\") " Mar 11 09:18:41 crc kubenswrapper[4840]: I0311 09:18:41.990823 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5e3d8e86-db0a-40d7-bfc2-47253da00ec7-config\") pod \"5e3d8e86-db0a-40d7-bfc2-47253da00ec7\" (UID: \"5e3d8e86-db0a-40d7-bfc2-47253da00ec7\") " Mar 11 09:18:41 crc kubenswrapper[4840]: I0311 09:18:41.990865 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20e05035-a26a-4828-8067-5406560cabe8-config-data\") pod \"20e05035-a26a-4828-8067-5406560cabe8\" (UID: \"20e05035-a26a-4828-8067-5406560cabe8\") " Mar 11 09:18:41 crc kubenswrapper[4840]: I0311 09:18:41.990905 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/20e05035-a26a-4828-8067-5406560cabe8-sg-core-conf-yaml\") pod \"20e05035-a26a-4828-8067-5406560cabe8\" (UID: \"20e05035-a26a-4828-8067-5406560cabe8\") " Mar 11 09:18:41 crc kubenswrapper[4840]: I0311 09:18:41.990931 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2ldtr\" (UniqueName: \"kubernetes.io/projected/20e05035-a26a-4828-8067-5406560cabe8-kube-api-access-2ldtr\") pod \"20e05035-a26a-4828-8067-5406560cabe8\" (UID: \"20e05035-a26a-4828-8067-5406560cabe8\") " Mar 11 09:18:41 crc kubenswrapper[4840]: I0311 09:18:41.990983 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5e3d8e86-db0a-40d7-bfc2-47253da00ec7-httpd-config\") pod \"5e3d8e86-db0a-40d7-bfc2-47253da00ec7\" (UID: \"5e3d8e86-db0a-40d7-bfc2-47253da00ec7\") " Mar 11 09:18:41 crc kubenswrapper[4840]: I0311 09:18:41.991024 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/20e05035-a26a-4828-8067-5406560cabe8-log-httpd\") pod \"20e05035-a26a-4828-8067-5406560cabe8\" (UID: \"20e05035-a26a-4828-8067-5406560cabe8\") " Mar 11 09:18:41 crc kubenswrapper[4840]: I0311 09:18:41.991059 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20e05035-a26a-4828-8067-5406560cabe8-combined-ca-bundle\") pod \"20e05035-a26a-4828-8067-5406560cabe8\" (UID: \"20e05035-a26a-4828-8067-5406560cabe8\") " Mar 11 09:18:41 crc kubenswrapper[4840]: I0311 09:18:41.992050 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20e05035-a26a-4828-8067-5406560cabe8-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "20e05035-a26a-4828-8067-5406560cabe8" (UID: "20e05035-a26a-4828-8067-5406560cabe8"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:18:41 crc kubenswrapper[4840]: I0311 09:18:41.992832 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20e05035-a26a-4828-8067-5406560cabe8-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "20e05035-a26a-4828-8067-5406560cabe8" (UID: "20e05035-a26a-4828-8067-5406560cabe8"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:18:42 crc kubenswrapper[4840]: I0311 09:18:42.001936 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20e05035-a26a-4828-8067-5406560cabe8-kube-api-access-2ldtr" (OuterVolumeSpecName: "kube-api-access-2ldtr") pod "20e05035-a26a-4828-8067-5406560cabe8" (UID: "20e05035-a26a-4828-8067-5406560cabe8"). InnerVolumeSpecName "kube-api-access-2ldtr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:18:42 crc kubenswrapper[4840]: I0311 09:18:42.001941 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20e05035-a26a-4828-8067-5406560cabe8-scripts" (OuterVolumeSpecName: "scripts") pod "20e05035-a26a-4828-8067-5406560cabe8" (UID: "20e05035-a26a-4828-8067-5406560cabe8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:18:42 crc kubenswrapper[4840]: I0311 09:18:42.002174 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e3d8e86-db0a-40d7-bfc2-47253da00ec7-kube-api-access-2l7xm" (OuterVolumeSpecName: "kube-api-access-2l7xm") pod "5e3d8e86-db0a-40d7-bfc2-47253da00ec7" (UID: "5e3d8e86-db0a-40d7-bfc2-47253da00ec7"). InnerVolumeSpecName "kube-api-access-2l7xm". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:18:42 crc kubenswrapper[4840]: I0311 09:18:42.001957 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e3d8e86-db0a-40d7-bfc2-47253da00ec7-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "5e3d8e86-db0a-40d7-bfc2-47253da00ec7" (UID: "5e3d8e86-db0a-40d7-bfc2-47253da00ec7"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:18:42 crc kubenswrapper[4840]: I0311 09:18:42.063689 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20e05035-a26a-4828-8067-5406560cabe8-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "20e05035-a26a-4828-8067-5406560cabe8" (UID: "20e05035-a26a-4828-8067-5406560cabe8"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:18:42 crc kubenswrapper[4840]: I0311 09:18:42.080450 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e3d8e86-db0a-40d7-bfc2-47253da00ec7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5e3d8e86-db0a-40d7-bfc2-47253da00ec7" (UID: "5e3d8e86-db0a-40d7-bfc2-47253da00ec7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:18:42 crc kubenswrapper[4840]: I0311 09:18:42.092460 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e3d8e86-db0a-40d7-bfc2-47253da00ec7-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "5e3d8e86-db0a-40d7-bfc2-47253da00ec7" (UID: "5e3d8e86-db0a-40d7-bfc2-47253da00ec7"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:18:42 crc kubenswrapper[4840]: I0311 09:18:42.093790 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20e05035-a26a-4828-8067-5406560cabe8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "20e05035-a26a-4828-8067-5406560cabe8" (UID: "20e05035-a26a-4828-8067-5406560cabe8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:18:42 crc kubenswrapper[4840]: I0311 09:18:42.093935 4840 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/20e05035-a26a-4828-8067-5406560cabe8-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 11 09:18:42 crc kubenswrapper[4840]: I0311 09:18:42.093971 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2ldtr\" (UniqueName: \"kubernetes.io/projected/20e05035-a26a-4828-8067-5406560cabe8-kube-api-access-2ldtr\") on node \"crc\" DevicePath \"\"" Mar 11 09:18:42 crc kubenswrapper[4840]: I0311 09:18:42.094009 4840 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5e3d8e86-db0a-40d7-bfc2-47253da00ec7-httpd-config\") on node \"crc\" DevicePath \"\"" Mar 11 09:18:42 crc kubenswrapper[4840]: I0311 09:18:42.094019 4840 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20e05035-a26a-4828-8067-5406560cabe8-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 11 09:18:42 crc kubenswrapper[4840]: I0311 09:18:42.094032 4840 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20e05035-a26a-4828-8067-5406560cabe8-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 11 09:18:42 crc kubenswrapper[4840]: I0311 09:18:42.094041 4840 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e3d8e86-db0a-40d7-bfc2-47253da00ec7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 09:18:42 crc kubenswrapper[4840]: I0311 09:18:42.094052 4840 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20e05035-a26a-4828-8067-5406560cabe8-scripts\") on node \"crc\" DevicePath \"\"" Mar 11 09:18:42 crc kubenswrapper[4840]: I0311 09:18:42.094062 4840 
reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e3d8e86-db0a-40d7-bfc2-47253da00ec7-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 11 09:18:42 crc kubenswrapper[4840]: I0311 09:18:42.094074 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2l7xm\" (UniqueName: \"kubernetes.io/projected/5e3d8e86-db0a-40d7-bfc2-47253da00ec7-kube-api-access-2l7xm\") on node \"crc\" DevicePath \"\"" Mar 11 09:18:42 crc kubenswrapper[4840]: I0311 09:18:42.095934 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e3d8e86-db0a-40d7-bfc2-47253da00ec7-config" (OuterVolumeSpecName: "config") pod "5e3d8e86-db0a-40d7-bfc2-47253da00ec7" (UID: "5e3d8e86-db0a-40d7-bfc2-47253da00ec7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:18:42 crc kubenswrapper[4840]: I0311 09:18:42.112046 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20e05035-a26a-4828-8067-5406560cabe8-config-data" (OuterVolumeSpecName: "config-data") pod "20e05035-a26a-4828-8067-5406560cabe8" (UID: "20e05035-a26a-4828-8067-5406560cabe8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:18:42 crc kubenswrapper[4840]: I0311 09:18:42.196046 4840 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/5e3d8e86-db0a-40d7-bfc2-47253da00ec7-config\") on node \"crc\" DevicePath \"\"" Mar 11 09:18:42 crc kubenswrapper[4840]: I0311 09:18:42.196352 4840 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20e05035-a26a-4828-8067-5406560cabe8-config-data\") on node \"crc\" DevicePath \"\"" Mar 11 09:18:42 crc kubenswrapper[4840]: I0311 09:18:42.196490 4840 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20e05035-a26a-4828-8067-5406560cabe8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 09:18:42 crc kubenswrapper[4840]: I0311 09:18:42.547129 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20e05035-a26a-4828-8067-5406560cabe8","Type":"ContainerDied","Data":"a106320a9e73e6245e440b5084460c035694bffe9c3d467fc2ac3a8c1661b12c"} Mar 11 09:18:42 crc kubenswrapper[4840]: I0311 09:18:42.547440 4840 scope.go:117] "RemoveContainer" containerID="59f266b51d6867709606041420e753652ef6708584b05b1ec13f9c7ff9e33c69" Mar 11 09:18:42 crc kubenswrapper[4840]: I0311 09:18:42.547586 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 11 09:18:42 crc kubenswrapper[4840]: I0311 09:18:42.591164 4840 generic.go:334] "Generic (PLEG): container finished" podID="8ea45272-7dc6-4227-ba52-e506fd81c0b4" containerID="a4dd624ea8792381b07e9ab369608d7d8ed8a0f685ccd5a7af208b7afb64530d" exitCode=0 Mar 11 09:18:42 crc kubenswrapper[4840]: I0311 09:18:42.591317 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8ea45272-7dc6-4227-ba52-e506fd81c0b4","Type":"ContainerDied","Data":"a4dd624ea8792381b07e9ab369608d7d8ed8a0f685ccd5a7af208b7afb64530d"} Mar 11 09:18:42 crc kubenswrapper[4840]: I0311 09:18:42.596254 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-wpn58"] Mar 11 09:18:42 crc kubenswrapper[4840]: E0311 09:18:42.596808 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20e05035-a26a-4828-8067-5406560cabe8" containerName="sg-core" Mar 11 09:18:42 crc kubenswrapper[4840]: I0311 09:18:42.596829 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="20e05035-a26a-4828-8067-5406560cabe8" containerName="sg-core" Mar 11 09:18:42 crc kubenswrapper[4840]: E0311 09:18:42.596859 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20e05035-a26a-4828-8067-5406560cabe8" containerName="ceilometer-notification-agent" Mar 11 09:18:42 crc kubenswrapper[4840]: I0311 09:18:42.596870 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="20e05035-a26a-4828-8067-5406560cabe8" containerName="ceilometer-notification-agent" Mar 11 09:18:42 crc kubenswrapper[4840]: E0311 09:18:42.596882 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e3d8e86-db0a-40d7-bfc2-47253da00ec7" containerName="neutron-api" Mar 11 09:18:42 crc kubenswrapper[4840]: I0311 09:18:42.596891 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e3d8e86-db0a-40d7-bfc2-47253da00ec7" containerName="neutron-api" Mar 11 09:18:42 crc 
kubenswrapper[4840]: E0311 09:18:42.596914 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20e05035-a26a-4828-8067-5406560cabe8" containerName="ceilometer-central-agent" Mar 11 09:18:42 crc kubenswrapper[4840]: I0311 09:18:42.596923 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="20e05035-a26a-4828-8067-5406560cabe8" containerName="ceilometer-central-agent" Mar 11 09:18:42 crc kubenswrapper[4840]: E0311 09:18:42.596935 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e3d8e86-db0a-40d7-bfc2-47253da00ec7" containerName="neutron-httpd" Mar 11 09:18:42 crc kubenswrapper[4840]: I0311 09:18:42.596943 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e3d8e86-db0a-40d7-bfc2-47253da00ec7" containerName="neutron-httpd" Mar 11 09:18:42 crc kubenswrapper[4840]: E0311 09:18:42.596971 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20e05035-a26a-4828-8067-5406560cabe8" containerName="proxy-httpd" Mar 11 09:18:42 crc kubenswrapper[4840]: I0311 09:18:42.596978 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="20e05035-a26a-4828-8067-5406560cabe8" containerName="proxy-httpd" Mar 11 09:18:42 crc kubenswrapper[4840]: I0311 09:18:42.597168 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="20e05035-a26a-4828-8067-5406560cabe8" containerName="proxy-httpd" Mar 11 09:18:42 crc kubenswrapper[4840]: I0311 09:18:42.597181 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="20e05035-a26a-4828-8067-5406560cabe8" containerName="ceilometer-notification-agent" Mar 11 09:18:42 crc kubenswrapper[4840]: I0311 09:18:42.597200 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="20e05035-a26a-4828-8067-5406560cabe8" containerName="sg-core" Mar 11 09:18:42 crc kubenswrapper[4840]: I0311 09:18:42.597216 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e3d8e86-db0a-40d7-bfc2-47253da00ec7" containerName="neutron-httpd" Mar 11 
09:18:42 crc kubenswrapper[4840]: I0311 09:18:42.597233 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e3d8e86-db0a-40d7-bfc2-47253da00ec7" containerName="neutron-api" Mar 11 09:18:42 crc kubenswrapper[4840]: I0311 09:18:42.597247 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="20e05035-a26a-4828-8067-5406560cabe8" containerName="ceilometer-central-agent" Mar 11 09:18:42 crc kubenswrapper[4840]: I0311 09:18:42.597960 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-wpn58" Mar 11 09:18:42 crc kubenswrapper[4840]: I0311 09:18:42.609574 4840 generic.go:334] "Generic (PLEG): container finished" podID="b3ad2408-3dda-4009-a898-5f2618fc18cf" containerID="0f3c80a9ff890a01d385011299ef95918b4a997ceb457e6a46517cb25523ebe7" exitCode=143 Mar 11 09:18:42 crc kubenswrapper[4840]: I0311 09:18:42.609659 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b3ad2408-3dda-4009-a898-5f2618fc18cf","Type":"ContainerDied","Data":"0f3c80a9ff890a01d385011299ef95918b4a997ceb457e6a46517cb25523ebe7"} Mar 11 09:18:42 crc kubenswrapper[4840]: I0311 09:18:42.622942 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-9dc6d5c86-nszp2" event={"ID":"5e3d8e86-db0a-40d7-bfc2-47253da00ec7","Type":"ContainerDied","Data":"7c00c25e651728f75ee2f78f5a18804b4417a369f94146852075105195868065"} Mar 11 09:18:42 crc kubenswrapper[4840]: I0311 09:18:42.623096 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-9dc6d5c86-nszp2" Mar 11 09:18:42 crc kubenswrapper[4840]: I0311 09:18:42.627541 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 11 09:18:42 crc kubenswrapper[4840]: I0311 09:18:42.651453 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-wpn58"] Mar 11 09:18:42 crc kubenswrapper[4840]: I0311 09:18:42.683813 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 11 09:18:42 crc kubenswrapper[4840]: I0311 09:18:42.696186 4840 scope.go:117] "RemoveContainer" containerID="fb8d6e0d75c7e81975620f94f9139bab168e2150be79170daa6948fe6d4abb90" Mar 11 09:18:42 crc kubenswrapper[4840]: I0311 09:18:42.708092 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 11 09:18:42 crc kubenswrapper[4840]: I0311 09:18:42.710986 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 11 09:18:42 crc kubenswrapper[4840]: I0311 09:18:42.716441 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 11 09:18:42 crc kubenswrapper[4840]: I0311 09:18:42.716687 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 11 09:18:42 crc kubenswrapper[4840]: I0311 09:18:42.718382 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f748n\" (UniqueName: \"kubernetes.io/projected/0ba90e38-377f-42d7-91dd-cb92889e0cbf-kube-api-access-f748n\") pod \"nova-api-db-create-wpn58\" (UID: \"0ba90e38-377f-42d7-91dd-cb92889e0cbf\") " pod="openstack/nova-api-db-create-wpn58" Mar 11 09:18:42 crc kubenswrapper[4840]: I0311 09:18:42.718617 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/0ba90e38-377f-42d7-91dd-cb92889e0cbf-operator-scripts\") pod \"nova-api-db-create-wpn58\" (UID: \"0ba90e38-377f-42d7-91dd-cb92889e0cbf\") " pod="openstack/nova-api-db-create-wpn58" Mar 11 09:18:42 crc kubenswrapper[4840]: I0311 09:18:42.754521 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 11 09:18:42 crc kubenswrapper[4840]: I0311 09:18:42.756279 4840 scope.go:117] "RemoveContainer" containerID="138f3c925f9fe88369bef687b6f1184806749d5859d5f0b52c284192e63f8c53" Mar 11 09:18:42 crc kubenswrapper[4840]: I0311 09:18:42.761765 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-f2mtr"] Mar 11 09:18:42 crc kubenswrapper[4840]: I0311 09:18:42.764274 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-f2mtr" Mar 11 09:18:42 crc kubenswrapper[4840]: I0311 09:18:42.789623 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-9dc6d5c86-nszp2"] Mar 11 09:18:42 crc kubenswrapper[4840]: I0311 09:18:42.817165 4840 scope.go:117] "RemoveContainer" containerID="6fd91ebbfc8cbae432e74ef45f69b373f98dbc8c5489b6a909c0a88fc8e7d31e" Mar 11 09:18:42 crc kubenswrapper[4840]: I0311 09:18:42.820928 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9w8w8\" (UniqueName: \"kubernetes.io/projected/a2ca037a-561b-4d4b-b85e-5c0fcc9b1cb4-kube-api-access-9w8w8\") pod \"nova-cell0-db-create-f2mtr\" (UID: \"a2ca037a-561b-4d4b-b85e-5c0fcc9b1cb4\") " pod="openstack/nova-cell0-db-create-f2mtr" Mar 11 09:18:42 crc kubenswrapper[4840]: I0311 09:18:42.821013 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqfxs\" (UniqueName: \"kubernetes.io/projected/7b8eb1cd-c564-4bad-8ec8-c8bfd1c75c94-kube-api-access-kqfxs\") pod \"ceilometer-0\" (UID: \"7b8eb1cd-c564-4bad-8ec8-c8bfd1c75c94\") " 
pod="openstack/ceilometer-0" Mar 11 09:18:42 crc kubenswrapper[4840]: I0311 09:18:42.821040 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7b8eb1cd-c564-4bad-8ec8-c8bfd1c75c94-run-httpd\") pod \"ceilometer-0\" (UID: \"7b8eb1cd-c564-4bad-8ec8-c8bfd1c75c94\") " pod="openstack/ceilometer-0" Mar 11 09:18:42 crc kubenswrapper[4840]: I0311 09:18:42.821067 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7b8eb1cd-c564-4bad-8ec8-c8bfd1c75c94-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7b8eb1cd-c564-4bad-8ec8-c8bfd1c75c94\") " pod="openstack/ceilometer-0" Mar 11 09:18:42 crc kubenswrapper[4840]: I0311 09:18:42.821095 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f748n\" (UniqueName: \"kubernetes.io/projected/0ba90e38-377f-42d7-91dd-cb92889e0cbf-kube-api-access-f748n\") pod \"nova-api-db-create-wpn58\" (UID: \"0ba90e38-377f-42d7-91dd-cb92889e0cbf\") " pod="openstack/nova-api-db-create-wpn58" Mar 11 09:18:42 crc kubenswrapper[4840]: I0311 09:18:42.821118 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a2ca037a-561b-4d4b-b85e-5c0fcc9b1cb4-operator-scripts\") pod \"nova-cell0-db-create-f2mtr\" (UID: \"a2ca037a-561b-4d4b-b85e-5c0fcc9b1cb4\") " pod="openstack/nova-cell0-db-create-f2mtr" Mar 11 09:18:42 crc kubenswrapper[4840]: I0311 09:18:42.821139 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7b8eb1cd-c564-4bad-8ec8-c8bfd1c75c94-log-httpd\") pod \"ceilometer-0\" (UID: \"7b8eb1cd-c564-4bad-8ec8-c8bfd1c75c94\") " pod="openstack/ceilometer-0" Mar 11 09:18:42 crc kubenswrapper[4840]: I0311 
09:18:42.821183 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b8eb1cd-c564-4bad-8ec8-c8bfd1c75c94-config-data\") pod \"ceilometer-0\" (UID: \"7b8eb1cd-c564-4bad-8ec8-c8bfd1c75c94\") " pod="openstack/ceilometer-0" Mar 11 09:18:42 crc kubenswrapper[4840]: I0311 09:18:42.821218 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b8eb1cd-c564-4bad-8ec8-c8bfd1c75c94-scripts\") pod \"ceilometer-0\" (UID: \"7b8eb1cd-c564-4bad-8ec8-c8bfd1c75c94\") " pod="openstack/ceilometer-0" Mar 11 09:18:42 crc kubenswrapper[4840]: I0311 09:18:42.821237 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ba90e38-377f-42d7-91dd-cb92889e0cbf-operator-scripts\") pod \"nova-api-db-create-wpn58\" (UID: \"0ba90e38-377f-42d7-91dd-cb92889e0cbf\") " pod="openstack/nova-api-db-create-wpn58" Mar 11 09:18:42 crc kubenswrapper[4840]: I0311 09:18:42.821257 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b8eb1cd-c564-4bad-8ec8-c8bfd1c75c94-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7b8eb1cd-c564-4bad-8ec8-c8bfd1c75c94\") " pod="openstack/ceilometer-0" Mar 11 09:18:42 crc kubenswrapper[4840]: I0311 09:18:42.822223 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ba90e38-377f-42d7-91dd-cb92889e0cbf-operator-scripts\") pod \"nova-api-db-create-wpn58\" (UID: \"0ba90e38-377f-42d7-91dd-cb92889e0cbf\") " pod="openstack/nova-api-db-create-wpn58" Mar 11 09:18:42 crc kubenswrapper[4840]: I0311 09:18:42.831021 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-9dc6d5c86-nszp2"] Mar 11 09:18:42 
crc kubenswrapper[4840]: I0311 09:18:42.846372 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f748n\" (UniqueName: \"kubernetes.io/projected/0ba90e38-377f-42d7-91dd-cb92889e0cbf-kube-api-access-f748n\") pod \"nova-api-db-create-wpn58\" (UID: \"0ba90e38-377f-42d7-91dd-cb92889e0cbf\") " pod="openstack/nova-api-db-create-wpn58" Mar 11 09:18:42 crc kubenswrapper[4840]: I0311 09:18:42.857661 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-28ac-account-create-update-9lzd8"] Mar 11 09:18:42 crc kubenswrapper[4840]: I0311 09:18:42.859095 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-28ac-account-create-update-9lzd8" Mar 11 09:18:42 crc kubenswrapper[4840]: I0311 09:18:42.861591 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Mar 11 09:18:42 crc kubenswrapper[4840]: I0311 09:18:42.866215 4840 scope.go:117] "RemoveContainer" containerID="9a4896b371fb249686089f6c512e0572e3d7f61f6c18b54d57a46d90437bb7a1" Mar 11 09:18:42 crc kubenswrapper[4840]: I0311 09:18:42.894907 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-f2mtr"] Mar 11 09:18:42 crc kubenswrapper[4840]: I0311 09:18:42.912930 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-28ac-account-create-update-9lzd8"] Mar 11 09:18:42 crc kubenswrapper[4840]: I0311 09:18:42.928194 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-wpn58" Mar 11 09:18:42 crc kubenswrapper[4840]: I0311 09:18:42.929128 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b8eb1cd-c564-4bad-8ec8-c8bfd1c75c94-scripts\") pod \"ceilometer-0\" (UID: \"7b8eb1cd-c564-4bad-8ec8-c8bfd1c75c94\") " pod="openstack/ceilometer-0" Mar 11 09:18:42 crc kubenswrapper[4840]: I0311 09:18:42.946100 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b8eb1cd-c564-4bad-8ec8-c8bfd1c75c94-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7b8eb1cd-c564-4bad-8ec8-c8bfd1c75c94\") " pod="openstack/ceilometer-0" Mar 11 09:18:42 crc kubenswrapper[4840]: I0311 09:18:42.946271 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzrf9\" (UniqueName: \"kubernetes.io/projected/8141eccc-ba41-4cfc-ba72-ffeae8858902-kube-api-access-qzrf9\") pod \"nova-api-28ac-account-create-update-9lzd8\" (UID: \"8141eccc-ba41-4cfc-ba72-ffeae8858902\") " pod="openstack/nova-api-28ac-account-create-update-9lzd8" Mar 11 09:18:42 crc kubenswrapper[4840]: I0311 09:18:42.946373 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9w8w8\" (UniqueName: \"kubernetes.io/projected/a2ca037a-561b-4d4b-b85e-5c0fcc9b1cb4-kube-api-access-9w8w8\") pod \"nova-cell0-db-create-f2mtr\" (UID: \"a2ca037a-561b-4d4b-b85e-5c0fcc9b1cb4\") " pod="openstack/nova-cell0-db-create-f2mtr" Mar 11 09:18:42 crc kubenswrapper[4840]: I0311 09:18:42.946559 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqfxs\" (UniqueName: \"kubernetes.io/projected/7b8eb1cd-c564-4bad-8ec8-c8bfd1c75c94-kube-api-access-kqfxs\") pod \"ceilometer-0\" (UID: \"7b8eb1cd-c564-4bad-8ec8-c8bfd1c75c94\") " pod="openstack/ceilometer-0" Mar 11 
09:18:42 crc kubenswrapper[4840]: I0311 09:18:42.946591 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7b8eb1cd-c564-4bad-8ec8-c8bfd1c75c94-run-httpd\") pod \"ceilometer-0\" (UID: \"7b8eb1cd-c564-4bad-8ec8-c8bfd1c75c94\") " pod="openstack/ceilometer-0" Mar 11 09:18:42 crc kubenswrapper[4840]: I0311 09:18:42.946722 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8141eccc-ba41-4cfc-ba72-ffeae8858902-operator-scripts\") pod \"nova-api-28ac-account-create-update-9lzd8\" (UID: \"8141eccc-ba41-4cfc-ba72-ffeae8858902\") " pod="openstack/nova-api-28ac-account-create-update-9lzd8" Mar 11 09:18:42 crc kubenswrapper[4840]: I0311 09:18:42.947395 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7b8eb1cd-c564-4bad-8ec8-c8bfd1c75c94-run-httpd\") pod \"ceilometer-0\" (UID: \"7b8eb1cd-c564-4bad-8ec8-c8bfd1c75c94\") " pod="openstack/ceilometer-0" Mar 11 09:18:42 crc kubenswrapper[4840]: I0311 09:18:42.952325 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b8eb1cd-c564-4bad-8ec8-c8bfd1c75c94-scripts\") pod \"ceilometer-0\" (UID: \"7b8eb1cd-c564-4bad-8ec8-c8bfd1c75c94\") " pod="openstack/ceilometer-0" Mar 11 09:18:42 crc kubenswrapper[4840]: I0311 09:18:42.957786 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7b8eb1cd-c564-4bad-8ec8-c8bfd1c75c94-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7b8eb1cd-c564-4bad-8ec8-c8bfd1c75c94\") " pod="openstack/ceilometer-0" Mar 11 09:18:42 crc kubenswrapper[4840]: I0311 09:18:42.957837 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/a2ca037a-561b-4d4b-b85e-5c0fcc9b1cb4-operator-scripts\") pod \"nova-cell0-db-create-f2mtr\" (UID: \"a2ca037a-561b-4d4b-b85e-5c0fcc9b1cb4\") " pod="openstack/nova-cell0-db-create-f2mtr" Mar 11 09:18:42 crc kubenswrapper[4840]: I0311 09:18:42.957866 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7b8eb1cd-c564-4bad-8ec8-c8bfd1c75c94-log-httpd\") pod \"ceilometer-0\" (UID: \"7b8eb1cd-c564-4bad-8ec8-c8bfd1c75c94\") " pod="openstack/ceilometer-0" Mar 11 09:18:42 crc kubenswrapper[4840]: I0311 09:18:42.957976 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b8eb1cd-c564-4bad-8ec8-c8bfd1c75c94-config-data\") pod \"ceilometer-0\" (UID: \"7b8eb1cd-c564-4bad-8ec8-c8bfd1c75c94\") " pod="openstack/ceilometer-0" Mar 11 09:18:42 crc kubenswrapper[4840]: I0311 09:18:42.959044 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7b8eb1cd-c564-4bad-8ec8-c8bfd1c75c94-log-httpd\") pod \"ceilometer-0\" (UID: \"7b8eb1cd-c564-4bad-8ec8-c8bfd1c75c94\") " pod="openstack/ceilometer-0" Mar 11 09:18:42 crc kubenswrapper[4840]: I0311 09:18:42.959323 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-nhlzp"] Mar 11 09:18:42 crc kubenswrapper[4840]: I0311 09:18:42.962681 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a2ca037a-561b-4d4b-b85e-5c0fcc9b1cb4-operator-scripts\") pod \"nova-cell0-db-create-f2mtr\" (UID: \"a2ca037a-561b-4d4b-b85e-5c0fcc9b1cb4\") " pod="openstack/nova-cell0-db-create-f2mtr" Mar 11 09:18:42 crc kubenswrapper[4840]: I0311 09:18:42.965068 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/7b8eb1cd-c564-4bad-8ec8-c8bfd1c75c94-config-data\") pod \"ceilometer-0\" (UID: \"7b8eb1cd-c564-4bad-8ec8-c8bfd1c75c94\") " pod="openstack/ceilometer-0" Mar 11 09:18:42 crc kubenswrapper[4840]: I0311 09:18:42.967764 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-nhlzp" Mar 11 09:18:42 crc kubenswrapper[4840]: I0311 09:18:42.969121 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-0f23-account-create-update-5wkdk"] Mar 11 09:18:42 crc kubenswrapper[4840]: I0311 09:18:42.972598 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-0f23-account-create-update-5wkdk" Mar 11 09:18:42 crc kubenswrapper[4840]: I0311 09:18:42.976256 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b8eb1cd-c564-4bad-8ec8-c8bfd1c75c94-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7b8eb1cd-c564-4bad-8ec8-c8bfd1c75c94\") " pod="openstack/ceilometer-0" Mar 11 09:18:42 crc kubenswrapper[4840]: I0311 09:18:42.978745 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7b8eb1cd-c564-4bad-8ec8-c8bfd1c75c94-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7b8eb1cd-c564-4bad-8ec8-c8bfd1c75c94\") " pod="openstack/ceilometer-0" Mar 11 09:18:42 crc kubenswrapper[4840]: I0311 09:18:42.986670 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqfxs\" (UniqueName: \"kubernetes.io/projected/7b8eb1cd-c564-4bad-8ec8-c8bfd1c75c94-kube-api-access-kqfxs\") pod \"ceilometer-0\" (UID: \"7b8eb1cd-c564-4bad-8ec8-c8bfd1c75c94\") " pod="openstack/ceilometer-0" Mar 11 09:18:42 crc kubenswrapper[4840]: I0311 09:18:42.989226 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9w8w8\" (UniqueName: 
\"kubernetes.io/projected/a2ca037a-561b-4d4b-b85e-5c0fcc9b1cb4-kube-api-access-9w8w8\") pod \"nova-cell0-db-create-f2mtr\" (UID: \"a2ca037a-561b-4d4b-b85e-5c0fcc9b1cb4\") " pod="openstack/nova-cell0-db-create-f2mtr" Mar 11 09:18:43 crc kubenswrapper[4840]: I0311 09:18:43.015912 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-nhlzp"] Mar 11 09:18:43 crc kubenswrapper[4840]: I0311 09:18:43.017329 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Mar 11 09:18:43 crc kubenswrapper[4840]: I0311 09:18:43.033770 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-0f23-account-create-update-5wkdk"] Mar 11 09:18:43 crc kubenswrapper[4840]: I0311 09:18:43.044881 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 11 09:18:43 crc kubenswrapper[4840]: I0311 09:18:43.062453 4840 scope.go:117] "RemoveContainer" containerID="1ecbc4daf8b0a4f8ae2ccee73e5aa09fda6b5019b1f4313630e50f513c537cb4" Mar 11 09:18:43 crc kubenswrapper[4840]: I0311 09:18:43.063533 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8141eccc-ba41-4cfc-ba72-ffeae8858902-operator-scripts\") pod \"nova-api-28ac-account-create-update-9lzd8\" (UID: \"8141eccc-ba41-4cfc-ba72-ffeae8858902\") " pod="openstack/nova-api-28ac-account-create-update-9lzd8" Mar 11 09:18:43 crc kubenswrapper[4840]: I0311 09:18:43.063623 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a3982dc-cf14-417d-8fac-b8e811843798-operator-scripts\") pod \"nova-cell0-0f23-account-create-update-5wkdk\" (UID: \"5a3982dc-cf14-417d-8fac-b8e811843798\") " pod="openstack/nova-cell0-0f23-account-create-update-5wkdk" Mar 11 09:18:43 crc kubenswrapper[4840]: I0311 09:18:43.063803 4840 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rctz2\" (UniqueName: \"kubernetes.io/projected/5a3982dc-cf14-417d-8fac-b8e811843798-kube-api-access-rctz2\") pod \"nova-cell0-0f23-account-create-update-5wkdk\" (UID: \"5a3982dc-cf14-417d-8fac-b8e811843798\") " pod="openstack/nova-cell0-0f23-account-create-update-5wkdk" Mar 11 09:18:43 crc kubenswrapper[4840]: I0311 09:18:43.063963 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qzrf9\" (UniqueName: \"kubernetes.io/projected/8141eccc-ba41-4cfc-ba72-ffeae8858902-kube-api-access-qzrf9\") pod \"nova-api-28ac-account-create-update-9lzd8\" (UID: \"8141eccc-ba41-4cfc-ba72-ffeae8858902\") " pod="openstack/nova-api-28ac-account-create-update-9lzd8" Mar 11 09:18:43 crc kubenswrapper[4840]: I0311 09:18:43.064013 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8627bd78-c61f-4140-88b3-09cd6091afb4-operator-scripts\") pod \"nova-cell1-db-create-nhlzp\" (UID: \"8627bd78-c61f-4140-88b3-09cd6091afb4\") " pod="openstack/nova-cell1-db-create-nhlzp" Mar 11 09:18:43 crc kubenswrapper[4840]: I0311 09:18:43.064052 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mdfk\" (UniqueName: \"kubernetes.io/projected/8627bd78-c61f-4140-88b3-09cd6091afb4-kube-api-access-9mdfk\") pod \"nova-cell1-db-create-nhlzp\" (UID: \"8627bd78-c61f-4140-88b3-09cd6091afb4\") " pod="openstack/nova-cell1-db-create-nhlzp" Mar 11 09:18:43 crc kubenswrapper[4840]: I0311 09:18:43.065351 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8141eccc-ba41-4cfc-ba72-ffeae8858902-operator-scripts\") pod \"nova-api-28ac-account-create-update-9lzd8\" (UID: \"8141eccc-ba41-4cfc-ba72-ffeae8858902\") " 
pod="openstack/nova-api-28ac-account-create-update-9lzd8" Mar 11 09:18:43 crc kubenswrapper[4840]: I0311 09:18:43.089483 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzrf9\" (UniqueName: \"kubernetes.io/projected/8141eccc-ba41-4cfc-ba72-ffeae8858902-kube-api-access-qzrf9\") pod \"nova-api-28ac-account-create-update-9lzd8\" (UID: \"8141eccc-ba41-4cfc-ba72-ffeae8858902\") " pod="openstack/nova-api-28ac-account-create-update-9lzd8" Mar 11 09:18:43 crc kubenswrapper[4840]: I0311 09:18:43.099131 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-f2mtr" Mar 11 09:18:43 crc kubenswrapper[4840]: I0311 09:18:43.102451 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 11 09:18:43 crc kubenswrapper[4840]: I0311 09:18:43.119295 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-298f-account-create-update-bgx4w"] Mar 11 09:18:43 crc kubenswrapper[4840]: E0311 09:18:43.119971 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ea45272-7dc6-4227-ba52-e506fd81c0b4" containerName="glance-log" Mar 11 09:18:43 crc kubenswrapper[4840]: I0311 09:18:43.119999 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ea45272-7dc6-4227-ba52-e506fd81c0b4" containerName="glance-log" Mar 11 09:18:43 crc kubenswrapper[4840]: E0311 09:18:43.120025 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ea45272-7dc6-4227-ba52-e506fd81c0b4" containerName="glance-httpd" Mar 11 09:18:43 crc kubenswrapper[4840]: I0311 09:18:43.120036 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ea45272-7dc6-4227-ba52-e506fd81c0b4" containerName="glance-httpd" Mar 11 09:18:43 crc kubenswrapper[4840]: I0311 09:18:43.120280 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ea45272-7dc6-4227-ba52-e506fd81c0b4" 
containerName="glance-log" Mar 11 09:18:43 crc kubenswrapper[4840]: I0311 09:18:43.120308 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ea45272-7dc6-4227-ba52-e506fd81c0b4" containerName="glance-httpd" Mar 11 09:18:43 crc kubenswrapper[4840]: I0311 09:18:43.121193 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-298f-account-create-update-bgx4w" Mar 11 09:18:43 crc kubenswrapper[4840]: I0311 09:18:43.126130 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-298f-account-create-update-bgx4w"] Mar 11 09:18:43 crc kubenswrapper[4840]: I0311 09:18:43.150933 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Mar 11 09:18:43 crc kubenswrapper[4840]: I0311 09:18:43.171201 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ea45272-7dc6-4227-ba52-e506fd81c0b4-config-data\") pod \"8ea45272-7dc6-4227-ba52-e506fd81c0b4\" (UID: \"8ea45272-7dc6-4227-ba52-e506fd81c0b4\") " Mar 11 09:18:43 crc kubenswrapper[4840]: I0311 09:18:43.171304 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ea45272-7dc6-4227-ba52-e506fd81c0b4-logs\") pod \"8ea45272-7dc6-4227-ba52-e506fd81c0b4\" (UID: \"8ea45272-7dc6-4227-ba52-e506fd81c0b4\") " Mar 11 09:18:43 crc kubenswrapper[4840]: I0311 09:18:43.171412 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ea45272-7dc6-4227-ba52-e506fd81c0b4-combined-ca-bundle\") pod \"8ea45272-7dc6-4227-ba52-e506fd81c0b4\" (UID: \"8ea45272-7dc6-4227-ba52-e506fd81c0b4\") " Mar 11 09:18:43 crc kubenswrapper[4840]: I0311 09:18:43.171578 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/8ea45272-7dc6-4227-ba52-e506fd81c0b4-public-tls-certs\") pod \"8ea45272-7dc6-4227-ba52-e506fd81c0b4\" (UID: \"8ea45272-7dc6-4227-ba52-e506fd81c0b4\") " Mar 11 09:18:43 crc kubenswrapper[4840]: I0311 09:18:43.171608 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"8ea45272-7dc6-4227-ba52-e506fd81c0b4\" (UID: \"8ea45272-7dc6-4227-ba52-e506fd81c0b4\") " Mar 11 09:18:43 crc kubenswrapper[4840]: I0311 09:18:43.171632 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ea45272-7dc6-4227-ba52-e506fd81c0b4-scripts\") pod \"8ea45272-7dc6-4227-ba52-e506fd81c0b4\" (UID: \"8ea45272-7dc6-4227-ba52-e506fd81c0b4\") " Mar 11 09:18:43 crc kubenswrapper[4840]: I0311 09:18:43.171657 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8ea45272-7dc6-4227-ba52-e506fd81c0b4-httpd-run\") pod \"8ea45272-7dc6-4227-ba52-e506fd81c0b4\" (UID: \"8ea45272-7dc6-4227-ba52-e506fd81c0b4\") " Mar 11 09:18:43 crc kubenswrapper[4840]: I0311 09:18:43.171754 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tzsbb\" (UniqueName: \"kubernetes.io/projected/8ea45272-7dc6-4227-ba52-e506fd81c0b4-kube-api-access-tzsbb\") pod \"8ea45272-7dc6-4227-ba52-e506fd81c0b4\" (UID: \"8ea45272-7dc6-4227-ba52-e506fd81c0b4\") " Mar 11 09:18:43 crc kubenswrapper[4840]: I0311 09:18:43.172142 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8627bd78-c61f-4140-88b3-09cd6091afb4-operator-scripts\") pod \"nova-cell1-db-create-nhlzp\" (UID: \"8627bd78-c61f-4140-88b3-09cd6091afb4\") " pod="openstack/nova-cell1-db-create-nhlzp" Mar 11 09:18:43 crc kubenswrapper[4840]: I0311 09:18:43.172210 4840 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mdfk\" (UniqueName: \"kubernetes.io/projected/8627bd78-c61f-4140-88b3-09cd6091afb4-kube-api-access-9mdfk\") pod \"nova-cell1-db-create-nhlzp\" (UID: \"8627bd78-c61f-4140-88b3-09cd6091afb4\") " pod="openstack/nova-cell1-db-create-nhlzp" Mar 11 09:18:43 crc kubenswrapper[4840]: I0311 09:18:43.172398 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a3982dc-cf14-417d-8fac-b8e811843798-operator-scripts\") pod \"nova-cell0-0f23-account-create-update-5wkdk\" (UID: \"5a3982dc-cf14-417d-8fac-b8e811843798\") " pod="openstack/nova-cell0-0f23-account-create-update-5wkdk" Mar 11 09:18:43 crc kubenswrapper[4840]: I0311 09:18:43.172479 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0d383441-a8df-429d-9b04-65b01f06c46e-operator-scripts\") pod \"nova-cell1-298f-account-create-update-bgx4w\" (UID: \"0d383441-a8df-429d-9b04-65b01f06c46e\") " pod="openstack/nova-cell1-298f-account-create-update-bgx4w" Mar 11 09:18:43 crc kubenswrapper[4840]: I0311 09:18:43.172587 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j92z7\" (UniqueName: \"kubernetes.io/projected/0d383441-a8df-429d-9b04-65b01f06c46e-kube-api-access-j92z7\") pod \"nova-cell1-298f-account-create-update-bgx4w\" (UID: \"0d383441-a8df-429d-9b04-65b01f06c46e\") " pod="openstack/nova-cell1-298f-account-create-update-bgx4w" Mar 11 09:18:43 crc kubenswrapper[4840]: I0311 09:18:43.172632 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rctz2\" (UniqueName: \"kubernetes.io/projected/5a3982dc-cf14-417d-8fac-b8e811843798-kube-api-access-rctz2\") pod \"nova-cell0-0f23-account-create-update-5wkdk\" (UID: 
\"5a3982dc-cf14-417d-8fac-b8e811843798\") " pod="openstack/nova-cell0-0f23-account-create-update-5wkdk" Mar 11 09:18:43 crc kubenswrapper[4840]: I0311 09:18:43.184823 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8627bd78-c61f-4140-88b3-09cd6091afb4-operator-scripts\") pod \"nova-cell1-db-create-nhlzp\" (UID: \"8627bd78-c61f-4140-88b3-09cd6091afb4\") " pod="openstack/nova-cell1-db-create-nhlzp" Mar 11 09:18:43 crc kubenswrapper[4840]: I0311 09:18:43.187824 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a3982dc-cf14-417d-8fac-b8e811843798-operator-scripts\") pod \"nova-cell0-0f23-account-create-update-5wkdk\" (UID: \"5a3982dc-cf14-417d-8fac-b8e811843798\") " pod="openstack/nova-cell0-0f23-account-create-update-5wkdk" Mar 11 09:18:43 crc kubenswrapper[4840]: I0311 09:18:43.200390 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ea45272-7dc6-4227-ba52-e506fd81c0b4-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "8ea45272-7dc6-4227-ba52-e506fd81c0b4" (UID: "8ea45272-7dc6-4227-ba52-e506fd81c0b4"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:18:43 crc kubenswrapper[4840]: I0311 09:18:43.205768 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ea45272-7dc6-4227-ba52-e506fd81c0b4-kube-api-access-tzsbb" (OuterVolumeSpecName: "kube-api-access-tzsbb") pod "8ea45272-7dc6-4227-ba52-e506fd81c0b4" (UID: "8ea45272-7dc6-4227-ba52-e506fd81c0b4"). InnerVolumeSpecName "kube-api-access-tzsbb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:18:43 crc kubenswrapper[4840]: I0311 09:18:43.209989 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ea45272-7dc6-4227-ba52-e506fd81c0b4-logs" (OuterVolumeSpecName: "logs") pod "8ea45272-7dc6-4227-ba52-e506fd81c0b4" (UID: "8ea45272-7dc6-4227-ba52-e506fd81c0b4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:18:43 crc kubenswrapper[4840]: I0311 09:18:43.222269 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mdfk\" (UniqueName: \"kubernetes.io/projected/8627bd78-c61f-4140-88b3-09cd6091afb4-kube-api-access-9mdfk\") pod \"nova-cell1-db-create-nhlzp\" (UID: \"8627bd78-c61f-4140-88b3-09cd6091afb4\") " pod="openstack/nova-cell1-db-create-nhlzp" Mar 11 09:18:43 crc kubenswrapper[4840]: I0311 09:18:43.231124 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "8ea45272-7dc6-4227-ba52-e506fd81c0b4" (UID: "8ea45272-7dc6-4227-ba52-e506fd81c0b4"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 11 09:18:43 crc kubenswrapper[4840]: I0311 09:18:43.231762 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ea45272-7dc6-4227-ba52-e506fd81c0b4-scripts" (OuterVolumeSpecName: "scripts") pod "8ea45272-7dc6-4227-ba52-e506fd81c0b4" (UID: "8ea45272-7dc6-4227-ba52-e506fd81c0b4"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:18:43 crc kubenswrapper[4840]: I0311 09:18:43.251431 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rctz2\" (UniqueName: \"kubernetes.io/projected/5a3982dc-cf14-417d-8fac-b8e811843798-kube-api-access-rctz2\") pod \"nova-cell0-0f23-account-create-update-5wkdk\" (UID: \"5a3982dc-cf14-417d-8fac-b8e811843798\") " pod="openstack/nova-cell0-0f23-account-create-update-5wkdk" Mar 11 09:18:43 crc kubenswrapper[4840]: I0311 09:18:43.272709 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ea45272-7dc6-4227-ba52-e506fd81c0b4-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "8ea45272-7dc6-4227-ba52-e506fd81c0b4" (UID: "8ea45272-7dc6-4227-ba52-e506fd81c0b4"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:18:43 crc kubenswrapper[4840]: I0311 09:18:43.275095 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0d383441-a8df-429d-9b04-65b01f06c46e-operator-scripts\") pod \"nova-cell1-298f-account-create-update-bgx4w\" (UID: \"0d383441-a8df-429d-9b04-65b01f06c46e\") " pod="openstack/nova-cell1-298f-account-create-update-bgx4w" Mar 11 09:18:43 crc kubenswrapper[4840]: I0311 09:18:43.275176 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j92z7\" (UniqueName: \"kubernetes.io/projected/0d383441-a8df-429d-9b04-65b01f06c46e-kube-api-access-j92z7\") pod \"nova-cell1-298f-account-create-update-bgx4w\" (UID: \"0d383441-a8df-429d-9b04-65b01f06c46e\") " pod="openstack/nova-cell1-298f-account-create-update-bgx4w" Mar 11 09:18:43 crc kubenswrapper[4840]: I0311 09:18:43.275355 4840 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/8ea45272-7dc6-4227-ba52-e506fd81c0b4-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 11 09:18:43 crc kubenswrapper[4840]: I0311 09:18:43.275372 4840 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ea45272-7dc6-4227-ba52-e506fd81c0b4-scripts\") on node \"crc\" DevicePath \"\"" Mar 11 09:18:43 crc kubenswrapper[4840]: I0311 09:18:43.275401 4840 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Mar 11 09:18:43 crc kubenswrapper[4840]: I0311 09:18:43.275413 4840 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8ea45272-7dc6-4227-ba52-e506fd81c0b4-httpd-run\") on node \"crc\" DevicePath \"\"" Mar 11 09:18:43 crc kubenswrapper[4840]: I0311 09:18:43.275425 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tzsbb\" (UniqueName: \"kubernetes.io/projected/8ea45272-7dc6-4227-ba52-e506fd81c0b4-kube-api-access-tzsbb\") on node \"crc\" DevicePath \"\"" Mar 11 09:18:43 crc kubenswrapper[4840]: I0311 09:18:43.275439 4840 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ea45272-7dc6-4227-ba52-e506fd81c0b4-logs\") on node \"crc\" DevicePath \"\"" Mar 11 09:18:43 crc kubenswrapper[4840]: I0311 09:18:43.276936 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0d383441-a8df-429d-9b04-65b01f06c46e-operator-scripts\") pod \"nova-cell1-298f-account-create-update-bgx4w\" (UID: \"0d383441-a8df-429d-9b04-65b01f06c46e\") " pod="openstack/nova-cell1-298f-account-create-update-bgx4w" Mar 11 09:18:43 crc kubenswrapper[4840]: I0311 09:18:43.277629 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/8ea45272-7dc6-4227-ba52-e506fd81c0b4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8ea45272-7dc6-4227-ba52-e506fd81c0b4" (UID: "8ea45272-7dc6-4227-ba52-e506fd81c0b4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:18:43 crc kubenswrapper[4840]: I0311 09:18:43.300535 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j92z7\" (UniqueName: \"kubernetes.io/projected/0d383441-a8df-429d-9b04-65b01f06c46e-kube-api-access-j92z7\") pod \"nova-cell1-298f-account-create-update-bgx4w\" (UID: \"0d383441-a8df-429d-9b04-65b01f06c46e\") " pod="openstack/nova-cell1-298f-account-create-update-bgx4w" Mar 11 09:18:43 crc kubenswrapper[4840]: I0311 09:18:43.301123 4840 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Mar 11 09:18:43 crc kubenswrapper[4840]: I0311 09:18:43.328016 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ea45272-7dc6-4227-ba52-e506fd81c0b4-config-data" (OuterVolumeSpecName: "config-data") pod "8ea45272-7dc6-4227-ba52-e506fd81c0b4" (UID: "8ea45272-7dc6-4227-ba52-e506fd81c0b4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:18:43 crc kubenswrapper[4840]: I0311 09:18:43.379865 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-28ac-account-create-update-9lzd8" Mar 11 09:18:43 crc kubenswrapper[4840]: I0311 09:18:43.380089 4840 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ea45272-7dc6-4227-ba52-e506fd81c0b4-config-data\") on node \"crc\" DevicePath \"\"" Mar 11 09:18:43 crc kubenswrapper[4840]: I0311 09:18:43.382049 4840 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ea45272-7dc6-4227-ba52-e506fd81c0b4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 09:18:43 crc kubenswrapper[4840]: I0311 09:18:43.382507 4840 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Mar 11 09:18:43 crc kubenswrapper[4840]: I0311 09:18:43.406267 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-nhlzp" Mar 11 09:18:43 crc kubenswrapper[4840]: I0311 09:18:43.457444 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-0f23-account-create-update-5wkdk" Mar 11 09:18:43 crc kubenswrapper[4840]: I0311 09:18:43.489000 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-298f-account-create-update-bgx4w" Mar 11 09:18:43 crc kubenswrapper[4840]: I0311 09:18:43.651943 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8ea45272-7dc6-4227-ba52-e506fd81c0b4","Type":"ContainerDied","Data":"641f0434d44e5d56a4e2ce90c3d1d2824a770c4a5206ef5c0c4746e61d3a77b8"} Mar 11 09:18:43 crc kubenswrapper[4840]: I0311 09:18:43.652040 4840 scope.go:117] "RemoveContainer" containerID="a4dd624ea8792381b07e9ab369608d7d8ed8a0f685ccd5a7af208b7afb64530d" Mar 11 09:18:43 crc kubenswrapper[4840]: I0311 09:18:43.651957 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 11 09:18:43 crc kubenswrapper[4840]: I0311 09:18:43.739332 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 11 09:18:43 crc kubenswrapper[4840]: I0311 09:18:43.757187 4840 scope.go:117] "RemoveContainer" containerID="12da96256bafcbc710e6afeaa00bdcd0372baf66eb1a6d2439ecdbd9e02d7b40" Mar 11 09:18:43 crc kubenswrapper[4840]: I0311 09:18:43.775035 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 11 09:18:43 crc kubenswrapper[4840]: I0311 09:18:43.810704 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Mar 11 09:18:43 crc kubenswrapper[4840]: I0311 09:18:43.812146 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 11 09:18:43 crc kubenswrapper[4840]: I0311 09:18:43.815164 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Mar 11 09:18:43 crc kubenswrapper[4840]: I0311 09:18:43.816107 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Mar 11 09:18:43 crc kubenswrapper[4840]: I0311 09:18:43.822648 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 11 09:18:43 crc kubenswrapper[4840]: I0311 09:18:43.833492 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-wpn58"] Mar 11 09:18:43 crc kubenswrapper[4840]: I0311 09:18:43.894204 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/71aaf352-8b91-4846-8ce4-1d83303ac203-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"71aaf352-8b91-4846-8ce4-1d83303ac203\") " pod="openstack/glance-default-external-api-0" Mar 11 09:18:43 crc kubenswrapper[4840]: I0311 09:18:43.894348 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71aaf352-8b91-4846-8ce4-1d83303ac203-logs\") pod \"glance-default-external-api-0\" (UID: \"71aaf352-8b91-4846-8ce4-1d83303ac203\") " pod="openstack/glance-default-external-api-0" Mar 11 09:18:43 crc kubenswrapper[4840]: I0311 09:18:43.894452 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"71aaf352-8b91-4846-8ce4-1d83303ac203\") " pod="openstack/glance-default-external-api-0" Mar 11 09:18:43 crc kubenswrapper[4840]: I0311 
09:18:43.894507 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71aaf352-8b91-4846-8ce4-1d83303ac203-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"71aaf352-8b91-4846-8ce4-1d83303ac203\") " pod="openstack/glance-default-external-api-0" Mar 11 09:18:43 crc kubenswrapper[4840]: I0311 09:18:43.895215 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjwpl\" (UniqueName: \"kubernetes.io/projected/71aaf352-8b91-4846-8ce4-1d83303ac203-kube-api-access-gjwpl\") pod \"glance-default-external-api-0\" (UID: \"71aaf352-8b91-4846-8ce4-1d83303ac203\") " pod="openstack/glance-default-external-api-0" Mar 11 09:18:43 crc kubenswrapper[4840]: I0311 09:18:43.895345 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/71aaf352-8b91-4846-8ce4-1d83303ac203-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"71aaf352-8b91-4846-8ce4-1d83303ac203\") " pod="openstack/glance-default-external-api-0" Mar 11 09:18:43 crc kubenswrapper[4840]: I0311 09:18:43.895399 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71aaf352-8b91-4846-8ce4-1d83303ac203-scripts\") pod \"glance-default-external-api-0\" (UID: \"71aaf352-8b91-4846-8ce4-1d83303ac203\") " pod="openstack/glance-default-external-api-0" Mar 11 09:18:43 crc kubenswrapper[4840]: I0311 09:18:43.895455 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71aaf352-8b91-4846-8ce4-1d83303ac203-config-data\") pod \"glance-default-external-api-0\" (UID: \"71aaf352-8b91-4846-8ce4-1d83303ac203\") " pod="openstack/glance-default-external-api-0" Mar 11 
09:18:43 crc kubenswrapper[4840]: I0311 09:18:43.906584 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 11 09:18:43 crc kubenswrapper[4840]: I0311 09:18:43.914378 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-f2mtr"] Mar 11 09:18:43 crc kubenswrapper[4840]: W0311 09:18:43.947953 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda2ca037a_561b_4d4b_b85e_5c0fcc9b1cb4.slice/crio-e53c391e009e602c69f568a30c3697c3f0717aec2f3152f8ff10acfe6fb04859 WatchSource:0}: Error finding container e53c391e009e602c69f568a30c3697c3f0717aec2f3152f8ff10acfe6fb04859: Status 404 returned error can't find the container with id e53c391e009e602c69f568a30c3697c3f0717aec2f3152f8ff10acfe6fb04859 Mar 11 09:18:43 crc kubenswrapper[4840]: I0311 09:18:43.968897 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 11 09:18:44 crc kubenswrapper[4840]: I0311 09:18:43.997917 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71aaf352-8b91-4846-8ce4-1d83303ac203-scripts\") pod \"glance-default-external-api-0\" (UID: \"71aaf352-8b91-4846-8ce4-1d83303ac203\") " pod="openstack/glance-default-external-api-0" Mar 11 09:18:44 crc kubenswrapper[4840]: I0311 09:18:43.997976 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/71aaf352-8b91-4846-8ce4-1d83303ac203-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"71aaf352-8b91-4846-8ce4-1d83303ac203\") " pod="openstack/glance-default-external-api-0" Mar 11 09:18:44 crc kubenswrapper[4840]: I0311 09:18:43.998014 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71aaf352-8b91-4846-8ce4-1d83303ac203-config-data\") pod 
\"glance-default-external-api-0\" (UID: \"71aaf352-8b91-4846-8ce4-1d83303ac203\") " pod="openstack/glance-default-external-api-0" Mar 11 09:18:44 crc kubenswrapper[4840]: I0311 09:18:43.998107 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/71aaf352-8b91-4846-8ce4-1d83303ac203-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"71aaf352-8b91-4846-8ce4-1d83303ac203\") " pod="openstack/glance-default-external-api-0" Mar 11 09:18:44 crc kubenswrapper[4840]: I0311 09:18:43.998171 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71aaf352-8b91-4846-8ce4-1d83303ac203-logs\") pod \"glance-default-external-api-0\" (UID: \"71aaf352-8b91-4846-8ce4-1d83303ac203\") " pod="openstack/glance-default-external-api-0" Mar 11 09:18:44 crc kubenswrapper[4840]: I0311 09:18:43.998232 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"71aaf352-8b91-4846-8ce4-1d83303ac203\") " pod="openstack/glance-default-external-api-0" Mar 11 09:18:44 crc kubenswrapper[4840]: I0311 09:18:43.998260 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71aaf352-8b91-4846-8ce4-1d83303ac203-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"71aaf352-8b91-4846-8ce4-1d83303ac203\") " pod="openstack/glance-default-external-api-0" Mar 11 09:18:44 crc kubenswrapper[4840]: I0311 09:18:43.998345 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjwpl\" (UniqueName: \"kubernetes.io/projected/71aaf352-8b91-4846-8ce4-1d83303ac203-kube-api-access-gjwpl\") pod \"glance-default-external-api-0\" (UID: 
\"71aaf352-8b91-4846-8ce4-1d83303ac203\") " pod="openstack/glance-default-external-api-0" Mar 11 09:18:44 crc kubenswrapper[4840]: I0311 09:18:43.999378 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71aaf352-8b91-4846-8ce4-1d83303ac203-logs\") pod \"glance-default-external-api-0\" (UID: \"71aaf352-8b91-4846-8ce4-1d83303ac203\") " pod="openstack/glance-default-external-api-0" Mar 11 09:18:44 crc kubenswrapper[4840]: I0311 09:18:44.000028 4840 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"71aaf352-8b91-4846-8ce4-1d83303ac203\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-external-api-0" Mar 11 09:18:44 crc kubenswrapper[4840]: I0311 09:18:44.000592 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/71aaf352-8b91-4846-8ce4-1d83303ac203-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"71aaf352-8b91-4846-8ce4-1d83303ac203\") " pod="openstack/glance-default-external-api-0" Mar 11 09:18:44 crc kubenswrapper[4840]: I0311 09:18:44.007031 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71aaf352-8b91-4846-8ce4-1d83303ac203-scripts\") pod \"glance-default-external-api-0\" (UID: \"71aaf352-8b91-4846-8ce4-1d83303ac203\") " pod="openstack/glance-default-external-api-0" Mar 11 09:18:44 crc kubenswrapper[4840]: I0311 09:18:44.009868 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/71aaf352-8b91-4846-8ce4-1d83303ac203-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"71aaf352-8b91-4846-8ce4-1d83303ac203\") " pod="openstack/glance-default-external-api-0" Mar 11 09:18:44 
crc kubenswrapper[4840]: I0311 09:18:44.013634 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71aaf352-8b91-4846-8ce4-1d83303ac203-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"71aaf352-8b91-4846-8ce4-1d83303ac203\") " pod="openstack/glance-default-external-api-0" Mar 11 09:18:44 crc kubenswrapper[4840]: I0311 09:18:44.015452 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71aaf352-8b91-4846-8ce4-1d83303ac203-config-data\") pod \"glance-default-external-api-0\" (UID: \"71aaf352-8b91-4846-8ce4-1d83303ac203\") " pod="openstack/glance-default-external-api-0" Mar 11 09:18:44 crc kubenswrapper[4840]: I0311 09:18:44.039110 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjwpl\" (UniqueName: \"kubernetes.io/projected/71aaf352-8b91-4846-8ce4-1d83303ac203-kube-api-access-gjwpl\") pod \"glance-default-external-api-0\" (UID: \"71aaf352-8b91-4846-8ce4-1d83303ac203\") " pod="openstack/glance-default-external-api-0" Mar 11 09:18:44 crc kubenswrapper[4840]: I0311 09:18:44.052808 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"71aaf352-8b91-4846-8ce4-1d83303ac203\") " pod="openstack/glance-default-external-api-0" Mar 11 09:18:44 crc kubenswrapper[4840]: I0311 09:18:44.097940 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20e05035-a26a-4828-8067-5406560cabe8" path="/var/lib/kubelet/pods/20e05035-a26a-4828-8067-5406560cabe8/volumes" Mar 11 09:18:44 crc kubenswrapper[4840]: I0311 09:18:44.101453 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e3d8e86-db0a-40d7-bfc2-47253da00ec7" 
path="/var/lib/kubelet/pods/5e3d8e86-db0a-40d7-bfc2-47253da00ec7/volumes" Mar 11 09:18:44 crc kubenswrapper[4840]: I0311 09:18:44.115508 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ea45272-7dc6-4227-ba52-e506fd81c0b4" path="/var/lib/kubelet/pods/8ea45272-7dc6-4227-ba52-e506fd81c0b4/volumes" Mar 11 09:18:44 crc kubenswrapper[4840]: I0311 09:18:44.117012 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-nhlzp"] Mar 11 09:18:44 crc kubenswrapper[4840]: I0311 09:18:44.122923 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-28ac-account-create-update-9lzd8"] Mar 11 09:18:44 crc kubenswrapper[4840]: I0311 09:18:44.156173 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 11 09:18:44 crc kubenswrapper[4840]: W0311 09:18:44.156868 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8141eccc_ba41_4cfc_ba72_ffeae8858902.slice/crio-2b46744645ccd61feab20917a196422caec335ad238f964c45e806a4f18fa096 WatchSource:0}: Error finding container 2b46744645ccd61feab20917a196422caec335ad238f964c45e806a4f18fa096: Status 404 returned error can't find the container with id 2b46744645ccd61feab20917a196422caec335ad238f964c45e806a4f18fa096 Mar 11 09:18:44 crc kubenswrapper[4840]: I0311 09:18:44.295136 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-0f23-account-create-update-5wkdk"] Mar 11 09:18:44 crc kubenswrapper[4840]: I0311 09:18:44.304341 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-298f-account-create-update-bgx4w"] Mar 11 09:18:44 crc kubenswrapper[4840]: I0311 09:18:44.708685 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"7b8eb1cd-c564-4bad-8ec8-c8bfd1c75c94","Type":"ContainerStarted","Data":"179e4b819eb27feb1e697bc5ac01671aa723700150ae7ed9a00374937a75707b"} Mar 11 09:18:44 crc kubenswrapper[4840]: I0311 09:18:44.723607 4840 generic.go:334] "Generic (PLEG): container finished" podID="0ba90e38-377f-42d7-91dd-cb92889e0cbf" containerID="0eda35b74c7024d65999ffc9b92358f1a7d45fd985b08bbc809d93bd30d201d7" exitCode=0 Mar 11 09:18:44 crc kubenswrapper[4840]: I0311 09:18:44.723683 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-wpn58" event={"ID":"0ba90e38-377f-42d7-91dd-cb92889e0cbf","Type":"ContainerDied","Data":"0eda35b74c7024d65999ffc9b92358f1a7d45fd985b08bbc809d93bd30d201d7"} Mar 11 09:18:44 crc kubenswrapper[4840]: I0311 09:18:44.723757 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-wpn58" event={"ID":"0ba90e38-377f-42d7-91dd-cb92889e0cbf","Type":"ContainerStarted","Data":"7162d061c9b8ae0d94253a79afa46b2c546a0b97a12197e670cd30664e3d0e9c"} Mar 11 09:18:44 crc kubenswrapper[4840]: I0311 09:18:44.731882 4840 generic.go:334] "Generic (PLEG): container finished" podID="a2ca037a-561b-4d4b-b85e-5c0fcc9b1cb4" containerID="2ad9683900e7600f53df802b0bbd447af349254e001c1e0d18204012a848f901" exitCode=0 Mar 11 09:18:44 crc kubenswrapper[4840]: I0311 09:18:44.732119 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-f2mtr" event={"ID":"a2ca037a-561b-4d4b-b85e-5c0fcc9b1cb4","Type":"ContainerDied","Data":"2ad9683900e7600f53df802b0bbd447af349254e001c1e0d18204012a848f901"} Mar 11 09:18:44 crc kubenswrapper[4840]: I0311 09:18:44.732149 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-f2mtr" event={"ID":"a2ca037a-561b-4d4b-b85e-5c0fcc9b1cb4","Type":"ContainerStarted","Data":"e53c391e009e602c69f568a30c3697c3f0717aec2f3152f8ff10acfe6fb04859"} Mar 11 09:18:44 crc kubenswrapper[4840]: I0311 09:18:44.741140 4840 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-298f-account-create-update-bgx4w" event={"ID":"0d383441-a8df-429d-9b04-65b01f06c46e","Type":"ContainerStarted","Data":"12f2040ed1999492b1a68fe309979b9c60b5033c9c74fbb5893e294299ef33f4"} Mar 11 09:18:44 crc kubenswrapper[4840]: I0311 09:18:44.753040 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-nhlzp" event={"ID":"8627bd78-c61f-4140-88b3-09cd6091afb4","Type":"ContainerStarted","Data":"d90b0fbf2cbe6f6a8079326c29aaed29f17ab5f835d86bb6f9720cfe6a3f2e16"} Mar 11 09:18:44 crc kubenswrapper[4840]: I0311 09:18:44.753107 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-nhlzp" event={"ID":"8627bd78-c61f-4140-88b3-09cd6091afb4","Type":"ContainerStarted","Data":"629a28995afd674a343db7d455e1bd04b47b976e670a3191a47d2387d12034f1"} Mar 11 09:18:44 crc kubenswrapper[4840]: I0311 09:18:44.760051 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-0f23-account-create-update-5wkdk" event={"ID":"5a3982dc-cf14-417d-8fac-b8e811843798","Type":"ContainerStarted","Data":"8d7aa28ffb955a6be4d3ec851ce4c35fbcec53360da57f1f2413a9e5e4021485"} Mar 11 09:18:44 crc kubenswrapper[4840]: I0311 09:18:44.780286 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-28ac-account-create-update-9lzd8" event={"ID":"8141eccc-ba41-4cfc-ba72-ffeae8858902","Type":"ContainerStarted","Data":"b57eefef57ea1e67d05ef7a344edc34490c14c383f807109b3c6f25730ff91f8"} Mar 11 09:18:44 crc kubenswrapper[4840]: I0311 09:18:44.780352 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-28ac-account-create-update-9lzd8" event={"ID":"8141eccc-ba41-4cfc-ba72-ffeae8858902","Type":"ContainerStarted","Data":"2b46744645ccd61feab20917a196422caec335ad238f964c45e806a4f18fa096"} Mar 11 09:18:44 crc kubenswrapper[4840]: I0311 09:18:44.803905 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/nova-api-28ac-account-create-update-9lzd8" podStartSLOduration=2.803887039 podStartE2EDuration="2.803887039s" podCreationTimestamp="2026-03-11 09:18:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:18:44.797110109 +0000 UTC m=+1323.462779934" watchObservedRunningTime="2026-03-11 09:18:44.803887039 +0000 UTC m=+1323.469556854" Mar 11 09:18:44 crc kubenswrapper[4840]: I0311 09:18:44.948825 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 11 09:18:44 crc kubenswrapper[4840]: W0311 09:18:44.953172 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod71aaf352_8b91_4846_8ce4_1d83303ac203.slice/crio-a1993715d8a5812f3755cf35ed8576120afdedb601a5203ee479d8d13ab997ad WatchSource:0}: Error finding container a1993715d8a5812f3755cf35ed8576120afdedb601a5203ee479d8d13ab997ad: Status 404 returned error can't find the container with id a1993715d8a5812f3755cf35ed8576120afdedb601a5203ee479d8d13ab997ad Mar 11 09:18:45 crc kubenswrapper[4840]: I0311 09:18:45.602186 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 11 09:18:45 crc kubenswrapper[4840]: I0311 09:18:45.754494 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3ad2408-3dda-4009-a898-5f2618fc18cf-internal-tls-certs\") pod \"b3ad2408-3dda-4009-a898-5f2618fc18cf\" (UID: \"b3ad2408-3dda-4009-a898-5f2618fc18cf\") " Mar 11 09:18:45 crc kubenswrapper[4840]: I0311 09:18:45.754891 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b7zkj\" (UniqueName: \"kubernetes.io/projected/b3ad2408-3dda-4009-a898-5f2618fc18cf-kube-api-access-b7zkj\") pod \"b3ad2408-3dda-4009-a898-5f2618fc18cf\" (UID: \"b3ad2408-3dda-4009-a898-5f2618fc18cf\") " Mar 11 09:18:45 crc kubenswrapper[4840]: I0311 09:18:45.754934 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3ad2408-3dda-4009-a898-5f2618fc18cf-combined-ca-bundle\") pod \"b3ad2408-3dda-4009-a898-5f2618fc18cf\" (UID: \"b3ad2408-3dda-4009-a898-5f2618fc18cf\") " Mar 11 09:18:45 crc kubenswrapper[4840]: I0311 09:18:45.754998 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"b3ad2408-3dda-4009-a898-5f2618fc18cf\" (UID: \"b3ad2408-3dda-4009-a898-5f2618fc18cf\") " Mar 11 09:18:45 crc kubenswrapper[4840]: I0311 09:18:45.755618 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3ad2408-3dda-4009-a898-5f2618fc18cf-scripts\") pod \"b3ad2408-3dda-4009-a898-5f2618fc18cf\" (UID: \"b3ad2408-3dda-4009-a898-5f2618fc18cf\") " Mar 11 09:18:45 crc kubenswrapper[4840]: I0311 09:18:45.755715 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/b3ad2408-3dda-4009-a898-5f2618fc18cf-logs\") pod \"b3ad2408-3dda-4009-a898-5f2618fc18cf\" (UID: \"b3ad2408-3dda-4009-a898-5f2618fc18cf\") " Mar 11 09:18:45 crc kubenswrapper[4840]: I0311 09:18:45.755737 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3ad2408-3dda-4009-a898-5f2618fc18cf-config-data\") pod \"b3ad2408-3dda-4009-a898-5f2618fc18cf\" (UID: \"b3ad2408-3dda-4009-a898-5f2618fc18cf\") " Mar 11 09:18:45 crc kubenswrapper[4840]: I0311 09:18:45.755817 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b3ad2408-3dda-4009-a898-5f2618fc18cf-httpd-run\") pod \"b3ad2408-3dda-4009-a898-5f2618fc18cf\" (UID: \"b3ad2408-3dda-4009-a898-5f2618fc18cf\") " Mar 11 09:18:45 crc kubenswrapper[4840]: I0311 09:18:45.756385 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3ad2408-3dda-4009-a898-5f2618fc18cf-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "b3ad2408-3dda-4009-a898-5f2618fc18cf" (UID: "b3ad2408-3dda-4009-a898-5f2618fc18cf"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:18:45 crc kubenswrapper[4840]: I0311 09:18:45.756635 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3ad2408-3dda-4009-a898-5f2618fc18cf-logs" (OuterVolumeSpecName: "logs") pod "b3ad2408-3dda-4009-a898-5f2618fc18cf" (UID: "b3ad2408-3dda-4009-a898-5f2618fc18cf"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:18:45 crc kubenswrapper[4840]: I0311 09:18:45.763641 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "glance") pod "b3ad2408-3dda-4009-a898-5f2618fc18cf" (UID: "b3ad2408-3dda-4009-a898-5f2618fc18cf"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 11 09:18:45 crc kubenswrapper[4840]: I0311 09:18:45.766678 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3ad2408-3dda-4009-a898-5f2618fc18cf-scripts" (OuterVolumeSpecName: "scripts") pod "b3ad2408-3dda-4009-a898-5f2618fc18cf" (UID: "b3ad2408-3dda-4009-a898-5f2618fc18cf"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:18:45 crc kubenswrapper[4840]: I0311 09:18:45.767960 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3ad2408-3dda-4009-a898-5f2618fc18cf-kube-api-access-b7zkj" (OuterVolumeSpecName: "kube-api-access-b7zkj") pod "b3ad2408-3dda-4009-a898-5f2618fc18cf" (UID: "b3ad2408-3dda-4009-a898-5f2618fc18cf"). InnerVolumeSpecName "kube-api-access-b7zkj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:18:45 crc kubenswrapper[4840]: I0311 09:18:45.798952 4840 generic.go:334] "Generic (PLEG): container finished" podID="b3ad2408-3dda-4009-a898-5f2618fc18cf" containerID="58052d3369d8502a608a76a3fac8089afd975a97605d8aede537f5084de1526b" exitCode=0 Mar 11 09:18:45 crc kubenswrapper[4840]: I0311 09:18:45.799245 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 11 09:18:45 crc kubenswrapper[4840]: I0311 09:18:45.799131 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b3ad2408-3dda-4009-a898-5f2618fc18cf","Type":"ContainerDied","Data":"58052d3369d8502a608a76a3fac8089afd975a97605d8aede537f5084de1526b"} Mar 11 09:18:45 crc kubenswrapper[4840]: I0311 09:18:45.799460 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b3ad2408-3dda-4009-a898-5f2618fc18cf","Type":"ContainerDied","Data":"cae0ee9cc9a39f3f6b139b839d6bb08a2817d9b18835e9df953610610da27737"} Mar 11 09:18:45 crc kubenswrapper[4840]: I0311 09:18:45.799557 4840 scope.go:117] "RemoveContainer" containerID="58052d3369d8502a608a76a3fac8089afd975a97605d8aede537f5084de1526b" Mar 11 09:18:45 crc kubenswrapper[4840]: I0311 09:18:45.803757 4840 generic.go:334] "Generic (PLEG): container finished" podID="8141eccc-ba41-4cfc-ba72-ffeae8858902" containerID="b57eefef57ea1e67d05ef7a344edc34490c14c383f807109b3c6f25730ff91f8" exitCode=0 Mar 11 09:18:45 crc kubenswrapper[4840]: I0311 09:18:45.803826 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-28ac-account-create-update-9lzd8" event={"ID":"8141eccc-ba41-4cfc-ba72-ffeae8858902","Type":"ContainerDied","Data":"b57eefef57ea1e67d05ef7a344edc34490c14c383f807109b3c6f25730ff91f8"} Mar 11 09:18:45 crc kubenswrapper[4840]: I0311 09:18:45.807553 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7b8eb1cd-c564-4bad-8ec8-c8bfd1c75c94","Type":"ContainerStarted","Data":"16afea71c3761ce179df9319d13391e3e844074bd494bd4c6055290cf0280ae9"} Mar 11 09:18:45 crc kubenswrapper[4840]: I0311 09:18:45.810344 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"71aaf352-8b91-4846-8ce4-1d83303ac203","Type":"ContainerStarted","Data":"a1993715d8a5812f3755cf35ed8576120afdedb601a5203ee479d8d13ab997ad"} Mar 11 09:18:45 crc kubenswrapper[4840]: I0311 09:18:45.813418 4840 generic.go:334] "Generic (PLEG): container finished" podID="0d383441-a8df-429d-9b04-65b01f06c46e" containerID="c6f36f6775379a090ca9ce1ac0bd9477183ab08a224e5ad7f14b54317170e04f" exitCode=0 Mar 11 09:18:45 crc kubenswrapper[4840]: I0311 09:18:45.813498 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-298f-account-create-update-bgx4w" event={"ID":"0d383441-a8df-429d-9b04-65b01f06c46e","Type":"ContainerDied","Data":"c6f36f6775379a090ca9ce1ac0bd9477183ab08a224e5ad7f14b54317170e04f"} Mar 11 09:18:45 crc kubenswrapper[4840]: I0311 09:18:45.816677 4840 generic.go:334] "Generic (PLEG): container finished" podID="5a3982dc-cf14-417d-8fac-b8e811843798" containerID="7090d3ac747a15763e7046444045a8df17215da7e48e042854b41c49b658c877" exitCode=0 Mar 11 09:18:45 crc kubenswrapper[4840]: I0311 09:18:45.816729 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-0f23-account-create-update-5wkdk" event={"ID":"5a3982dc-cf14-417d-8fac-b8e811843798","Type":"ContainerDied","Data":"7090d3ac747a15763e7046444045a8df17215da7e48e042854b41c49b658c877"} Mar 11 09:18:45 crc kubenswrapper[4840]: I0311 09:18:45.831629 4840 scope.go:117] "RemoveContainer" containerID="0f3c80a9ff890a01d385011299ef95918b4a997ceb457e6a46517cb25523ebe7" Mar 11 09:18:45 crc kubenswrapper[4840]: I0311 09:18:45.833833 4840 generic.go:334] "Generic (PLEG): container finished" podID="8627bd78-c61f-4140-88b3-09cd6091afb4" containerID="d90b0fbf2cbe6f6a8079326c29aaed29f17ab5f835d86bb6f9720cfe6a3f2e16" exitCode=0 Mar 11 09:18:45 crc kubenswrapper[4840]: I0311 09:18:45.833947 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-nhlzp" 
event={"ID":"8627bd78-c61f-4140-88b3-09cd6091afb4","Type":"ContainerDied","Data":"d90b0fbf2cbe6f6a8079326c29aaed29f17ab5f835d86bb6f9720cfe6a3f2e16"} Mar 11 09:18:45 crc kubenswrapper[4840]: I0311 09:18:45.858164 4840 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Mar 11 09:18:45 crc kubenswrapper[4840]: I0311 09:18:45.858202 4840 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3ad2408-3dda-4009-a898-5f2618fc18cf-scripts\") on node \"crc\" DevicePath \"\"" Mar 11 09:18:45 crc kubenswrapper[4840]: I0311 09:18:45.858215 4840 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b3ad2408-3dda-4009-a898-5f2618fc18cf-logs\") on node \"crc\" DevicePath \"\"" Mar 11 09:18:45 crc kubenswrapper[4840]: I0311 09:18:45.858223 4840 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b3ad2408-3dda-4009-a898-5f2618fc18cf-httpd-run\") on node \"crc\" DevicePath \"\"" Mar 11 09:18:45 crc kubenswrapper[4840]: I0311 09:18:45.858232 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b7zkj\" (UniqueName: \"kubernetes.io/projected/b3ad2408-3dda-4009-a898-5f2618fc18cf-kube-api-access-b7zkj\") on node \"crc\" DevicePath \"\"" Mar 11 09:18:45 crc kubenswrapper[4840]: I0311 09:18:45.861660 4840 scope.go:117] "RemoveContainer" containerID="58052d3369d8502a608a76a3fac8089afd975a97605d8aede537f5084de1526b" Mar 11 09:18:45 crc kubenswrapper[4840]: E0311 09:18:45.862236 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58052d3369d8502a608a76a3fac8089afd975a97605d8aede537f5084de1526b\": container with ID starting with 58052d3369d8502a608a76a3fac8089afd975a97605d8aede537f5084de1526b not found: ID does not 
exist" containerID="58052d3369d8502a608a76a3fac8089afd975a97605d8aede537f5084de1526b" Mar 11 09:18:45 crc kubenswrapper[4840]: I0311 09:18:45.862274 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58052d3369d8502a608a76a3fac8089afd975a97605d8aede537f5084de1526b"} err="failed to get container status \"58052d3369d8502a608a76a3fac8089afd975a97605d8aede537f5084de1526b\": rpc error: code = NotFound desc = could not find container \"58052d3369d8502a608a76a3fac8089afd975a97605d8aede537f5084de1526b\": container with ID starting with 58052d3369d8502a608a76a3fac8089afd975a97605d8aede537f5084de1526b not found: ID does not exist" Mar 11 09:18:45 crc kubenswrapper[4840]: I0311 09:18:45.862304 4840 scope.go:117] "RemoveContainer" containerID="0f3c80a9ff890a01d385011299ef95918b4a997ceb457e6a46517cb25523ebe7" Mar 11 09:18:45 crc kubenswrapper[4840]: E0311 09:18:45.863535 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f3c80a9ff890a01d385011299ef95918b4a997ceb457e6a46517cb25523ebe7\": container with ID starting with 0f3c80a9ff890a01d385011299ef95918b4a997ceb457e6a46517cb25523ebe7 not found: ID does not exist" containerID="0f3c80a9ff890a01d385011299ef95918b4a997ceb457e6a46517cb25523ebe7" Mar 11 09:18:45 crc kubenswrapper[4840]: I0311 09:18:45.863560 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f3c80a9ff890a01d385011299ef95918b4a997ceb457e6a46517cb25523ebe7"} err="failed to get container status \"0f3c80a9ff890a01d385011299ef95918b4a997ceb457e6a46517cb25523ebe7\": rpc error: code = NotFound desc = could not find container \"0f3c80a9ff890a01d385011299ef95918b4a997ceb457e6a46517cb25523ebe7\": container with ID starting with 0f3c80a9ff890a01d385011299ef95918b4a997ceb457e6a46517cb25523ebe7 not found: ID does not exist" Mar 11 09:18:46 crc kubenswrapper[4840]: I0311 09:18:46.044495 4840 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3ad2408-3dda-4009-a898-5f2618fc18cf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b3ad2408-3dda-4009-a898-5f2618fc18cf" (UID: "b3ad2408-3dda-4009-a898-5f2618fc18cf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:18:46 crc kubenswrapper[4840]: I0311 09:18:46.048106 4840 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Mar 11 09:18:46 crc kubenswrapper[4840]: I0311 09:18:46.069166 4840 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3ad2408-3dda-4009-a898-5f2618fc18cf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 09:18:46 crc kubenswrapper[4840]: I0311 09:18:46.069737 4840 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Mar 11 09:18:46 crc kubenswrapper[4840]: I0311 09:18:46.071597 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3ad2408-3dda-4009-a898-5f2618fc18cf-config-data" (OuterVolumeSpecName: "config-data") pod "b3ad2408-3dda-4009-a898-5f2618fc18cf" (UID: "b3ad2408-3dda-4009-a898-5f2618fc18cf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:18:46 crc kubenswrapper[4840]: I0311 09:18:46.075742 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3ad2408-3dda-4009-a898-5f2618fc18cf-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "b3ad2408-3dda-4009-a898-5f2618fc18cf" (UID: "b3ad2408-3dda-4009-a898-5f2618fc18cf"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:18:46 crc kubenswrapper[4840]: I0311 09:18:46.174979 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 11 09:18:46 crc kubenswrapper[4840]: I0311 09:18:46.182060 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 11 09:18:46 crc kubenswrapper[4840]: I0311 09:18:46.184060 4840 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3ad2408-3dda-4009-a898-5f2618fc18cf-config-data\") on node \"crc\" DevicePath \"\"" Mar 11 09:18:46 crc kubenswrapper[4840]: I0311 09:18:46.184095 4840 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3ad2408-3dda-4009-a898-5f2618fc18cf-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 11 09:18:46 crc kubenswrapper[4840]: I0311 09:18:46.219070 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 11 09:18:46 crc kubenswrapper[4840]: E0311 09:18:46.219852 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3ad2408-3dda-4009-a898-5f2618fc18cf" containerName="glance-httpd" Mar 11 09:18:46 crc kubenswrapper[4840]: I0311 09:18:46.219870 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3ad2408-3dda-4009-a898-5f2618fc18cf" containerName="glance-httpd" Mar 11 09:18:46 crc kubenswrapper[4840]: E0311 09:18:46.219890 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3ad2408-3dda-4009-a898-5f2618fc18cf" containerName="glance-log" Mar 11 09:18:46 crc kubenswrapper[4840]: I0311 09:18:46.219897 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3ad2408-3dda-4009-a898-5f2618fc18cf" containerName="glance-log" Mar 11 09:18:46 crc kubenswrapper[4840]: I0311 09:18:46.220086 4840 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="b3ad2408-3dda-4009-a898-5f2618fc18cf" containerName="glance-log" Mar 11 09:18:46 crc kubenswrapper[4840]: I0311 09:18:46.220098 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3ad2408-3dda-4009-a898-5f2618fc18cf" containerName="glance-httpd" Mar 11 09:18:46 crc kubenswrapper[4840]: I0311 09:18:46.221834 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 11 09:18:46 crc kubenswrapper[4840]: I0311 09:18:46.225693 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Mar 11 09:18:46 crc kubenswrapper[4840]: I0311 09:18:46.227744 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Mar 11 09:18:46 crc kubenswrapper[4840]: I0311 09:18:46.228137 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 11 09:18:46 crc kubenswrapper[4840]: I0311 09:18:46.371421 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-nhlzp" Mar 11 09:18:46 crc kubenswrapper[4840]: I0311 09:18:46.403008 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3ccfc13-7a62-4923-95ab-c68cb93aa03c-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"d3ccfc13-7a62-4923-95ab-c68cb93aa03c\") " pod="openstack/glance-default-internal-api-0" Mar 11 09:18:46 crc kubenswrapper[4840]: I0311 09:18:46.403112 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d3ccfc13-7a62-4923-95ab-c68cb93aa03c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d3ccfc13-7a62-4923-95ab-c68cb93aa03c\") " pod="openstack/glance-default-internal-api-0" Mar 11 09:18:46 crc kubenswrapper[4840]: I0311 09:18:46.403141 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3ccfc13-7a62-4923-95ab-c68cb93aa03c-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d3ccfc13-7a62-4923-95ab-c68cb93aa03c\") " pod="openstack/glance-default-internal-api-0" Mar 11 09:18:46 crc kubenswrapper[4840]: I0311 09:18:46.403246 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxf22\" (UniqueName: \"kubernetes.io/projected/d3ccfc13-7a62-4923-95ab-c68cb93aa03c-kube-api-access-gxf22\") pod \"glance-default-internal-api-0\" (UID: \"d3ccfc13-7a62-4923-95ab-c68cb93aa03c\") " pod="openstack/glance-default-internal-api-0" Mar 11 09:18:46 crc kubenswrapper[4840]: I0311 09:18:46.403295 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d3ccfc13-7a62-4923-95ab-c68cb93aa03c-httpd-run\") pod 
\"glance-default-internal-api-0\" (UID: \"d3ccfc13-7a62-4923-95ab-c68cb93aa03c\") " pod="openstack/glance-default-internal-api-0" Mar 11 09:18:46 crc kubenswrapper[4840]: I0311 09:18:46.403365 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"d3ccfc13-7a62-4923-95ab-c68cb93aa03c\") " pod="openstack/glance-default-internal-api-0" Mar 11 09:18:46 crc kubenswrapper[4840]: I0311 09:18:46.403417 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d3ccfc13-7a62-4923-95ab-c68cb93aa03c-logs\") pod \"glance-default-internal-api-0\" (UID: \"d3ccfc13-7a62-4923-95ab-c68cb93aa03c\") " pod="openstack/glance-default-internal-api-0" Mar 11 09:18:46 crc kubenswrapper[4840]: I0311 09:18:46.403543 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3ccfc13-7a62-4923-95ab-c68cb93aa03c-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d3ccfc13-7a62-4923-95ab-c68cb93aa03c\") " pod="openstack/glance-default-internal-api-0" Mar 11 09:18:46 crc kubenswrapper[4840]: I0311 09:18:46.513422 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9mdfk\" (UniqueName: \"kubernetes.io/projected/8627bd78-c61f-4140-88b3-09cd6091afb4-kube-api-access-9mdfk\") pod \"8627bd78-c61f-4140-88b3-09cd6091afb4\" (UID: \"8627bd78-c61f-4140-88b3-09cd6091afb4\") " Mar 11 09:18:46 crc kubenswrapper[4840]: I0311 09:18:46.513682 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8627bd78-c61f-4140-88b3-09cd6091afb4-operator-scripts\") pod \"8627bd78-c61f-4140-88b3-09cd6091afb4\" 
(UID: \"8627bd78-c61f-4140-88b3-09cd6091afb4\") " Mar 11 09:18:46 crc kubenswrapper[4840]: I0311 09:18:46.514007 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3ccfc13-7a62-4923-95ab-c68cb93aa03c-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"d3ccfc13-7a62-4923-95ab-c68cb93aa03c\") " pod="openstack/glance-default-internal-api-0" Mar 11 09:18:46 crc kubenswrapper[4840]: I0311 09:18:46.514052 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d3ccfc13-7a62-4923-95ab-c68cb93aa03c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d3ccfc13-7a62-4923-95ab-c68cb93aa03c\") " pod="openstack/glance-default-internal-api-0" Mar 11 09:18:46 crc kubenswrapper[4840]: I0311 09:18:46.514074 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3ccfc13-7a62-4923-95ab-c68cb93aa03c-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d3ccfc13-7a62-4923-95ab-c68cb93aa03c\") " pod="openstack/glance-default-internal-api-0" Mar 11 09:18:46 crc kubenswrapper[4840]: I0311 09:18:46.514111 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxf22\" (UniqueName: \"kubernetes.io/projected/d3ccfc13-7a62-4923-95ab-c68cb93aa03c-kube-api-access-gxf22\") pod \"glance-default-internal-api-0\" (UID: \"d3ccfc13-7a62-4923-95ab-c68cb93aa03c\") " pod="openstack/glance-default-internal-api-0" Mar 11 09:18:46 crc kubenswrapper[4840]: I0311 09:18:46.514131 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d3ccfc13-7a62-4923-95ab-c68cb93aa03c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d3ccfc13-7a62-4923-95ab-c68cb93aa03c\") " pod="openstack/glance-default-internal-api-0" Mar 11 
09:18:46 crc kubenswrapper[4840]: I0311 09:18:46.514157 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"d3ccfc13-7a62-4923-95ab-c68cb93aa03c\") " pod="openstack/glance-default-internal-api-0" Mar 11 09:18:46 crc kubenswrapper[4840]: I0311 09:18:46.517596 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8627bd78-c61f-4140-88b3-09cd6091afb4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8627bd78-c61f-4140-88b3-09cd6091afb4" (UID: "8627bd78-c61f-4140-88b3-09cd6091afb4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:18:46 crc kubenswrapper[4840]: I0311 09:18:46.519802 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d3ccfc13-7a62-4923-95ab-c68cb93aa03c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d3ccfc13-7a62-4923-95ab-c68cb93aa03c\") " pod="openstack/glance-default-internal-api-0" Mar 11 09:18:46 crc kubenswrapper[4840]: I0311 09:18:46.519817 4840 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"d3ccfc13-7a62-4923-95ab-c68cb93aa03c\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-internal-api-0" Mar 11 09:18:46 crc kubenswrapper[4840]: I0311 09:18:46.520018 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d3ccfc13-7a62-4923-95ab-c68cb93aa03c-logs\") pod \"glance-default-internal-api-0\" (UID: \"d3ccfc13-7a62-4923-95ab-c68cb93aa03c\") " pod="openstack/glance-default-internal-api-0" Mar 11 09:18:46 crc kubenswrapper[4840]: I0311 
09:18:46.519430 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d3ccfc13-7a62-4923-95ab-c68cb93aa03c-logs\") pod \"glance-default-internal-api-0\" (UID: \"d3ccfc13-7a62-4923-95ab-c68cb93aa03c\") " pod="openstack/glance-default-internal-api-0" Mar 11 09:18:46 crc kubenswrapper[4840]: I0311 09:18:46.521266 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3ccfc13-7a62-4923-95ab-c68cb93aa03c-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d3ccfc13-7a62-4923-95ab-c68cb93aa03c\") " pod="openstack/glance-default-internal-api-0" Mar 11 09:18:46 crc kubenswrapper[4840]: I0311 09:18:46.526332 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3ccfc13-7a62-4923-95ab-c68cb93aa03c-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"d3ccfc13-7a62-4923-95ab-c68cb93aa03c\") " pod="openstack/glance-default-internal-api-0" Mar 11 09:18:46 crc kubenswrapper[4840]: I0311 09:18:46.526949 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d3ccfc13-7a62-4923-95ab-c68cb93aa03c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d3ccfc13-7a62-4923-95ab-c68cb93aa03c\") " pod="openstack/glance-default-internal-api-0" Mar 11 09:18:46 crc kubenswrapper[4840]: I0311 09:18:46.531528 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8627bd78-c61f-4140-88b3-09cd6091afb4-kube-api-access-9mdfk" (OuterVolumeSpecName: "kube-api-access-9mdfk") pod "8627bd78-c61f-4140-88b3-09cd6091afb4" (UID: "8627bd78-c61f-4140-88b3-09cd6091afb4"). InnerVolumeSpecName "kube-api-access-9mdfk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:18:46 crc kubenswrapper[4840]: I0311 09:18:46.533012 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3ccfc13-7a62-4923-95ab-c68cb93aa03c-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d3ccfc13-7a62-4923-95ab-c68cb93aa03c\") " pod="openstack/glance-default-internal-api-0" Mar 11 09:18:46 crc kubenswrapper[4840]: I0311 09:18:46.543447 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxf22\" (UniqueName: \"kubernetes.io/projected/d3ccfc13-7a62-4923-95ab-c68cb93aa03c-kube-api-access-gxf22\") pod \"glance-default-internal-api-0\" (UID: \"d3ccfc13-7a62-4923-95ab-c68cb93aa03c\") " pod="openstack/glance-default-internal-api-0" Mar 11 09:18:46 crc kubenswrapper[4840]: I0311 09:18:46.543668 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9mdfk\" (UniqueName: \"kubernetes.io/projected/8627bd78-c61f-4140-88b3-09cd6091afb4-kube-api-access-9mdfk\") on node \"crc\" DevicePath \"\"" Mar 11 09:18:46 crc kubenswrapper[4840]: I0311 09:18:46.543705 4840 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8627bd78-c61f-4140-88b3-09cd6091afb4-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 11 09:18:46 crc kubenswrapper[4840]: I0311 09:18:46.552012 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3ccfc13-7a62-4923-95ab-c68cb93aa03c-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d3ccfc13-7a62-4923-95ab-c68cb93aa03c\") " pod="openstack/glance-default-internal-api-0" Mar 11 09:18:46 crc kubenswrapper[4840]: I0311 09:18:46.589295 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod 
\"glance-default-internal-api-0\" (UID: \"d3ccfc13-7a62-4923-95ab-c68cb93aa03c\") " pod="openstack/glance-default-internal-api-0" Mar 11 09:18:46 crc kubenswrapper[4840]: I0311 09:18:46.595929 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-f2mtr" Mar 11 09:18:46 crc kubenswrapper[4840]: I0311 09:18:46.603965 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-wpn58" Mar 11 09:18:46 crc kubenswrapper[4840]: I0311 09:18:46.752452 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ba90e38-377f-42d7-91dd-cb92889e0cbf-operator-scripts\") pod \"0ba90e38-377f-42d7-91dd-cb92889e0cbf\" (UID: \"0ba90e38-377f-42d7-91dd-cb92889e0cbf\") " Mar 11 09:18:46 crc kubenswrapper[4840]: I0311 09:18:46.752610 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f748n\" (UniqueName: \"kubernetes.io/projected/0ba90e38-377f-42d7-91dd-cb92889e0cbf-kube-api-access-f748n\") pod \"0ba90e38-377f-42d7-91dd-cb92889e0cbf\" (UID: \"0ba90e38-377f-42d7-91dd-cb92889e0cbf\") " Mar 11 09:18:46 crc kubenswrapper[4840]: I0311 09:18:46.752648 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a2ca037a-561b-4d4b-b85e-5c0fcc9b1cb4-operator-scripts\") pod \"a2ca037a-561b-4d4b-b85e-5c0fcc9b1cb4\" (UID: \"a2ca037a-561b-4d4b-b85e-5c0fcc9b1cb4\") " Mar 11 09:18:46 crc kubenswrapper[4840]: I0311 09:18:46.752764 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9w8w8\" (UniqueName: \"kubernetes.io/projected/a2ca037a-561b-4d4b-b85e-5c0fcc9b1cb4-kube-api-access-9w8w8\") pod \"a2ca037a-561b-4d4b-b85e-5c0fcc9b1cb4\" (UID: \"a2ca037a-561b-4d4b-b85e-5c0fcc9b1cb4\") " Mar 11 09:18:46 crc kubenswrapper[4840]: 
I0311 09:18:46.753727 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2ca037a-561b-4d4b-b85e-5c0fcc9b1cb4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a2ca037a-561b-4d4b-b85e-5c0fcc9b1cb4" (UID: "a2ca037a-561b-4d4b-b85e-5c0fcc9b1cb4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:18:46 crc kubenswrapper[4840]: I0311 09:18:46.753803 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ba90e38-377f-42d7-91dd-cb92889e0cbf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0ba90e38-377f-42d7-91dd-cb92889e0cbf" (UID: "0ba90e38-377f-42d7-91dd-cb92889e0cbf"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:18:46 crc kubenswrapper[4840]: I0311 09:18:46.757935 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ba90e38-377f-42d7-91dd-cb92889e0cbf-kube-api-access-f748n" (OuterVolumeSpecName: "kube-api-access-f748n") pod "0ba90e38-377f-42d7-91dd-cb92889e0cbf" (UID: "0ba90e38-377f-42d7-91dd-cb92889e0cbf"). InnerVolumeSpecName "kube-api-access-f748n". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:18:46 crc kubenswrapper[4840]: I0311 09:18:46.758697 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2ca037a-561b-4d4b-b85e-5c0fcc9b1cb4-kube-api-access-9w8w8" (OuterVolumeSpecName: "kube-api-access-9w8w8") pod "a2ca037a-561b-4d4b-b85e-5c0fcc9b1cb4" (UID: "a2ca037a-561b-4d4b-b85e-5c0fcc9b1cb4"). InnerVolumeSpecName "kube-api-access-9w8w8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:18:46 crc kubenswrapper[4840]: I0311 09:18:46.854920 4840 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ba90e38-377f-42d7-91dd-cb92889e0cbf-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 11 09:18:46 crc kubenswrapper[4840]: I0311 09:18:46.855304 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f748n\" (UniqueName: \"kubernetes.io/projected/0ba90e38-377f-42d7-91dd-cb92889e0cbf-kube-api-access-f748n\") on node \"crc\" DevicePath \"\"" Mar 11 09:18:46 crc kubenswrapper[4840]: I0311 09:18:46.855318 4840 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a2ca037a-561b-4d4b-b85e-5c0fcc9b1cb4-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 11 09:18:46 crc kubenswrapper[4840]: I0311 09:18:46.855326 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9w8w8\" (UniqueName: \"kubernetes.io/projected/a2ca037a-561b-4d4b-b85e-5c0fcc9b1cb4-kube-api-access-9w8w8\") on node \"crc\" DevicePath \"\"" Mar 11 09:18:46 crc kubenswrapper[4840]: I0311 09:18:46.866306 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-wpn58" Mar 11 09:18:46 crc kubenswrapper[4840]: I0311 09:18:46.866952 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-wpn58" event={"ID":"0ba90e38-377f-42d7-91dd-cb92889e0cbf","Type":"ContainerDied","Data":"7162d061c9b8ae0d94253a79afa46b2c546a0b97a12197e670cd30664e3d0e9c"} Mar 11 09:18:46 crc kubenswrapper[4840]: I0311 09:18:46.867020 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7162d061c9b8ae0d94253a79afa46b2c546a0b97a12197e670cd30664e3d0e9c" Mar 11 09:18:46 crc kubenswrapper[4840]: I0311 09:18:46.872299 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-f2mtr" Mar 11 09:18:46 crc kubenswrapper[4840]: I0311 09:18:46.872288 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-f2mtr" event={"ID":"a2ca037a-561b-4d4b-b85e-5c0fcc9b1cb4","Type":"ContainerDied","Data":"e53c391e009e602c69f568a30c3697c3f0717aec2f3152f8ff10acfe6fb04859"} Mar 11 09:18:46 crc kubenswrapper[4840]: I0311 09:18:46.873316 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e53c391e009e602c69f568a30c3697c3f0717aec2f3152f8ff10acfe6fb04859" Mar 11 09:18:46 crc kubenswrapper[4840]: I0311 09:18:46.877322 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-nhlzp" Mar 11 09:18:46 crc kubenswrapper[4840]: I0311 09:18:46.877709 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-nhlzp" event={"ID":"8627bd78-c61f-4140-88b3-09cd6091afb4","Type":"ContainerDied","Data":"629a28995afd674a343db7d455e1bd04b47b976e670a3191a47d2387d12034f1"} Mar 11 09:18:46 crc kubenswrapper[4840]: I0311 09:18:46.877782 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="629a28995afd674a343db7d455e1bd04b47b976e670a3191a47d2387d12034f1" Mar 11 09:18:46 crc kubenswrapper[4840]: I0311 09:18:46.884609 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7b8eb1cd-c564-4bad-8ec8-c8bfd1c75c94","Type":"ContainerStarted","Data":"250c41117005034e3a732fb002e282e1a49cced22d9cc43d26c370c816e7d6f1"} Mar 11 09:18:46 crc kubenswrapper[4840]: I0311 09:18:46.887098 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"71aaf352-8b91-4846-8ce4-1d83303ac203","Type":"ContainerStarted","Data":"8c20b7cb7d072af1e8fc8505f85ae53009e95d00f58e75523c006bc2e4ffcfc3"} Mar 11 09:18:46 crc kubenswrapper[4840]: I0311 09:18:46.895671 4840 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 11 09:18:47 crc kubenswrapper[4840]: I0311 09:18:47.362921 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-298f-account-create-update-bgx4w" Mar 11 09:18:47 crc kubenswrapper[4840]: I0311 09:18:47.470754 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0d383441-a8df-429d-9b04-65b01f06c46e-operator-scripts\") pod \"0d383441-a8df-429d-9b04-65b01f06c46e\" (UID: \"0d383441-a8df-429d-9b04-65b01f06c46e\") " Mar 11 09:18:47 crc kubenswrapper[4840]: I0311 09:18:47.470865 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j92z7\" (UniqueName: \"kubernetes.io/projected/0d383441-a8df-429d-9b04-65b01f06c46e-kube-api-access-j92z7\") pod \"0d383441-a8df-429d-9b04-65b01f06c46e\" (UID: \"0d383441-a8df-429d-9b04-65b01f06c46e\") " Mar 11 09:18:47 crc kubenswrapper[4840]: I0311 09:18:47.473300 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d383441-a8df-429d-9b04-65b01f06c46e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0d383441-a8df-429d-9b04-65b01f06c46e" (UID: "0d383441-a8df-429d-9b04-65b01f06c46e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:18:47 crc kubenswrapper[4840]: I0311 09:18:47.498097 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d383441-a8df-429d-9b04-65b01f06c46e-kube-api-access-j92z7" (OuterVolumeSpecName: "kube-api-access-j92z7") pod "0d383441-a8df-429d-9b04-65b01f06c46e" (UID: "0d383441-a8df-429d-9b04-65b01f06c46e"). InnerVolumeSpecName "kube-api-access-j92z7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:18:47 crc kubenswrapper[4840]: I0311 09:18:47.572751 4840 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0d383441-a8df-429d-9b04-65b01f06c46e-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 11 09:18:47 crc kubenswrapper[4840]: I0311 09:18:47.572789 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j92z7\" (UniqueName: \"kubernetes.io/projected/0d383441-a8df-429d-9b04-65b01f06c46e-kube-api-access-j92z7\") on node \"crc\" DevicePath \"\"" Mar 11 09:18:47 crc kubenswrapper[4840]: I0311 09:18:47.680059 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-0f23-account-create-update-5wkdk" Mar 11 09:18:47 crc kubenswrapper[4840]: I0311 09:18:47.694182 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-28ac-account-create-update-9lzd8" Mar 11 09:18:47 crc kubenswrapper[4840]: I0311 09:18:47.748813 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 11 09:18:47 crc kubenswrapper[4840]: W0311 09:18:47.753385 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd3ccfc13_7a62_4923_95ab_c68cb93aa03c.slice/crio-cd8de2c1f764692164114f6363292122cbecbd4b42a9e5a5e6ee07c85eb32226 WatchSource:0}: Error finding container cd8de2c1f764692164114f6363292122cbecbd4b42a9e5a5e6ee07c85eb32226: Status 404 returned error can't find the container with id cd8de2c1f764692164114f6363292122cbecbd4b42a9e5a5e6ee07c85eb32226 Mar 11 09:18:47 crc kubenswrapper[4840]: I0311 09:18:47.782483 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rctz2\" (UniqueName: \"kubernetes.io/projected/5a3982dc-cf14-417d-8fac-b8e811843798-kube-api-access-rctz2\") 
pod \"5a3982dc-cf14-417d-8fac-b8e811843798\" (UID: \"5a3982dc-cf14-417d-8fac-b8e811843798\") " Mar 11 09:18:47 crc kubenswrapper[4840]: I0311 09:18:47.782741 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a3982dc-cf14-417d-8fac-b8e811843798-operator-scripts\") pod \"5a3982dc-cf14-417d-8fac-b8e811843798\" (UID: \"5a3982dc-cf14-417d-8fac-b8e811843798\") " Mar 11 09:18:47 crc kubenswrapper[4840]: I0311 09:18:47.782841 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8141eccc-ba41-4cfc-ba72-ffeae8858902-operator-scripts\") pod \"8141eccc-ba41-4cfc-ba72-ffeae8858902\" (UID: \"8141eccc-ba41-4cfc-ba72-ffeae8858902\") " Mar 11 09:18:47 crc kubenswrapper[4840]: I0311 09:18:47.783044 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qzrf9\" (UniqueName: \"kubernetes.io/projected/8141eccc-ba41-4cfc-ba72-ffeae8858902-kube-api-access-qzrf9\") pod \"8141eccc-ba41-4cfc-ba72-ffeae8858902\" (UID: \"8141eccc-ba41-4cfc-ba72-ffeae8858902\") " Mar 11 09:18:47 crc kubenswrapper[4840]: I0311 09:18:47.783366 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a3982dc-cf14-417d-8fac-b8e811843798-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5a3982dc-cf14-417d-8fac-b8e811843798" (UID: "5a3982dc-cf14-417d-8fac-b8e811843798"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:18:47 crc kubenswrapper[4840]: I0311 09:18:47.783784 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8141eccc-ba41-4cfc-ba72-ffeae8858902-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8141eccc-ba41-4cfc-ba72-ffeae8858902" (UID: "8141eccc-ba41-4cfc-ba72-ffeae8858902"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:18:47 crc kubenswrapper[4840]: I0311 09:18:47.783939 4840 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a3982dc-cf14-417d-8fac-b8e811843798-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 11 09:18:47 crc kubenswrapper[4840]: I0311 09:18:47.783958 4840 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8141eccc-ba41-4cfc-ba72-ffeae8858902-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 11 09:18:47 crc kubenswrapper[4840]: I0311 09:18:47.789151 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8141eccc-ba41-4cfc-ba72-ffeae8858902-kube-api-access-qzrf9" (OuterVolumeSpecName: "kube-api-access-qzrf9") pod "8141eccc-ba41-4cfc-ba72-ffeae8858902" (UID: "8141eccc-ba41-4cfc-ba72-ffeae8858902"). InnerVolumeSpecName "kube-api-access-qzrf9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:18:47 crc kubenswrapper[4840]: I0311 09:18:47.790805 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a3982dc-cf14-417d-8fac-b8e811843798-kube-api-access-rctz2" (OuterVolumeSpecName: "kube-api-access-rctz2") pod "5a3982dc-cf14-417d-8fac-b8e811843798" (UID: "5a3982dc-cf14-417d-8fac-b8e811843798"). InnerVolumeSpecName "kube-api-access-rctz2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:18:47 crc kubenswrapper[4840]: I0311 09:18:47.885728 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qzrf9\" (UniqueName: \"kubernetes.io/projected/8141eccc-ba41-4cfc-ba72-ffeae8858902-kube-api-access-qzrf9\") on node \"crc\" DevicePath \"\"" Mar 11 09:18:47 crc kubenswrapper[4840]: I0311 09:18:47.885783 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rctz2\" (UniqueName: \"kubernetes.io/projected/5a3982dc-cf14-417d-8fac-b8e811843798-kube-api-access-rctz2\") on node \"crc\" DevicePath \"\"" Mar 11 09:18:47 crc kubenswrapper[4840]: I0311 09:18:47.909458 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-298f-account-create-update-bgx4w" event={"ID":"0d383441-a8df-429d-9b04-65b01f06c46e","Type":"ContainerDied","Data":"12f2040ed1999492b1a68fe309979b9c60b5033c9c74fbb5893e294299ef33f4"} Mar 11 09:18:47 crc kubenswrapper[4840]: I0311 09:18:47.909538 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="12f2040ed1999492b1a68fe309979b9c60b5033c9c74fbb5893e294299ef33f4" Mar 11 09:18:47 crc kubenswrapper[4840]: I0311 09:18:47.909644 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-298f-account-create-update-bgx4w" Mar 11 09:18:47 crc kubenswrapper[4840]: I0311 09:18:47.922018 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-0f23-account-create-update-5wkdk" Mar 11 09:18:47 crc kubenswrapper[4840]: I0311 09:18:47.922185 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-0f23-account-create-update-5wkdk" event={"ID":"5a3982dc-cf14-417d-8fac-b8e811843798","Type":"ContainerDied","Data":"8d7aa28ffb955a6be4d3ec851ce4c35fbcec53360da57f1f2413a9e5e4021485"} Mar 11 09:18:47 crc kubenswrapper[4840]: I0311 09:18:47.922236 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d7aa28ffb955a6be4d3ec851ce4c35fbcec53360da57f1f2413a9e5e4021485" Mar 11 09:18:47 crc kubenswrapper[4840]: I0311 09:18:47.930767 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-28ac-account-create-update-9lzd8" Mar 11 09:18:47 crc kubenswrapper[4840]: I0311 09:18:47.931361 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-28ac-account-create-update-9lzd8" event={"ID":"8141eccc-ba41-4cfc-ba72-ffeae8858902","Type":"ContainerDied","Data":"2b46744645ccd61feab20917a196422caec335ad238f964c45e806a4f18fa096"} Mar 11 09:18:47 crc kubenswrapper[4840]: I0311 09:18:47.931404 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2b46744645ccd61feab20917a196422caec335ad238f964c45e806a4f18fa096" Mar 11 09:18:47 crc kubenswrapper[4840]: I0311 09:18:47.939048 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7b8eb1cd-c564-4bad-8ec8-c8bfd1c75c94","Type":"ContainerStarted","Data":"da79b032d3abd758671b7009924f9df3eb6e1d97617aaefd065782b1af6d325b"} Mar 11 09:18:47 crc kubenswrapper[4840]: I0311 09:18:47.942524 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"71aaf352-8b91-4846-8ce4-1d83303ac203","Type":"ContainerStarted","Data":"7d5a479df92438b43deb38719eb65c2cb14faa128400d6d01ccbd757dae47f94"} Mar 11 
09:18:47 crc kubenswrapper[4840]: I0311 09:18:47.946543 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d3ccfc13-7a62-4923-95ab-c68cb93aa03c","Type":"ContainerStarted","Data":"cd8de2c1f764692164114f6363292122cbecbd4b42a9e5a5e6ee07c85eb32226"} Mar 11 09:18:48 crc kubenswrapper[4840]: I0311 09:18:48.053105 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.053078473 podStartE2EDuration="5.053078473s" podCreationTimestamp="2026-03-11 09:18:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:18:48.049634446 +0000 UTC m=+1326.715304261" watchObservedRunningTime="2026-03-11 09:18:48.053078473 +0000 UTC m=+1326.718748288" Mar 11 09:18:48 crc kubenswrapper[4840]: I0311 09:18:48.093859 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3ad2408-3dda-4009-a898-5f2618fc18cf" path="/var/lib/kubelet/pods/b3ad2408-3dda-4009-a898-5f2618fc18cf/volumes" Mar 11 09:18:49 crc kubenswrapper[4840]: I0311 09:18:49.001854 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d3ccfc13-7a62-4923-95ab-c68cb93aa03c","Type":"ContainerStarted","Data":"fa51fa6846a6391fd4d41433bac4cdd3a55817184d7eb6184af7110475d61e48"} Mar 11 09:18:50 crc kubenswrapper[4840]: I0311 09:18:50.017385 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7b8eb1cd-c564-4bad-8ec8-c8bfd1c75c94","Type":"ContainerStarted","Data":"db50668ced3af3031bffa34e41dc4994f83543442782266a4f6816117c9d3d66"} Mar 11 09:18:50 crc kubenswrapper[4840]: I0311 09:18:50.017571 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7b8eb1cd-c564-4bad-8ec8-c8bfd1c75c94" containerName="ceilometer-central-agent" 
containerID="cri-o://16afea71c3761ce179df9319d13391e3e844074bd494bd4c6055290cf0280ae9" gracePeriod=30 Mar 11 09:18:50 crc kubenswrapper[4840]: I0311 09:18:50.017624 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7b8eb1cd-c564-4bad-8ec8-c8bfd1c75c94" containerName="proxy-httpd" containerID="cri-o://db50668ced3af3031bffa34e41dc4994f83543442782266a4f6816117c9d3d66" gracePeriod=30 Mar 11 09:18:50 crc kubenswrapper[4840]: I0311 09:18:50.018430 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 11 09:18:50 crc kubenswrapper[4840]: I0311 09:18:50.017662 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7b8eb1cd-c564-4bad-8ec8-c8bfd1c75c94" containerName="ceilometer-notification-agent" containerID="cri-o://250c41117005034e3a732fb002e282e1a49cced22d9cc43d26c370c816e7d6f1" gracePeriod=30 Mar 11 09:18:50 crc kubenswrapper[4840]: I0311 09:18:50.017647 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7b8eb1cd-c564-4bad-8ec8-c8bfd1c75c94" containerName="sg-core" containerID="cri-o://da79b032d3abd758671b7009924f9df3eb6e1d97617aaefd065782b1af6d325b" gracePeriod=30 Mar 11 09:18:50 crc kubenswrapper[4840]: I0311 09:18:50.021118 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d3ccfc13-7a62-4923-95ab-c68cb93aa03c","Type":"ContainerStarted","Data":"1443dcc20b53c34d6b5983e69576991044f6bc08da4320cceeb036e8ad539edf"} Mar 11 09:18:50 crc kubenswrapper[4840]: I0311 09:18:50.045532 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.927799974 podStartE2EDuration="8.045511691s" podCreationTimestamp="2026-03-11 09:18:42 +0000 UTC" firstStartedPulling="2026-03-11 09:18:43.957092561 +0000 UTC m=+1322.622762376" 
lastFinishedPulling="2026-03-11 09:18:49.074804278 +0000 UTC m=+1327.740474093" observedRunningTime="2026-03-11 09:18:50.036367811 +0000 UTC m=+1328.702037626" watchObservedRunningTime="2026-03-11 09:18:50.045511691 +0000 UTC m=+1328.711181506" Mar 11 09:18:50 crc kubenswrapper[4840]: I0311 09:18:50.066776 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.066750795 podStartE2EDuration="4.066750795s" podCreationTimestamp="2026-03-11 09:18:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:18:50.05976089 +0000 UTC m=+1328.725430705" watchObservedRunningTime="2026-03-11 09:18:50.066750795 +0000 UTC m=+1328.732420610" Mar 11 09:18:50 crc kubenswrapper[4840]: I0311 09:18:50.580338 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Mar 11 09:18:51 crc kubenswrapper[4840]: I0311 09:18:51.035579 4840 generic.go:334] "Generic (PLEG): container finished" podID="7b8eb1cd-c564-4bad-8ec8-c8bfd1c75c94" containerID="db50668ced3af3031bffa34e41dc4994f83543442782266a4f6816117c9d3d66" exitCode=0 Mar 11 09:18:51 crc kubenswrapper[4840]: I0311 09:18:51.035890 4840 generic.go:334] "Generic (PLEG): container finished" podID="7b8eb1cd-c564-4bad-8ec8-c8bfd1c75c94" containerID="da79b032d3abd758671b7009924f9df3eb6e1d97617aaefd065782b1af6d325b" exitCode=2 Mar 11 09:18:51 crc kubenswrapper[4840]: I0311 09:18:51.035905 4840 generic.go:334] "Generic (PLEG): container finished" podID="7b8eb1cd-c564-4bad-8ec8-c8bfd1c75c94" containerID="250c41117005034e3a732fb002e282e1a49cced22d9cc43d26c370c816e7d6f1" exitCode=0 Mar 11 09:18:51 crc kubenswrapper[4840]: I0311 09:18:51.035656 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"7b8eb1cd-c564-4bad-8ec8-c8bfd1c75c94","Type":"ContainerDied","Data":"db50668ced3af3031bffa34e41dc4994f83543442782266a4f6816117c9d3d66"} Mar 11 09:18:51 crc kubenswrapper[4840]: I0311 09:18:51.036807 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7b8eb1cd-c564-4bad-8ec8-c8bfd1c75c94","Type":"ContainerDied","Data":"da79b032d3abd758671b7009924f9df3eb6e1d97617aaefd065782b1af6d325b"} Mar 11 09:18:51 crc kubenswrapper[4840]: I0311 09:18:51.036849 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7b8eb1cd-c564-4bad-8ec8-c8bfd1c75c94","Type":"ContainerDied","Data":"250c41117005034e3a732fb002e282e1a49cced22d9cc43d26c370c816e7d6f1"} Mar 11 09:18:53 crc kubenswrapper[4840]: I0311 09:18:53.435798 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-mdkd2"] Mar 11 09:18:53 crc kubenswrapper[4840]: E0311 09:18:53.436922 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ba90e38-377f-42d7-91dd-cb92889e0cbf" containerName="mariadb-database-create" Mar 11 09:18:53 crc kubenswrapper[4840]: I0311 09:18:53.436940 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ba90e38-377f-42d7-91dd-cb92889e0cbf" containerName="mariadb-database-create" Mar 11 09:18:53 crc kubenswrapper[4840]: E0311 09:18:53.436961 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8627bd78-c61f-4140-88b3-09cd6091afb4" containerName="mariadb-database-create" Mar 11 09:18:53 crc kubenswrapper[4840]: I0311 09:18:53.436968 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="8627bd78-c61f-4140-88b3-09cd6091afb4" containerName="mariadb-database-create" Mar 11 09:18:53 crc kubenswrapper[4840]: E0311 09:18:53.436986 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8141eccc-ba41-4cfc-ba72-ffeae8858902" containerName="mariadb-account-create-update" Mar 11 09:18:53 crc kubenswrapper[4840]: I0311 
09:18:53.436993 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="8141eccc-ba41-4cfc-ba72-ffeae8858902" containerName="mariadb-account-create-update" Mar 11 09:18:53 crc kubenswrapper[4840]: E0311 09:18:53.437010 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2ca037a-561b-4d4b-b85e-5c0fcc9b1cb4" containerName="mariadb-database-create" Mar 11 09:18:53 crc kubenswrapper[4840]: I0311 09:18:53.437016 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2ca037a-561b-4d4b-b85e-5c0fcc9b1cb4" containerName="mariadb-database-create" Mar 11 09:18:53 crc kubenswrapper[4840]: E0311 09:18:53.437033 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a3982dc-cf14-417d-8fac-b8e811843798" containerName="mariadb-account-create-update" Mar 11 09:18:53 crc kubenswrapper[4840]: I0311 09:18:53.437039 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a3982dc-cf14-417d-8fac-b8e811843798" containerName="mariadb-account-create-update" Mar 11 09:18:53 crc kubenswrapper[4840]: E0311 09:18:53.437055 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d383441-a8df-429d-9b04-65b01f06c46e" containerName="mariadb-account-create-update" Mar 11 09:18:53 crc kubenswrapper[4840]: I0311 09:18:53.437061 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d383441-a8df-429d-9b04-65b01f06c46e" containerName="mariadb-account-create-update" Mar 11 09:18:53 crc kubenswrapper[4840]: I0311 09:18:53.437264 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2ca037a-561b-4d4b-b85e-5c0fcc9b1cb4" containerName="mariadb-database-create" Mar 11 09:18:53 crc kubenswrapper[4840]: I0311 09:18:53.437289 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ba90e38-377f-42d7-91dd-cb92889e0cbf" containerName="mariadb-database-create" Mar 11 09:18:53 crc kubenswrapper[4840]: I0311 09:18:53.437297 4840 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="8141eccc-ba41-4cfc-ba72-ffeae8858902" containerName="mariadb-account-create-update" Mar 11 09:18:53 crc kubenswrapper[4840]: I0311 09:18:53.437308 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="8627bd78-c61f-4140-88b3-09cd6091afb4" containerName="mariadb-database-create" Mar 11 09:18:53 crc kubenswrapper[4840]: I0311 09:18:53.437319 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d383441-a8df-429d-9b04-65b01f06c46e" containerName="mariadb-account-create-update" Mar 11 09:18:53 crc kubenswrapper[4840]: I0311 09:18:53.437331 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a3982dc-cf14-417d-8fac-b8e811843798" containerName="mariadb-account-create-update" Mar 11 09:18:53 crc kubenswrapper[4840]: I0311 09:18:53.438139 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-mdkd2" Mar 11 09:18:53 crc kubenswrapper[4840]: I0311 09:18:53.440806 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-5zplr" Mar 11 09:18:53 crc kubenswrapper[4840]: I0311 09:18:53.442015 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Mar 11 09:18:53 crc kubenswrapper[4840]: I0311 09:18:53.442644 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Mar 11 09:18:53 crc kubenswrapper[4840]: I0311 09:18:53.447039 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-mdkd2"] Mar 11 09:18:53 crc kubenswrapper[4840]: I0311 09:18:53.617113 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fg2zd\" (UniqueName: \"kubernetes.io/projected/1e078769-708a-4d94-8d56-cc2e673edbd2-kube-api-access-fg2zd\") pod \"nova-cell0-conductor-db-sync-mdkd2\" (UID: \"1e078769-708a-4d94-8d56-cc2e673edbd2\") " 
pod="openstack/nova-cell0-conductor-db-sync-mdkd2" Mar 11 09:18:53 crc kubenswrapper[4840]: I0311 09:18:53.617197 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e078769-708a-4d94-8d56-cc2e673edbd2-scripts\") pod \"nova-cell0-conductor-db-sync-mdkd2\" (UID: \"1e078769-708a-4d94-8d56-cc2e673edbd2\") " pod="openstack/nova-cell0-conductor-db-sync-mdkd2" Mar 11 09:18:53 crc kubenswrapper[4840]: I0311 09:18:53.617289 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e078769-708a-4d94-8d56-cc2e673edbd2-config-data\") pod \"nova-cell0-conductor-db-sync-mdkd2\" (UID: \"1e078769-708a-4d94-8d56-cc2e673edbd2\") " pod="openstack/nova-cell0-conductor-db-sync-mdkd2" Mar 11 09:18:53 crc kubenswrapper[4840]: I0311 09:18:53.617378 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e078769-708a-4d94-8d56-cc2e673edbd2-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-mdkd2\" (UID: \"1e078769-708a-4d94-8d56-cc2e673edbd2\") " pod="openstack/nova-cell0-conductor-db-sync-mdkd2" Mar 11 09:18:53 crc kubenswrapper[4840]: I0311 09:18:53.720116 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e078769-708a-4d94-8d56-cc2e673edbd2-config-data\") pod \"nova-cell0-conductor-db-sync-mdkd2\" (UID: \"1e078769-708a-4d94-8d56-cc2e673edbd2\") " pod="openstack/nova-cell0-conductor-db-sync-mdkd2" Mar 11 09:18:53 crc kubenswrapper[4840]: I0311 09:18:53.720230 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e078769-708a-4d94-8d56-cc2e673edbd2-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-mdkd2\" (UID: 
\"1e078769-708a-4d94-8d56-cc2e673edbd2\") " pod="openstack/nova-cell0-conductor-db-sync-mdkd2" Mar 11 09:18:53 crc kubenswrapper[4840]: I0311 09:18:53.720304 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fg2zd\" (UniqueName: \"kubernetes.io/projected/1e078769-708a-4d94-8d56-cc2e673edbd2-kube-api-access-fg2zd\") pod \"nova-cell0-conductor-db-sync-mdkd2\" (UID: \"1e078769-708a-4d94-8d56-cc2e673edbd2\") " pod="openstack/nova-cell0-conductor-db-sync-mdkd2" Mar 11 09:18:53 crc kubenswrapper[4840]: I0311 09:18:53.720334 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e078769-708a-4d94-8d56-cc2e673edbd2-scripts\") pod \"nova-cell0-conductor-db-sync-mdkd2\" (UID: \"1e078769-708a-4d94-8d56-cc2e673edbd2\") " pod="openstack/nova-cell0-conductor-db-sync-mdkd2" Mar 11 09:18:53 crc kubenswrapper[4840]: I0311 09:18:53.729374 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e078769-708a-4d94-8d56-cc2e673edbd2-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-mdkd2\" (UID: \"1e078769-708a-4d94-8d56-cc2e673edbd2\") " pod="openstack/nova-cell0-conductor-db-sync-mdkd2" Mar 11 09:18:53 crc kubenswrapper[4840]: I0311 09:18:53.730235 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e078769-708a-4d94-8d56-cc2e673edbd2-config-data\") pod \"nova-cell0-conductor-db-sync-mdkd2\" (UID: \"1e078769-708a-4d94-8d56-cc2e673edbd2\") " pod="openstack/nova-cell0-conductor-db-sync-mdkd2" Mar 11 09:18:53 crc kubenswrapper[4840]: I0311 09:18:53.731011 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e078769-708a-4d94-8d56-cc2e673edbd2-scripts\") pod \"nova-cell0-conductor-db-sync-mdkd2\" (UID: \"1e078769-708a-4d94-8d56-cc2e673edbd2\") 
" pod="openstack/nova-cell0-conductor-db-sync-mdkd2" Mar 11 09:18:53 crc kubenswrapper[4840]: I0311 09:18:53.737975 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fg2zd\" (UniqueName: \"kubernetes.io/projected/1e078769-708a-4d94-8d56-cc2e673edbd2-kube-api-access-fg2zd\") pod \"nova-cell0-conductor-db-sync-mdkd2\" (UID: \"1e078769-708a-4d94-8d56-cc2e673edbd2\") " pod="openstack/nova-cell0-conductor-db-sync-mdkd2" Mar 11 09:18:53 crc kubenswrapper[4840]: I0311 09:18:53.767128 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-mdkd2" Mar 11 09:18:54 crc kubenswrapper[4840]: I0311 09:18:54.055055 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-mdkd2"] Mar 11 09:18:54 crc kubenswrapper[4840]: I0311 09:18:54.097768 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-mdkd2" event={"ID":"1e078769-708a-4d94-8d56-cc2e673edbd2","Type":"ContainerStarted","Data":"9b88182f13bb323019bdfc98281c2655d837edb7d2b414947339daba2e004c72"} Mar 11 09:18:54 crc kubenswrapper[4840]: I0311 09:18:54.157730 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Mar 11 09:18:54 crc kubenswrapper[4840]: I0311 09:18:54.157818 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Mar 11 09:18:54 crc kubenswrapper[4840]: I0311 09:18:54.189459 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Mar 11 09:18:54 crc kubenswrapper[4840]: I0311 09:18:54.207600 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Mar 11 09:18:55 crc kubenswrapper[4840]: I0311 09:18:55.091408 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack/glance-default-external-api-0" Mar 11 09:18:55 crc kubenswrapper[4840]: I0311 09:18:55.091981 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Mar 11 09:18:56 crc kubenswrapper[4840]: I0311 09:18:56.896026 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Mar 11 09:18:56 crc kubenswrapper[4840]: I0311 09:18:56.896524 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Mar 11 09:18:56 crc kubenswrapper[4840]: I0311 09:18:56.945208 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Mar 11 09:18:56 crc kubenswrapper[4840]: I0311 09:18:56.964610 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Mar 11 09:18:57 crc kubenswrapper[4840]: I0311 09:18:57.113685 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Mar 11 09:18:57 crc kubenswrapper[4840]: I0311 09:18:57.113722 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Mar 11 09:18:57 crc kubenswrapper[4840]: I0311 09:18:57.325771 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Mar 11 09:18:57 crc kubenswrapper[4840]: I0311 09:18:57.325891 4840 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 11 09:18:57 crc kubenswrapper[4840]: I0311 09:18:57.355360 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Mar 11 09:18:57 crc kubenswrapper[4840]: I0311 09:18:57.446029 4840 patch_prober.go:28] interesting pod/machine-config-daemon-brtht container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 11 09:18:57 crc kubenswrapper[4840]: I0311 09:18:57.447786 4840 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 11 09:18:58 crc kubenswrapper[4840]: I0311 09:18:58.130272 4840 generic.go:334] "Generic (PLEG): container finished" podID="7b8eb1cd-c564-4bad-8ec8-c8bfd1c75c94" containerID="16afea71c3761ce179df9319d13391e3e844074bd494bd4c6055290cf0280ae9" exitCode=0 Mar 11 09:18:58 crc kubenswrapper[4840]: I0311 09:18:58.131547 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7b8eb1cd-c564-4bad-8ec8-c8bfd1c75c94","Type":"ContainerDied","Data":"16afea71c3761ce179df9319d13391e3e844074bd494bd4c6055290cf0280ae9"} Mar 11 09:18:59 crc kubenswrapper[4840]: I0311 09:18:59.354590 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Mar 11 09:18:59 crc kubenswrapper[4840]: I0311 09:18:59.355121 4840 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 11 09:18:59 crc kubenswrapper[4840]: I0311 09:18:59.362074 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Mar 11 09:19:02 crc kubenswrapper[4840]: I0311 09:19:02.809667 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 11 09:19:02 crc kubenswrapper[4840]: I0311 09:19:02.941516 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7b8eb1cd-c564-4bad-8ec8-c8bfd1c75c94-log-httpd\") pod \"7b8eb1cd-c564-4bad-8ec8-c8bfd1c75c94\" (UID: \"7b8eb1cd-c564-4bad-8ec8-c8bfd1c75c94\") " Mar 11 09:19:02 crc kubenswrapper[4840]: I0311 09:19:02.941620 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7b8eb1cd-c564-4bad-8ec8-c8bfd1c75c94-sg-core-conf-yaml\") pod \"7b8eb1cd-c564-4bad-8ec8-c8bfd1c75c94\" (UID: \"7b8eb1cd-c564-4bad-8ec8-c8bfd1c75c94\") " Mar 11 09:19:02 crc kubenswrapper[4840]: I0311 09:19:02.941718 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kqfxs\" (UniqueName: \"kubernetes.io/projected/7b8eb1cd-c564-4bad-8ec8-c8bfd1c75c94-kube-api-access-kqfxs\") pod \"7b8eb1cd-c564-4bad-8ec8-c8bfd1c75c94\" (UID: \"7b8eb1cd-c564-4bad-8ec8-c8bfd1c75c94\") " Mar 11 09:19:02 crc kubenswrapper[4840]: I0311 09:19:02.941769 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b8eb1cd-c564-4bad-8ec8-c8bfd1c75c94-combined-ca-bundle\") pod \"7b8eb1cd-c564-4bad-8ec8-c8bfd1c75c94\" (UID: \"7b8eb1cd-c564-4bad-8ec8-c8bfd1c75c94\") " Mar 11 09:19:02 crc kubenswrapper[4840]: I0311 09:19:02.941795 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b8eb1cd-c564-4bad-8ec8-c8bfd1c75c94-scripts\") pod \"7b8eb1cd-c564-4bad-8ec8-c8bfd1c75c94\" (UID: \"7b8eb1cd-c564-4bad-8ec8-c8bfd1c75c94\") " Mar 11 09:19:02 crc kubenswrapper[4840]: I0311 09:19:02.941833 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/7b8eb1cd-c564-4bad-8ec8-c8bfd1c75c94-config-data\") pod \"7b8eb1cd-c564-4bad-8ec8-c8bfd1c75c94\" (UID: \"7b8eb1cd-c564-4bad-8ec8-c8bfd1c75c94\") " Mar 11 09:19:02 crc kubenswrapper[4840]: I0311 09:19:02.941874 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7b8eb1cd-c564-4bad-8ec8-c8bfd1c75c94-run-httpd\") pod \"7b8eb1cd-c564-4bad-8ec8-c8bfd1c75c94\" (UID: \"7b8eb1cd-c564-4bad-8ec8-c8bfd1c75c94\") " Mar 11 09:19:02 crc kubenswrapper[4840]: I0311 09:19:02.942203 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b8eb1cd-c564-4bad-8ec8-c8bfd1c75c94-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "7b8eb1cd-c564-4bad-8ec8-c8bfd1c75c94" (UID: "7b8eb1cd-c564-4bad-8ec8-c8bfd1c75c94"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:19:02 crc kubenswrapper[4840]: I0311 09:19:02.942921 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b8eb1cd-c564-4bad-8ec8-c8bfd1c75c94-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "7b8eb1cd-c564-4bad-8ec8-c8bfd1c75c94" (UID: "7b8eb1cd-c564-4bad-8ec8-c8bfd1c75c94"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:19:02 crc kubenswrapper[4840]: I0311 09:19:02.944065 4840 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7b8eb1cd-c564-4bad-8ec8-c8bfd1c75c94-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 11 09:19:02 crc kubenswrapper[4840]: I0311 09:19:02.944087 4840 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7b8eb1cd-c564-4bad-8ec8-c8bfd1c75c94-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 11 09:19:02 crc kubenswrapper[4840]: I0311 09:19:02.948359 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b8eb1cd-c564-4bad-8ec8-c8bfd1c75c94-kube-api-access-kqfxs" (OuterVolumeSpecName: "kube-api-access-kqfxs") pod "7b8eb1cd-c564-4bad-8ec8-c8bfd1c75c94" (UID: "7b8eb1cd-c564-4bad-8ec8-c8bfd1c75c94"). InnerVolumeSpecName "kube-api-access-kqfxs". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:19:02 crc kubenswrapper[4840]: I0311 09:19:02.948778 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b8eb1cd-c564-4bad-8ec8-c8bfd1c75c94-scripts" (OuterVolumeSpecName: "scripts") pod "7b8eb1cd-c564-4bad-8ec8-c8bfd1c75c94" (UID: "7b8eb1cd-c564-4bad-8ec8-c8bfd1c75c94"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:19:02 crc kubenswrapper[4840]: I0311 09:19:02.975871 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b8eb1cd-c564-4bad-8ec8-c8bfd1c75c94-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "7b8eb1cd-c564-4bad-8ec8-c8bfd1c75c94" (UID: "7b8eb1cd-c564-4bad-8ec8-c8bfd1c75c94"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:19:03 crc kubenswrapper[4840]: I0311 09:19:03.036207 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b8eb1cd-c564-4bad-8ec8-c8bfd1c75c94-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7b8eb1cd-c564-4bad-8ec8-c8bfd1c75c94" (UID: "7b8eb1cd-c564-4bad-8ec8-c8bfd1c75c94"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:19:03 crc kubenswrapper[4840]: I0311 09:19:03.045825 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kqfxs\" (UniqueName: \"kubernetes.io/projected/7b8eb1cd-c564-4bad-8ec8-c8bfd1c75c94-kube-api-access-kqfxs\") on node \"crc\" DevicePath \"\"" Mar 11 09:19:03 crc kubenswrapper[4840]: I0311 09:19:03.045862 4840 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b8eb1cd-c564-4bad-8ec8-c8bfd1c75c94-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 09:19:03 crc kubenswrapper[4840]: I0311 09:19:03.045877 4840 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7b8eb1cd-c564-4bad-8ec8-c8bfd1c75c94-scripts\") on node \"crc\" DevicePath \"\"" Mar 11 09:19:03 crc kubenswrapper[4840]: I0311 09:19:03.045890 4840 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7b8eb1cd-c564-4bad-8ec8-c8bfd1c75c94-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 11 09:19:03 crc kubenswrapper[4840]: I0311 09:19:03.049519 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b8eb1cd-c564-4bad-8ec8-c8bfd1c75c94-config-data" (OuterVolumeSpecName: "config-data") pod "7b8eb1cd-c564-4bad-8ec8-c8bfd1c75c94" (UID: "7b8eb1cd-c564-4bad-8ec8-c8bfd1c75c94"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:19:03 crc kubenswrapper[4840]: I0311 09:19:03.148307 4840 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b8eb1cd-c564-4bad-8ec8-c8bfd1c75c94-config-data\") on node \"crc\" DevicePath \"\"" Mar 11 09:19:03 crc kubenswrapper[4840]: I0311 09:19:03.222520 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7b8eb1cd-c564-4bad-8ec8-c8bfd1c75c94","Type":"ContainerDied","Data":"179e4b819eb27feb1e697bc5ac01671aa723700150ae7ed9a00374937a75707b"} Mar 11 09:19:03 crc kubenswrapper[4840]: I0311 09:19:03.222575 4840 scope.go:117] "RemoveContainer" containerID="db50668ced3af3031bffa34e41dc4994f83543442782266a4f6816117c9d3d66" Mar 11 09:19:03 crc kubenswrapper[4840]: I0311 09:19:03.222647 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 11 09:19:03 crc kubenswrapper[4840]: I0311 09:19:03.224034 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-mdkd2" event={"ID":"1e078769-708a-4d94-8d56-cc2e673edbd2","Type":"ContainerStarted","Data":"672eca719856eb310b27bc3eb87b0f51f097ef830a3ce432a15efdb1c9cb003b"} Mar 11 09:19:03 crc kubenswrapper[4840]: I0311 09:19:03.253433 4840 scope.go:117] "RemoveContainer" containerID="da79b032d3abd758671b7009924f9df3eb6e1d97617aaefd065782b1af6d325b" Mar 11 09:19:03 crc kubenswrapper[4840]: I0311 09:19:03.278964 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-mdkd2" podStartSLOduration=1.8741924129999998 podStartE2EDuration="10.278943686s" podCreationTimestamp="2026-03-11 09:18:53 +0000 UTC" firstStartedPulling="2026-03-11 09:18:54.062405435 +0000 UTC m=+1332.728075250" lastFinishedPulling="2026-03-11 09:19:02.467156708 +0000 UTC m=+1341.132826523" observedRunningTime="2026-03-11 09:19:03.2473102 +0000 UTC 
m=+1341.912980015" watchObservedRunningTime="2026-03-11 09:19:03.278943686 +0000 UTC m=+1341.944613501" Mar 11 09:19:03 crc kubenswrapper[4840]: I0311 09:19:03.283158 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 11 09:19:03 crc kubenswrapper[4840]: I0311 09:19:03.290025 4840 scope.go:117] "RemoveContainer" containerID="250c41117005034e3a732fb002e282e1a49cced22d9cc43d26c370c816e7d6f1" Mar 11 09:19:03 crc kubenswrapper[4840]: I0311 09:19:03.309814 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 11 09:19:03 crc kubenswrapper[4840]: I0311 09:19:03.323750 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 11 09:19:03 crc kubenswrapper[4840]: E0311 09:19:03.324280 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b8eb1cd-c564-4bad-8ec8-c8bfd1c75c94" containerName="sg-core" Mar 11 09:19:03 crc kubenswrapper[4840]: I0311 09:19:03.324299 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b8eb1cd-c564-4bad-8ec8-c8bfd1c75c94" containerName="sg-core" Mar 11 09:19:03 crc kubenswrapper[4840]: E0311 09:19:03.324312 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b8eb1cd-c564-4bad-8ec8-c8bfd1c75c94" containerName="ceilometer-notification-agent" Mar 11 09:19:03 crc kubenswrapper[4840]: I0311 09:19:03.324321 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b8eb1cd-c564-4bad-8ec8-c8bfd1c75c94" containerName="ceilometer-notification-agent" Mar 11 09:19:03 crc kubenswrapper[4840]: E0311 09:19:03.324336 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b8eb1cd-c564-4bad-8ec8-c8bfd1c75c94" containerName="proxy-httpd" Mar 11 09:19:03 crc kubenswrapper[4840]: I0311 09:19:03.324343 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b8eb1cd-c564-4bad-8ec8-c8bfd1c75c94" containerName="proxy-httpd" Mar 11 09:19:03 crc kubenswrapper[4840]: E0311 09:19:03.324360 4840 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b8eb1cd-c564-4bad-8ec8-c8bfd1c75c94" containerName="ceilometer-central-agent" Mar 11 09:19:03 crc kubenswrapper[4840]: I0311 09:19:03.324366 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b8eb1cd-c564-4bad-8ec8-c8bfd1c75c94" containerName="ceilometer-central-agent" Mar 11 09:19:03 crc kubenswrapper[4840]: I0311 09:19:03.324617 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b8eb1cd-c564-4bad-8ec8-c8bfd1c75c94" containerName="ceilometer-central-agent" Mar 11 09:19:03 crc kubenswrapper[4840]: I0311 09:19:03.324632 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b8eb1cd-c564-4bad-8ec8-c8bfd1c75c94" containerName="ceilometer-notification-agent" Mar 11 09:19:03 crc kubenswrapper[4840]: I0311 09:19:03.324645 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b8eb1cd-c564-4bad-8ec8-c8bfd1c75c94" containerName="proxy-httpd" Mar 11 09:19:03 crc kubenswrapper[4840]: I0311 09:19:03.324669 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b8eb1cd-c564-4bad-8ec8-c8bfd1c75c94" containerName="sg-core" Mar 11 09:19:03 crc kubenswrapper[4840]: I0311 09:19:03.326530 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 11 09:19:03 crc kubenswrapper[4840]: I0311 09:19:03.331020 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 11 09:19:03 crc kubenswrapper[4840]: I0311 09:19:03.331528 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 11 09:19:03 crc kubenswrapper[4840]: I0311 09:19:03.339208 4840 scope.go:117] "RemoveContainer" containerID="16afea71c3761ce179df9319d13391e3e844074bd494bd4c6055290cf0280ae9" Mar 11 09:19:03 crc kubenswrapper[4840]: I0311 09:19:03.342389 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 11 09:19:03 crc kubenswrapper[4840]: I0311 09:19:03.454624 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zz9v7\" (UniqueName: \"kubernetes.io/projected/2c0414f2-89d1-45c6-9433-1884e7731f0f-kube-api-access-zz9v7\") pod \"ceilometer-0\" (UID: \"2c0414f2-89d1-45c6-9433-1884e7731f0f\") " pod="openstack/ceilometer-0" Mar 11 09:19:03 crc kubenswrapper[4840]: I0311 09:19:03.454784 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c0414f2-89d1-45c6-9433-1884e7731f0f-scripts\") pod \"ceilometer-0\" (UID: \"2c0414f2-89d1-45c6-9433-1884e7731f0f\") " pod="openstack/ceilometer-0" Mar 11 09:19:03 crc kubenswrapper[4840]: I0311 09:19:03.454882 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2c0414f2-89d1-45c6-9433-1884e7731f0f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2c0414f2-89d1-45c6-9433-1884e7731f0f\") " pod="openstack/ceilometer-0" Mar 11 09:19:03 crc kubenswrapper[4840]: I0311 09:19:03.454938 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2c0414f2-89d1-45c6-9433-1884e7731f0f-run-httpd\") pod \"ceilometer-0\" (UID: \"2c0414f2-89d1-45c6-9433-1884e7731f0f\") " pod="openstack/ceilometer-0" Mar 11 09:19:03 crc kubenswrapper[4840]: I0311 09:19:03.455010 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2c0414f2-89d1-45c6-9433-1884e7731f0f-log-httpd\") pod \"ceilometer-0\" (UID: \"2c0414f2-89d1-45c6-9433-1884e7731f0f\") " pod="openstack/ceilometer-0" Mar 11 09:19:03 crc kubenswrapper[4840]: I0311 09:19:03.455063 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c0414f2-89d1-45c6-9433-1884e7731f0f-config-data\") pod \"ceilometer-0\" (UID: \"2c0414f2-89d1-45c6-9433-1884e7731f0f\") " pod="openstack/ceilometer-0" Mar 11 09:19:03 crc kubenswrapper[4840]: I0311 09:19:03.455132 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c0414f2-89d1-45c6-9433-1884e7731f0f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2c0414f2-89d1-45c6-9433-1884e7731f0f\") " pod="openstack/ceilometer-0" Mar 11 09:19:03 crc kubenswrapper[4840]: I0311 09:19:03.557147 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c0414f2-89d1-45c6-9433-1884e7731f0f-scripts\") pod \"ceilometer-0\" (UID: \"2c0414f2-89d1-45c6-9433-1884e7731f0f\") " pod="openstack/ceilometer-0" Mar 11 09:19:03 crc kubenswrapper[4840]: I0311 09:19:03.557223 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2c0414f2-89d1-45c6-9433-1884e7731f0f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2c0414f2-89d1-45c6-9433-1884e7731f0f\") " 
pod="openstack/ceilometer-0" Mar 11 09:19:03 crc kubenswrapper[4840]: I0311 09:19:03.557255 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2c0414f2-89d1-45c6-9433-1884e7731f0f-run-httpd\") pod \"ceilometer-0\" (UID: \"2c0414f2-89d1-45c6-9433-1884e7731f0f\") " pod="openstack/ceilometer-0" Mar 11 09:19:03 crc kubenswrapper[4840]: I0311 09:19:03.557312 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2c0414f2-89d1-45c6-9433-1884e7731f0f-log-httpd\") pod \"ceilometer-0\" (UID: \"2c0414f2-89d1-45c6-9433-1884e7731f0f\") " pod="openstack/ceilometer-0" Mar 11 09:19:03 crc kubenswrapper[4840]: I0311 09:19:03.557337 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c0414f2-89d1-45c6-9433-1884e7731f0f-config-data\") pod \"ceilometer-0\" (UID: \"2c0414f2-89d1-45c6-9433-1884e7731f0f\") " pod="openstack/ceilometer-0" Mar 11 09:19:03 crc kubenswrapper[4840]: I0311 09:19:03.557358 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c0414f2-89d1-45c6-9433-1884e7731f0f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2c0414f2-89d1-45c6-9433-1884e7731f0f\") " pod="openstack/ceilometer-0" Mar 11 09:19:03 crc kubenswrapper[4840]: I0311 09:19:03.557390 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zz9v7\" (UniqueName: \"kubernetes.io/projected/2c0414f2-89d1-45c6-9433-1884e7731f0f-kube-api-access-zz9v7\") pod \"ceilometer-0\" (UID: \"2c0414f2-89d1-45c6-9433-1884e7731f0f\") " pod="openstack/ceilometer-0" Mar 11 09:19:03 crc kubenswrapper[4840]: I0311 09:19:03.558874 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/2c0414f2-89d1-45c6-9433-1884e7731f0f-log-httpd\") pod \"ceilometer-0\" (UID: \"2c0414f2-89d1-45c6-9433-1884e7731f0f\") " pod="openstack/ceilometer-0" Mar 11 09:19:03 crc kubenswrapper[4840]: I0311 09:19:03.559069 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2c0414f2-89d1-45c6-9433-1884e7731f0f-run-httpd\") pod \"ceilometer-0\" (UID: \"2c0414f2-89d1-45c6-9433-1884e7731f0f\") " pod="openstack/ceilometer-0" Mar 11 09:19:03 crc kubenswrapper[4840]: I0311 09:19:03.562008 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2c0414f2-89d1-45c6-9433-1884e7731f0f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2c0414f2-89d1-45c6-9433-1884e7731f0f\") " pod="openstack/ceilometer-0" Mar 11 09:19:03 crc kubenswrapper[4840]: I0311 09:19:03.562082 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c0414f2-89d1-45c6-9433-1884e7731f0f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2c0414f2-89d1-45c6-9433-1884e7731f0f\") " pod="openstack/ceilometer-0" Mar 11 09:19:03 crc kubenswrapper[4840]: I0311 09:19:03.569157 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c0414f2-89d1-45c6-9433-1884e7731f0f-scripts\") pod \"ceilometer-0\" (UID: \"2c0414f2-89d1-45c6-9433-1884e7731f0f\") " pod="openstack/ceilometer-0" Mar 11 09:19:03 crc kubenswrapper[4840]: I0311 09:19:03.569377 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c0414f2-89d1-45c6-9433-1884e7731f0f-config-data\") pod \"ceilometer-0\" (UID: \"2c0414f2-89d1-45c6-9433-1884e7731f0f\") " pod="openstack/ceilometer-0" Mar 11 09:19:03 crc kubenswrapper[4840]: I0311 09:19:03.576700 4840 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-zz9v7\" (UniqueName: \"kubernetes.io/projected/2c0414f2-89d1-45c6-9433-1884e7731f0f-kube-api-access-zz9v7\") pod \"ceilometer-0\" (UID: \"2c0414f2-89d1-45c6-9433-1884e7731f0f\") " pod="openstack/ceilometer-0" Mar 11 09:19:03 crc kubenswrapper[4840]: I0311 09:19:03.648182 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 11 09:19:04 crc kubenswrapper[4840]: I0311 09:19:04.072139 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b8eb1cd-c564-4bad-8ec8-c8bfd1c75c94" path="/var/lib/kubelet/pods/7b8eb1cd-c564-4bad-8ec8-c8bfd1c75c94/volumes" Mar 11 09:19:04 crc kubenswrapper[4840]: I0311 09:19:04.154874 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 11 09:19:04 crc kubenswrapper[4840]: W0311 09:19:04.157343 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2c0414f2_89d1_45c6_9433_1884e7731f0f.slice/crio-3f5ec335277290bde012c135a94c3fa75dbfd916548046419c62ce6311e61783 WatchSource:0}: Error finding container 3f5ec335277290bde012c135a94c3fa75dbfd916548046419c62ce6311e61783: Status 404 returned error can't find the container with id 3f5ec335277290bde012c135a94c3fa75dbfd916548046419c62ce6311e61783 Mar 11 09:19:04 crc kubenswrapper[4840]: I0311 09:19:04.236055 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2c0414f2-89d1-45c6-9433-1884e7731f0f","Type":"ContainerStarted","Data":"3f5ec335277290bde012c135a94c3fa75dbfd916548046419c62ce6311e61783"} Mar 11 09:19:05 crc kubenswrapper[4840]: I0311 09:19:05.736495 4840 scope.go:117] "RemoveContainer" containerID="e9e548f7515506ca605317929e764ac11891e7792857753f0c1300d4de020361" Mar 11 09:19:06 crc kubenswrapper[4840]: I0311 09:19:06.257314 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"2c0414f2-89d1-45c6-9433-1884e7731f0f","Type":"ContainerStarted","Data":"2711368a873dfbb446e6a30c19cc0c3a51eaaaf1944dfe97f957e417bc24b2ff"} Mar 11 09:19:07 crc kubenswrapper[4840]: I0311 09:19:07.268407 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2c0414f2-89d1-45c6-9433-1884e7731f0f","Type":"ContainerStarted","Data":"117f830182e1425cb49612e4d5b3f9366a6b165b6a9890b32850069570416962"} Mar 11 09:19:10 crc kubenswrapper[4840]: I0311 09:19:10.306333 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2c0414f2-89d1-45c6-9433-1884e7731f0f","Type":"ContainerStarted","Data":"f6fd46eb789bcc5e512b6dbcda0c655c0fa38e65eba6ea1a8a3aba8c5a679a48"} Mar 11 09:19:11 crc kubenswrapper[4840]: I0311 09:19:11.322825 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2c0414f2-89d1-45c6-9433-1884e7731f0f","Type":"ContainerStarted","Data":"078e9e0ef447488a172f16cddf6afe2c56699b647d61e66400c4a8c71dfcf6f9"} Mar 11 09:19:11 crc kubenswrapper[4840]: I0311 09:19:11.323268 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 11 09:19:11 crc kubenswrapper[4840]: I0311 09:19:11.346792 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.5174506220000001 podStartE2EDuration="8.346775009s" podCreationTimestamp="2026-03-11 09:19:03 +0000 UTC" firstStartedPulling="2026-03-11 09:19:04.160836356 +0000 UTC m=+1342.826506171" lastFinishedPulling="2026-03-11 09:19:10.990160743 +0000 UTC m=+1349.655830558" observedRunningTime="2026-03-11 09:19:11.342872901 +0000 UTC m=+1350.008542716" watchObservedRunningTime="2026-03-11 09:19:11.346775009 +0000 UTC m=+1350.012444814" Mar 11 09:19:14 crc kubenswrapper[4840]: I0311 09:19:14.352767 4840 generic.go:334] "Generic (PLEG): container finished" podID="1e078769-708a-4d94-8d56-cc2e673edbd2" 
containerID="672eca719856eb310b27bc3eb87b0f51f097ef830a3ce432a15efdb1c9cb003b" exitCode=0 Mar 11 09:19:14 crc kubenswrapper[4840]: I0311 09:19:14.352888 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-mdkd2" event={"ID":"1e078769-708a-4d94-8d56-cc2e673edbd2","Type":"ContainerDied","Data":"672eca719856eb310b27bc3eb87b0f51f097ef830a3ce432a15efdb1c9cb003b"} Mar 11 09:19:15 crc kubenswrapper[4840]: I0311 09:19:15.732420 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-mdkd2" Mar 11 09:19:15 crc kubenswrapper[4840]: I0311 09:19:15.831487 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e078769-708a-4d94-8d56-cc2e673edbd2-combined-ca-bundle\") pod \"1e078769-708a-4d94-8d56-cc2e673edbd2\" (UID: \"1e078769-708a-4d94-8d56-cc2e673edbd2\") " Mar 11 09:19:15 crc kubenswrapper[4840]: I0311 09:19:15.831564 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e078769-708a-4d94-8d56-cc2e673edbd2-scripts\") pod \"1e078769-708a-4d94-8d56-cc2e673edbd2\" (UID: \"1e078769-708a-4d94-8d56-cc2e673edbd2\") " Mar 11 09:19:15 crc kubenswrapper[4840]: I0311 09:19:15.831712 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fg2zd\" (UniqueName: \"kubernetes.io/projected/1e078769-708a-4d94-8d56-cc2e673edbd2-kube-api-access-fg2zd\") pod \"1e078769-708a-4d94-8d56-cc2e673edbd2\" (UID: \"1e078769-708a-4d94-8d56-cc2e673edbd2\") " Mar 11 09:19:15 crc kubenswrapper[4840]: I0311 09:19:15.831744 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e078769-708a-4d94-8d56-cc2e673edbd2-config-data\") pod \"1e078769-708a-4d94-8d56-cc2e673edbd2\" (UID: 
\"1e078769-708a-4d94-8d56-cc2e673edbd2\") " Mar 11 09:19:15 crc kubenswrapper[4840]: I0311 09:19:15.838237 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e078769-708a-4d94-8d56-cc2e673edbd2-scripts" (OuterVolumeSpecName: "scripts") pod "1e078769-708a-4d94-8d56-cc2e673edbd2" (UID: "1e078769-708a-4d94-8d56-cc2e673edbd2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:19:15 crc kubenswrapper[4840]: I0311 09:19:15.840023 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e078769-708a-4d94-8d56-cc2e673edbd2-kube-api-access-fg2zd" (OuterVolumeSpecName: "kube-api-access-fg2zd") pod "1e078769-708a-4d94-8d56-cc2e673edbd2" (UID: "1e078769-708a-4d94-8d56-cc2e673edbd2"). InnerVolumeSpecName "kube-api-access-fg2zd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:19:15 crc kubenswrapper[4840]: I0311 09:19:15.859367 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e078769-708a-4d94-8d56-cc2e673edbd2-config-data" (OuterVolumeSpecName: "config-data") pod "1e078769-708a-4d94-8d56-cc2e673edbd2" (UID: "1e078769-708a-4d94-8d56-cc2e673edbd2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:19:15 crc kubenswrapper[4840]: I0311 09:19:15.863796 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e078769-708a-4d94-8d56-cc2e673edbd2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1e078769-708a-4d94-8d56-cc2e673edbd2" (UID: "1e078769-708a-4d94-8d56-cc2e673edbd2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:19:15 crc kubenswrapper[4840]: I0311 09:19:15.934026 4840 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e078769-708a-4d94-8d56-cc2e673edbd2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 09:19:15 crc kubenswrapper[4840]: I0311 09:19:15.934064 4840 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e078769-708a-4d94-8d56-cc2e673edbd2-scripts\") on node \"crc\" DevicePath \"\"" Mar 11 09:19:15 crc kubenswrapper[4840]: I0311 09:19:15.934076 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fg2zd\" (UniqueName: \"kubernetes.io/projected/1e078769-708a-4d94-8d56-cc2e673edbd2-kube-api-access-fg2zd\") on node \"crc\" DevicePath \"\"" Mar 11 09:19:15 crc kubenswrapper[4840]: I0311 09:19:15.934086 4840 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e078769-708a-4d94-8d56-cc2e673edbd2-config-data\") on node \"crc\" DevicePath \"\"" Mar 11 09:19:16 crc kubenswrapper[4840]: I0311 09:19:16.372170 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-mdkd2" event={"ID":"1e078769-708a-4d94-8d56-cc2e673edbd2","Type":"ContainerDied","Data":"9b88182f13bb323019bdfc98281c2655d837edb7d2b414947339daba2e004c72"} Mar 11 09:19:16 crc kubenswrapper[4840]: I0311 09:19:16.372209 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b88182f13bb323019bdfc98281c2655d837edb7d2b414947339daba2e004c72" Mar 11 09:19:16 crc kubenswrapper[4840]: I0311 09:19:16.372208 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-mdkd2" Mar 11 09:19:16 crc kubenswrapper[4840]: I0311 09:19:16.509102 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Mar 11 09:19:16 crc kubenswrapper[4840]: E0311 09:19:16.509564 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e078769-708a-4d94-8d56-cc2e673edbd2" containerName="nova-cell0-conductor-db-sync" Mar 11 09:19:16 crc kubenswrapper[4840]: I0311 09:19:16.509583 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e078769-708a-4d94-8d56-cc2e673edbd2" containerName="nova-cell0-conductor-db-sync" Mar 11 09:19:16 crc kubenswrapper[4840]: I0311 09:19:16.509765 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e078769-708a-4d94-8d56-cc2e673edbd2" containerName="nova-cell0-conductor-db-sync" Mar 11 09:19:16 crc kubenswrapper[4840]: I0311 09:19:16.510523 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Mar 11 09:19:16 crc kubenswrapper[4840]: I0311 09:19:16.513300 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-5zplr" Mar 11 09:19:16 crc kubenswrapper[4840]: I0311 09:19:16.516421 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Mar 11 09:19:16 crc kubenswrapper[4840]: I0311 09:19:16.541821 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Mar 11 09:19:16 crc kubenswrapper[4840]: I0311 09:19:16.649741 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95c08694-92ed-44cb-8ca3-92a47b5571d4-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"95c08694-92ed-44cb-8ca3-92a47b5571d4\") " pod="openstack/nova-cell0-conductor-0" Mar 11 09:19:16 crc kubenswrapper[4840]: 
I0311 09:19:16.649841 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chwnz\" (UniqueName: \"kubernetes.io/projected/95c08694-92ed-44cb-8ca3-92a47b5571d4-kube-api-access-chwnz\") pod \"nova-cell0-conductor-0\" (UID: \"95c08694-92ed-44cb-8ca3-92a47b5571d4\") " pod="openstack/nova-cell0-conductor-0" Mar 11 09:19:16 crc kubenswrapper[4840]: I0311 09:19:16.649946 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95c08694-92ed-44cb-8ca3-92a47b5571d4-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"95c08694-92ed-44cb-8ca3-92a47b5571d4\") " pod="openstack/nova-cell0-conductor-0" Mar 11 09:19:16 crc kubenswrapper[4840]: I0311 09:19:16.751823 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95c08694-92ed-44cb-8ca3-92a47b5571d4-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"95c08694-92ed-44cb-8ca3-92a47b5571d4\") " pod="openstack/nova-cell0-conductor-0" Mar 11 09:19:16 crc kubenswrapper[4840]: I0311 09:19:16.752323 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95c08694-92ed-44cb-8ca3-92a47b5571d4-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"95c08694-92ed-44cb-8ca3-92a47b5571d4\") " pod="openstack/nova-cell0-conductor-0" Mar 11 09:19:16 crc kubenswrapper[4840]: I0311 09:19:16.752375 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chwnz\" (UniqueName: \"kubernetes.io/projected/95c08694-92ed-44cb-8ca3-92a47b5571d4-kube-api-access-chwnz\") pod \"nova-cell0-conductor-0\" (UID: \"95c08694-92ed-44cb-8ca3-92a47b5571d4\") " pod="openstack/nova-cell0-conductor-0" Mar 11 09:19:16 crc kubenswrapper[4840]: I0311 09:19:16.756092 4840 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95c08694-92ed-44cb-8ca3-92a47b5571d4-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"95c08694-92ed-44cb-8ca3-92a47b5571d4\") " pod="openstack/nova-cell0-conductor-0" Mar 11 09:19:16 crc kubenswrapper[4840]: I0311 09:19:16.756901 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95c08694-92ed-44cb-8ca3-92a47b5571d4-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"95c08694-92ed-44cb-8ca3-92a47b5571d4\") " pod="openstack/nova-cell0-conductor-0" Mar 11 09:19:16 crc kubenswrapper[4840]: I0311 09:19:16.768324 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chwnz\" (UniqueName: \"kubernetes.io/projected/95c08694-92ed-44cb-8ca3-92a47b5571d4-kube-api-access-chwnz\") pod \"nova-cell0-conductor-0\" (UID: \"95c08694-92ed-44cb-8ca3-92a47b5571d4\") " pod="openstack/nova-cell0-conductor-0" Mar 11 09:19:16 crc kubenswrapper[4840]: I0311 09:19:16.832720 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Mar 11 09:19:17 crc kubenswrapper[4840]: I0311 09:19:17.338886 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Mar 11 09:19:17 crc kubenswrapper[4840]: I0311 09:19:17.381108 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"95c08694-92ed-44cb-8ca3-92a47b5571d4","Type":"ContainerStarted","Data":"f88b40934b2119f1b47aca3deb6bf658f2a1f94be2c87943d332e5a1e4a7c666"} Mar 11 09:19:18 crc kubenswrapper[4840]: I0311 09:19:18.392552 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"95c08694-92ed-44cb-8ca3-92a47b5571d4","Type":"ContainerStarted","Data":"a8b0caed3a79d6574e838b5ca30ae9f7dc0d2e8987551e440fb3bb935fcc2b90"} Mar 11 09:19:18 crc kubenswrapper[4840]: I0311 09:19:18.393161 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Mar 11 09:19:18 crc kubenswrapper[4840]: I0311 09:19:18.415217 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.415199506 podStartE2EDuration="2.415199506s" podCreationTimestamp="2026-03-11 09:19:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:19:18.412055826 +0000 UTC m=+1357.077725641" watchObservedRunningTime="2026-03-11 09:19:18.415199506 +0000 UTC m=+1357.080869321" Mar 11 09:19:26 crc kubenswrapper[4840]: I0311 09:19:26.863945 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Mar 11 09:19:27 crc kubenswrapper[4840]: I0311 09:19:27.420000 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-h7w9f"] Mar 11 09:19:27 crc kubenswrapper[4840]: I0311 09:19:27.421210 4840 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-h7w9f" Mar 11 09:19:27 crc kubenswrapper[4840]: I0311 09:19:27.425040 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Mar 11 09:19:27 crc kubenswrapper[4840]: I0311 09:19:27.432607 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Mar 11 09:19:27 crc kubenswrapper[4840]: I0311 09:19:27.445627 4840 patch_prober.go:28] interesting pod/machine-config-daemon-brtht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 11 09:19:27 crc kubenswrapper[4840]: I0311 09:19:27.445678 4840 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 11 09:19:27 crc kubenswrapper[4840]: I0311 09:19:27.451351 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-h7w9f"] Mar 11 09:19:27 crc kubenswrapper[4840]: I0311 09:19:27.464482 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df6db5cc-aad1-4c53-a726-61432206dd4c-config-data\") pod \"nova-cell0-cell-mapping-h7w9f\" (UID: \"df6db5cc-aad1-4c53-a726-61432206dd4c\") " pod="openstack/nova-cell0-cell-mapping-h7w9f" Mar 11 09:19:27 crc kubenswrapper[4840]: I0311 09:19:27.464535 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/df6db5cc-aad1-4c53-a726-61432206dd4c-scripts\") pod \"nova-cell0-cell-mapping-h7w9f\" (UID: \"df6db5cc-aad1-4c53-a726-61432206dd4c\") " pod="openstack/nova-cell0-cell-mapping-h7w9f" Mar 11 09:19:27 crc kubenswrapper[4840]: I0311 09:19:27.464608 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df6db5cc-aad1-4c53-a726-61432206dd4c-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-h7w9f\" (UID: \"df6db5cc-aad1-4c53-a726-61432206dd4c\") " pod="openstack/nova-cell0-cell-mapping-h7w9f" Mar 11 09:19:27 crc kubenswrapper[4840]: I0311 09:19:27.466933 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwrrs\" (UniqueName: \"kubernetes.io/projected/df6db5cc-aad1-4c53-a726-61432206dd4c-kube-api-access-hwrrs\") pod \"nova-cell0-cell-mapping-h7w9f\" (UID: \"df6db5cc-aad1-4c53-a726-61432206dd4c\") " pod="openstack/nova-cell0-cell-mapping-h7w9f" Mar 11 09:19:27 crc kubenswrapper[4840]: I0311 09:19:27.567559 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df6db5cc-aad1-4c53-a726-61432206dd4c-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-h7w9f\" (UID: \"df6db5cc-aad1-4c53-a726-61432206dd4c\") " pod="openstack/nova-cell0-cell-mapping-h7w9f" Mar 11 09:19:27 crc kubenswrapper[4840]: I0311 09:19:27.567633 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwrrs\" (UniqueName: \"kubernetes.io/projected/df6db5cc-aad1-4c53-a726-61432206dd4c-kube-api-access-hwrrs\") pod \"nova-cell0-cell-mapping-h7w9f\" (UID: \"df6db5cc-aad1-4c53-a726-61432206dd4c\") " pod="openstack/nova-cell0-cell-mapping-h7w9f" Mar 11 09:19:27 crc kubenswrapper[4840]: I0311 09:19:27.567706 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/df6db5cc-aad1-4c53-a726-61432206dd4c-config-data\") pod \"nova-cell0-cell-mapping-h7w9f\" (UID: \"df6db5cc-aad1-4c53-a726-61432206dd4c\") " pod="openstack/nova-cell0-cell-mapping-h7w9f" Mar 11 09:19:27 crc kubenswrapper[4840]: I0311 09:19:27.567735 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df6db5cc-aad1-4c53-a726-61432206dd4c-scripts\") pod \"nova-cell0-cell-mapping-h7w9f\" (UID: \"df6db5cc-aad1-4c53-a726-61432206dd4c\") " pod="openstack/nova-cell0-cell-mapping-h7w9f" Mar 11 09:19:27 crc kubenswrapper[4840]: I0311 09:19:27.576615 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df6db5cc-aad1-4c53-a726-61432206dd4c-config-data\") pod \"nova-cell0-cell-mapping-h7w9f\" (UID: \"df6db5cc-aad1-4c53-a726-61432206dd4c\") " pod="openstack/nova-cell0-cell-mapping-h7w9f" Mar 11 09:19:27 crc kubenswrapper[4840]: I0311 09:19:27.581033 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df6db5cc-aad1-4c53-a726-61432206dd4c-scripts\") pod \"nova-cell0-cell-mapping-h7w9f\" (UID: \"df6db5cc-aad1-4c53-a726-61432206dd4c\") " pod="openstack/nova-cell0-cell-mapping-h7w9f" Mar 11 09:19:27 crc kubenswrapper[4840]: I0311 09:19:27.584256 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df6db5cc-aad1-4c53-a726-61432206dd4c-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-h7w9f\" (UID: \"df6db5cc-aad1-4c53-a726-61432206dd4c\") " pod="openstack/nova-cell0-cell-mapping-h7w9f" Mar 11 09:19:27 crc kubenswrapper[4840]: I0311 09:19:27.589335 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwrrs\" (UniqueName: 
\"kubernetes.io/projected/df6db5cc-aad1-4c53-a726-61432206dd4c-kube-api-access-hwrrs\") pod \"nova-cell0-cell-mapping-h7w9f\" (UID: \"df6db5cc-aad1-4c53-a726-61432206dd4c\") " pod="openstack/nova-cell0-cell-mapping-h7w9f" Mar 11 09:19:27 crc kubenswrapper[4840]: I0311 09:19:27.658309 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 11 09:19:27 crc kubenswrapper[4840]: I0311 09:19:27.659965 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 11 09:19:27 crc kubenswrapper[4840]: I0311 09:19:27.661985 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Mar 11 09:19:27 crc kubenswrapper[4840]: I0311 09:19:27.675878 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Mar 11 09:19:27 crc kubenswrapper[4840]: I0311 09:19:27.677997 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 11 09:19:27 crc kubenswrapper[4840]: I0311 09:19:27.692395 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 11 09:19:27 crc kubenswrapper[4840]: I0311 09:19:27.695518 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Mar 11 09:19:27 crc kubenswrapper[4840]: I0311 09:19:27.712554 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 11 09:19:27 crc kubenswrapper[4840]: I0311 09:19:27.746806 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-h7w9f" Mar 11 09:19:27 crc kubenswrapper[4840]: I0311 09:19:27.771533 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Mar 11 09:19:27 crc kubenswrapper[4840]: I0311 09:19:27.772292 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5045b6e-75f6-44c8-b9bb-fd28964f0676-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"b5045b6e-75f6-44c8-b9bb-fd28964f0676\") " pod="openstack/nova-cell1-novncproxy-0" Mar 11 09:19:27 crc kubenswrapper[4840]: I0311 09:19:27.772347 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbxl8\" (UniqueName: \"kubernetes.io/projected/b5045b6e-75f6-44c8-b9bb-fd28964f0676-kube-api-access-cbxl8\") pod \"nova-cell1-novncproxy-0\" (UID: \"b5045b6e-75f6-44c8-b9bb-fd28964f0676\") " pod="openstack/nova-cell1-novncproxy-0" Mar 11 09:19:27 crc kubenswrapper[4840]: I0311 09:19:27.772457 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5045b6e-75f6-44c8-b9bb-fd28964f0676-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"b5045b6e-75f6-44c8-b9bb-fd28964f0676\") " pod="openstack/nova-cell1-novncproxy-0" Mar 11 09:19:27 crc kubenswrapper[4840]: I0311 09:19:27.772998 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Mar 11 09:19:27 crc kubenswrapper[4840]: I0311 09:19:27.775687 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Mar 11 09:19:27 crc kubenswrapper[4840]: I0311 09:19:27.787325 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 11 09:19:27 crc kubenswrapper[4840]: I0311 09:19:27.873764 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbxl8\" (UniqueName: \"kubernetes.io/projected/b5045b6e-75f6-44c8-b9bb-fd28964f0676-kube-api-access-cbxl8\") pod \"nova-cell1-novncproxy-0\" (UID: \"b5045b6e-75f6-44c8-b9bb-fd28964f0676\") " pod="openstack/nova-cell1-novncproxy-0" Mar 11 09:19:27 crc kubenswrapper[4840]: I0311 09:19:27.873855 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc5984d0-a2c5-483e-90f4-f8c38056ff43-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"fc5984d0-a2c5-483e-90f4-f8c38056ff43\") " pod="openstack/nova-api-0" Mar 11 09:19:27 crc kubenswrapper[4840]: I0311 09:19:27.873886 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc5984d0-a2c5-483e-90f4-f8c38056ff43-config-data\") pod \"nova-api-0\" (UID: \"fc5984d0-a2c5-483e-90f4-f8c38056ff43\") " pod="openstack/nova-api-0" Mar 11 09:19:27 crc kubenswrapper[4840]: I0311 09:19:27.873934 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fc5984d0-a2c5-483e-90f4-f8c38056ff43-logs\") pod \"nova-api-0\" (UID: \"fc5984d0-a2c5-483e-90f4-f8c38056ff43\") " pod="openstack/nova-api-0" Mar 11 09:19:27 crc kubenswrapper[4840]: I0311 09:19:27.873964 4840 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5045b6e-75f6-44c8-b9bb-fd28964f0676-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"b5045b6e-75f6-44c8-b9bb-fd28964f0676\") " pod="openstack/nova-cell1-novncproxy-0" Mar 11 09:19:27 crc kubenswrapper[4840]: I0311 09:19:27.874007 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpqn7\" (UniqueName: \"kubernetes.io/projected/fc5984d0-a2c5-483e-90f4-f8c38056ff43-kube-api-access-lpqn7\") pod \"nova-api-0\" (UID: \"fc5984d0-a2c5-483e-90f4-f8c38056ff43\") " pod="openstack/nova-api-0" Mar 11 09:19:27 crc kubenswrapper[4840]: I0311 09:19:27.874028 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5045b6e-75f6-44c8-b9bb-fd28964f0676-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"b5045b6e-75f6-44c8-b9bb-fd28964f0676\") " pod="openstack/nova-cell1-novncproxy-0" Mar 11 09:19:27 crc kubenswrapper[4840]: I0311 09:19:27.894481 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5045b6e-75f6-44c8-b9bb-fd28964f0676-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"b5045b6e-75f6-44c8-b9bb-fd28964f0676\") " pod="openstack/nova-cell1-novncproxy-0" Mar 11 09:19:27 crc kubenswrapper[4840]: I0311 09:19:27.936315 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5045b6e-75f6-44c8-b9bb-fd28964f0676-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"b5045b6e-75f6-44c8-b9bb-fd28964f0676\") " pod="openstack/nova-cell1-novncproxy-0" Mar 11 09:19:27 crc kubenswrapper[4840]: I0311 09:19:27.947346 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbxl8\" (UniqueName: 
\"kubernetes.io/projected/b5045b6e-75f6-44c8-b9bb-fd28964f0676-kube-api-access-cbxl8\") pod \"nova-cell1-novncproxy-0\" (UID: \"b5045b6e-75f6-44c8-b9bb-fd28964f0676\") " pod="openstack/nova-cell1-novncproxy-0" Mar 11 09:19:27 crc kubenswrapper[4840]: I0311 09:19:27.950681 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Mar 11 09:19:27 crc kubenswrapper[4840]: I0311 09:19:27.953062 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 11 09:19:27 crc kubenswrapper[4840]: I0311 09:19:27.963758 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 11 09:19:27 crc kubenswrapper[4840]: I0311 09:19:27.975106 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Mar 11 09:19:27 crc kubenswrapper[4840]: I0311 09:19:27.976370 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lpqn7\" (UniqueName: \"kubernetes.io/projected/fc5984d0-a2c5-483e-90f4-f8c38056ff43-kube-api-access-lpqn7\") pod \"nova-api-0\" (UID: \"fc5984d0-a2c5-483e-90f4-f8c38056ff43\") " pod="openstack/nova-api-0" Mar 11 09:19:27 crc kubenswrapper[4840]: I0311 09:19:27.976442 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6xgg\" (UniqueName: \"kubernetes.io/projected/c2c9333d-b833-4abe-972c-6263a38f1a97-kube-api-access-m6xgg\") pod \"nova-scheduler-0\" (UID: \"c2c9333d-b833-4abe-972c-6263a38f1a97\") " pod="openstack/nova-scheduler-0" Mar 11 09:19:27 crc kubenswrapper[4840]: I0311 09:19:27.976559 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc5984d0-a2c5-483e-90f4-f8c38056ff43-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"fc5984d0-a2c5-483e-90f4-f8c38056ff43\") " pod="openstack/nova-api-0" Mar 11 09:19:27 crc 
kubenswrapper[4840]: I0311 09:19:27.976594 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc5984d0-a2c5-483e-90f4-f8c38056ff43-config-data\") pod \"nova-api-0\" (UID: \"fc5984d0-a2c5-483e-90f4-f8c38056ff43\") " pod="openstack/nova-api-0" Mar 11 09:19:27 crc kubenswrapper[4840]: I0311 09:19:27.976618 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2c9333d-b833-4abe-972c-6263a38f1a97-config-data\") pod \"nova-scheduler-0\" (UID: \"c2c9333d-b833-4abe-972c-6263a38f1a97\") " pod="openstack/nova-scheduler-0" Mar 11 09:19:27 crc kubenswrapper[4840]: I0311 09:19:27.976648 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2c9333d-b833-4abe-972c-6263a38f1a97-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c2c9333d-b833-4abe-972c-6263a38f1a97\") " pod="openstack/nova-scheduler-0" Mar 11 09:19:27 crc kubenswrapper[4840]: I0311 09:19:27.976680 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fc5984d0-a2c5-483e-90f4-f8c38056ff43-logs\") pod \"nova-api-0\" (UID: \"fc5984d0-a2c5-483e-90f4-f8c38056ff43\") " pod="openstack/nova-api-0" Mar 11 09:19:27 crc kubenswrapper[4840]: I0311 09:19:27.981059 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc5984d0-a2c5-483e-90f4-f8c38056ff43-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"fc5984d0-a2c5-483e-90f4-f8c38056ff43\") " pod="openstack/nova-api-0" Mar 11 09:19:27 crc kubenswrapper[4840]: I0311 09:19:27.981666 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fc5984d0-a2c5-483e-90f4-f8c38056ff43-logs\") 
pod \"nova-api-0\" (UID: \"fc5984d0-a2c5-483e-90f4-f8c38056ff43\") " pod="openstack/nova-api-0" Mar 11 09:19:27 crc kubenswrapper[4840]: I0311 09:19:27.984919 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc5984d0-a2c5-483e-90f4-f8c38056ff43-config-data\") pod \"nova-api-0\" (UID: \"fc5984d0-a2c5-483e-90f4-f8c38056ff43\") " pod="openstack/nova-api-0" Mar 11 09:19:28 crc kubenswrapper[4840]: I0311 09:19:28.011425 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpqn7\" (UniqueName: \"kubernetes.io/projected/fc5984d0-a2c5-483e-90f4-f8c38056ff43-kube-api-access-lpqn7\") pod \"nova-api-0\" (UID: \"fc5984d0-a2c5-483e-90f4-f8c38056ff43\") " pod="openstack/nova-api-0" Mar 11 09:19:28 crc kubenswrapper[4840]: I0311 09:19:28.031087 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 11 09:19:28 crc kubenswrapper[4840]: I0311 09:19:28.044093 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 11 09:19:28 crc kubenswrapper[4840]: I0311 09:19:28.059254 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7bd5679c8c-g7rsc"] Mar 11 09:19:28 crc kubenswrapper[4840]: I0311 09:19:28.067571 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7bd5679c8c-g7rsc" Mar 11 09:19:28 crc kubenswrapper[4840]: I0311 09:19:28.077969 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2c9333d-b833-4abe-972c-6263a38f1a97-config-data\") pod \"nova-scheduler-0\" (UID: \"c2c9333d-b833-4abe-972c-6263a38f1a97\") " pod="openstack/nova-scheduler-0" Mar 11 09:19:28 crc kubenswrapper[4840]: I0311 09:19:28.078021 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2c9333d-b833-4abe-972c-6263a38f1a97-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c2c9333d-b833-4abe-972c-6263a38f1a97\") " pod="openstack/nova-scheduler-0" Mar 11 09:19:28 crc kubenswrapper[4840]: I0311 09:19:28.078065 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9610466b-b516-43bc-b95a-7f671caee44b-logs\") pod \"nova-metadata-0\" (UID: \"9610466b-b516-43bc-b95a-7f671caee44b\") " pod="openstack/nova-metadata-0" Mar 11 09:19:28 crc kubenswrapper[4840]: I0311 09:19:28.078095 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9610466b-b516-43bc-b95a-7f671caee44b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9610466b-b516-43bc-b95a-7f671caee44b\") " pod="openstack/nova-metadata-0" Mar 11 09:19:28 crc kubenswrapper[4840]: I0311 09:19:28.078132 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57tx7\" (UniqueName: \"kubernetes.io/projected/9610466b-b516-43bc-b95a-7f671caee44b-kube-api-access-57tx7\") pod \"nova-metadata-0\" (UID: \"9610466b-b516-43bc-b95a-7f671caee44b\") " pod="openstack/nova-metadata-0" Mar 11 09:19:28 crc kubenswrapper[4840]: I0311 
09:19:28.078186 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9610466b-b516-43bc-b95a-7f671caee44b-config-data\") pod \"nova-metadata-0\" (UID: \"9610466b-b516-43bc-b95a-7f671caee44b\") " pod="openstack/nova-metadata-0" Mar 11 09:19:28 crc kubenswrapper[4840]: I0311 09:19:28.078246 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6xgg\" (UniqueName: \"kubernetes.io/projected/c2c9333d-b833-4abe-972c-6263a38f1a97-kube-api-access-m6xgg\") pod \"nova-scheduler-0\" (UID: \"c2c9333d-b833-4abe-972c-6263a38f1a97\") " pod="openstack/nova-scheduler-0" Mar 11 09:19:28 crc kubenswrapper[4840]: I0311 09:19:28.083756 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2c9333d-b833-4abe-972c-6263a38f1a97-config-data\") pod \"nova-scheduler-0\" (UID: \"c2c9333d-b833-4abe-972c-6263a38f1a97\") " pod="openstack/nova-scheduler-0" Mar 11 09:19:28 crc kubenswrapper[4840]: I0311 09:19:28.086268 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7bd5679c8c-g7rsc"] Mar 11 09:19:28 crc kubenswrapper[4840]: I0311 09:19:28.086561 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2c9333d-b833-4abe-972c-6263a38f1a97-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c2c9333d-b833-4abe-972c-6263a38f1a97\") " pod="openstack/nova-scheduler-0" Mar 11 09:19:28 crc kubenswrapper[4840]: I0311 09:19:28.117037 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6xgg\" (UniqueName: \"kubernetes.io/projected/c2c9333d-b833-4abe-972c-6263a38f1a97-kube-api-access-m6xgg\") pod \"nova-scheduler-0\" (UID: \"c2c9333d-b833-4abe-972c-6263a38f1a97\") " pod="openstack/nova-scheduler-0" Mar 11 09:19:28 crc kubenswrapper[4840]: I0311 
09:19:28.180008 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df0f59d5-d70d-49d0-aa71-9265ea29995b-config\") pod \"dnsmasq-dns-7bd5679c8c-g7rsc\" (UID: \"df0f59d5-d70d-49d0-aa71-9265ea29995b\") " pod="openstack/dnsmasq-dns-7bd5679c8c-g7rsc" Mar 11 09:19:28 crc kubenswrapper[4840]: I0311 09:19:28.180136 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/df0f59d5-d70d-49d0-aa71-9265ea29995b-dns-svc\") pod \"dnsmasq-dns-7bd5679c8c-g7rsc\" (UID: \"df0f59d5-d70d-49d0-aa71-9265ea29995b\") " pod="openstack/dnsmasq-dns-7bd5679c8c-g7rsc" Mar 11 09:19:28 crc kubenswrapper[4840]: I0311 09:19:28.180172 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9610466b-b516-43bc-b95a-7f671caee44b-logs\") pod \"nova-metadata-0\" (UID: \"9610466b-b516-43bc-b95a-7f671caee44b\") " pod="openstack/nova-metadata-0" Mar 11 09:19:28 crc kubenswrapper[4840]: I0311 09:19:28.180273 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9610466b-b516-43bc-b95a-7f671caee44b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9610466b-b516-43bc-b95a-7f671caee44b\") " pod="openstack/nova-metadata-0" Mar 11 09:19:28 crc kubenswrapper[4840]: I0311 09:19:28.180315 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57tx7\" (UniqueName: \"kubernetes.io/projected/9610466b-b516-43bc-b95a-7f671caee44b-kube-api-access-57tx7\") pod \"nova-metadata-0\" (UID: \"9610466b-b516-43bc-b95a-7f671caee44b\") " pod="openstack/nova-metadata-0" Mar 11 09:19:28 crc kubenswrapper[4840]: I0311 09:19:28.180346 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/df0f59d5-d70d-49d0-aa71-9265ea29995b-ovsdbserver-nb\") pod \"dnsmasq-dns-7bd5679c8c-g7rsc\" (UID: \"df0f59d5-d70d-49d0-aa71-9265ea29995b\") " pod="openstack/dnsmasq-dns-7bd5679c8c-g7rsc" Mar 11 09:19:28 crc kubenswrapper[4840]: I0311 09:19:28.180371 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/df0f59d5-d70d-49d0-aa71-9265ea29995b-ovsdbserver-sb\") pod \"dnsmasq-dns-7bd5679c8c-g7rsc\" (UID: \"df0f59d5-d70d-49d0-aa71-9265ea29995b\") " pod="openstack/dnsmasq-dns-7bd5679c8c-g7rsc" Mar 11 09:19:28 crc kubenswrapper[4840]: I0311 09:19:28.180429 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9610466b-b516-43bc-b95a-7f671caee44b-config-data\") pod \"nova-metadata-0\" (UID: \"9610466b-b516-43bc-b95a-7f671caee44b\") " pod="openstack/nova-metadata-0" Mar 11 09:19:28 crc kubenswrapper[4840]: I0311 09:19:28.180458 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stv7k\" (UniqueName: \"kubernetes.io/projected/df0f59d5-d70d-49d0-aa71-9265ea29995b-kube-api-access-stv7k\") pod \"dnsmasq-dns-7bd5679c8c-g7rsc\" (UID: \"df0f59d5-d70d-49d0-aa71-9265ea29995b\") " pod="openstack/dnsmasq-dns-7bd5679c8c-g7rsc" Mar 11 09:19:28 crc kubenswrapper[4840]: I0311 09:19:28.180564 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/df0f59d5-d70d-49d0-aa71-9265ea29995b-dns-swift-storage-0\") pod \"dnsmasq-dns-7bd5679c8c-g7rsc\" (UID: \"df0f59d5-d70d-49d0-aa71-9265ea29995b\") " pod="openstack/dnsmasq-dns-7bd5679c8c-g7rsc" Mar 11 09:19:28 crc kubenswrapper[4840]: I0311 09:19:28.188277 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/9610466b-b516-43bc-b95a-7f671caee44b-logs\") pod \"nova-metadata-0\" (UID: \"9610466b-b516-43bc-b95a-7f671caee44b\") " pod="openstack/nova-metadata-0" Mar 11 09:19:28 crc kubenswrapper[4840]: I0311 09:19:28.188763 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9610466b-b516-43bc-b95a-7f671caee44b-config-data\") pod \"nova-metadata-0\" (UID: \"9610466b-b516-43bc-b95a-7f671caee44b\") " pod="openstack/nova-metadata-0" Mar 11 09:19:28 crc kubenswrapper[4840]: I0311 09:19:28.207436 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9610466b-b516-43bc-b95a-7f671caee44b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9610466b-b516-43bc-b95a-7f671caee44b\") " pod="openstack/nova-metadata-0" Mar 11 09:19:28 crc kubenswrapper[4840]: I0311 09:19:28.211152 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57tx7\" (UniqueName: \"kubernetes.io/projected/9610466b-b516-43bc-b95a-7f671caee44b-kube-api-access-57tx7\") pod \"nova-metadata-0\" (UID: \"9610466b-b516-43bc-b95a-7f671caee44b\") " pod="openstack/nova-metadata-0" Mar 11 09:19:28 crc kubenswrapper[4840]: I0311 09:19:28.249176 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 11 09:19:28 crc kubenswrapper[4840]: I0311 09:19:28.276872 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 11 09:19:28 crc kubenswrapper[4840]: I0311 09:19:28.282402 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/df0f59d5-d70d-49d0-aa71-9265ea29995b-ovsdbserver-nb\") pod \"dnsmasq-dns-7bd5679c8c-g7rsc\" (UID: \"df0f59d5-d70d-49d0-aa71-9265ea29995b\") " pod="openstack/dnsmasq-dns-7bd5679c8c-g7rsc" Mar 11 09:19:28 crc kubenswrapper[4840]: I0311 09:19:28.282453 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/df0f59d5-d70d-49d0-aa71-9265ea29995b-ovsdbserver-sb\") pod \"dnsmasq-dns-7bd5679c8c-g7rsc\" (UID: \"df0f59d5-d70d-49d0-aa71-9265ea29995b\") " pod="openstack/dnsmasq-dns-7bd5679c8c-g7rsc" Mar 11 09:19:28 crc kubenswrapper[4840]: I0311 09:19:28.282533 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-stv7k\" (UniqueName: \"kubernetes.io/projected/df0f59d5-d70d-49d0-aa71-9265ea29995b-kube-api-access-stv7k\") pod \"dnsmasq-dns-7bd5679c8c-g7rsc\" (UID: \"df0f59d5-d70d-49d0-aa71-9265ea29995b\") " pod="openstack/dnsmasq-dns-7bd5679c8c-g7rsc" Mar 11 09:19:28 crc kubenswrapper[4840]: I0311 09:19:28.282563 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/df0f59d5-d70d-49d0-aa71-9265ea29995b-dns-swift-storage-0\") pod \"dnsmasq-dns-7bd5679c8c-g7rsc\" (UID: \"df0f59d5-d70d-49d0-aa71-9265ea29995b\") " pod="openstack/dnsmasq-dns-7bd5679c8c-g7rsc" Mar 11 09:19:28 crc kubenswrapper[4840]: I0311 09:19:28.282629 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df0f59d5-d70d-49d0-aa71-9265ea29995b-config\") pod \"dnsmasq-dns-7bd5679c8c-g7rsc\" (UID: \"df0f59d5-d70d-49d0-aa71-9265ea29995b\") " 
pod="openstack/dnsmasq-dns-7bd5679c8c-g7rsc" Mar 11 09:19:28 crc kubenswrapper[4840]: I0311 09:19:28.282702 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/df0f59d5-d70d-49d0-aa71-9265ea29995b-dns-svc\") pod \"dnsmasq-dns-7bd5679c8c-g7rsc\" (UID: \"df0f59d5-d70d-49d0-aa71-9265ea29995b\") " pod="openstack/dnsmasq-dns-7bd5679c8c-g7rsc" Mar 11 09:19:28 crc kubenswrapper[4840]: I0311 09:19:28.287833 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/df0f59d5-d70d-49d0-aa71-9265ea29995b-dns-svc\") pod \"dnsmasq-dns-7bd5679c8c-g7rsc\" (UID: \"df0f59d5-d70d-49d0-aa71-9265ea29995b\") " pod="openstack/dnsmasq-dns-7bd5679c8c-g7rsc" Mar 11 09:19:28 crc kubenswrapper[4840]: I0311 09:19:28.287854 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/df0f59d5-d70d-49d0-aa71-9265ea29995b-dns-swift-storage-0\") pod \"dnsmasq-dns-7bd5679c8c-g7rsc\" (UID: \"df0f59d5-d70d-49d0-aa71-9265ea29995b\") " pod="openstack/dnsmasq-dns-7bd5679c8c-g7rsc" Mar 11 09:19:28 crc kubenswrapper[4840]: I0311 09:19:28.287982 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df0f59d5-d70d-49d0-aa71-9265ea29995b-config\") pod \"dnsmasq-dns-7bd5679c8c-g7rsc\" (UID: \"df0f59d5-d70d-49d0-aa71-9265ea29995b\") " pod="openstack/dnsmasq-dns-7bd5679c8c-g7rsc" Mar 11 09:19:28 crc kubenswrapper[4840]: I0311 09:19:28.287993 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/df0f59d5-d70d-49d0-aa71-9265ea29995b-ovsdbserver-nb\") pod \"dnsmasq-dns-7bd5679c8c-g7rsc\" (UID: \"df0f59d5-d70d-49d0-aa71-9265ea29995b\") " pod="openstack/dnsmasq-dns-7bd5679c8c-g7rsc" Mar 11 09:19:28 crc kubenswrapper[4840]: I0311 09:19:28.292967 4840 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/df0f59d5-d70d-49d0-aa71-9265ea29995b-ovsdbserver-sb\") pod \"dnsmasq-dns-7bd5679c8c-g7rsc\" (UID: \"df0f59d5-d70d-49d0-aa71-9265ea29995b\") " pod="openstack/dnsmasq-dns-7bd5679c8c-g7rsc" Mar 11 09:19:28 crc kubenswrapper[4840]: I0311 09:19:28.311732 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-stv7k\" (UniqueName: \"kubernetes.io/projected/df0f59d5-d70d-49d0-aa71-9265ea29995b-kube-api-access-stv7k\") pod \"dnsmasq-dns-7bd5679c8c-g7rsc\" (UID: \"df0f59d5-d70d-49d0-aa71-9265ea29995b\") " pod="openstack/dnsmasq-dns-7bd5679c8c-g7rsc" Mar 11 09:19:28 crc kubenswrapper[4840]: I0311 09:19:28.440572 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bd5679c8c-g7rsc" Mar 11 09:19:28 crc kubenswrapper[4840]: I0311 09:19:28.573645 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-h7w9f"] Mar 11 09:19:28 crc kubenswrapper[4840]: W0311 09:19:28.599344 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddf6db5cc_aad1_4c53_a726_61432206dd4c.slice/crio-d850458f24dcb201a6be7037a1d4b792dbfd604094c6a083356d874ae0ab8c66 WatchSource:0}: Error finding container d850458f24dcb201a6be7037a1d4b792dbfd604094c6a083356d874ae0ab8c66: Status 404 returned error can't find the container with id d850458f24dcb201a6be7037a1d4b792dbfd604094c6a083356d874ae0ab8c66 Mar 11 09:19:28 crc kubenswrapper[4840]: I0311 09:19:28.729714 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 11 09:19:28 crc kubenswrapper[4840]: I0311 09:19:28.773926 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-w9dg6"] Mar 11 09:19:28 crc kubenswrapper[4840]: I0311 09:19:28.776082 4840 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-w9dg6" Mar 11 09:19:28 crc kubenswrapper[4840]: I0311 09:19:28.784949 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Mar 11 09:19:28 crc kubenswrapper[4840]: I0311 09:19:28.784971 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Mar 11 09:19:28 crc kubenswrapper[4840]: I0311 09:19:28.788822 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-w9dg6"] Mar 11 09:19:28 crc kubenswrapper[4840]: W0311 09:19:28.894665 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfc5984d0_a2c5_483e_90f4_f8c38056ff43.slice/crio-5698c59282614add62a521605f972ba57005cfd670a921072e2e126d325ffc17 WatchSource:0}: Error finding container 5698c59282614add62a521605f972ba57005cfd670a921072e2e126d325ffc17: Status 404 returned error can't find the container with id 5698c59282614add62a521605f972ba57005cfd670a921072e2e126d325ffc17 Mar 11 09:19:28 crc kubenswrapper[4840]: I0311 09:19:28.898426 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37f195c4-316b-428e-9213-ee66b1fcfd9f-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-w9dg6\" (UID: \"37f195c4-316b-428e-9213-ee66b1fcfd9f\") " pod="openstack/nova-cell1-conductor-db-sync-w9dg6" Mar 11 09:19:28 crc kubenswrapper[4840]: I0311 09:19:28.901726 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37f195c4-316b-428e-9213-ee66b1fcfd9f-config-data\") pod \"nova-cell1-conductor-db-sync-w9dg6\" (UID: \"37f195c4-316b-428e-9213-ee66b1fcfd9f\") " 
pod="openstack/nova-cell1-conductor-db-sync-w9dg6" Mar 11 09:19:28 crc kubenswrapper[4840]: I0311 09:19:28.901914 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/37f195c4-316b-428e-9213-ee66b1fcfd9f-scripts\") pod \"nova-cell1-conductor-db-sync-w9dg6\" (UID: \"37f195c4-316b-428e-9213-ee66b1fcfd9f\") " pod="openstack/nova-cell1-conductor-db-sync-w9dg6" Mar 11 09:19:28 crc kubenswrapper[4840]: I0311 09:19:28.901953 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6c6r\" (UniqueName: \"kubernetes.io/projected/37f195c4-316b-428e-9213-ee66b1fcfd9f-kube-api-access-s6c6r\") pod \"nova-cell1-conductor-db-sync-w9dg6\" (UID: \"37f195c4-316b-428e-9213-ee66b1fcfd9f\") " pod="openstack/nova-cell1-conductor-db-sync-w9dg6" Mar 11 09:19:28 crc kubenswrapper[4840]: I0311 09:19:28.907124 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 11 09:19:29 crc kubenswrapper[4840]: I0311 09:19:29.009766 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/37f195c4-316b-428e-9213-ee66b1fcfd9f-scripts\") pod \"nova-cell1-conductor-db-sync-w9dg6\" (UID: \"37f195c4-316b-428e-9213-ee66b1fcfd9f\") " pod="openstack/nova-cell1-conductor-db-sync-w9dg6" Mar 11 09:19:29 crc kubenswrapper[4840]: I0311 09:19:29.009821 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6c6r\" (UniqueName: \"kubernetes.io/projected/37f195c4-316b-428e-9213-ee66b1fcfd9f-kube-api-access-s6c6r\") pod \"nova-cell1-conductor-db-sync-w9dg6\" (UID: \"37f195c4-316b-428e-9213-ee66b1fcfd9f\") " pod="openstack/nova-cell1-conductor-db-sync-w9dg6" Mar 11 09:19:29 crc kubenswrapper[4840]: I0311 09:19:29.009901 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/37f195c4-316b-428e-9213-ee66b1fcfd9f-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-w9dg6\" (UID: \"37f195c4-316b-428e-9213-ee66b1fcfd9f\") " pod="openstack/nova-cell1-conductor-db-sync-w9dg6" Mar 11 09:19:29 crc kubenswrapper[4840]: I0311 09:19:29.009982 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37f195c4-316b-428e-9213-ee66b1fcfd9f-config-data\") pod \"nova-cell1-conductor-db-sync-w9dg6\" (UID: \"37f195c4-316b-428e-9213-ee66b1fcfd9f\") " pod="openstack/nova-cell1-conductor-db-sync-w9dg6" Mar 11 09:19:29 crc kubenswrapper[4840]: I0311 09:19:29.016733 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/37f195c4-316b-428e-9213-ee66b1fcfd9f-scripts\") pod \"nova-cell1-conductor-db-sync-w9dg6\" (UID: \"37f195c4-316b-428e-9213-ee66b1fcfd9f\") " pod="openstack/nova-cell1-conductor-db-sync-w9dg6" Mar 11 09:19:29 crc kubenswrapper[4840]: I0311 09:19:29.028101 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37f195c4-316b-428e-9213-ee66b1fcfd9f-config-data\") pod \"nova-cell1-conductor-db-sync-w9dg6\" (UID: \"37f195c4-316b-428e-9213-ee66b1fcfd9f\") " pod="openstack/nova-cell1-conductor-db-sync-w9dg6" Mar 11 09:19:29 crc kubenswrapper[4840]: I0311 09:19:29.028505 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37f195c4-316b-428e-9213-ee66b1fcfd9f-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-w9dg6\" (UID: \"37f195c4-316b-428e-9213-ee66b1fcfd9f\") " pod="openstack/nova-cell1-conductor-db-sync-w9dg6" Mar 11 09:19:29 crc kubenswrapper[4840]: I0311 09:19:29.048576 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 11 09:19:29 crc kubenswrapper[4840]: I0311 
09:19:29.051738 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6c6r\" (UniqueName: \"kubernetes.io/projected/37f195c4-316b-428e-9213-ee66b1fcfd9f-kube-api-access-s6c6r\") pod \"nova-cell1-conductor-db-sync-w9dg6\" (UID: \"37f195c4-316b-428e-9213-ee66b1fcfd9f\") " pod="openstack/nova-cell1-conductor-db-sync-w9dg6" Mar 11 09:19:29 crc kubenswrapper[4840]: I0311 09:19:29.072307 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 11 09:19:29 crc kubenswrapper[4840]: I0311 09:19:29.170302 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-w9dg6" Mar 11 09:19:29 crc kubenswrapper[4840]: I0311 09:19:29.195339 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7bd5679c8c-g7rsc"] Mar 11 09:19:29 crc kubenswrapper[4840]: I0311 09:19:29.512661 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9610466b-b516-43bc-b95a-7f671caee44b","Type":"ContainerStarted","Data":"5710c98e70ee4bafdfd35c2821a6446cdd84e307624e1ccb73749b72a1fe578e"} Mar 11 09:19:29 crc kubenswrapper[4840]: I0311 09:19:29.514661 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fc5984d0-a2c5-483e-90f4-f8c38056ff43","Type":"ContainerStarted","Data":"5698c59282614add62a521605f972ba57005cfd670a921072e2e126d325ffc17"} Mar 11 09:19:29 crc kubenswrapper[4840]: I0311 09:19:29.519789 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"b5045b6e-75f6-44c8-b9bb-fd28964f0676","Type":"ContainerStarted","Data":"e03b1f32f18c5147475329f2074cf0319edb631499820a3663d443bc3e37e98e"} Mar 11 09:19:29 crc kubenswrapper[4840]: I0311 09:19:29.522133 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bd5679c8c-g7rsc" 
event={"ID":"df0f59d5-d70d-49d0-aa71-9265ea29995b","Type":"ContainerStarted","Data":"9151fca0a30fbd98856e0676ce8c461e2164057414cba8383e689bf0cfda6ec8"} Mar 11 09:19:29 crc kubenswrapper[4840]: I0311 09:19:29.522167 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bd5679c8c-g7rsc" event={"ID":"df0f59d5-d70d-49d0-aa71-9265ea29995b","Type":"ContainerStarted","Data":"6c0846901006e72b1dced1d3713e43d4082e775944e0fb467c77a9a96c9fa677"} Mar 11 09:19:29 crc kubenswrapper[4840]: I0311 09:19:29.524270 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-h7w9f" event={"ID":"df6db5cc-aad1-4c53-a726-61432206dd4c","Type":"ContainerStarted","Data":"6e805737b5c9e7e0a90405eecbec7fbac37b33b617516eebfd9b44d4a63815b1"} Mar 11 09:19:29 crc kubenswrapper[4840]: I0311 09:19:29.524306 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-h7w9f" event={"ID":"df6db5cc-aad1-4c53-a726-61432206dd4c","Type":"ContainerStarted","Data":"d850458f24dcb201a6be7037a1d4b792dbfd604094c6a083356d874ae0ab8c66"} Mar 11 09:19:29 crc kubenswrapper[4840]: I0311 09:19:29.532240 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c2c9333d-b833-4abe-972c-6263a38f1a97","Type":"ContainerStarted","Data":"09d0575c087720c30eaa444291e7be9064b53fd9320b244ffe9d8a013f128f9f"} Mar 11 09:19:29 crc kubenswrapper[4840]: I0311 09:19:29.576166 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-h7w9f" podStartSLOduration=2.576142947 podStartE2EDuration="2.576142947s" podCreationTimestamp="2026-03-11 09:19:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:19:29.56949031 +0000 UTC m=+1368.235160125" watchObservedRunningTime="2026-03-11 09:19:29.576142947 +0000 UTC m=+1368.241812762" Mar 11 09:19:29 crc 
kubenswrapper[4840]: I0311 09:19:29.666145 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-w9dg6"] Mar 11 09:19:30 crc kubenswrapper[4840]: I0311 09:19:30.544645 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-w9dg6" event={"ID":"37f195c4-316b-428e-9213-ee66b1fcfd9f","Type":"ContainerStarted","Data":"48f647feaba3271860d14aae492462ec4fa6f0b34e03e667ec047f9b3e726377"} Mar 11 09:19:30 crc kubenswrapper[4840]: I0311 09:19:30.545827 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-w9dg6" event={"ID":"37f195c4-316b-428e-9213-ee66b1fcfd9f","Type":"ContainerStarted","Data":"2a43bde1f431f47d2f59d56d4162b79f2ad3a20c1b5161851c1081f90b1f9ff8"} Mar 11 09:19:30 crc kubenswrapper[4840]: I0311 09:19:30.547628 4840 generic.go:334] "Generic (PLEG): container finished" podID="df0f59d5-d70d-49d0-aa71-9265ea29995b" containerID="9151fca0a30fbd98856e0676ce8c461e2164057414cba8383e689bf0cfda6ec8" exitCode=0 Mar 11 09:19:30 crc kubenswrapper[4840]: I0311 09:19:30.550308 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bd5679c8c-g7rsc" event={"ID":"df0f59d5-d70d-49d0-aa71-9265ea29995b","Type":"ContainerDied","Data":"9151fca0a30fbd98856e0676ce8c461e2164057414cba8383e689bf0cfda6ec8"} Mar 11 09:19:30 crc kubenswrapper[4840]: I0311 09:19:30.573850 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-w9dg6" podStartSLOduration=2.573126742 podStartE2EDuration="2.573126742s" podCreationTimestamp="2026-03-11 09:19:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:19:30.565409917 +0000 UTC m=+1369.231079732" watchObservedRunningTime="2026-03-11 09:19:30.573126742 +0000 UTC m=+1369.238796557" Mar 11 09:19:31 crc kubenswrapper[4840]: I0311 09:19:31.808514 
4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 11 09:19:31 crc kubenswrapper[4840]: I0311 09:19:31.820433 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 11 09:19:33 crc kubenswrapper[4840]: I0311 09:19:33.583540 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fc5984d0-a2c5-483e-90f4-f8c38056ff43","Type":"ContainerStarted","Data":"360c1334dca539b14a76da6b532e85612504bb222d990668c6428cca26dd2bf9"} Mar 11 09:19:33 crc kubenswrapper[4840]: I0311 09:19:33.584233 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fc5984d0-a2c5-483e-90f4-f8c38056ff43","Type":"ContainerStarted","Data":"027c2c1bb24efcf0502330b87a7ec03d7e20466b13d4bc837283e0ce5988dd62"} Mar 11 09:19:33 crc kubenswrapper[4840]: I0311 09:19:33.590329 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"b5045b6e-75f6-44c8-b9bb-fd28964f0676","Type":"ContainerStarted","Data":"1bb46ca958f12008f70e1613c2594276c581a8c47f2383163050203d5a0ea214"} Mar 11 09:19:33 crc kubenswrapper[4840]: I0311 09:19:33.590641 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="b5045b6e-75f6-44c8-b9bb-fd28964f0676" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://1bb46ca958f12008f70e1613c2594276c581a8c47f2383163050203d5a0ea214" gracePeriod=30 Mar 11 09:19:33 crc kubenswrapper[4840]: I0311 09:19:33.596756 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bd5679c8c-g7rsc" event={"ID":"df0f59d5-d70d-49d0-aa71-9265ea29995b","Type":"ContainerStarted","Data":"5711d8729ac8b979c9fd3f48c06225e869b5494ff41f422a71510c13b667f9ef"} Mar 11 09:19:33 crc kubenswrapper[4840]: I0311 09:19:33.598010 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/dnsmasq-dns-7bd5679c8c-g7rsc" Mar 11 09:19:33 crc kubenswrapper[4840]: I0311 09:19:33.600071 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c2c9333d-b833-4abe-972c-6263a38f1a97","Type":"ContainerStarted","Data":"808b0dac42783831dcf51f322b9be907700ea65285b996ebeb9820b02a8e205e"} Mar 11 09:19:33 crc kubenswrapper[4840]: I0311 09:19:33.601999 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9610466b-b516-43bc-b95a-7f671caee44b","Type":"ContainerStarted","Data":"e8f1da4d0f3e7a2f8bc4b508dafa4919bf88cefa86b7bf10b4d0be8c2a589de6"} Mar 11 09:19:33 crc kubenswrapper[4840]: I0311 09:19:33.602028 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9610466b-b516-43bc-b95a-7f671caee44b","Type":"ContainerStarted","Data":"c78a34698d838d68e83ec9791eb567ee6047145f621deb1b5314394e0674c6c5"} Mar 11 09:19:33 crc kubenswrapper[4840]: I0311 09:19:33.602121 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="9610466b-b516-43bc-b95a-7f671caee44b" containerName="nova-metadata-log" containerID="cri-o://c78a34698d838d68e83ec9791eb567ee6047145f621deb1b5314394e0674c6c5" gracePeriod=30 Mar 11 09:19:33 crc kubenswrapper[4840]: I0311 09:19:33.602374 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="9610466b-b516-43bc-b95a-7f671caee44b" containerName="nova-metadata-metadata" containerID="cri-o://e8f1da4d0f3e7a2f8bc4b508dafa4919bf88cefa86b7bf10b4d0be8c2a589de6" gracePeriod=30 Mar 11 09:19:33 crc kubenswrapper[4840]: I0311 09:19:33.635498 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.6480756960000003 podStartE2EDuration="6.635448487s" podCreationTimestamp="2026-03-11 09:19:27 +0000 UTC" firstStartedPulling="2026-03-11 
09:19:28.927152982 +0000 UTC m=+1367.592822797" lastFinishedPulling="2026-03-11 09:19:32.914525773 +0000 UTC m=+1371.580195588" observedRunningTime="2026-03-11 09:19:33.622244825 +0000 UTC m=+1372.287914640" watchObservedRunningTime="2026-03-11 09:19:33.635448487 +0000 UTC m=+1372.301118302" Mar 11 09:19:33 crc kubenswrapper[4840]: I0311 09:19:33.645275 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.519262188 podStartE2EDuration="6.645255364s" podCreationTimestamp="2026-03-11 09:19:27 +0000 UTC" firstStartedPulling="2026-03-11 09:19:28.762050951 +0000 UTC m=+1367.427720756" lastFinishedPulling="2026-03-11 09:19:32.888044117 +0000 UTC m=+1371.553713932" observedRunningTime="2026-03-11 09:19:33.63954212 +0000 UTC m=+1372.305211935" watchObservedRunningTime="2026-03-11 09:19:33.645255364 +0000 UTC m=+1372.310925179" Mar 11 09:19:33 crc kubenswrapper[4840]: I0311 09:19:33.657459 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Mar 11 09:19:33 crc kubenswrapper[4840]: I0311 09:19:33.667027 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.833696983 podStartE2EDuration="6.667007691s" podCreationTimestamp="2026-03-11 09:19:27 +0000 UTC" firstStartedPulling="2026-03-11 09:19:29.052900283 +0000 UTC m=+1367.718570088" lastFinishedPulling="2026-03-11 09:19:32.886210981 +0000 UTC m=+1371.551880796" observedRunningTime="2026-03-11 09:19:33.660371314 +0000 UTC m=+1372.326041129" watchObservedRunningTime="2026-03-11 09:19:33.667007691 +0000 UTC m=+1372.332677506" Mar 11 09:19:33 crc kubenswrapper[4840]: I0311 09:19:33.698237 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7bd5679c8c-g7rsc" podStartSLOduration=5.698218805 podStartE2EDuration="5.698218805s" podCreationTimestamp="2026-03-11 09:19:28 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:19:33.694722797 +0000 UTC m=+1372.360392612" watchObservedRunningTime="2026-03-11 09:19:33.698218805 +0000 UTC m=+1372.363888620" Mar 11 09:19:33 crc kubenswrapper[4840]: I0311 09:19:33.722019 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.8869232 podStartE2EDuration="6.721996533s" podCreationTimestamp="2026-03-11 09:19:27 +0000 UTC" firstStartedPulling="2026-03-11 09:19:29.052993425 +0000 UTC m=+1367.718663240" lastFinishedPulling="2026-03-11 09:19:32.888066758 +0000 UTC m=+1371.553736573" observedRunningTime="2026-03-11 09:19:33.716048394 +0000 UTC m=+1372.381718209" watchObservedRunningTime="2026-03-11 09:19:33.721996533 +0000 UTC m=+1372.387666348" Mar 11 09:19:34 crc kubenswrapper[4840]: I0311 09:19:34.642659 4840 generic.go:334] "Generic (PLEG): container finished" podID="9610466b-b516-43bc-b95a-7f671caee44b" containerID="c78a34698d838d68e83ec9791eb567ee6047145f621deb1b5314394e0674c6c5" exitCode=143 Mar 11 09:19:34 crc kubenswrapper[4840]: I0311 09:19:34.642807 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9610466b-b516-43bc-b95a-7f671caee44b","Type":"ContainerDied","Data":"c78a34698d838d68e83ec9791eb567ee6047145f621deb1b5314394e0674c6c5"} Mar 11 09:19:37 crc kubenswrapper[4840]: I0311 09:19:37.673120 4840 generic.go:334] "Generic (PLEG): container finished" podID="df6db5cc-aad1-4c53-a726-61432206dd4c" containerID="6e805737b5c9e7e0a90405eecbec7fbac37b33b617516eebfd9b44d4a63815b1" exitCode=0 Mar 11 09:19:37 crc kubenswrapper[4840]: I0311 09:19:37.673202 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-h7w9f" 
event={"ID":"df6db5cc-aad1-4c53-a726-61432206dd4c","Type":"ContainerDied","Data":"6e805737b5c9e7e0a90405eecbec7fbac37b33b617516eebfd9b44d4a63815b1"} Mar 11 09:19:38 crc kubenswrapper[4840]: I0311 09:19:38.006311 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 11 09:19:38 crc kubenswrapper[4840]: I0311 09:19:38.007198 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="8346e561-a5cc-4bf9-807e-8837c8e13007" containerName="kube-state-metrics" containerID="cri-o://d8e46432f83ff44762566602f5f078b7458c3b3c787cfeb72abc5d6c99520ac0" gracePeriod=30 Mar 11 09:19:38 crc kubenswrapper[4840]: I0311 09:19:38.031897 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Mar 11 09:19:38 crc kubenswrapper[4840]: I0311 09:19:38.045174 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 11 09:19:38 crc kubenswrapper[4840]: I0311 09:19:38.045257 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 11 09:19:38 crc kubenswrapper[4840]: I0311 09:19:38.249578 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Mar 11 09:19:38 crc kubenswrapper[4840]: I0311 09:19:38.249845 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Mar 11 09:19:38 crc kubenswrapper[4840]: I0311 09:19:38.277837 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 11 09:19:38 crc kubenswrapper[4840]: I0311 09:19:38.277901 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 11 09:19:38 crc kubenswrapper[4840]: I0311 09:19:38.294129 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/nova-scheduler-0" Mar 11 09:19:38 crc kubenswrapper[4840]: I0311 09:19:38.442301 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7bd5679c8c-g7rsc" Mar 11 09:19:38 crc kubenswrapper[4840]: I0311 09:19:38.605167 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7b8fcc65cc-psbl7"] Mar 11 09:19:38 crc kubenswrapper[4840]: I0311 09:19:38.605951 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7b8fcc65cc-psbl7" podUID="4ab6bbb2-998d-4f33-831b-71f0e06d1b8b" containerName="dnsmasq-dns" containerID="cri-o://05a6df8771e5d1d9bd2b29e2e53f4bf1d24dbb28545aad896202cfeadea22534" gracePeriod=10 Mar 11 09:19:38 crc kubenswrapper[4840]: I0311 09:19:38.632137 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 11 09:19:38 crc kubenswrapper[4840]: I0311 09:19:38.694351 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vrt7c\" (UniqueName: \"kubernetes.io/projected/8346e561-a5cc-4bf9-807e-8837c8e13007-kube-api-access-vrt7c\") pod \"8346e561-a5cc-4bf9-807e-8837c8e13007\" (UID: \"8346e561-a5cc-4bf9-807e-8837c8e13007\") " Mar 11 09:19:38 crc kubenswrapper[4840]: I0311 09:19:38.705022 4840 generic.go:334] "Generic (PLEG): container finished" podID="8346e561-a5cc-4bf9-807e-8837c8e13007" containerID="d8e46432f83ff44762566602f5f078b7458c3b3c787cfeb72abc5d6c99520ac0" exitCode=2 Mar 11 09:19:38 crc kubenswrapper[4840]: I0311 09:19:38.706680 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 11 09:19:38 crc kubenswrapper[4840]: I0311 09:19:38.707628 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"8346e561-a5cc-4bf9-807e-8837c8e13007","Type":"ContainerDied","Data":"d8e46432f83ff44762566602f5f078b7458c3b3c787cfeb72abc5d6c99520ac0"} Mar 11 09:19:38 crc kubenswrapper[4840]: I0311 09:19:38.707697 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"8346e561-a5cc-4bf9-807e-8837c8e13007","Type":"ContainerDied","Data":"cdc46dd9c231a1062de2700ff8635c027f1dc99cef3385a9b1a62e344fa6b284"} Mar 11 09:19:38 crc kubenswrapper[4840]: I0311 09:19:38.707725 4840 scope.go:117] "RemoveContainer" containerID="d8e46432f83ff44762566602f5f078b7458c3b3c787cfeb72abc5d6c99520ac0" Mar 11 09:19:38 crc kubenswrapper[4840]: I0311 09:19:38.716773 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8346e561-a5cc-4bf9-807e-8837c8e13007-kube-api-access-vrt7c" (OuterVolumeSpecName: "kube-api-access-vrt7c") pod "8346e561-a5cc-4bf9-807e-8837c8e13007" (UID: "8346e561-a5cc-4bf9-807e-8837c8e13007"). InnerVolumeSpecName "kube-api-access-vrt7c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:19:38 crc kubenswrapper[4840]: I0311 09:19:38.783514 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Mar 11 09:19:38 crc kubenswrapper[4840]: I0311 09:19:38.797703 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vrt7c\" (UniqueName: \"kubernetes.io/projected/8346e561-a5cc-4bf9-807e-8837c8e13007-kube-api-access-vrt7c\") on node \"crc\" DevicePath \"\"" Mar 11 09:19:38 crc kubenswrapper[4840]: I0311 09:19:38.877377 4840 scope.go:117] "RemoveContainer" containerID="d8e46432f83ff44762566602f5f078b7458c3b3c787cfeb72abc5d6c99520ac0" Mar 11 09:19:38 crc kubenswrapper[4840]: E0311 09:19:38.883505 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d8e46432f83ff44762566602f5f078b7458c3b3c787cfeb72abc5d6c99520ac0\": container with ID starting with d8e46432f83ff44762566602f5f078b7458c3b3c787cfeb72abc5d6c99520ac0 not found: ID does not exist" containerID="d8e46432f83ff44762566602f5f078b7458c3b3c787cfeb72abc5d6c99520ac0" Mar 11 09:19:38 crc kubenswrapper[4840]: I0311 09:19:38.883582 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8e46432f83ff44762566602f5f078b7458c3b3c787cfeb72abc5d6c99520ac0"} err="failed to get container status \"d8e46432f83ff44762566602f5f078b7458c3b3c787cfeb72abc5d6c99520ac0\": rpc error: code = NotFound desc = could not find container \"d8e46432f83ff44762566602f5f078b7458c3b3c787cfeb72abc5d6c99520ac0\": container with ID starting with d8e46432f83ff44762566602f5f078b7458c3b3c787cfeb72abc5d6c99520ac0 not found: ID does not exist" Mar 11 09:19:39 crc kubenswrapper[4840]: I0311 09:19:39.047814 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 11 09:19:39 crc kubenswrapper[4840]: I0311 09:19:39.093485 4840 kubelet.go:2431] "SyncLoop 
REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 11 09:19:39 crc kubenswrapper[4840]: I0311 09:19:39.105592 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Mar 11 09:19:39 crc kubenswrapper[4840]: E0311 09:19:39.106272 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8346e561-a5cc-4bf9-807e-8837c8e13007" containerName="kube-state-metrics" Mar 11 09:19:39 crc kubenswrapper[4840]: I0311 09:19:39.106290 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="8346e561-a5cc-4bf9-807e-8837c8e13007" containerName="kube-state-metrics" Mar 11 09:19:39 crc kubenswrapper[4840]: I0311 09:19:39.106616 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="8346e561-a5cc-4bf9-807e-8837c8e13007" containerName="kube-state-metrics" Mar 11 09:19:39 crc kubenswrapper[4840]: I0311 09:19:39.107504 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 11 09:19:39 crc kubenswrapper[4840]: I0311 09:19:39.114232 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Mar 11 09:19:39 crc kubenswrapper[4840]: I0311 09:19:39.114248 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Mar 11 09:19:39 crc kubenswrapper[4840]: I0311 09:19:39.130549 4840 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="fc5984d0-a2c5-483e-90f4-f8c38056ff43" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.193:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 11 09:19:39 crc kubenswrapper[4840]: I0311 09:19:39.131518 4840 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="fc5984d0-a2c5-483e-90f4-f8c38056ff43" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.193:8774/\": context 
deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 11 09:19:39 crc kubenswrapper[4840]: I0311 09:19:39.132895 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 11 09:19:39 crc kubenswrapper[4840]: I0311 09:19:39.214888 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfc57f3e-df7f-40fe-9cc0-7ad00ecd651e-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"cfc57f3e-df7f-40fe-9cc0-7ad00ecd651e\") " pod="openstack/kube-state-metrics-0" Mar 11 09:19:39 crc kubenswrapper[4840]: I0311 09:19:39.215090 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/cfc57f3e-df7f-40fe-9cc0-7ad00ecd651e-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"cfc57f3e-df7f-40fe-9cc0-7ad00ecd651e\") " pod="openstack/kube-state-metrics-0" Mar 11 09:19:39 crc kubenswrapper[4840]: I0311 09:19:39.215254 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h79kk\" (UniqueName: \"kubernetes.io/projected/cfc57f3e-df7f-40fe-9cc0-7ad00ecd651e-kube-api-access-h79kk\") pod \"kube-state-metrics-0\" (UID: \"cfc57f3e-df7f-40fe-9cc0-7ad00ecd651e\") " pod="openstack/kube-state-metrics-0" Mar 11 09:19:39 crc kubenswrapper[4840]: I0311 09:19:39.215286 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/cfc57f3e-df7f-40fe-9cc0-7ad00ecd651e-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"cfc57f3e-df7f-40fe-9cc0-7ad00ecd651e\") " pod="openstack/kube-state-metrics-0" Mar 11 09:19:39 crc kubenswrapper[4840]: I0311 09:19:39.316875 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfc57f3e-df7f-40fe-9cc0-7ad00ecd651e-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"cfc57f3e-df7f-40fe-9cc0-7ad00ecd651e\") " pod="openstack/kube-state-metrics-0" Mar 11 09:19:39 crc kubenswrapper[4840]: I0311 09:19:39.317420 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/cfc57f3e-df7f-40fe-9cc0-7ad00ecd651e-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"cfc57f3e-df7f-40fe-9cc0-7ad00ecd651e\") " pod="openstack/kube-state-metrics-0" Mar 11 09:19:39 crc kubenswrapper[4840]: I0311 09:19:39.317517 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h79kk\" (UniqueName: \"kubernetes.io/projected/cfc57f3e-df7f-40fe-9cc0-7ad00ecd651e-kube-api-access-h79kk\") pod \"kube-state-metrics-0\" (UID: \"cfc57f3e-df7f-40fe-9cc0-7ad00ecd651e\") " pod="openstack/kube-state-metrics-0" Mar 11 09:19:39 crc kubenswrapper[4840]: I0311 09:19:39.317548 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/cfc57f3e-df7f-40fe-9cc0-7ad00ecd651e-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"cfc57f3e-df7f-40fe-9cc0-7ad00ecd651e\") " pod="openstack/kube-state-metrics-0" Mar 11 09:19:39 crc kubenswrapper[4840]: I0311 09:19:39.321995 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-h7w9f" Mar 11 09:19:39 crc kubenswrapper[4840]: I0311 09:19:39.337996 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/cfc57f3e-df7f-40fe-9cc0-7ad00ecd651e-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"cfc57f3e-df7f-40fe-9cc0-7ad00ecd651e\") " pod="openstack/kube-state-metrics-0" Mar 11 09:19:39 crc kubenswrapper[4840]: I0311 09:19:39.340043 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/cfc57f3e-df7f-40fe-9cc0-7ad00ecd651e-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"cfc57f3e-df7f-40fe-9cc0-7ad00ecd651e\") " pod="openstack/kube-state-metrics-0" Mar 11 09:19:39 crc kubenswrapper[4840]: I0311 09:19:39.342159 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h79kk\" (UniqueName: \"kubernetes.io/projected/cfc57f3e-df7f-40fe-9cc0-7ad00ecd651e-kube-api-access-h79kk\") pod \"kube-state-metrics-0\" (UID: \"cfc57f3e-df7f-40fe-9cc0-7ad00ecd651e\") " pod="openstack/kube-state-metrics-0" Mar 11 09:19:39 crc kubenswrapper[4840]: I0311 09:19:39.353183 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfc57f3e-df7f-40fe-9cc0-7ad00ecd651e-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"cfc57f3e-df7f-40fe-9cc0-7ad00ecd651e\") " pod="openstack/kube-state-metrics-0" Mar 11 09:19:39 crc kubenswrapper[4840]: I0311 09:19:39.418727 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df6db5cc-aad1-4c53-a726-61432206dd4c-config-data\") pod \"df6db5cc-aad1-4c53-a726-61432206dd4c\" (UID: \"df6db5cc-aad1-4c53-a726-61432206dd4c\") " Mar 11 09:19:39 crc kubenswrapper[4840]: I0311 
09:19:39.418817 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df6db5cc-aad1-4c53-a726-61432206dd4c-combined-ca-bundle\") pod \"df6db5cc-aad1-4c53-a726-61432206dd4c\" (UID: \"df6db5cc-aad1-4c53-a726-61432206dd4c\") " Mar 11 09:19:39 crc kubenswrapper[4840]: I0311 09:19:39.418890 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hwrrs\" (UniqueName: \"kubernetes.io/projected/df6db5cc-aad1-4c53-a726-61432206dd4c-kube-api-access-hwrrs\") pod \"df6db5cc-aad1-4c53-a726-61432206dd4c\" (UID: \"df6db5cc-aad1-4c53-a726-61432206dd4c\") " Mar 11 09:19:39 crc kubenswrapper[4840]: I0311 09:19:39.418938 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df6db5cc-aad1-4c53-a726-61432206dd4c-scripts\") pod \"df6db5cc-aad1-4c53-a726-61432206dd4c\" (UID: \"df6db5cc-aad1-4c53-a726-61432206dd4c\") " Mar 11 09:19:39 crc kubenswrapper[4840]: I0311 09:19:39.435436 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df6db5cc-aad1-4c53-a726-61432206dd4c-kube-api-access-hwrrs" (OuterVolumeSpecName: "kube-api-access-hwrrs") pod "df6db5cc-aad1-4c53-a726-61432206dd4c" (UID: "df6db5cc-aad1-4c53-a726-61432206dd4c"). InnerVolumeSpecName "kube-api-access-hwrrs". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:19:39 crc kubenswrapper[4840]: I0311 09:19:39.444435 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df6db5cc-aad1-4c53-a726-61432206dd4c-scripts" (OuterVolumeSpecName: "scripts") pod "df6db5cc-aad1-4c53-a726-61432206dd4c" (UID: "df6db5cc-aad1-4c53-a726-61432206dd4c"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:19:39 crc kubenswrapper[4840]: I0311 09:19:39.463852 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 11 09:19:39 crc kubenswrapper[4840]: I0311 09:19:39.466750 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df6db5cc-aad1-4c53-a726-61432206dd4c-config-data" (OuterVolumeSpecName: "config-data") pod "df6db5cc-aad1-4c53-a726-61432206dd4c" (UID: "df6db5cc-aad1-4c53-a726-61432206dd4c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:19:39 crc kubenswrapper[4840]: I0311 09:19:39.469598 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df6db5cc-aad1-4c53-a726-61432206dd4c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "df6db5cc-aad1-4c53-a726-61432206dd4c" (UID: "df6db5cc-aad1-4c53-a726-61432206dd4c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:19:39 crc kubenswrapper[4840]: I0311 09:19:39.522594 4840 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df6db5cc-aad1-4c53-a726-61432206dd4c-config-data\") on node \"crc\" DevicePath \"\"" Mar 11 09:19:39 crc kubenswrapper[4840]: I0311 09:19:39.522632 4840 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df6db5cc-aad1-4c53-a726-61432206dd4c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 09:19:39 crc kubenswrapper[4840]: I0311 09:19:39.522645 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hwrrs\" (UniqueName: \"kubernetes.io/projected/df6db5cc-aad1-4c53-a726-61432206dd4c-kube-api-access-hwrrs\") on node \"crc\" DevicePath \"\"" Mar 11 09:19:39 crc kubenswrapper[4840]: I0311 09:19:39.522656 4840 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df6db5cc-aad1-4c53-a726-61432206dd4c-scripts\") on node \"crc\" DevicePath \"\"" Mar 11 09:19:39 crc kubenswrapper[4840]: I0311 09:19:39.552165 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7b8fcc65cc-psbl7" Mar 11 09:19:39 crc kubenswrapper[4840]: I0311 09:19:39.623981 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4ab6bbb2-998d-4f33-831b-71f0e06d1b8b-dns-svc\") pod \"4ab6bbb2-998d-4f33-831b-71f0e06d1b8b\" (UID: \"4ab6bbb2-998d-4f33-831b-71f0e06d1b8b\") " Mar 11 09:19:39 crc kubenswrapper[4840]: I0311 09:19:39.624046 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ab6bbb2-998d-4f33-831b-71f0e06d1b8b-config\") pod \"4ab6bbb2-998d-4f33-831b-71f0e06d1b8b\" (UID: \"4ab6bbb2-998d-4f33-831b-71f0e06d1b8b\") " Mar 11 09:19:39 crc kubenswrapper[4840]: I0311 09:19:39.624122 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b294z\" (UniqueName: \"kubernetes.io/projected/4ab6bbb2-998d-4f33-831b-71f0e06d1b8b-kube-api-access-b294z\") pod \"4ab6bbb2-998d-4f33-831b-71f0e06d1b8b\" (UID: \"4ab6bbb2-998d-4f33-831b-71f0e06d1b8b\") " Mar 11 09:19:39 crc kubenswrapper[4840]: I0311 09:19:39.624216 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4ab6bbb2-998d-4f33-831b-71f0e06d1b8b-ovsdbserver-nb\") pod \"4ab6bbb2-998d-4f33-831b-71f0e06d1b8b\" (UID: \"4ab6bbb2-998d-4f33-831b-71f0e06d1b8b\") " Mar 11 09:19:39 crc kubenswrapper[4840]: I0311 09:19:39.624258 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4ab6bbb2-998d-4f33-831b-71f0e06d1b8b-ovsdbserver-sb\") pod \"4ab6bbb2-998d-4f33-831b-71f0e06d1b8b\" (UID: \"4ab6bbb2-998d-4f33-831b-71f0e06d1b8b\") " Mar 11 09:19:39 crc kubenswrapper[4840]: I0311 09:19:39.624345 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" 
(UniqueName: \"kubernetes.io/configmap/4ab6bbb2-998d-4f33-831b-71f0e06d1b8b-dns-swift-storage-0\") pod \"4ab6bbb2-998d-4f33-831b-71f0e06d1b8b\" (UID: \"4ab6bbb2-998d-4f33-831b-71f0e06d1b8b\") " Mar 11 09:19:39 crc kubenswrapper[4840]: I0311 09:19:39.636124 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ab6bbb2-998d-4f33-831b-71f0e06d1b8b-kube-api-access-b294z" (OuterVolumeSpecName: "kube-api-access-b294z") pod "4ab6bbb2-998d-4f33-831b-71f0e06d1b8b" (UID: "4ab6bbb2-998d-4f33-831b-71f0e06d1b8b"). InnerVolumeSpecName "kube-api-access-b294z". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:19:39 crc kubenswrapper[4840]: I0311 09:19:39.701836 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ab6bbb2-998d-4f33-831b-71f0e06d1b8b-config" (OuterVolumeSpecName: "config") pod "4ab6bbb2-998d-4f33-831b-71f0e06d1b8b" (UID: "4ab6bbb2-998d-4f33-831b-71f0e06d1b8b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:19:39 crc kubenswrapper[4840]: I0311 09:19:39.730450 4840 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ab6bbb2-998d-4f33-831b-71f0e06d1b8b-config\") on node \"crc\" DevicePath \"\"" Mar 11 09:19:39 crc kubenswrapper[4840]: I0311 09:19:39.730517 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b294z\" (UniqueName: \"kubernetes.io/projected/4ab6bbb2-998d-4f33-831b-71f0e06d1b8b-kube-api-access-b294z\") on node \"crc\" DevicePath \"\"" Mar 11 09:19:39 crc kubenswrapper[4840]: I0311 09:19:39.737048 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ab6bbb2-998d-4f33-831b-71f0e06d1b8b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4ab6bbb2-998d-4f33-831b-71f0e06d1b8b" (UID: "4ab6bbb2-998d-4f33-831b-71f0e06d1b8b"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:19:39 crc kubenswrapper[4840]: I0311 09:19:39.737676 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ab6bbb2-998d-4f33-831b-71f0e06d1b8b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4ab6bbb2-998d-4f33-831b-71f0e06d1b8b" (UID: "4ab6bbb2-998d-4f33-831b-71f0e06d1b8b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:19:39 crc kubenswrapper[4840]: I0311 09:19:39.742200 4840 generic.go:334] "Generic (PLEG): container finished" podID="4ab6bbb2-998d-4f33-831b-71f0e06d1b8b" containerID="05a6df8771e5d1d9bd2b29e2e53f4bf1d24dbb28545aad896202cfeadea22534" exitCode=0 Mar 11 09:19:39 crc kubenswrapper[4840]: I0311 09:19:39.742284 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b8fcc65cc-psbl7" event={"ID":"4ab6bbb2-998d-4f33-831b-71f0e06d1b8b","Type":"ContainerDied","Data":"05a6df8771e5d1d9bd2b29e2e53f4bf1d24dbb28545aad896202cfeadea22534"} Mar 11 09:19:39 crc kubenswrapper[4840]: I0311 09:19:39.742328 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b8fcc65cc-psbl7" event={"ID":"4ab6bbb2-998d-4f33-831b-71f0e06d1b8b","Type":"ContainerDied","Data":"93a36fe2e07b42131c9be4238e93f248d40c69f158419ec3f3bdbf3e2e3b2e83"} Mar 11 09:19:39 crc kubenswrapper[4840]: I0311 09:19:39.742355 4840 scope.go:117] "RemoveContainer" containerID="05a6df8771e5d1d9bd2b29e2e53f4bf1d24dbb28545aad896202cfeadea22534" Mar 11 09:19:39 crc kubenswrapper[4840]: I0311 09:19:39.742597 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7b8fcc65cc-psbl7" Mar 11 09:19:39 crc kubenswrapper[4840]: I0311 09:19:39.744346 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ab6bbb2-998d-4f33-831b-71f0e06d1b8b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4ab6bbb2-998d-4f33-831b-71f0e06d1b8b" (UID: "4ab6bbb2-998d-4f33-831b-71f0e06d1b8b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:19:39 crc kubenswrapper[4840]: I0311 09:19:39.747928 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-h7w9f" event={"ID":"df6db5cc-aad1-4c53-a726-61432206dd4c","Type":"ContainerDied","Data":"d850458f24dcb201a6be7037a1d4b792dbfd604094c6a083356d874ae0ab8c66"} Mar 11 09:19:39 crc kubenswrapper[4840]: I0311 09:19:39.747990 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d850458f24dcb201a6be7037a1d4b792dbfd604094c6a083356d874ae0ab8c66" Mar 11 09:19:39 crc kubenswrapper[4840]: I0311 09:19:39.748046 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-h7w9f" Mar 11 09:19:39 crc kubenswrapper[4840]: I0311 09:19:39.754237 4840 generic.go:334] "Generic (PLEG): container finished" podID="37f195c4-316b-428e-9213-ee66b1fcfd9f" containerID="48f647feaba3271860d14aae492462ec4fa6f0b34e03e667ec047f9b3e726377" exitCode=0 Mar 11 09:19:39 crc kubenswrapper[4840]: I0311 09:19:39.754671 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-w9dg6" event={"ID":"37f195c4-316b-428e-9213-ee66b1fcfd9f","Type":"ContainerDied","Data":"48f647feaba3271860d14aae492462ec4fa6f0b34e03e667ec047f9b3e726377"} Mar 11 09:19:39 crc kubenswrapper[4840]: I0311 09:19:39.759385 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ab6bbb2-998d-4f33-831b-71f0e06d1b8b-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "4ab6bbb2-998d-4f33-831b-71f0e06d1b8b" (UID: "4ab6bbb2-998d-4f33-831b-71f0e06d1b8b"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:19:39 crc kubenswrapper[4840]: I0311 09:19:39.785022 4840 scope.go:117] "RemoveContainer" containerID="24f634bed346f88226d4efdf7e8364f489e5b583386d6617f0c0933893d176c6" Mar 11 09:19:39 crc kubenswrapper[4840]: I0311 09:19:39.827728 4840 scope.go:117] "RemoveContainer" containerID="05a6df8771e5d1d9bd2b29e2e53f4bf1d24dbb28545aad896202cfeadea22534" Mar 11 09:19:39 crc kubenswrapper[4840]: E0311 09:19:39.831634 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"05a6df8771e5d1d9bd2b29e2e53f4bf1d24dbb28545aad896202cfeadea22534\": container with ID starting with 05a6df8771e5d1d9bd2b29e2e53f4bf1d24dbb28545aad896202cfeadea22534 not found: ID does not exist" containerID="05a6df8771e5d1d9bd2b29e2e53f4bf1d24dbb28545aad896202cfeadea22534" Mar 11 09:19:39 crc kubenswrapper[4840]: I0311 09:19:39.831695 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05a6df8771e5d1d9bd2b29e2e53f4bf1d24dbb28545aad896202cfeadea22534"} err="failed to get container status \"05a6df8771e5d1d9bd2b29e2e53f4bf1d24dbb28545aad896202cfeadea22534\": rpc error: code = NotFound desc = could not find container \"05a6df8771e5d1d9bd2b29e2e53f4bf1d24dbb28545aad896202cfeadea22534\": container with ID starting with 05a6df8771e5d1d9bd2b29e2e53f4bf1d24dbb28545aad896202cfeadea22534 not found: ID does not exist" Mar 11 09:19:39 crc kubenswrapper[4840]: I0311 09:19:39.831734 4840 scope.go:117] "RemoveContainer" containerID="24f634bed346f88226d4efdf7e8364f489e5b583386d6617f0c0933893d176c6" Mar 11 09:19:39 crc kubenswrapper[4840]: E0311 09:19:39.834167 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24f634bed346f88226d4efdf7e8364f489e5b583386d6617f0c0933893d176c6\": container with ID starting with 
24f634bed346f88226d4efdf7e8364f489e5b583386d6617f0c0933893d176c6 not found: ID does not exist" containerID="24f634bed346f88226d4efdf7e8364f489e5b583386d6617f0c0933893d176c6" Mar 11 09:19:39 crc kubenswrapper[4840]: I0311 09:19:39.834226 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24f634bed346f88226d4efdf7e8364f489e5b583386d6617f0c0933893d176c6"} err="failed to get container status \"24f634bed346f88226d4efdf7e8364f489e5b583386d6617f0c0933893d176c6\": rpc error: code = NotFound desc = could not find container \"24f634bed346f88226d4efdf7e8364f489e5b583386d6617f0c0933893d176c6\": container with ID starting with 24f634bed346f88226d4efdf7e8364f489e5b583386d6617f0c0933893d176c6 not found: ID does not exist" Mar 11 09:19:39 crc kubenswrapper[4840]: I0311 09:19:39.835780 4840 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4ab6bbb2-998d-4f33-831b-71f0e06d1b8b-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 11 09:19:39 crc kubenswrapper[4840]: I0311 09:19:39.835821 4840 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4ab6bbb2-998d-4f33-831b-71f0e06d1b8b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 11 09:19:39 crc kubenswrapper[4840]: I0311 09:19:39.835836 4840 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4ab6bbb2-998d-4f33-831b-71f0e06d1b8b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 11 09:19:39 crc kubenswrapper[4840]: I0311 09:19:39.835850 4840 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4ab6bbb2-998d-4f33-831b-71f0e06d1b8b-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 11 09:19:39 crc kubenswrapper[4840]: I0311 09:19:39.924289 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 11 09:19:39 
crc kubenswrapper[4840]: I0311 09:19:39.928208 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="fc5984d0-a2c5-483e-90f4-f8c38056ff43" containerName="nova-api-log" containerID="cri-o://027c2c1bb24efcf0502330b87a7ec03d7e20466b13d4bc837283e0ce5988dd62" gracePeriod=30 Mar 11 09:19:39 crc kubenswrapper[4840]: I0311 09:19:39.928551 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="fc5984d0-a2c5-483e-90f4-f8c38056ff43" containerName="nova-api-api" containerID="cri-o://360c1334dca539b14a76da6b532e85612504bb222d990668c6428cca26dd2bf9" gracePeriod=30 Mar 11 09:19:40 crc kubenswrapper[4840]: I0311 09:19:40.024622 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Mar 11 09:19:40 crc kubenswrapper[4840]: I0311 09:19:40.081168 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8346e561-a5cc-4bf9-807e-8837c8e13007" path="/var/lib/kubelet/pods/8346e561-a5cc-4bf9-807e-8837c8e13007/volumes" Mar 11 09:19:40 crc kubenswrapper[4840]: I0311 09:19:40.087527 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7b8fcc65cc-psbl7"] Mar 11 09:19:40 crc kubenswrapper[4840]: I0311 09:19:40.102121 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7b8fcc65cc-psbl7"] Mar 11 09:19:40 crc kubenswrapper[4840]: I0311 09:19:40.134519 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 11 09:19:40 crc kubenswrapper[4840]: W0311 09:19:40.138116 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcfc57f3e_df7f_40fe_9cc0_7ad00ecd651e.slice/crio-f6e9e04a8117c3de3c5a7248e19d2b7294656b7c0aba9f170b488aa9fb69a671 WatchSource:0}: Error finding container f6e9e04a8117c3de3c5a7248e19d2b7294656b7c0aba9f170b488aa9fb69a671: Status 404 returned error 
can't find the container with id f6e9e04a8117c3de3c5a7248e19d2b7294656b7c0aba9f170b488aa9fb69a671 Mar 11 09:19:40 crc kubenswrapper[4840]: I0311 09:19:40.587496 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 11 09:19:40 crc kubenswrapper[4840]: I0311 09:19:40.588226 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2c0414f2-89d1-45c6-9433-1884e7731f0f" containerName="ceilometer-central-agent" containerID="cri-o://2711368a873dfbb446e6a30c19cc0c3a51eaaaf1944dfe97f957e417bc24b2ff" gracePeriod=30 Mar 11 09:19:40 crc kubenswrapper[4840]: I0311 09:19:40.588331 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2c0414f2-89d1-45c6-9433-1884e7731f0f" containerName="proxy-httpd" containerID="cri-o://078e9e0ef447488a172f16cddf6afe2c56699b647d61e66400c4a8c71dfcf6f9" gracePeriod=30 Mar 11 09:19:40 crc kubenswrapper[4840]: I0311 09:19:40.588413 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2c0414f2-89d1-45c6-9433-1884e7731f0f" containerName="ceilometer-notification-agent" containerID="cri-o://117f830182e1425cb49612e4d5b3f9366a6b165b6a9890b32850069570416962" gracePeriod=30 Mar 11 09:19:40 crc kubenswrapper[4840]: I0311 09:19:40.588329 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2c0414f2-89d1-45c6-9433-1884e7731f0f" containerName="sg-core" containerID="cri-o://f6fd46eb789bcc5e512b6dbcda0c655c0fa38e65eba6ea1a8a3aba8c5a679a48" gracePeriod=30 Mar 11 09:19:40 crc kubenswrapper[4840]: I0311 09:19:40.775802 4840 generic.go:334] "Generic (PLEG): container finished" podID="2c0414f2-89d1-45c6-9433-1884e7731f0f" containerID="f6fd46eb789bcc5e512b6dbcda0c655c0fa38e65eba6ea1a8a3aba8c5a679a48" exitCode=2 Mar 11 09:19:40 crc kubenswrapper[4840]: I0311 09:19:40.775875 4840 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2c0414f2-89d1-45c6-9433-1884e7731f0f","Type":"ContainerDied","Data":"f6fd46eb789bcc5e512b6dbcda0c655c0fa38e65eba6ea1a8a3aba8c5a679a48"} Mar 11 09:19:40 crc kubenswrapper[4840]: I0311 09:19:40.776997 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"cfc57f3e-df7f-40fe-9cc0-7ad00ecd651e","Type":"ContainerStarted","Data":"f6e9e04a8117c3de3c5a7248e19d2b7294656b7c0aba9f170b488aa9fb69a671"} Mar 11 09:19:40 crc kubenswrapper[4840]: I0311 09:19:40.785426 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fc5984d0-a2c5-483e-90f4-f8c38056ff43","Type":"ContainerDied","Data":"027c2c1bb24efcf0502330b87a7ec03d7e20466b13d4bc837283e0ce5988dd62"} Mar 11 09:19:40 crc kubenswrapper[4840]: I0311 09:19:40.784352 4840 generic.go:334] "Generic (PLEG): container finished" podID="fc5984d0-a2c5-483e-90f4-f8c38056ff43" containerID="027c2c1bb24efcf0502330b87a7ec03d7e20466b13d4bc837283e0ce5988dd62" exitCode=143 Mar 11 09:19:41 crc kubenswrapper[4840]: I0311 09:19:41.090984 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-w9dg6" Mar 11 09:19:41 crc kubenswrapper[4840]: I0311 09:19:41.164652 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37f195c4-316b-428e-9213-ee66b1fcfd9f-config-data\") pod \"37f195c4-316b-428e-9213-ee66b1fcfd9f\" (UID: \"37f195c4-316b-428e-9213-ee66b1fcfd9f\") " Mar 11 09:19:41 crc kubenswrapper[4840]: I0311 09:19:41.164954 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/37f195c4-316b-428e-9213-ee66b1fcfd9f-scripts\") pod \"37f195c4-316b-428e-9213-ee66b1fcfd9f\" (UID: \"37f195c4-316b-428e-9213-ee66b1fcfd9f\") " Mar 11 09:19:41 crc kubenswrapper[4840]: I0311 09:19:41.165031 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s6c6r\" (UniqueName: \"kubernetes.io/projected/37f195c4-316b-428e-9213-ee66b1fcfd9f-kube-api-access-s6c6r\") pod \"37f195c4-316b-428e-9213-ee66b1fcfd9f\" (UID: \"37f195c4-316b-428e-9213-ee66b1fcfd9f\") " Mar 11 09:19:41 crc kubenswrapper[4840]: I0311 09:19:41.165074 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37f195c4-316b-428e-9213-ee66b1fcfd9f-combined-ca-bundle\") pod \"37f195c4-316b-428e-9213-ee66b1fcfd9f\" (UID: \"37f195c4-316b-428e-9213-ee66b1fcfd9f\") " Mar 11 09:19:41 crc kubenswrapper[4840]: I0311 09:19:41.174487 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37f195c4-316b-428e-9213-ee66b1fcfd9f-kube-api-access-s6c6r" (OuterVolumeSpecName: "kube-api-access-s6c6r") pod "37f195c4-316b-428e-9213-ee66b1fcfd9f" (UID: "37f195c4-316b-428e-9213-ee66b1fcfd9f"). InnerVolumeSpecName "kube-api-access-s6c6r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:19:41 crc kubenswrapper[4840]: I0311 09:19:41.174740 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37f195c4-316b-428e-9213-ee66b1fcfd9f-scripts" (OuterVolumeSpecName: "scripts") pod "37f195c4-316b-428e-9213-ee66b1fcfd9f" (UID: "37f195c4-316b-428e-9213-ee66b1fcfd9f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:19:41 crc kubenswrapper[4840]: I0311 09:19:41.202773 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37f195c4-316b-428e-9213-ee66b1fcfd9f-config-data" (OuterVolumeSpecName: "config-data") pod "37f195c4-316b-428e-9213-ee66b1fcfd9f" (UID: "37f195c4-316b-428e-9213-ee66b1fcfd9f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:19:41 crc kubenswrapper[4840]: I0311 09:19:41.216417 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37f195c4-316b-428e-9213-ee66b1fcfd9f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "37f195c4-316b-428e-9213-ee66b1fcfd9f" (UID: "37f195c4-316b-428e-9213-ee66b1fcfd9f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:19:41 crc kubenswrapper[4840]: I0311 09:19:41.270019 4840 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37f195c4-316b-428e-9213-ee66b1fcfd9f-config-data\") on node \"crc\" DevicePath \"\"" Mar 11 09:19:41 crc kubenswrapper[4840]: I0311 09:19:41.270072 4840 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/37f195c4-316b-428e-9213-ee66b1fcfd9f-scripts\") on node \"crc\" DevicePath \"\"" Mar 11 09:19:41 crc kubenswrapper[4840]: I0311 09:19:41.270086 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s6c6r\" (UniqueName: \"kubernetes.io/projected/37f195c4-316b-428e-9213-ee66b1fcfd9f-kube-api-access-s6c6r\") on node \"crc\" DevicePath \"\"" Mar 11 09:19:41 crc kubenswrapper[4840]: I0311 09:19:41.270103 4840 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37f195c4-316b-428e-9213-ee66b1fcfd9f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 09:19:41 crc kubenswrapper[4840]: I0311 09:19:41.799992 4840 generic.go:334] "Generic (PLEG): container finished" podID="2c0414f2-89d1-45c6-9433-1884e7731f0f" containerID="078e9e0ef447488a172f16cddf6afe2c56699b647d61e66400c4a8c71dfcf6f9" exitCode=0 Mar 11 09:19:41 crc kubenswrapper[4840]: I0311 09:19:41.800502 4840 generic.go:334] "Generic (PLEG): container finished" podID="2c0414f2-89d1-45c6-9433-1884e7731f0f" containerID="2711368a873dfbb446e6a30c19cc0c3a51eaaaf1944dfe97f957e417bc24b2ff" exitCode=0 Mar 11 09:19:41 crc kubenswrapper[4840]: I0311 09:19:41.800095 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2c0414f2-89d1-45c6-9433-1884e7731f0f","Type":"ContainerDied","Data":"078e9e0ef447488a172f16cddf6afe2c56699b647d61e66400c4a8c71dfcf6f9"} Mar 11 09:19:41 crc kubenswrapper[4840]: I0311 09:19:41.800616 4840 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2c0414f2-89d1-45c6-9433-1884e7731f0f","Type":"ContainerDied","Data":"2711368a873dfbb446e6a30c19cc0c3a51eaaaf1944dfe97f957e417bc24b2ff"} Mar 11 09:19:41 crc kubenswrapper[4840]: I0311 09:19:41.802426 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"cfc57f3e-df7f-40fe-9cc0-7ad00ecd651e","Type":"ContainerStarted","Data":"6da9e415bb6fb1fafd6a20eb3f85c8ad0216612c5870621974d7888d3f87aa59"} Mar 11 09:19:41 crc kubenswrapper[4840]: I0311 09:19:41.803005 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Mar 11 09:19:41 crc kubenswrapper[4840]: I0311 09:19:41.805220 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-w9dg6" event={"ID":"37f195c4-316b-428e-9213-ee66b1fcfd9f","Type":"ContainerDied","Data":"2a43bde1f431f47d2f59d56d4162b79f2ad3a20c1b5161851c1081f90b1f9ff8"} Mar 11 09:19:41 crc kubenswrapper[4840]: I0311 09:19:41.805260 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-w9dg6" Mar 11 09:19:41 crc kubenswrapper[4840]: I0311 09:19:41.805288 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a43bde1f431f47d2f59d56d4162b79f2ad3a20c1b5161851c1081f90b1f9ff8" Mar 11 09:19:41 crc kubenswrapper[4840]: I0311 09:19:41.805348 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="c2c9333d-b833-4abe-972c-6263a38f1a97" containerName="nova-scheduler-scheduler" containerID="cri-o://808b0dac42783831dcf51f322b9be907700ea65285b996ebeb9820b02a8e205e" gracePeriod=30 Mar 11 09:19:41 crc kubenswrapper[4840]: I0311 09:19:41.848421 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.392491138 podStartE2EDuration="2.848397369s" podCreationTimestamp="2026-03-11 09:19:39 +0000 UTC" firstStartedPulling="2026-03-11 09:19:40.141924858 +0000 UTC m=+1378.807594673" lastFinishedPulling="2026-03-11 09:19:40.597831089 +0000 UTC m=+1379.263500904" observedRunningTime="2026-03-11 09:19:41.830660453 +0000 UTC m=+1380.496330268" watchObservedRunningTime="2026-03-11 09:19:41.848397369 +0000 UTC m=+1380.514067174" Mar 11 09:19:41 crc kubenswrapper[4840]: I0311 09:19:41.872823 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Mar 11 09:19:41 crc kubenswrapper[4840]: E0311 09:19:41.873422 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ab6bbb2-998d-4f33-831b-71f0e06d1b8b" containerName="dnsmasq-dns" Mar 11 09:19:41 crc kubenswrapper[4840]: I0311 09:19:41.873442 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ab6bbb2-998d-4f33-831b-71f0e06d1b8b" containerName="dnsmasq-dns" Mar 11 09:19:41 crc kubenswrapper[4840]: E0311 09:19:41.873459 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ab6bbb2-998d-4f33-831b-71f0e06d1b8b" 
containerName="init" Mar 11 09:19:41 crc kubenswrapper[4840]: I0311 09:19:41.873503 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ab6bbb2-998d-4f33-831b-71f0e06d1b8b" containerName="init" Mar 11 09:19:41 crc kubenswrapper[4840]: E0311 09:19:41.873522 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df6db5cc-aad1-4c53-a726-61432206dd4c" containerName="nova-manage" Mar 11 09:19:41 crc kubenswrapper[4840]: I0311 09:19:41.873532 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="df6db5cc-aad1-4c53-a726-61432206dd4c" containerName="nova-manage" Mar 11 09:19:41 crc kubenswrapper[4840]: E0311 09:19:41.873557 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37f195c4-316b-428e-9213-ee66b1fcfd9f" containerName="nova-cell1-conductor-db-sync" Mar 11 09:19:41 crc kubenswrapper[4840]: I0311 09:19:41.873565 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="37f195c4-316b-428e-9213-ee66b1fcfd9f" containerName="nova-cell1-conductor-db-sync" Mar 11 09:19:41 crc kubenswrapper[4840]: I0311 09:19:41.873812 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="df6db5cc-aad1-4c53-a726-61432206dd4c" containerName="nova-manage" Mar 11 09:19:41 crc kubenswrapper[4840]: I0311 09:19:41.873841 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ab6bbb2-998d-4f33-831b-71f0e06d1b8b" containerName="dnsmasq-dns" Mar 11 09:19:41 crc kubenswrapper[4840]: I0311 09:19:41.873851 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="37f195c4-316b-428e-9213-ee66b1fcfd9f" containerName="nova-cell1-conductor-db-sync" Mar 11 09:19:41 crc kubenswrapper[4840]: I0311 09:19:41.874717 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Mar 11 09:19:41 crc kubenswrapper[4840]: I0311 09:19:41.879144 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Mar 11 09:19:41 crc kubenswrapper[4840]: I0311 09:19:41.930619 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Mar 11 09:19:41 crc kubenswrapper[4840]: I0311 09:19:41.987576 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9699b913-55db-46fe-9831-1e1ac94ca609-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"9699b913-55db-46fe-9831-1e1ac94ca609\") " pod="openstack/nova-cell1-conductor-0" Mar 11 09:19:41 crc kubenswrapper[4840]: I0311 09:19:41.990612 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9699b913-55db-46fe-9831-1e1ac94ca609-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"9699b913-55db-46fe-9831-1e1ac94ca609\") " pod="openstack/nova-cell1-conductor-0" Mar 11 09:19:41 crc kubenswrapper[4840]: I0311 09:19:41.990828 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tf8hr\" (UniqueName: \"kubernetes.io/projected/9699b913-55db-46fe-9831-1e1ac94ca609-kube-api-access-tf8hr\") pod \"nova-cell1-conductor-0\" (UID: \"9699b913-55db-46fe-9831-1e1ac94ca609\") " pod="openstack/nova-cell1-conductor-0" Mar 11 09:19:42 crc kubenswrapper[4840]: I0311 09:19:42.084754 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ab6bbb2-998d-4f33-831b-71f0e06d1b8b" path="/var/lib/kubelet/pods/4ab6bbb2-998d-4f33-831b-71f0e06d1b8b/volumes" Mar 11 09:19:42 crc kubenswrapper[4840]: I0311 09:19:42.092815 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/9699b913-55db-46fe-9831-1e1ac94ca609-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"9699b913-55db-46fe-9831-1e1ac94ca609\") " pod="openstack/nova-cell1-conductor-0" Mar 11 09:19:42 crc kubenswrapper[4840]: I0311 09:19:42.092907 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tf8hr\" (UniqueName: \"kubernetes.io/projected/9699b913-55db-46fe-9831-1e1ac94ca609-kube-api-access-tf8hr\") pod \"nova-cell1-conductor-0\" (UID: \"9699b913-55db-46fe-9831-1e1ac94ca609\") " pod="openstack/nova-cell1-conductor-0" Mar 11 09:19:42 crc kubenswrapper[4840]: I0311 09:19:42.093000 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9699b913-55db-46fe-9831-1e1ac94ca609-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"9699b913-55db-46fe-9831-1e1ac94ca609\") " pod="openstack/nova-cell1-conductor-0" Mar 11 09:19:42 crc kubenswrapper[4840]: I0311 09:19:42.095119 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Mar 11 09:19:42 crc kubenswrapper[4840]: I0311 09:19:42.101192 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9699b913-55db-46fe-9831-1e1ac94ca609-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"9699b913-55db-46fe-9831-1e1ac94ca609\") " pod="openstack/nova-cell1-conductor-0" Mar 11 09:19:42 crc kubenswrapper[4840]: I0311 09:19:42.112402 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9699b913-55db-46fe-9831-1e1ac94ca609-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"9699b913-55db-46fe-9831-1e1ac94ca609\") " pod="openstack/nova-cell1-conductor-0" Mar 11 09:19:42 crc kubenswrapper[4840]: I0311 09:19:42.116044 4840 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tf8hr\" (UniqueName: \"kubernetes.io/projected/9699b913-55db-46fe-9831-1e1ac94ca609-kube-api-access-tf8hr\") pod \"nova-cell1-conductor-0\" (UID: \"9699b913-55db-46fe-9831-1e1ac94ca609\") " pod="openstack/nova-cell1-conductor-0" Mar 11 09:19:42 crc kubenswrapper[4840]: I0311 09:19:42.207764 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Mar 11 09:19:42 crc kubenswrapper[4840]: I0311 09:19:42.679262 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Mar 11 09:19:42 crc kubenswrapper[4840]: I0311 09:19:42.815597 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"9699b913-55db-46fe-9831-1e1ac94ca609","Type":"ContainerStarted","Data":"41b38ee46e15d1fb3fb180f35855c895589415561d01f113d63fb75a37737b9d"} Mar 11 09:19:43 crc kubenswrapper[4840]: E0311 09:19:43.256108 4840 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="808b0dac42783831dcf51f322b9be907700ea65285b996ebeb9820b02a8e205e" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 11 09:19:43 crc kubenswrapper[4840]: E0311 09:19:43.260623 4840 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="808b0dac42783831dcf51f322b9be907700ea65285b996ebeb9820b02a8e205e" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 11 09:19:43 crc kubenswrapper[4840]: E0311 09:19:43.262648 4840 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit 
code -1" containerID="808b0dac42783831dcf51f322b9be907700ea65285b996ebeb9820b02a8e205e" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 11 09:19:43 crc kubenswrapper[4840]: E0311 09:19:43.262680 4840 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="c2c9333d-b833-4abe-972c-6263a38f1a97" containerName="nova-scheduler-scheduler" Mar 11 09:19:43 crc kubenswrapper[4840]: I0311 09:19:43.562207 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 11 09:19:43 crc kubenswrapper[4840]: I0311 09:19:43.760005 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c0414f2-89d1-45c6-9433-1884e7731f0f-config-data\") pod \"2c0414f2-89d1-45c6-9433-1884e7731f0f\" (UID: \"2c0414f2-89d1-45c6-9433-1884e7731f0f\") " Mar 11 09:19:43 crc kubenswrapper[4840]: I0311 09:19:43.760184 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zz9v7\" (UniqueName: \"kubernetes.io/projected/2c0414f2-89d1-45c6-9433-1884e7731f0f-kube-api-access-zz9v7\") pod \"2c0414f2-89d1-45c6-9433-1884e7731f0f\" (UID: \"2c0414f2-89d1-45c6-9433-1884e7731f0f\") " Mar 11 09:19:43 crc kubenswrapper[4840]: I0311 09:19:43.760348 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c0414f2-89d1-45c6-9433-1884e7731f0f-scripts\") pod \"2c0414f2-89d1-45c6-9433-1884e7731f0f\" (UID: \"2c0414f2-89d1-45c6-9433-1884e7731f0f\") " Mar 11 09:19:43 crc kubenswrapper[4840]: I0311 09:19:43.760454 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2c0414f2-89d1-45c6-9433-1884e7731f0f-log-httpd\") pod 
\"2c0414f2-89d1-45c6-9433-1884e7731f0f\" (UID: \"2c0414f2-89d1-45c6-9433-1884e7731f0f\") " Mar 11 09:19:43 crc kubenswrapper[4840]: I0311 09:19:43.760755 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2c0414f2-89d1-45c6-9433-1884e7731f0f-sg-core-conf-yaml\") pod \"2c0414f2-89d1-45c6-9433-1884e7731f0f\" (UID: \"2c0414f2-89d1-45c6-9433-1884e7731f0f\") " Mar 11 09:19:43 crc kubenswrapper[4840]: I0311 09:19:43.760865 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2c0414f2-89d1-45c6-9433-1884e7731f0f-run-httpd\") pod \"2c0414f2-89d1-45c6-9433-1884e7731f0f\" (UID: \"2c0414f2-89d1-45c6-9433-1884e7731f0f\") " Mar 11 09:19:43 crc kubenswrapper[4840]: I0311 09:19:43.760995 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c0414f2-89d1-45c6-9433-1884e7731f0f-combined-ca-bundle\") pod \"2c0414f2-89d1-45c6-9433-1884e7731f0f\" (UID: \"2c0414f2-89d1-45c6-9433-1884e7731f0f\") " Mar 11 09:19:43 crc kubenswrapper[4840]: I0311 09:19:43.761134 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2c0414f2-89d1-45c6-9433-1884e7731f0f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "2c0414f2-89d1-45c6-9433-1884e7731f0f" (UID: "2c0414f2-89d1-45c6-9433-1884e7731f0f"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:19:43 crc kubenswrapper[4840]: I0311 09:19:43.761193 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2c0414f2-89d1-45c6-9433-1884e7731f0f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "2c0414f2-89d1-45c6-9433-1884e7731f0f" (UID: "2c0414f2-89d1-45c6-9433-1884e7731f0f"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:19:43 crc kubenswrapper[4840]: I0311 09:19:43.761946 4840 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2c0414f2-89d1-45c6-9433-1884e7731f0f-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 11 09:19:43 crc kubenswrapper[4840]: I0311 09:19:43.761991 4840 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2c0414f2-89d1-45c6-9433-1884e7731f0f-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 11 09:19:43 crc kubenswrapper[4840]: I0311 09:19:43.767481 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c0414f2-89d1-45c6-9433-1884e7731f0f-scripts" (OuterVolumeSpecName: "scripts") pod "2c0414f2-89d1-45c6-9433-1884e7731f0f" (UID: "2c0414f2-89d1-45c6-9433-1884e7731f0f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:19:43 crc kubenswrapper[4840]: I0311 09:19:43.767947 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c0414f2-89d1-45c6-9433-1884e7731f0f-kube-api-access-zz9v7" (OuterVolumeSpecName: "kube-api-access-zz9v7") pod "2c0414f2-89d1-45c6-9433-1884e7731f0f" (UID: "2c0414f2-89d1-45c6-9433-1884e7731f0f"). InnerVolumeSpecName "kube-api-access-zz9v7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:19:43 crc kubenswrapper[4840]: I0311 09:19:43.805568 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c0414f2-89d1-45c6-9433-1884e7731f0f-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "2c0414f2-89d1-45c6-9433-1884e7731f0f" (UID: "2c0414f2-89d1-45c6-9433-1884e7731f0f"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:19:43 crc kubenswrapper[4840]: I0311 09:19:43.828227 4840 generic.go:334] "Generic (PLEG): container finished" podID="2c0414f2-89d1-45c6-9433-1884e7731f0f" containerID="117f830182e1425cb49612e4d5b3f9366a6b165b6a9890b32850069570416962" exitCode=0 Mar 11 09:19:43 crc kubenswrapper[4840]: I0311 09:19:43.828298 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 11 09:19:43 crc kubenswrapper[4840]: I0311 09:19:43.828313 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2c0414f2-89d1-45c6-9433-1884e7731f0f","Type":"ContainerDied","Data":"117f830182e1425cb49612e4d5b3f9366a6b165b6a9890b32850069570416962"} Mar 11 09:19:43 crc kubenswrapper[4840]: I0311 09:19:43.829185 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2c0414f2-89d1-45c6-9433-1884e7731f0f","Type":"ContainerDied","Data":"3f5ec335277290bde012c135a94c3fa75dbfd916548046419c62ce6311e61783"} Mar 11 09:19:43 crc kubenswrapper[4840]: I0311 09:19:43.829241 4840 scope.go:117] "RemoveContainer" containerID="078e9e0ef447488a172f16cddf6afe2c56699b647d61e66400c4a8c71dfcf6f9" Mar 11 09:19:43 crc kubenswrapper[4840]: I0311 09:19:43.830614 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"9699b913-55db-46fe-9831-1e1ac94ca609","Type":"ContainerStarted","Data":"cfec390aae2b33c2baf59838d2300ba819a1ecdc9f653835cc711873fa787a85"} Mar 11 09:19:43 crc kubenswrapper[4840]: I0311 09:19:43.830816 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Mar 11 09:19:43 crc kubenswrapper[4840]: I0311 09:19:43.859651 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.8596281 podStartE2EDuration="2.8596281s" 
podCreationTimestamp="2026-03-11 09:19:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:19:43.85207161 +0000 UTC m=+1382.517741425" watchObservedRunningTime="2026-03-11 09:19:43.8596281 +0000 UTC m=+1382.525297915" Mar 11 09:19:43 crc kubenswrapper[4840]: I0311 09:19:43.862834 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zz9v7\" (UniqueName: \"kubernetes.io/projected/2c0414f2-89d1-45c6-9433-1884e7731f0f-kube-api-access-zz9v7\") on node \"crc\" DevicePath \"\"" Mar 11 09:19:43 crc kubenswrapper[4840]: I0311 09:19:43.862858 4840 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2c0414f2-89d1-45c6-9433-1884e7731f0f-scripts\") on node \"crc\" DevicePath \"\"" Mar 11 09:19:43 crc kubenswrapper[4840]: I0311 09:19:43.862867 4840 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2c0414f2-89d1-45c6-9433-1884e7731f0f-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 11 09:19:43 crc kubenswrapper[4840]: I0311 09:19:43.880915 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c0414f2-89d1-45c6-9433-1884e7731f0f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2c0414f2-89d1-45c6-9433-1884e7731f0f" (UID: "2c0414f2-89d1-45c6-9433-1884e7731f0f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:19:43 crc kubenswrapper[4840]: I0311 09:19:43.889810 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c0414f2-89d1-45c6-9433-1884e7731f0f-config-data" (OuterVolumeSpecName: "config-data") pod "2c0414f2-89d1-45c6-9433-1884e7731f0f" (UID: "2c0414f2-89d1-45c6-9433-1884e7731f0f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:19:43 crc kubenswrapper[4840]: I0311 09:19:43.903640 4840 scope.go:117] "RemoveContainer" containerID="f6fd46eb789bcc5e512b6dbcda0c655c0fa38e65eba6ea1a8a3aba8c5a679a48" Mar 11 09:19:43 crc kubenswrapper[4840]: I0311 09:19:43.933804 4840 scope.go:117] "RemoveContainer" containerID="117f830182e1425cb49612e4d5b3f9366a6b165b6a9890b32850069570416962" Mar 11 09:19:43 crc kubenswrapper[4840]: I0311 09:19:43.960045 4840 scope.go:117] "RemoveContainer" containerID="2711368a873dfbb446e6a30c19cc0c3a51eaaaf1944dfe97f957e417bc24b2ff" Mar 11 09:19:43 crc kubenswrapper[4840]: I0311 09:19:43.966036 4840 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c0414f2-89d1-45c6-9433-1884e7731f0f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 09:19:43 crc kubenswrapper[4840]: I0311 09:19:43.966312 4840 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c0414f2-89d1-45c6-9433-1884e7731f0f-config-data\") on node \"crc\" DevicePath \"\"" Mar 11 09:19:43 crc kubenswrapper[4840]: I0311 09:19:43.990054 4840 scope.go:117] "RemoveContainer" containerID="078e9e0ef447488a172f16cddf6afe2c56699b647d61e66400c4a8c71dfcf6f9" Mar 11 09:19:43 crc kubenswrapper[4840]: E0311 09:19:43.990894 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"078e9e0ef447488a172f16cddf6afe2c56699b647d61e66400c4a8c71dfcf6f9\": container with ID starting with 078e9e0ef447488a172f16cddf6afe2c56699b647d61e66400c4a8c71dfcf6f9 not found: ID does not exist" containerID="078e9e0ef447488a172f16cddf6afe2c56699b647d61e66400c4a8c71dfcf6f9" Mar 11 09:19:43 crc kubenswrapper[4840]: I0311 09:19:43.990997 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"078e9e0ef447488a172f16cddf6afe2c56699b647d61e66400c4a8c71dfcf6f9"} 
err="failed to get container status \"078e9e0ef447488a172f16cddf6afe2c56699b647d61e66400c4a8c71dfcf6f9\": rpc error: code = NotFound desc = could not find container \"078e9e0ef447488a172f16cddf6afe2c56699b647d61e66400c4a8c71dfcf6f9\": container with ID starting with 078e9e0ef447488a172f16cddf6afe2c56699b647d61e66400c4a8c71dfcf6f9 not found: ID does not exist" Mar 11 09:19:43 crc kubenswrapper[4840]: I0311 09:19:43.991042 4840 scope.go:117] "RemoveContainer" containerID="f6fd46eb789bcc5e512b6dbcda0c655c0fa38e65eba6ea1a8a3aba8c5a679a48" Mar 11 09:19:43 crc kubenswrapper[4840]: E0311 09:19:43.991629 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6fd46eb789bcc5e512b6dbcda0c655c0fa38e65eba6ea1a8a3aba8c5a679a48\": container with ID starting with f6fd46eb789bcc5e512b6dbcda0c655c0fa38e65eba6ea1a8a3aba8c5a679a48 not found: ID does not exist" containerID="f6fd46eb789bcc5e512b6dbcda0c655c0fa38e65eba6ea1a8a3aba8c5a679a48" Mar 11 09:19:43 crc kubenswrapper[4840]: I0311 09:19:43.991653 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6fd46eb789bcc5e512b6dbcda0c655c0fa38e65eba6ea1a8a3aba8c5a679a48"} err="failed to get container status \"f6fd46eb789bcc5e512b6dbcda0c655c0fa38e65eba6ea1a8a3aba8c5a679a48\": rpc error: code = NotFound desc = could not find container \"f6fd46eb789bcc5e512b6dbcda0c655c0fa38e65eba6ea1a8a3aba8c5a679a48\": container with ID starting with f6fd46eb789bcc5e512b6dbcda0c655c0fa38e65eba6ea1a8a3aba8c5a679a48 not found: ID does not exist" Mar 11 09:19:43 crc kubenswrapper[4840]: I0311 09:19:43.991670 4840 scope.go:117] "RemoveContainer" containerID="117f830182e1425cb49612e4d5b3f9366a6b165b6a9890b32850069570416962" Mar 11 09:19:43 crc kubenswrapper[4840]: E0311 09:19:43.991967 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"117f830182e1425cb49612e4d5b3f9366a6b165b6a9890b32850069570416962\": container with ID starting with 117f830182e1425cb49612e4d5b3f9366a6b165b6a9890b32850069570416962 not found: ID does not exist" containerID="117f830182e1425cb49612e4d5b3f9366a6b165b6a9890b32850069570416962" Mar 11 09:19:43 crc kubenswrapper[4840]: I0311 09:19:43.992028 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"117f830182e1425cb49612e4d5b3f9366a6b165b6a9890b32850069570416962"} err="failed to get container status \"117f830182e1425cb49612e4d5b3f9366a6b165b6a9890b32850069570416962\": rpc error: code = NotFound desc = could not find container \"117f830182e1425cb49612e4d5b3f9366a6b165b6a9890b32850069570416962\": container with ID starting with 117f830182e1425cb49612e4d5b3f9366a6b165b6a9890b32850069570416962 not found: ID does not exist" Mar 11 09:19:43 crc kubenswrapper[4840]: I0311 09:19:43.992065 4840 scope.go:117] "RemoveContainer" containerID="2711368a873dfbb446e6a30c19cc0c3a51eaaaf1944dfe97f957e417bc24b2ff" Mar 11 09:19:43 crc kubenswrapper[4840]: E0311 09:19:43.992445 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2711368a873dfbb446e6a30c19cc0c3a51eaaaf1944dfe97f957e417bc24b2ff\": container with ID starting with 2711368a873dfbb446e6a30c19cc0c3a51eaaaf1944dfe97f957e417bc24b2ff not found: ID does not exist" containerID="2711368a873dfbb446e6a30c19cc0c3a51eaaaf1944dfe97f957e417bc24b2ff" Mar 11 09:19:43 crc kubenswrapper[4840]: I0311 09:19:43.992571 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2711368a873dfbb446e6a30c19cc0c3a51eaaaf1944dfe97f957e417bc24b2ff"} err="failed to get container status \"2711368a873dfbb446e6a30c19cc0c3a51eaaaf1944dfe97f957e417bc24b2ff\": rpc error: code = NotFound desc = could not find container \"2711368a873dfbb446e6a30c19cc0c3a51eaaaf1944dfe97f957e417bc24b2ff\": container with ID 
starting with 2711368a873dfbb446e6a30c19cc0c3a51eaaaf1944dfe97f957e417bc24b2ff not found: ID does not exist" Mar 11 09:19:44 crc kubenswrapper[4840]: I0311 09:19:44.160778 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 11 09:19:44 crc kubenswrapper[4840]: I0311 09:19:44.176814 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 11 09:19:44 crc kubenswrapper[4840]: I0311 09:19:44.196656 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 11 09:19:44 crc kubenswrapper[4840]: E0311 09:19:44.197211 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c0414f2-89d1-45c6-9433-1884e7731f0f" containerName="ceilometer-notification-agent" Mar 11 09:19:44 crc kubenswrapper[4840]: I0311 09:19:44.197241 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c0414f2-89d1-45c6-9433-1884e7731f0f" containerName="ceilometer-notification-agent" Mar 11 09:19:44 crc kubenswrapper[4840]: E0311 09:19:44.197257 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c0414f2-89d1-45c6-9433-1884e7731f0f" containerName="sg-core" Mar 11 09:19:44 crc kubenswrapper[4840]: I0311 09:19:44.197267 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c0414f2-89d1-45c6-9433-1884e7731f0f" containerName="sg-core" Mar 11 09:19:44 crc kubenswrapper[4840]: E0311 09:19:44.197286 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c0414f2-89d1-45c6-9433-1884e7731f0f" containerName="proxy-httpd" Mar 11 09:19:44 crc kubenswrapper[4840]: I0311 09:19:44.197294 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c0414f2-89d1-45c6-9433-1884e7731f0f" containerName="proxy-httpd" Mar 11 09:19:44 crc kubenswrapper[4840]: E0311 09:19:44.197338 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c0414f2-89d1-45c6-9433-1884e7731f0f" containerName="ceilometer-central-agent" Mar 11 09:19:44 crc kubenswrapper[4840]: 
I0311 09:19:44.197347 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c0414f2-89d1-45c6-9433-1884e7731f0f" containerName="ceilometer-central-agent" Mar 11 09:19:44 crc kubenswrapper[4840]: I0311 09:19:44.197623 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c0414f2-89d1-45c6-9433-1884e7731f0f" containerName="ceilometer-notification-agent" Mar 11 09:19:44 crc kubenswrapper[4840]: I0311 09:19:44.197655 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c0414f2-89d1-45c6-9433-1884e7731f0f" containerName="sg-core" Mar 11 09:19:44 crc kubenswrapper[4840]: I0311 09:19:44.197672 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c0414f2-89d1-45c6-9433-1884e7731f0f" containerName="ceilometer-central-agent" Mar 11 09:19:44 crc kubenswrapper[4840]: I0311 09:19:44.197685 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c0414f2-89d1-45c6-9433-1884e7731f0f" containerName="proxy-httpd" Mar 11 09:19:44 crc kubenswrapper[4840]: I0311 09:19:44.202060 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 11 09:19:44 crc kubenswrapper[4840]: I0311 09:19:44.208078 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 11 09:19:44 crc kubenswrapper[4840]: I0311 09:19:44.209373 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 11 09:19:44 crc kubenswrapper[4840]: I0311 09:19:44.210169 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 11 09:19:44 crc kubenswrapper[4840]: I0311 09:19:44.210365 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Mar 11 09:19:44 crc kubenswrapper[4840]: I0311 09:19:44.370927 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/603351dd-ecf9-45be-9a52-12a3122cc22d-run-httpd\") pod \"ceilometer-0\" (UID: \"603351dd-ecf9-45be-9a52-12a3122cc22d\") " pod="openstack/ceilometer-0" Mar 11 09:19:44 crc kubenswrapper[4840]: I0311 09:19:44.371349 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/603351dd-ecf9-45be-9a52-12a3122cc22d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"603351dd-ecf9-45be-9a52-12a3122cc22d\") " pod="openstack/ceilometer-0" Mar 11 09:19:44 crc kubenswrapper[4840]: I0311 09:19:44.371381 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/603351dd-ecf9-45be-9a52-12a3122cc22d-log-httpd\") pod \"ceilometer-0\" (UID: \"603351dd-ecf9-45be-9a52-12a3122cc22d\") " pod="openstack/ceilometer-0" Mar 11 09:19:44 crc kubenswrapper[4840]: I0311 09:19:44.371440 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-kdlfk\" (UniqueName: \"kubernetes.io/projected/603351dd-ecf9-45be-9a52-12a3122cc22d-kube-api-access-kdlfk\") pod \"ceilometer-0\" (UID: \"603351dd-ecf9-45be-9a52-12a3122cc22d\") " pod="openstack/ceilometer-0" Mar 11 09:19:44 crc kubenswrapper[4840]: I0311 09:19:44.371541 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/603351dd-ecf9-45be-9a52-12a3122cc22d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"603351dd-ecf9-45be-9a52-12a3122cc22d\") " pod="openstack/ceilometer-0" Mar 11 09:19:44 crc kubenswrapper[4840]: I0311 09:19:44.371590 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/603351dd-ecf9-45be-9a52-12a3122cc22d-config-data\") pod \"ceilometer-0\" (UID: \"603351dd-ecf9-45be-9a52-12a3122cc22d\") " pod="openstack/ceilometer-0" Mar 11 09:19:44 crc kubenswrapper[4840]: I0311 09:19:44.371688 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/603351dd-ecf9-45be-9a52-12a3122cc22d-scripts\") pod \"ceilometer-0\" (UID: \"603351dd-ecf9-45be-9a52-12a3122cc22d\") " pod="openstack/ceilometer-0" Mar 11 09:19:44 crc kubenswrapper[4840]: I0311 09:19:44.371715 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/603351dd-ecf9-45be-9a52-12a3122cc22d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"603351dd-ecf9-45be-9a52-12a3122cc22d\") " pod="openstack/ceilometer-0" Mar 11 09:19:44 crc kubenswrapper[4840]: I0311 09:19:44.473723 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/603351dd-ecf9-45be-9a52-12a3122cc22d-run-httpd\") pod \"ceilometer-0\" (UID: 
\"603351dd-ecf9-45be-9a52-12a3122cc22d\") " pod="openstack/ceilometer-0" Mar 11 09:19:44 crc kubenswrapper[4840]: I0311 09:19:44.473777 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/603351dd-ecf9-45be-9a52-12a3122cc22d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"603351dd-ecf9-45be-9a52-12a3122cc22d\") " pod="openstack/ceilometer-0" Mar 11 09:19:44 crc kubenswrapper[4840]: I0311 09:19:44.473800 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/603351dd-ecf9-45be-9a52-12a3122cc22d-log-httpd\") pod \"ceilometer-0\" (UID: \"603351dd-ecf9-45be-9a52-12a3122cc22d\") " pod="openstack/ceilometer-0" Mar 11 09:19:44 crc kubenswrapper[4840]: I0311 09:19:44.473854 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdlfk\" (UniqueName: \"kubernetes.io/projected/603351dd-ecf9-45be-9a52-12a3122cc22d-kube-api-access-kdlfk\") pod \"ceilometer-0\" (UID: \"603351dd-ecf9-45be-9a52-12a3122cc22d\") " pod="openstack/ceilometer-0" Mar 11 09:19:44 crc kubenswrapper[4840]: I0311 09:19:44.473896 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/603351dd-ecf9-45be-9a52-12a3122cc22d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"603351dd-ecf9-45be-9a52-12a3122cc22d\") " pod="openstack/ceilometer-0" Mar 11 09:19:44 crc kubenswrapper[4840]: I0311 09:19:44.473917 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/603351dd-ecf9-45be-9a52-12a3122cc22d-config-data\") pod \"ceilometer-0\" (UID: \"603351dd-ecf9-45be-9a52-12a3122cc22d\") " pod="openstack/ceilometer-0" Mar 11 09:19:44 crc kubenswrapper[4840]: I0311 09:19:44.474001 4840 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/603351dd-ecf9-45be-9a52-12a3122cc22d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"603351dd-ecf9-45be-9a52-12a3122cc22d\") " pod="openstack/ceilometer-0" Mar 11 09:19:44 crc kubenswrapper[4840]: I0311 09:19:44.474020 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/603351dd-ecf9-45be-9a52-12a3122cc22d-scripts\") pod \"ceilometer-0\" (UID: \"603351dd-ecf9-45be-9a52-12a3122cc22d\") " pod="openstack/ceilometer-0" Mar 11 09:19:44 crc kubenswrapper[4840]: I0311 09:19:44.474215 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/603351dd-ecf9-45be-9a52-12a3122cc22d-run-httpd\") pod \"ceilometer-0\" (UID: \"603351dd-ecf9-45be-9a52-12a3122cc22d\") " pod="openstack/ceilometer-0" Mar 11 09:19:44 crc kubenswrapper[4840]: I0311 09:19:44.474964 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/603351dd-ecf9-45be-9a52-12a3122cc22d-log-httpd\") pod \"ceilometer-0\" (UID: \"603351dd-ecf9-45be-9a52-12a3122cc22d\") " pod="openstack/ceilometer-0" Mar 11 09:19:44 crc kubenswrapper[4840]: I0311 09:19:44.480442 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/603351dd-ecf9-45be-9a52-12a3122cc22d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"603351dd-ecf9-45be-9a52-12a3122cc22d\") " pod="openstack/ceilometer-0" Mar 11 09:19:44 crc kubenswrapper[4840]: I0311 09:19:44.481168 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/603351dd-ecf9-45be-9a52-12a3122cc22d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"603351dd-ecf9-45be-9a52-12a3122cc22d\") " pod="openstack/ceilometer-0" Mar 11 09:19:44 crc kubenswrapper[4840]: I0311 
09:19:44.481276 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/603351dd-ecf9-45be-9a52-12a3122cc22d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"603351dd-ecf9-45be-9a52-12a3122cc22d\") " pod="openstack/ceilometer-0" Mar 11 09:19:44 crc kubenswrapper[4840]: I0311 09:19:44.491813 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/603351dd-ecf9-45be-9a52-12a3122cc22d-scripts\") pod \"ceilometer-0\" (UID: \"603351dd-ecf9-45be-9a52-12a3122cc22d\") " pod="openstack/ceilometer-0" Mar 11 09:19:44 crc kubenswrapper[4840]: I0311 09:19:44.492231 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/603351dd-ecf9-45be-9a52-12a3122cc22d-config-data\") pod \"ceilometer-0\" (UID: \"603351dd-ecf9-45be-9a52-12a3122cc22d\") " pod="openstack/ceilometer-0" Mar 11 09:19:44 crc kubenswrapper[4840]: I0311 09:19:44.501591 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdlfk\" (UniqueName: \"kubernetes.io/projected/603351dd-ecf9-45be-9a52-12a3122cc22d-kube-api-access-kdlfk\") pod \"ceilometer-0\" (UID: \"603351dd-ecf9-45be-9a52-12a3122cc22d\") " pod="openstack/ceilometer-0" Mar 11 09:19:44 crc kubenswrapper[4840]: I0311 09:19:44.527492 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 11 09:19:45 crc kubenswrapper[4840]: I0311 09:19:45.061808 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 11 09:19:45 crc kubenswrapper[4840]: I0311 09:19:45.756182 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Mar 11 09:19:45 crc kubenswrapper[4840]: I0311 09:19:45.837321 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2c9333d-b833-4abe-972c-6263a38f1a97-config-data\") pod \"c2c9333d-b833-4abe-972c-6263a38f1a97\" (UID: \"c2c9333d-b833-4abe-972c-6263a38f1a97\") " Mar 11 09:19:45 crc kubenswrapper[4840]: I0311 09:19:45.838687 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2c9333d-b833-4abe-972c-6263a38f1a97-combined-ca-bundle\") pod \"c2c9333d-b833-4abe-972c-6263a38f1a97\" (UID: \"c2c9333d-b833-4abe-972c-6263a38f1a97\") " Mar 11 09:19:45 crc kubenswrapper[4840]: I0311 09:19:45.838753 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m6xgg\" (UniqueName: \"kubernetes.io/projected/c2c9333d-b833-4abe-972c-6263a38f1a97-kube-api-access-m6xgg\") pod \"c2c9333d-b833-4abe-972c-6263a38f1a97\" (UID: \"c2c9333d-b833-4abe-972c-6263a38f1a97\") " Mar 11 09:19:45 crc kubenswrapper[4840]: I0311 09:19:45.844389 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 11 09:19:45 crc kubenswrapper[4840]: I0311 09:19:45.850082 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2c9333d-b833-4abe-972c-6263a38f1a97-kube-api-access-m6xgg" (OuterVolumeSpecName: "kube-api-access-m6xgg") pod "c2c9333d-b833-4abe-972c-6263a38f1a97" (UID: "c2c9333d-b833-4abe-972c-6263a38f1a97"). InnerVolumeSpecName "kube-api-access-m6xgg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:19:45 crc kubenswrapper[4840]: I0311 09:19:45.872245 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2c9333d-b833-4abe-972c-6263a38f1a97-config-data" (OuterVolumeSpecName: "config-data") pod "c2c9333d-b833-4abe-972c-6263a38f1a97" (UID: "c2c9333d-b833-4abe-972c-6263a38f1a97"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:19:45 crc kubenswrapper[4840]: I0311 09:19:45.891306 4840 generic.go:334] "Generic (PLEG): container finished" podID="fc5984d0-a2c5-483e-90f4-f8c38056ff43" containerID="360c1334dca539b14a76da6b532e85612504bb222d990668c6428cca26dd2bf9" exitCode=0 Mar 11 09:19:45 crc kubenswrapper[4840]: I0311 09:19:45.891381 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fc5984d0-a2c5-483e-90f4-f8c38056ff43","Type":"ContainerDied","Data":"360c1334dca539b14a76da6b532e85612504bb222d990668c6428cca26dd2bf9"} Mar 11 09:19:45 crc kubenswrapper[4840]: I0311 09:19:45.891413 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fc5984d0-a2c5-483e-90f4-f8c38056ff43","Type":"ContainerDied","Data":"5698c59282614add62a521605f972ba57005cfd670a921072e2e126d325ffc17"} Mar 11 09:19:45 crc kubenswrapper[4840]: I0311 09:19:45.891431 4840 scope.go:117] "RemoveContainer" containerID="360c1334dca539b14a76da6b532e85612504bb222d990668c6428cca26dd2bf9" Mar 11 09:19:45 crc kubenswrapper[4840]: I0311 09:19:45.891589 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 11 09:19:45 crc kubenswrapper[4840]: I0311 09:19:45.896719 4840 generic.go:334] "Generic (PLEG): container finished" podID="c2c9333d-b833-4abe-972c-6263a38f1a97" containerID="808b0dac42783831dcf51f322b9be907700ea65285b996ebeb9820b02a8e205e" exitCode=0 Mar 11 09:19:45 crc kubenswrapper[4840]: I0311 09:19:45.896880 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 11 09:19:45 crc kubenswrapper[4840]: I0311 09:19:45.897648 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c2c9333d-b833-4abe-972c-6263a38f1a97","Type":"ContainerDied","Data":"808b0dac42783831dcf51f322b9be907700ea65285b996ebeb9820b02a8e205e"} Mar 11 09:19:45 crc kubenswrapper[4840]: I0311 09:19:45.897733 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c2c9333d-b833-4abe-972c-6263a38f1a97","Type":"ContainerDied","Data":"09d0575c087720c30eaa444291e7be9064b53fd9320b244ffe9d8a013f128f9f"} Mar 11 09:19:45 crc kubenswrapper[4840]: I0311 09:19:45.899741 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"603351dd-ecf9-45be-9a52-12a3122cc22d","Type":"ContainerStarted","Data":"d528300d8b8e8bc2b87a67b6a96c7a659552efde84dd7087b374bb5d15f45410"} Mar 11 09:19:45 crc kubenswrapper[4840]: I0311 09:19:45.916913 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2c9333d-b833-4abe-972c-6263a38f1a97-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c2c9333d-b833-4abe-972c-6263a38f1a97" (UID: "c2c9333d-b833-4abe-972c-6263a38f1a97"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:19:45 crc kubenswrapper[4840]: I0311 09:19:45.927347 4840 scope.go:117] "RemoveContainer" containerID="027c2c1bb24efcf0502330b87a7ec03d7e20466b13d4bc837283e0ce5988dd62" Mar 11 09:19:45 crc kubenswrapper[4840]: I0311 09:19:45.941734 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fc5984d0-a2c5-483e-90f4-f8c38056ff43-logs\") pod \"fc5984d0-a2c5-483e-90f4-f8c38056ff43\" (UID: \"fc5984d0-a2c5-483e-90f4-f8c38056ff43\") " Mar 11 09:19:45 crc kubenswrapper[4840]: I0311 09:19:45.941980 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc5984d0-a2c5-483e-90f4-f8c38056ff43-config-data\") pod \"fc5984d0-a2c5-483e-90f4-f8c38056ff43\" (UID: \"fc5984d0-a2c5-483e-90f4-f8c38056ff43\") " Mar 11 09:19:45 crc kubenswrapper[4840]: I0311 09:19:45.942157 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc5984d0-a2c5-483e-90f4-f8c38056ff43-combined-ca-bundle\") pod \"fc5984d0-a2c5-483e-90f4-f8c38056ff43\" (UID: \"fc5984d0-a2c5-483e-90f4-f8c38056ff43\") " Mar 11 09:19:45 crc kubenswrapper[4840]: I0311 09:19:45.942182 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lpqn7\" (UniqueName: \"kubernetes.io/projected/fc5984d0-a2c5-483e-90f4-f8c38056ff43-kube-api-access-lpqn7\") pod \"fc5984d0-a2c5-483e-90f4-f8c38056ff43\" (UID: \"fc5984d0-a2c5-483e-90f4-f8c38056ff43\") " Mar 11 09:19:45 crc kubenswrapper[4840]: I0311 09:19:45.942723 4840 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2c9333d-b833-4abe-972c-6263a38f1a97-config-data\") on node \"crc\" DevicePath \"\"" Mar 11 09:19:45 crc kubenswrapper[4840]: I0311 09:19:45.942747 4840 reconciler_common.go:293] "Volume 
detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2c9333d-b833-4abe-972c-6263a38f1a97-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 09:19:45 crc kubenswrapper[4840]: I0311 09:19:45.942759 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m6xgg\" (UniqueName: \"kubernetes.io/projected/c2c9333d-b833-4abe-972c-6263a38f1a97-kube-api-access-m6xgg\") on node \"crc\" DevicePath \"\"" Mar 11 09:19:45 crc kubenswrapper[4840]: I0311 09:19:45.944015 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc5984d0-a2c5-483e-90f4-f8c38056ff43-logs" (OuterVolumeSpecName: "logs") pod "fc5984d0-a2c5-483e-90f4-f8c38056ff43" (UID: "fc5984d0-a2c5-483e-90f4-f8c38056ff43"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:19:45 crc kubenswrapper[4840]: I0311 09:19:45.947800 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc5984d0-a2c5-483e-90f4-f8c38056ff43-kube-api-access-lpqn7" (OuterVolumeSpecName: "kube-api-access-lpqn7") pod "fc5984d0-a2c5-483e-90f4-f8c38056ff43" (UID: "fc5984d0-a2c5-483e-90f4-f8c38056ff43"). InnerVolumeSpecName "kube-api-access-lpqn7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:19:45 crc kubenswrapper[4840]: I0311 09:19:45.959669 4840 scope.go:117] "RemoveContainer" containerID="360c1334dca539b14a76da6b532e85612504bb222d990668c6428cca26dd2bf9" Mar 11 09:19:45 crc kubenswrapper[4840]: E0311 09:19:45.960379 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"360c1334dca539b14a76da6b532e85612504bb222d990668c6428cca26dd2bf9\": container with ID starting with 360c1334dca539b14a76da6b532e85612504bb222d990668c6428cca26dd2bf9 not found: ID does not exist" containerID="360c1334dca539b14a76da6b532e85612504bb222d990668c6428cca26dd2bf9" Mar 11 09:19:45 crc kubenswrapper[4840]: I0311 09:19:45.960456 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"360c1334dca539b14a76da6b532e85612504bb222d990668c6428cca26dd2bf9"} err="failed to get container status \"360c1334dca539b14a76da6b532e85612504bb222d990668c6428cca26dd2bf9\": rpc error: code = NotFound desc = could not find container \"360c1334dca539b14a76da6b532e85612504bb222d990668c6428cca26dd2bf9\": container with ID starting with 360c1334dca539b14a76da6b532e85612504bb222d990668c6428cca26dd2bf9 not found: ID does not exist" Mar 11 09:19:45 crc kubenswrapper[4840]: I0311 09:19:45.960518 4840 scope.go:117] "RemoveContainer" containerID="027c2c1bb24efcf0502330b87a7ec03d7e20466b13d4bc837283e0ce5988dd62" Mar 11 09:19:45 crc kubenswrapper[4840]: E0311 09:19:45.965689 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"027c2c1bb24efcf0502330b87a7ec03d7e20466b13d4bc837283e0ce5988dd62\": container with ID starting with 027c2c1bb24efcf0502330b87a7ec03d7e20466b13d4bc837283e0ce5988dd62 not found: ID does not exist" containerID="027c2c1bb24efcf0502330b87a7ec03d7e20466b13d4bc837283e0ce5988dd62" Mar 11 09:19:45 crc kubenswrapper[4840]: I0311 09:19:45.965757 
4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"027c2c1bb24efcf0502330b87a7ec03d7e20466b13d4bc837283e0ce5988dd62"} err="failed to get container status \"027c2c1bb24efcf0502330b87a7ec03d7e20466b13d4bc837283e0ce5988dd62\": rpc error: code = NotFound desc = could not find container \"027c2c1bb24efcf0502330b87a7ec03d7e20466b13d4bc837283e0ce5988dd62\": container with ID starting with 027c2c1bb24efcf0502330b87a7ec03d7e20466b13d4bc837283e0ce5988dd62 not found: ID does not exist" Mar 11 09:19:45 crc kubenswrapper[4840]: I0311 09:19:45.965794 4840 scope.go:117] "RemoveContainer" containerID="808b0dac42783831dcf51f322b9be907700ea65285b996ebeb9820b02a8e205e" Mar 11 09:19:45 crc kubenswrapper[4840]: I0311 09:19:45.977204 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc5984d0-a2c5-483e-90f4-f8c38056ff43-config-data" (OuterVolumeSpecName: "config-data") pod "fc5984d0-a2c5-483e-90f4-f8c38056ff43" (UID: "fc5984d0-a2c5-483e-90f4-f8c38056ff43"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:19:45 crc kubenswrapper[4840]: I0311 09:19:45.979216 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc5984d0-a2c5-483e-90f4-f8c38056ff43-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fc5984d0-a2c5-483e-90f4-f8c38056ff43" (UID: "fc5984d0-a2c5-483e-90f4-f8c38056ff43"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:19:46 crc kubenswrapper[4840]: I0311 09:19:46.013571 4840 scope.go:117] "RemoveContainer" containerID="808b0dac42783831dcf51f322b9be907700ea65285b996ebeb9820b02a8e205e" Mar 11 09:19:46 crc kubenswrapper[4840]: E0311 09:19:46.014366 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"808b0dac42783831dcf51f322b9be907700ea65285b996ebeb9820b02a8e205e\": container with ID starting with 808b0dac42783831dcf51f322b9be907700ea65285b996ebeb9820b02a8e205e not found: ID does not exist" containerID="808b0dac42783831dcf51f322b9be907700ea65285b996ebeb9820b02a8e205e" Mar 11 09:19:46 crc kubenswrapper[4840]: I0311 09:19:46.014416 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"808b0dac42783831dcf51f322b9be907700ea65285b996ebeb9820b02a8e205e"} err="failed to get container status \"808b0dac42783831dcf51f322b9be907700ea65285b996ebeb9820b02a8e205e\": rpc error: code = NotFound desc = could not find container \"808b0dac42783831dcf51f322b9be907700ea65285b996ebeb9820b02a8e205e\": container with ID starting with 808b0dac42783831dcf51f322b9be907700ea65285b996ebeb9820b02a8e205e not found: ID does not exist" Mar 11 09:19:46 crc kubenswrapper[4840]: I0311 09:19:46.050683 4840 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc5984d0-a2c5-483e-90f4-f8c38056ff43-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 09:19:46 crc kubenswrapper[4840]: I0311 09:19:46.050728 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lpqn7\" (UniqueName: \"kubernetes.io/projected/fc5984d0-a2c5-483e-90f4-f8c38056ff43-kube-api-access-lpqn7\") on node \"crc\" DevicePath \"\"" Mar 11 09:19:46 crc kubenswrapper[4840]: I0311 09:19:46.050741 4840 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/fc5984d0-a2c5-483e-90f4-f8c38056ff43-logs\") on node \"crc\" DevicePath \"\"" Mar 11 09:19:46 crc kubenswrapper[4840]: I0311 09:19:46.050752 4840 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc5984d0-a2c5-483e-90f4-f8c38056ff43-config-data\") on node \"crc\" DevicePath \"\"" Mar 11 09:19:46 crc kubenswrapper[4840]: I0311 09:19:46.074789 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c0414f2-89d1-45c6-9433-1884e7731f0f" path="/var/lib/kubelet/pods/2c0414f2-89d1-45c6-9433-1884e7731f0f/volumes" Mar 11 09:19:46 crc kubenswrapper[4840]: I0311 09:19:46.229185 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 11 09:19:46 crc kubenswrapper[4840]: I0311 09:19:46.266228 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Mar 11 09:19:46 crc kubenswrapper[4840]: I0311 09:19:46.279763 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Mar 11 09:19:46 crc kubenswrapper[4840]: I0311 09:19:46.295343 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Mar 11 09:19:46 crc kubenswrapper[4840]: I0311 09:19:46.314592 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Mar 11 09:19:46 crc kubenswrapper[4840]: E0311 09:19:46.315354 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc5984d0-a2c5-483e-90f4-f8c38056ff43" containerName="nova-api-log" Mar 11 09:19:46 crc kubenswrapper[4840]: I0311 09:19:46.315388 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc5984d0-a2c5-483e-90f4-f8c38056ff43" containerName="nova-api-log" Mar 11 09:19:46 crc kubenswrapper[4840]: E0311 09:19:46.315416 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc5984d0-a2c5-483e-90f4-f8c38056ff43" containerName="nova-api-api" Mar 11 09:19:46 crc kubenswrapper[4840]: I0311 
09:19:46.315426 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc5984d0-a2c5-483e-90f4-f8c38056ff43" containerName="nova-api-api" Mar 11 09:19:46 crc kubenswrapper[4840]: E0311 09:19:46.315440 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2c9333d-b833-4abe-972c-6263a38f1a97" containerName="nova-scheduler-scheduler" Mar 11 09:19:46 crc kubenswrapper[4840]: I0311 09:19:46.315449 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2c9333d-b833-4abe-972c-6263a38f1a97" containerName="nova-scheduler-scheduler" Mar 11 09:19:46 crc kubenswrapper[4840]: I0311 09:19:46.315757 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc5984d0-a2c5-483e-90f4-f8c38056ff43" containerName="nova-api-api" Mar 11 09:19:46 crc kubenswrapper[4840]: I0311 09:19:46.315787 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc5984d0-a2c5-483e-90f4-f8c38056ff43" containerName="nova-api-log" Mar 11 09:19:46 crc kubenswrapper[4840]: I0311 09:19:46.315813 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2c9333d-b833-4abe-972c-6263a38f1a97" containerName="nova-scheduler-scheduler" Mar 11 09:19:46 crc kubenswrapper[4840]: I0311 09:19:46.317298 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Mar 11 09:19:46 crc kubenswrapper[4840]: I0311 09:19:46.320615 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 11 09:19:46 crc kubenswrapper[4840]: I0311 09:19:46.322111 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Mar 11 09:19:46 crc kubenswrapper[4840]: I0311 09:19:46.324567 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Mar 11 09:19:46 crc kubenswrapper[4840]: I0311 09:19:46.324828 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Mar 11 09:19:46 crc kubenswrapper[4840]: I0311 09:19:46.328182 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 11 09:19:46 crc kubenswrapper[4840]: I0311 09:19:46.343112 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 11 09:19:46 crc kubenswrapper[4840]: I0311 09:19:46.360261 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/830558b7-e864-46ad-a88c-be9b9633dd15-config-data\") pod \"nova-scheduler-0\" (UID: \"830558b7-e864-46ad-a88c-be9b9633dd15\") " pod="openstack/nova-scheduler-0" Mar 11 09:19:46 crc kubenswrapper[4840]: I0311 09:19:46.360329 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c92a574-9e37-4fee-8503-0a41b4366a23-config-data\") pod \"nova-api-0\" (UID: \"9c92a574-9e37-4fee-8503-0a41b4366a23\") " pod="openstack/nova-api-0" Mar 11 09:19:46 crc kubenswrapper[4840]: I0311 09:19:46.360391 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/830558b7-e864-46ad-a88c-be9b9633dd15-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"830558b7-e864-46ad-a88c-be9b9633dd15\") " pod="openstack/nova-scheduler-0" Mar 11 09:19:46 crc kubenswrapper[4840]: I0311 09:19:46.360413 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-pz9x8\" (UniqueName: \"kubernetes.io/projected/9c92a574-9e37-4fee-8503-0a41b4366a23-kube-api-access-pz9x8\") pod \"nova-api-0\" (UID: \"9c92a574-9e37-4fee-8503-0a41b4366a23\") " pod="openstack/nova-api-0" Mar 11 09:19:46 crc kubenswrapper[4840]: I0311 09:19:46.360440 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c92a574-9e37-4fee-8503-0a41b4366a23-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9c92a574-9e37-4fee-8503-0a41b4366a23\") " pod="openstack/nova-api-0" Mar 11 09:19:46 crc kubenswrapper[4840]: I0311 09:19:46.360503 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jt8xv\" (UniqueName: \"kubernetes.io/projected/830558b7-e864-46ad-a88c-be9b9633dd15-kube-api-access-jt8xv\") pod \"nova-scheduler-0\" (UID: \"830558b7-e864-46ad-a88c-be9b9633dd15\") " pod="openstack/nova-scheduler-0" Mar 11 09:19:46 crc kubenswrapper[4840]: I0311 09:19:46.360523 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c92a574-9e37-4fee-8503-0a41b4366a23-logs\") pod \"nova-api-0\" (UID: \"9c92a574-9e37-4fee-8503-0a41b4366a23\") " pod="openstack/nova-api-0" Mar 11 09:19:46 crc kubenswrapper[4840]: I0311 09:19:46.463235 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/830558b7-e864-46ad-a88c-be9b9633dd15-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"830558b7-e864-46ad-a88c-be9b9633dd15\") " pod="openstack/nova-scheduler-0" Mar 11 09:19:46 crc kubenswrapper[4840]: I0311 09:19:46.463311 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pz9x8\" (UniqueName: \"kubernetes.io/projected/9c92a574-9e37-4fee-8503-0a41b4366a23-kube-api-access-pz9x8\") pod 
\"nova-api-0\" (UID: \"9c92a574-9e37-4fee-8503-0a41b4366a23\") " pod="openstack/nova-api-0" Mar 11 09:19:46 crc kubenswrapper[4840]: I0311 09:19:46.463358 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c92a574-9e37-4fee-8503-0a41b4366a23-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9c92a574-9e37-4fee-8503-0a41b4366a23\") " pod="openstack/nova-api-0" Mar 11 09:19:46 crc kubenswrapper[4840]: I0311 09:19:46.463418 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jt8xv\" (UniqueName: \"kubernetes.io/projected/830558b7-e864-46ad-a88c-be9b9633dd15-kube-api-access-jt8xv\") pod \"nova-scheduler-0\" (UID: \"830558b7-e864-46ad-a88c-be9b9633dd15\") " pod="openstack/nova-scheduler-0" Mar 11 09:19:46 crc kubenswrapper[4840]: I0311 09:19:46.463444 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c92a574-9e37-4fee-8503-0a41b4366a23-logs\") pod \"nova-api-0\" (UID: \"9c92a574-9e37-4fee-8503-0a41b4366a23\") " pod="openstack/nova-api-0" Mar 11 09:19:46 crc kubenswrapper[4840]: I0311 09:19:46.463545 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/830558b7-e864-46ad-a88c-be9b9633dd15-config-data\") pod \"nova-scheduler-0\" (UID: \"830558b7-e864-46ad-a88c-be9b9633dd15\") " pod="openstack/nova-scheduler-0" Mar 11 09:19:46 crc kubenswrapper[4840]: I0311 09:19:46.463644 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c92a574-9e37-4fee-8503-0a41b4366a23-config-data\") pod \"nova-api-0\" (UID: \"9c92a574-9e37-4fee-8503-0a41b4366a23\") " pod="openstack/nova-api-0" Mar 11 09:19:46 crc kubenswrapper[4840]: I0311 09:19:46.464838 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c92a574-9e37-4fee-8503-0a41b4366a23-logs\") pod \"nova-api-0\" (UID: \"9c92a574-9e37-4fee-8503-0a41b4366a23\") " pod="openstack/nova-api-0" Mar 11 09:19:46 crc kubenswrapper[4840]: I0311 09:19:46.470496 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c92a574-9e37-4fee-8503-0a41b4366a23-config-data\") pod \"nova-api-0\" (UID: \"9c92a574-9e37-4fee-8503-0a41b4366a23\") " pod="openstack/nova-api-0" Mar 11 09:19:46 crc kubenswrapper[4840]: I0311 09:19:46.472169 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c92a574-9e37-4fee-8503-0a41b4366a23-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9c92a574-9e37-4fee-8503-0a41b4366a23\") " pod="openstack/nova-api-0" Mar 11 09:19:46 crc kubenswrapper[4840]: I0311 09:19:46.472902 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/830558b7-e864-46ad-a88c-be9b9633dd15-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"830558b7-e864-46ad-a88c-be9b9633dd15\") " pod="openstack/nova-scheduler-0" Mar 11 09:19:46 crc kubenswrapper[4840]: I0311 09:19:46.473245 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/830558b7-e864-46ad-a88c-be9b9633dd15-config-data\") pod \"nova-scheduler-0\" (UID: \"830558b7-e864-46ad-a88c-be9b9633dd15\") " pod="openstack/nova-scheduler-0" Mar 11 09:19:46 crc kubenswrapper[4840]: I0311 09:19:46.488736 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jt8xv\" (UniqueName: \"kubernetes.io/projected/830558b7-e864-46ad-a88c-be9b9633dd15-kube-api-access-jt8xv\") pod \"nova-scheduler-0\" (UID: \"830558b7-e864-46ad-a88c-be9b9633dd15\") " pod="openstack/nova-scheduler-0" Mar 11 09:19:46 crc kubenswrapper[4840]: 
I0311 09:19:46.490986 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pz9x8\" (UniqueName: \"kubernetes.io/projected/9c92a574-9e37-4fee-8503-0a41b4366a23-kube-api-access-pz9x8\") pod \"nova-api-0\" (UID: \"9c92a574-9e37-4fee-8503-0a41b4366a23\") " pod="openstack/nova-api-0" Mar 11 09:19:46 crc kubenswrapper[4840]: I0311 09:19:46.709577 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 11 09:19:46 crc kubenswrapper[4840]: I0311 09:19:46.717948 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 11 09:19:46 crc kubenswrapper[4840]: I0311 09:19:46.925610 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"603351dd-ecf9-45be-9a52-12a3122cc22d","Type":"ContainerStarted","Data":"4fdd50a29025e3ac7d01e74196ffff09875be42f1be211da58caeacc78b4b122"} Mar 11 09:19:47 crc kubenswrapper[4840]: I0311 09:19:47.239124 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 11 09:19:47 crc kubenswrapper[4840]: I0311 09:19:47.261516 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Mar 11 09:19:47 crc kubenswrapper[4840]: I0311 09:19:47.332970 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 11 09:19:47 crc kubenswrapper[4840]: W0311 09:19:47.364067 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod830558b7_e864_46ad_a88c_be9b9633dd15.slice/crio-2c0b71cd2553d3768482fd0537833fd7f7f85cc32a37b5702df5f767479c9b7a WatchSource:0}: Error finding container 2c0b71cd2553d3768482fd0537833fd7f7f85cc32a37b5702df5f767479c9b7a: Status 404 returned error can't find the container with id 2c0b71cd2553d3768482fd0537833fd7f7f85cc32a37b5702df5f767479c9b7a Mar 11 09:19:47 crc 
kubenswrapper[4840]: I0311 09:19:47.942617 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"603351dd-ecf9-45be-9a52-12a3122cc22d","Type":"ContainerStarted","Data":"5d744badc2fbacd8f4fc7558e5c49ff94357c3cc302836da2bf24c2d05699823"} Mar 11 09:19:47 crc kubenswrapper[4840]: I0311 09:19:47.943243 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"603351dd-ecf9-45be-9a52-12a3122cc22d","Type":"ContainerStarted","Data":"ee16bf85ee819ef20795fbe3dd65399c7fbd9b2ca5f3a14eaf194880a9f035dd"} Mar 11 09:19:47 crc kubenswrapper[4840]: I0311 09:19:47.945141 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"830558b7-e864-46ad-a88c-be9b9633dd15","Type":"ContainerStarted","Data":"0dbe0d58941c729ef7ebd94f230541b9933fd06d0c4e7dcd3cb845a93a422943"} Mar 11 09:19:47 crc kubenswrapper[4840]: I0311 09:19:47.945197 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"830558b7-e864-46ad-a88c-be9b9633dd15","Type":"ContainerStarted","Data":"2c0b71cd2553d3768482fd0537833fd7f7f85cc32a37b5702df5f767479c9b7a"} Mar 11 09:19:47 crc kubenswrapper[4840]: I0311 09:19:47.949669 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9c92a574-9e37-4fee-8503-0a41b4366a23","Type":"ContainerStarted","Data":"5530264f80389ce86593fb3e727a9828428c222f56c4978ddd814207d3717db3"} Mar 11 09:19:47 crc kubenswrapper[4840]: I0311 09:19:47.949726 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9c92a574-9e37-4fee-8503-0a41b4366a23","Type":"ContainerStarted","Data":"ed7aaf8db8dad28f87bea1f2cc29c5561d85558cdd261ded87065c5ab627c1da"} Mar 11 09:19:47 crc kubenswrapper[4840]: I0311 09:19:47.963355 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=1.963332786 
podStartE2EDuration="1.963332786s" podCreationTimestamp="2026-03-11 09:19:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:19:47.959699415 +0000 UTC m=+1386.625369250" watchObservedRunningTime="2026-03-11 09:19:47.963332786 +0000 UTC m=+1386.629002591" Mar 11 09:19:48 crc kubenswrapper[4840]: I0311 09:19:48.072270 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2c9333d-b833-4abe-972c-6263a38f1a97" path="/var/lib/kubelet/pods/c2c9333d-b833-4abe-972c-6263a38f1a97/volumes" Mar 11 09:19:48 crc kubenswrapper[4840]: I0311 09:19:48.073489 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc5984d0-a2c5-483e-90f4-f8c38056ff43" path="/var/lib/kubelet/pods/fc5984d0-a2c5-483e-90f4-f8c38056ff43/volumes" Mar 11 09:19:48 crc kubenswrapper[4840]: I0311 09:19:48.961539 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9c92a574-9e37-4fee-8503-0a41b4366a23","Type":"ContainerStarted","Data":"1b6c69b08c345a8b70fa35b336b43de7227d1f98fc77b1c77946fa1cab5a1a71"} Mar 11 09:19:48 crc kubenswrapper[4840]: I0311 09:19:48.988264 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.988238282 podStartE2EDuration="2.988238282s" podCreationTimestamp="2026-03-11 09:19:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:19:48.985211016 +0000 UTC m=+1387.650880831" watchObservedRunningTime="2026-03-11 09:19:48.988238282 +0000 UTC m=+1387.653908097" Mar 11 09:19:49 crc kubenswrapper[4840]: I0311 09:19:49.472179 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Mar 11 09:19:49 crc kubenswrapper[4840]: I0311 09:19:49.974243 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"603351dd-ecf9-45be-9a52-12a3122cc22d","Type":"ContainerStarted","Data":"db48f935d34405f41e967d593f45b3f2076a66a8f1733e1908a3fba8f9be5679"} Mar 11 09:19:49 crc kubenswrapper[4840]: I0311 09:19:49.974721 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 11 09:19:50 crc kubenswrapper[4840]: I0311 09:19:49.999561 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.875519319 podStartE2EDuration="5.999538155s" podCreationTimestamp="2026-03-11 09:19:44 +0000 UTC" firstStartedPulling="2026-03-11 09:19:45.077235611 +0000 UTC m=+1383.742905426" lastFinishedPulling="2026-03-11 09:19:49.201254447 +0000 UTC m=+1387.866924262" observedRunningTime="2026-03-11 09:19:49.992697953 +0000 UTC m=+1388.658367768" watchObservedRunningTime="2026-03-11 09:19:49.999538155 +0000 UTC m=+1388.665207970" Mar 11 09:19:51 crc kubenswrapper[4840]: I0311 09:19:51.719184 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Mar 11 09:19:56 crc kubenswrapper[4840]: I0311 09:19:56.710297 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 11 09:19:56 crc kubenswrapper[4840]: I0311 09:19:56.711009 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 11 09:19:56 crc kubenswrapper[4840]: I0311 09:19:56.719023 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Mar 11 09:19:56 crc kubenswrapper[4840]: I0311 09:19:56.748489 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Mar 11 09:19:57 crc kubenswrapper[4840]: I0311 09:19:57.103003 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Mar 11 09:19:57 crc 
kubenswrapper[4840]: I0311 09:19:57.446037 4840 patch_prober.go:28] interesting pod/machine-config-daemon-brtht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 11 09:19:57 crc kubenswrapper[4840]: I0311 09:19:57.446107 4840 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 11 09:19:57 crc kubenswrapper[4840]: I0311 09:19:57.446186 4840 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-brtht" Mar 11 09:19:57 crc kubenswrapper[4840]: I0311 09:19:57.447059 4840 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6d23981dcac257731751dcd40b5609471911fb1c4a3f0ea9223434b40578e948"} pod="openshift-machine-config-operator/machine-config-daemon-brtht" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 11 09:19:57 crc kubenswrapper[4840]: I0311 09:19:57.447114 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" containerName="machine-config-daemon" containerID="cri-o://6d23981dcac257731751dcd40b5609471911fb1c4a3f0ea9223434b40578e948" gracePeriod=600 Mar 11 09:19:57 crc kubenswrapper[4840]: I0311 09:19:57.791728 4840 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="9c92a574-9e37-4fee-8503-0a41b4366a23" containerName="nova-api-log" 
probeResult="failure" output="Get \"http://10.217.0.201:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 11 09:19:57 crc kubenswrapper[4840]: I0311 09:19:57.791725 4840 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="9c92a574-9e37-4fee-8503-0a41b4366a23" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.201:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 11 09:19:58 crc kubenswrapper[4840]: I0311 09:19:58.077221 4840 generic.go:334] "Generic (PLEG): container finished" podID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" containerID="6d23981dcac257731751dcd40b5609471911fb1c4a3f0ea9223434b40578e948" exitCode=0 Mar 11 09:19:58 crc kubenswrapper[4840]: I0311 09:19:58.077282 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-brtht" event={"ID":"8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d","Type":"ContainerDied","Data":"6d23981dcac257731751dcd40b5609471911fb1c4a3f0ea9223434b40578e948"} Mar 11 09:19:58 crc kubenswrapper[4840]: I0311 09:19:58.077315 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-brtht" event={"ID":"8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d","Type":"ContainerStarted","Data":"169d1d8ff4c62b0ae7bd3b2cf09a5479966d8e2e448439c79c97cc33b9792a4f"} Mar 11 09:19:58 crc kubenswrapper[4840]: I0311 09:19:58.077332 4840 scope.go:117] "RemoveContainer" containerID="c4693d2b28f962f129bc06a9e78798a61e61b356cfdcc19696be10d164614b1d" Mar 11 09:20:00 crc kubenswrapper[4840]: I0311 09:20:00.158981 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29553680-9qvzt"] Mar 11 09:20:00 crc kubenswrapper[4840]: I0311 09:20:00.163545 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553680-9qvzt" Mar 11 09:20:00 crc kubenswrapper[4840]: I0311 09:20:00.167979 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-q6lwc" Mar 11 09:20:00 crc kubenswrapper[4840]: I0311 09:20:00.168076 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 11 09:20:00 crc kubenswrapper[4840]: I0311 09:20:00.167979 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 11 09:20:00 crc kubenswrapper[4840]: I0311 09:20:00.170584 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553680-9qvzt"] Mar 11 09:20:00 crc kubenswrapper[4840]: I0311 09:20:00.280199 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2xh5\" (UniqueName: \"kubernetes.io/projected/86f28307-7fbd-48fc-9d46-81c82922ff2a-kube-api-access-b2xh5\") pod \"auto-csr-approver-29553680-9qvzt\" (UID: \"86f28307-7fbd-48fc-9d46-81c82922ff2a\") " pod="openshift-infra/auto-csr-approver-29553680-9qvzt" Mar 11 09:20:00 crc kubenswrapper[4840]: I0311 09:20:00.382410 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b2xh5\" (UniqueName: \"kubernetes.io/projected/86f28307-7fbd-48fc-9d46-81c82922ff2a-kube-api-access-b2xh5\") pod \"auto-csr-approver-29553680-9qvzt\" (UID: \"86f28307-7fbd-48fc-9d46-81c82922ff2a\") " pod="openshift-infra/auto-csr-approver-29553680-9qvzt" Mar 11 09:20:00 crc kubenswrapper[4840]: I0311 09:20:00.405879 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2xh5\" (UniqueName: \"kubernetes.io/projected/86f28307-7fbd-48fc-9d46-81c82922ff2a-kube-api-access-b2xh5\") pod \"auto-csr-approver-29553680-9qvzt\" (UID: \"86f28307-7fbd-48fc-9d46-81c82922ff2a\") " 
pod="openshift-infra/auto-csr-approver-29553680-9qvzt" Mar 11 09:20:00 crc kubenswrapper[4840]: I0311 09:20:00.493737 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553680-9qvzt" Mar 11 09:20:00 crc kubenswrapper[4840]: W0311 09:20:00.992942 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod86f28307_7fbd_48fc_9d46_81c82922ff2a.slice/crio-f7ebe6d12328a81a4b58f368f3b78b5ecf32e866bc6a4142af93242c2e94e7f7 WatchSource:0}: Error finding container f7ebe6d12328a81a4b58f368f3b78b5ecf32e866bc6a4142af93242c2e94e7f7: Status 404 returned error can't find the container with id f7ebe6d12328a81a4b58f368f3b78b5ecf32e866bc6a4142af93242c2e94e7f7 Mar 11 09:20:00 crc kubenswrapper[4840]: I0311 09:20:00.995142 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553680-9qvzt"] Mar 11 09:20:01 crc kubenswrapper[4840]: I0311 09:20:01.132417 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553680-9qvzt" event={"ID":"86f28307-7fbd-48fc-9d46-81c82922ff2a","Type":"ContainerStarted","Data":"f7ebe6d12328a81a4b58f368f3b78b5ecf32e866bc6a4142af93242c2e94e7f7"} Mar 11 09:20:03 crc kubenswrapper[4840]: I0311 09:20:03.153833 4840 generic.go:334] "Generic (PLEG): container finished" podID="86f28307-7fbd-48fc-9d46-81c82922ff2a" containerID="147a6e0fe52bd51fcc1ea39e1aa1513f8d5a2df2f4c665e22d2c4a5fbfada6cf" exitCode=0 Mar 11 09:20:03 crc kubenswrapper[4840]: I0311 09:20:03.153890 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553680-9qvzt" event={"ID":"86f28307-7fbd-48fc-9d46-81c82922ff2a","Type":"ContainerDied","Data":"147a6e0fe52bd51fcc1ea39e1aa1513f8d5a2df2f4c665e22d2c4a5fbfada6cf"} Mar 11 09:20:04 crc kubenswrapper[4840]: I0311 09:20:04.172523 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 11 09:20:04 crc kubenswrapper[4840]: I0311 09:20:04.177173 4840 generic.go:334] "Generic (PLEG): container finished" podID="9610466b-b516-43bc-b95a-7f671caee44b" containerID="e8f1da4d0f3e7a2f8bc4b508dafa4919bf88cefa86b7bf10b4d0be8c2a589de6" exitCode=137 Mar 11 09:20:04 crc kubenswrapper[4840]: I0311 09:20:04.177244 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9610466b-b516-43bc-b95a-7f671caee44b","Type":"ContainerDied","Data":"e8f1da4d0f3e7a2f8bc4b508dafa4919bf88cefa86b7bf10b4d0be8c2a589de6"} Mar 11 09:20:04 crc kubenswrapper[4840]: I0311 09:20:04.177278 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9610466b-b516-43bc-b95a-7f671caee44b","Type":"ContainerDied","Data":"5710c98e70ee4bafdfd35c2821a6446cdd84e307624e1ccb73749b72a1fe578e"} Mar 11 09:20:04 crc kubenswrapper[4840]: I0311 09:20:04.177292 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5710c98e70ee4bafdfd35c2821a6446cdd84e307624e1ccb73749b72a1fe578e" Mar 11 09:20:04 crc kubenswrapper[4840]: I0311 09:20:04.180313 4840 generic.go:334] "Generic (PLEG): container finished" podID="b5045b6e-75f6-44c8-b9bb-fd28964f0676" containerID="1bb46ca958f12008f70e1613c2594276c581a8c47f2383163050203d5a0ea214" exitCode=137 Mar 11 09:20:04 crc kubenswrapper[4840]: I0311 09:20:04.180640 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"b5045b6e-75f6-44c8-b9bb-fd28964f0676","Type":"ContainerDied","Data":"1bb46ca958f12008f70e1613c2594276c581a8c47f2383163050203d5a0ea214"} Mar 11 09:20:04 crc kubenswrapper[4840]: I0311 09:20:04.180887 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" 
event={"ID":"b5045b6e-75f6-44c8-b9bb-fd28964f0676","Type":"ContainerDied","Data":"e03b1f32f18c5147475329f2074cf0319edb631499820a3663d443bc3e37e98e"} Mar 11 09:20:04 crc kubenswrapper[4840]: I0311 09:20:04.180916 4840 scope.go:117] "RemoveContainer" containerID="1bb46ca958f12008f70e1613c2594276c581a8c47f2383163050203d5a0ea214" Mar 11 09:20:04 crc kubenswrapper[4840]: I0311 09:20:04.180697 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 11 09:20:04 crc kubenswrapper[4840]: I0311 09:20:04.181197 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 11 09:20:04 crc kubenswrapper[4840]: I0311 09:20:04.219268 4840 scope.go:117] "RemoveContainer" containerID="1bb46ca958f12008f70e1613c2594276c581a8c47f2383163050203d5a0ea214" Mar 11 09:20:04 crc kubenswrapper[4840]: E0311 09:20:04.225913 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1bb46ca958f12008f70e1613c2594276c581a8c47f2383163050203d5a0ea214\": container with ID starting with 1bb46ca958f12008f70e1613c2594276c581a8c47f2383163050203d5a0ea214 not found: ID does not exist" containerID="1bb46ca958f12008f70e1613c2594276c581a8c47f2383163050203d5a0ea214" Mar 11 09:20:04 crc kubenswrapper[4840]: I0311 09:20:04.225979 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1bb46ca958f12008f70e1613c2594276c581a8c47f2383163050203d5a0ea214"} err="failed to get container status \"1bb46ca958f12008f70e1613c2594276c581a8c47f2383163050203d5a0ea214\": rpc error: code = NotFound desc = could not find container \"1bb46ca958f12008f70e1613c2594276c581a8c47f2383163050203d5a0ea214\": container with ID starting with 1bb46ca958f12008f70e1613c2594276c581a8c47f2383163050203d5a0ea214 not found: ID does not exist" Mar 11 09:20:04 crc kubenswrapper[4840]: I0311 09:20:04.367241 4840 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cbxl8\" (UniqueName: \"kubernetes.io/projected/b5045b6e-75f6-44c8-b9bb-fd28964f0676-kube-api-access-cbxl8\") pod \"b5045b6e-75f6-44c8-b9bb-fd28964f0676\" (UID: \"b5045b6e-75f6-44c8-b9bb-fd28964f0676\") " Mar 11 09:20:04 crc kubenswrapper[4840]: I0311 09:20:04.367717 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-57tx7\" (UniqueName: \"kubernetes.io/projected/9610466b-b516-43bc-b95a-7f671caee44b-kube-api-access-57tx7\") pod \"9610466b-b516-43bc-b95a-7f671caee44b\" (UID: \"9610466b-b516-43bc-b95a-7f671caee44b\") " Mar 11 09:20:04 crc kubenswrapper[4840]: I0311 09:20:04.367826 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9610466b-b516-43bc-b95a-7f671caee44b-config-data\") pod \"9610466b-b516-43bc-b95a-7f671caee44b\" (UID: \"9610466b-b516-43bc-b95a-7f671caee44b\") " Mar 11 09:20:04 crc kubenswrapper[4840]: I0311 09:20:04.367864 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5045b6e-75f6-44c8-b9bb-fd28964f0676-config-data\") pod \"b5045b6e-75f6-44c8-b9bb-fd28964f0676\" (UID: \"b5045b6e-75f6-44c8-b9bb-fd28964f0676\") " Mar 11 09:20:04 crc kubenswrapper[4840]: I0311 09:20:04.367892 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9610466b-b516-43bc-b95a-7f671caee44b-combined-ca-bundle\") pod \"9610466b-b516-43bc-b95a-7f671caee44b\" (UID: \"9610466b-b516-43bc-b95a-7f671caee44b\") " Mar 11 09:20:04 crc kubenswrapper[4840]: I0311 09:20:04.367915 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5045b6e-75f6-44c8-b9bb-fd28964f0676-combined-ca-bundle\") pod 
\"b5045b6e-75f6-44c8-b9bb-fd28964f0676\" (UID: \"b5045b6e-75f6-44c8-b9bb-fd28964f0676\") " Mar 11 09:20:04 crc kubenswrapper[4840]: I0311 09:20:04.367939 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9610466b-b516-43bc-b95a-7f671caee44b-logs\") pod \"9610466b-b516-43bc-b95a-7f671caee44b\" (UID: \"9610466b-b516-43bc-b95a-7f671caee44b\") " Mar 11 09:20:04 crc kubenswrapper[4840]: I0311 09:20:04.368879 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9610466b-b516-43bc-b95a-7f671caee44b-logs" (OuterVolumeSpecName: "logs") pod "9610466b-b516-43bc-b95a-7f671caee44b" (UID: "9610466b-b516-43bc-b95a-7f671caee44b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:20:04 crc kubenswrapper[4840]: I0311 09:20:04.377066 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5045b6e-75f6-44c8-b9bb-fd28964f0676-kube-api-access-cbxl8" (OuterVolumeSpecName: "kube-api-access-cbxl8") pod "b5045b6e-75f6-44c8-b9bb-fd28964f0676" (UID: "b5045b6e-75f6-44c8-b9bb-fd28964f0676"). InnerVolumeSpecName "kube-api-access-cbxl8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:20:04 crc kubenswrapper[4840]: I0311 09:20:04.385314 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9610466b-b516-43bc-b95a-7f671caee44b-kube-api-access-57tx7" (OuterVolumeSpecName: "kube-api-access-57tx7") pod "9610466b-b516-43bc-b95a-7f671caee44b" (UID: "9610466b-b516-43bc-b95a-7f671caee44b"). InnerVolumeSpecName "kube-api-access-57tx7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:20:04 crc kubenswrapper[4840]: I0311 09:20:04.408209 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9610466b-b516-43bc-b95a-7f671caee44b-config-data" (OuterVolumeSpecName: "config-data") pod "9610466b-b516-43bc-b95a-7f671caee44b" (UID: "9610466b-b516-43bc-b95a-7f671caee44b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:20:04 crc kubenswrapper[4840]: I0311 09:20:04.416007 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5045b6e-75f6-44c8-b9bb-fd28964f0676-config-data" (OuterVolumeSpecName: "config-data") pod "b5045b6e-75f6-44c8-b9bb-fd28964f0676" (UID: "b5045b6e-75f6-44c8-b9bb-fd28964f0676"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:20:04 crc kubenswrapper[4840]: I0311 09:20:04.419697 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9610466b-b516-43bc-b95a-7f671caee44b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9610466b-b516-43bc-b95a-7f671caee44b" (UID: "9610466b-b516-43bc-b95a-7f671caee44b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:20:04 crc kubenswrapper[4840]: I0311 09:20:04.433292 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5045b6e-75f6-44c8-b9bb-fd28964f0676-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b5045b6e-75f6-44c8-b9bb-fd28964f0676" (UID: "b5045b6e-75f6-44c8-b9bb-fd28964f0676"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:20:04 crc kubenswrapper[4840]: I0311 09:20:04.470579 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cbxl8\" (UniqueName: \"kubernetes.io/projected/b5045b6e-75f6-44c8-b9bb-fd28964f0676-kube-api-access-cbxl8\") on node \"crc\" DevicePath \"\"" Mar 11 09:20:04 crc kubenswrapper[4840]: I0311 09:20:04.470613 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-57tx7\" (UniqueName: \"kubernetes.io/projected/9610466b-b516-43bc-b95a-7f671caee44b-kube-api-access-57tx7\") on node \"crc\" DevicePath \"\"" Mar 11 09:20:04 crc kubenswrapper[4840]: I0311 09:20:04.470626 4840 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9610466b-b516-43bc-b95a-7f671caee44b-config-data\") on node \"crc\" DevicePath \"\"" Mar 11 09:20:04 crc kubenswrapper[4840]: I0311 09:20:04.470639 4840 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5045b6e-75f6-44c8-b9bb-fd28964f0676-config-data\") on node \"crc\" DevicePath \"\"" Mar 11 09:20:04 crc kubenswrapper[4840]: I0311 09:20:04.470648 4840 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9610466b-b516-43bc-b95a-7f671caee44b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 09:20:04 crc kubenswrapper[4840]: I0311 09:20:04.470658 4840 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5045b6e-75f6-44c8-b9bb-fd28964f0676-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 09:20:04 crc kubenswrapper[4840]: I0311 09:20:04.470668 4840 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9610466b-b516-43bc-b95a-7f671caee44b-logs\") on node \"crc\" DevicePath \"\"" Mar 11 09:20:04 crc kubenswrapper[4840]: I0311 09:20:04.510979 4840 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553680-9qvzt" Mar 11 09:20:04 crc kubenswrapper[4840]: I0311 09:20:04.524343 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 11 09:20:04 crc kubenswrapper[4840]: I0311 09:20:04.534607 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 11 09:20:04 crc kubenswrapper[4840]: I0311 09:20:04.564989 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 11 09:20:04 crc kubenswrapper[4840]: E0311 09:20:04.565657 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9610466b-b516-43bc-b95a-7f671caee44b" containerName="nova-metadata-metadata" Mar 11 09:20:04 crc kubenswrapper[4840]: I0311 09:20:04.565683 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="9610466b-b516-43bc-b95a-7f671caee44b" containerName="nova-metadata-metadata" Mar 11 09:20:04 crc kubenswrapper[4840]: E0311 09:20:04.565716 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9610466b-b516-43bc-b95a-7f671caee44b" containerName="nova-metadata-log" Mar 11 09:20:04 crc kubenswrapper[4840]: I0311 09:20:04.565725 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="9610466b-b516-43bc-b95a-7f671caee44b" containerName="nova-metadata-log" Mar 11 09:20:04 crc kubenswrapper[4840]: E0311 09:20:04.565738 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5045b6e-75f6-44c8-b9bb-fd28964f0676" containerName="nova-cell1-novncproxy-novncproxy" Mar 11 09:20:04 crc kubenswrapper[4840]: I0311 09:20:04.565758 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5045b6e-75f6-44c8-b9bb-fd28964f0676" containerName="nova-cell1-novncproxy-novncproxy" Mar 11 09:20:04 crc kubenswrapper[4840]: E0311 09:20:04.565777 4840 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="86f28307-7fbd-48fc-9d46-81c82922ff2a" containerName="oc" Mar 11 09:20:04 crc kubenswrapper[4840]: I0311 09:20:04.565785 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="86f28307-7fbd-48fc-9d46-81c82922ff2a" containerName="oc" Mar 11 09:20:04 crc kubenswrapper[4840]: I0311 09:20:04.566025 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="9610466b-b516-43bc-b95a-7f671caee44b" containerName="nova-metadata-log" Mar 11 09:20:04 crc kubenswrapper[4840]: I0311 09:20:04.566044 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5045b6e-75f6-44c8-b9bb-fd28964f0676" containerName="nova-cell1-novncproxy-novncproxy" Mar 11 09:20:04 crc kubenswrapper[4840]: I0311 09:20:04.566067 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="86f28307-7fbd-48fc-9d46-81c82922ff2a" containerName="oc" Mar 11 09:20:04 crc kubenswrapper[4840]: I0311 09:20:04.566092 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="9610466b-b516-43bc-b95a-7f671caee44b" containerName="nova-metadata-metadata" Mar 11 09:20:04 crc kubenswrapper[4840]: I0311 09:20:04.566973 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 11 09:20:04 crc kubenswrapper[4840]: I0311 09:20:04.570654 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Mar 11 09:20:04 crc kubenswrapper[4840]: I0311 09:20:04.570872 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Mar 11 09:20:04 crc kubenswrapper[4840]: I0311 09:20:04.571014 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Mar 11 09:20:04 crc kubenswrapper[4840]: I0311 09:20:04.574812 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b2xh5\" (UniqueName: \"kubernetes.io/projected/86f28307-7fbd-48fc-9d46-81c82922ff2a-kube-api-access-b2xh5\") pod \"86f28307-7fbd-48fc-9d46-81c82922ff2a\" (UID: \"86f28307-7fbd-48fc-9d46-81c82922ff2a\") " Mar 11 09:20:04 crc kubenswrapper[4840]: I0311 09:20:04.575871 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/666b69a9-5d21-4f9c-83ce-49e6e132e8e9-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"666b69a9-5d21-4f9c-83ce-49e6e132e8e9\") " pod="openstack/nova-cell1-novncproxy-0" Mar 11 09:20:04 crc kubenswrapper[4840]: I0311 09:20:04.575948 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2stz\" (UniqueName: \"kubernetes.io/projected/666b69a9-5d21-4f9c-83ce-49e6e132e8e9-kube-api-access-x2stz\") pod \"nova-cell1-novncproxy-0\" (UID: \"666b69a9-5d21-4f9c-83ce-49e6e132e8e9\") " pod="openstack/nova-cell1-novncproxy-0" Mar 11 09:20:04 crc kubenswrapper[4840]: I0311 09:20:04.576074 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/666b69a9-5d21-4f9c-83ce-49e6e132e8e9-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"666b69a9-5d21-4f9c-83ce-49e6e132e8e9\") " pod="openstack/nova-cell1-novncproxy-0" Mar 11 09:20:04 crc kubenswrapper[4840]: I0311 09:20:04.576126 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/666b69a9-5d21-4f9c-83ce-49e6e132e8e9-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"666b69a9-5d21-4f9c-83ce-49e6e132e8e9\") " pod="openstack/nova-cell1-novncproxy-0" Mar 11 09:20:04 crc kubenswrapper[4840]: I0311 09:20:04.576161 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/666b69a9-5d21-4f9c-83ce-49e6e132e8e9-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"666b69a9-5d21-4f9c-83ce-49e6e132e8e9\") " pod="openstack/nova-cell1-novncproxy-0" Mar 11 09:20:04 crc kubenswrapper[4840]: I0311 09:20:04.578953 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86f28307-7fbd-48fc-9d46-81c82922ff2a-kube-api-access-b2xh5" (OuterVolumeSpecName: "kube-api-access-b2xh5") pod "86f28307-7fbd-48fc-9d46-81c82922ff2a" (UID: "86f28307-7fbd-48fc-9d46-81c82922ff2a"). InnerVolumeSpecName "kube-api-access-b2xh5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:20:04 crc kubenswrapper[4840]: I0311 09:20:04.592922 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 11 09:20:04 crc kubenswrapper[4840]: I0311 09:20:04.676880 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/666b69a9-5d21-4f9c-83ce-49e6e132e8e9-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"666b69a9-5d21-4f9c-83ce-49e6e132e8e9\") " pod="openstack/nova-cell1-novncproxy-0" Mar 11 09:20:04 crc kubenswrapper[4840]: I0311 09:20:04.676936 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/666b69a9-5d21-4f9c-83ce-49e6e132e8e9-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"666b69a9-5d21-4f9c-83ce-49e6e132e8e9\") " pod="openstack/nova-cell1-novncproxy-0" Mar 11 09:20:04 crc kubenswrapper[4840]: I0311 09:20:04.676963 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/666b69a9-5d21-4f9c-83ce-49e6e132e8e9-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"666b69a9-5d21-4f9c-83ce-49e6e132e8e9\") " pod="openstack/nova-cell1-novncproxy-0" Mar 11 09:20:04 crc kubenswrapper[4840]: I0311 09:20:04.676984 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/666b69a9-5d21-4f9c-83ce-49e6e132e8e9-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"666b69a9-5d21-4f9c-83ce-49e6e132e8e9\") " pod="openstack/nova-cell1-novncproxy-0" Mar 11 09:20:04 crc kubenswrapper[4840]: I0311 09:20:04.677023 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2stz\" (UniqueName: 
\"kubernetes.io/projected/666b69a9-5d21-4f9c-83ce-49e6e132e8e9-kube-api-access-x2stz\") pod \"nova-cell1-novncproxy-0\" (UID: \"666b69a9-5d21-4f9c-83ce-49e6e132e8e9\") " pod="openstack/nova-cell1-novncproxy-0" Mar 11 09:20:04 crc kubenswrapper[4840]: I0311 09:20:04.677139 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b2xh5\" (UniqueName: \"kubernetes.io/projected/86f28307-7fbd-48fc-9d46-81c82922ff2a-kube-api-access-b2xh5\") on node \"crc\" DevicePath \"\"" Mar 11 09:20:04 crc kubenswrapper[4840]: I0311 09:20:04.681093 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/666b69a9-5d21-4f9c-83ce-49e6e132e8e9-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"666b69a9-5d21-4f9c-83ce-49e6e132e8e9\") " pod="openstack/nova-cell1-novncproxy-0" Mar 11 09:20:04 crc kubenswrapper[4840]: I0311 09:20:04.681163 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/666b69a9-5d21-4f9c-83ce-49e6e132e8e9-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"666b69a9-5d21-4f9c-83ce-49e6e132e8e9\") " pod="openstack/nova-cell1-novncproxy-0" Mar 11 09:20:04 crc kubenswrapper[4840]: I0311 09:20:04.681165 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/666b69a9-5d21-4f9c-83ce-49e6e132e8e9-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"666b69a9-5d21-4f9c-83ce-49e6e132e8e9\") " pod="openstack/nova-cell1-novncproxy-0" Mar 11 09:20:04 crc kubenswrapper[4840]: I0311 09:20:04.682319 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/666b69a9-5d21-4f9c-83ce-49e6e132e8e9-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"666b69a9-5d21-4f9c-83ce-49e6e132e8e9\") " pod="openstack/nova-cell1-novncproxy-0" Mar 
11 09:20:04 crc kubenswrapper[4840]: I0311 09:20:04.692839 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2stz\" (UniqueName: \"kubernetes.io/projected/666b69a9-5d21-4f9c-83ce-49e6e132e8e9-kube-api-access-x2stz\") pod \"nova-cell1-novncproxy-0\" (UID: \"666b69a9-5d21-4f9c-83ce-49e6e132e8e9\") " pod="openstack/nova-cell1-novncproxy-0" Mar 11 09:20:04 crc kubenswrapper[4840]: I0311 09:20:04.894327 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 11 09:20:05 crc kubenswrapper[4840]: I0311 09:20:05.180682 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 11 09:20:05 crc kubenswrapper[4840]: I0311 09:20:05.191782 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553680-9qvzt" event={"ID":"86f28307-7fbd-48fc-9d46-81c82922ff2a","Type":"ContainerDied","Data":"f7ebe6d12328a81a4b58f368f3b78b5ecf32e866bc6a4142af93242c2e94e7f7"} Mar 11 09:20:05 crc kubenswrapper[4840]: I0311 09:20:05.191825 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f7ebe6d12328a81a4b58f368f3b78b5ecf32e866bc6a4142af93242c2e94e7f7" Mar 11 09:20:05 crc kubenswrapper[4840]: I0311 09:20:05.191902 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553680-9qvzt" Mar 11 09:20:05 crc kubenswrapper[4840]: I0311 09:20:05.194773 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 11 09:20:05 crc kubenswrapper[4840]: I0311 09:20:05.259225 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 11 09:20:05 crc kubenswrapper[4840]: I0311 09:20:05.274227 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Mar 11 09:20:05 crc kubenswrapper[4840]: I0311 09:20:05.292150 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Mar 11 09:20:05 crc kubenswrapper[4840]: I0311 09:20:05.293990 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 11 09:20:05 crc kubenswrapper[4840]: I0311 09:20:05.297006 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Mar 11 09:20:05 crc kubenswrapper[4840]: I0311 09:20:05.297216 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Mar 11 09:20:05 crc kubenswrapper[4840]: I0311 09:20:05.322886 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 11 09:20:05 crc kubenswrapper[4840]: I0311 09:20:05.391792 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a4033f1-ae55-4d94-90d1-397d031770ac-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"7a4033f1-ae55-4d94-90d1-397d031770ac\") " pod="openstack/nova-metadata-0" Mar 11 09:20:05 crc kubenswrapper[4840]: I0311 09:20:05.391968 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q47wp\" (UniqueName: \"kubernetes.io/projected/7a4033f1-ae55-4d94-90d1-397d031770ac-kube-api-access-q47wp\") pod \"nova-metadata-0\" (UID: \"7a4033f1-ae55-4d94-90d1-397d031770ac\") " pod="openstack/nova-metadata-0" Mar 11 09:20:05 crc 
kubenswrapper[4840]: I0311 09:20:05.392076 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a4033f1-ae55-4d94-90d1-397d031770ac-config-data\") pod \"nova-metadata-0\" (UID: \"7a4033f1-ae55-4d94-90d1-397d031770ac\") " pod="openstack/nova-metadata-0" Mar 11 09:20:05 crc kubenswrapper[4840]: I0311 09:20:05.392186 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7a4033f1-ae55-4d94-90d1-397d031770ac-logs\") pod \"nova-metadata-0\" (UID: \"7a4033f1-ae55-4d94-90d1-397d031770ac\") " pod="openstack/nova-metadata-0" Mar 11 09:20:05 crc kubenswrapper[4840]: I0311 09:20:05.392250 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a4033f1-ae55-4d94-90d1-397d031770ac-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7a4033f1-ae55-4d94-90d1-397d031770ac\") " pod="openstack/nova-metadata-0" Mar 11 09:20:05 crc kubenswrapper[4840]: I0311 09:20:05.494333 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a4033f1-ae55-4d94-90d1-397d031770ac-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7a4033f1-ae55-4d94-90d1-397d031770ac\") " pod="openstack/nova-metadata-0" Mar 11 09:20:05 crc kubenswrapper[4840]: I0311 09:20:05.494745 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a4033f1-ae55-4d94-90d1-397d031770ac-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"7a4033f1-ae55-4d94-90d1-397d031770ac\") " pod="openstack/nova-metadata-0" Mar 11 09:20:05 crc kubenswrapper[4840]: I0311 09:20:05.494893 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-q47wp\" (UniqueName: \"kubernetes.io/projected/7a4033f1-ae55-4d94-90d1-397d031770ac-kube-api-access-q47wp\") pod \"nova-metadata-0\" (UID: \"7a4033f1-ae55-4d94-90d1-397d031770ac\") " pod="openstack/nova-metadata-0" Mar 11 09:20:05 crc kubenswrapper[4840]: I0311 09:20:05.495043 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a4033f1-ae55-4d94-90d1-397d031770ac-config-data\") pod \"nova-metadata-0\" (UID: \"7a4033f1-ae55-4d94-90d1-397d031770ac\") " pod="openstack/nova-metadata-0" Mar 11 09:20:05 crc kubenswrapper[4840]: I0311 09:20:05.495219 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7a4033f1-ae55-4d94-90d1-397d031770ac-logs\") pod \"nova-metadata-0\" (UID: \"7a4033f1-ae55-4d94-90d1-397d031770ac\") " pod="openstack/nova-metadata-0" Mar 11 09:20:05 crc kubenswrapper[4840]: I0311 09:20:05.495646 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7a4033f1-ae55-4d94-90d1-397d031770ac-logs\") pod \"nova-metadata-0\" (UID: \"7a4033f1-ae55-4d94-90d1-397d031770ac\") " pod="openstack/nova-metadata-0" Mar 11 09:20:05 crc kubenswrapper[4840]: I0311 09:20:05.499713 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a4033f1-ae55-4d94-90d1-397d031770ac-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"7a4033f1-ae55-4d94-90d1-397d031770ac\") " pod="openstack/nova-metadata-0" Mar 11 09:20:05 crc kubenswrapper[4840]: I0311 09:20:05.499935 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a4033f1-ae55-4d94-90d1-397d031770ac-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7a4033f1-ae55-4d94-90d1-397d031770ac\") " pod="openstack/nova-metadata-0" Mar 
11 09:20:05 crc kubenswrapper[4840]: I0311 09:20:05.500307 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a4033f1-ae55-4d94-90d1-397d031770ac-config-data\") pod \"nova-metadata-0\" (UID: \"7a4033f1-ae55-4d94-90d1-397d031770ac\") " pod="openstack/nova-metadata-0" Mar 11 09:20:05 crc kubenswrapper[4840]: I0311 09:20:05.531588 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q47wp\" (UniqueName: \"kubernetes.io/projected/7a4033f1-ae55-4d94-90d1-397d031770ac-kube-api-access-q47wp\") pod \"nova-metadata-0\" (UID: \"7a4033f1-ae55-4d94-90d1-397d031770ac\") " pod="openstack/nova-metadata-0" Mar 11 09:20:05 crc kubenswrapper[4840]: I0311 09:20:05.610267 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29553674-khft7"] Mar 11 09:20:05 crc kubenswrapper[4840]: I0311 09:20:05.619637 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29553674-khft7"] Mar 11 09:20:05 crc kubenswrapper[4840]: I0311 09:20:05.627791 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 11 09:20:06 crc kubenswrapper[4840]: I0311 09:20:06.092317 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="219c315e-397b-4f3a-b7b3-cded51345c0a" path="/var/lib/kubelet/pods/219c315e-397b-4f3a-b7b3-cded51345c0a/volumes" Mar 11 09:20:06 crc kubenswrapper[4840]: I0311 09:20:06.094804 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9610466b-b516-43bc-b95a-7f671caee44b" path="/var/lib/kubelet/pods/9610466b-b516-43bc-b95a-7f671caee44b/volumes" Mar 11 09:20:06 crc kubenswrapper[4840]: I0311 09:20:06.095893 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5045b6e-75f6-44c8-b9bb-fd28964f0676" path="/var/lib/kubelet/pods/b5045b6e-75f6-44c8-b9bb-fd28964f0676/volumes" Mar 11 09:20:06 crc kubenswrapper[4840]: I0311 09:20:06.129983 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 11 09:20:06 crc kubenswrapper[4840]: W0311 09:20:06.149043 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7a4033f1_ae55_4d94_90d1_397d031770ac.slice/crio-cf486f95735557f40cb1ab341055c949501b19648c449c75b59f4b668a065ba5 WatchSource:0}: Error finding container cf486f95735557f40cb1ab341055c949501b19648c449c75b59f4b668a065ba5: Status 404 returned error can't find the container with id cf486f95735557f40cb1ab341055c949501b19648c449c75b59f4b668a065ba5 Mar 11 09:20:06 crc kubenswrapper[4840]: I0311 09:20:06.207148 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"666b69a9-5d21-4f9c-83ce-49e6e132e8e9","Type":"ContainerStarted","Data":"bc0ab41733673759705d9fad6da2ab073f93f91b5041cad5c966fdd3db1c9c9b"} Mar 11 09:20:06 crc kubenswrapper[4840]: I0311 09:20:06.207192 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" 
event={"ID":"666b69a9-5d21-4f9c-83ce-49e6e132e8e9","Type":"ContainerStarted","Data":"c8dd15eda1c58bb68283006caea24257ef51479f9335c7236545f1571d004289"} Mar 11 09:20:06 crc kubenswrapper[4840]: I0311 09:20:06.231672 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.231647526 podStartE2EDuration="2.231647526s" podCreationTimestamp="2026-03-11 09:20:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:20:06.224840325 +0000 UTC m=+1404.890510140" watchObservedRunningTime="2026-03-11 09:20:06.231647526 +0000 UTC m=+1404.897317341" Mar 11 09:20:06 crc kubenswrapper[4840]: I0311 09:20:06.231693 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7a4033f1-ae55-4d94-90d1-397d031770ac","Type":"ContainerStarted","Data":"cf486f95735557f40cb1ab341055c949501b19648c449c75b59f4b668a065ba5"} Mar 11 09:20:06 crc kubenswrapper[4840]: I0311 09:20:06.851796 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Mar 11 09:20:06 crc kubenswrapper[4840]: I0311 09:20:06.852635 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 11 09:20:06 crc kubenswrapper[4840]: I0311 09:20:06.860215 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Mar 11 09:20:06 crc kubenswrapper[4840]: I0311 09:20:06.873217 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Mar 11 09:20:07 crc kubenswrapper[4840]: I0311 09:20:07.249759 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7a4033f1-ae55-4d94-90d1-397d031770ac","Type":"ContainerStarted","Data":"1ea70b8c18c89fcbd1f7150b59f5078da643e9b3a43500d999a04f2556454bf1"} Mar 11 09:20:07 crc 
kubenswrapper[4840]: I0311 09:20:07.249798 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7a4033f1-ae55-4d94-90d1-397d031770ac","Type":"ContainerStarted","Data":"f9bfa7fdef9acd43337efefd3e57d9971ad8d57b63e03a3f021b13e132073ec7"} Mar 11 09:20:07 crc kubenswrapper[4840]: I0311 09:20:07.250220 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 11 09:20:07 crc kubenswrapper[4840]: I0311 09:20:07.256237 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Mar 11 09:20:07 crc kubenswrapper[4840]: I0311 09:20:07.276247 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.276225566 podStartE2EDuration="2.276225566s" podCreationTimestamp="2026-03-11 09:20:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:20:07.26962017 +0000 UTC m=+1405.935289995" watchObservedRunningTime="2026-03-11 09:20:07.276225566 +0000 UTC m=+1405.941895381" Mar 11 09:20:07 crc kubenswrapper[4840]: I0311 09:20:07.458996 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7749c44969-mwk5x"] Mar 11 09:20:07 crc kubenswrapper[4840]: I0311 09:20:07.461587 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7749c44969-mwk5x" Mar 11 09:20:07 crc kubenswrapper[4840]: I0311 09:20:07.497601 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7749c44969-mwk5x"] Mar 11 09:20:07 crc kubenswrapper[4840]: I0311 09:20:07.554330 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/56c8323a-4163-45af-8e67-2490198805f2-dns-swift-storage-0\") pod \"dnsmasq-dns-7749c44969-mwk5x\" (UID: \"56c8323a-4163-45af-8e67-2490198805f2\") " pod="openstack/dnsmasq-dns-7749c44969-mwk5x" Mar 11 09:20:07 crc kubenswrapper[4840]: I0311 09:20:07.554420 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56c8323a-4163-45af-8e67-2490198805f2-dns-svc\") pod \"dnsmasq-dns-7749c44969-mwk5x\" (UID: \"56c8323a-4163-45af-8e67-2490198805f2\") " pod="openstack/dnsmasq-dns-7749c44969-mwk5x" Mar 11 09:20:07 crc kubenswrapper[4840]: I0311 09:20:07.554491 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7zl8\" (UniqueName: \"kubernetes.io/projected/56c8323a-4163-45af-8e67-2490198805f2-kube-api-access-w7zl8\") pod \"dnsmasq-dns-7749c44969-mwk5x\" (UID: \"56c8323a-4163-45af-8e67-2490198805f2\") " pod="openstack/dnsmasq-dns-7749c44969-mwk5x" Mar 11 09:20:07 crc kubenswrapper[4840]: I0311 09:20:07.554537 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56c8323a-4163-45af-8e67-2490198805f2-config\") pod \"dnsmasq-dns-7749c44969-mwk5x\" (UID: \"56c8323a-4163-45af-8e67-2490198805f2\") " pod="openstack/dnsmasq-dns-7749c44969-mwk5x" Mar 11 09:20:07 crc kubenswrapper[4840]: I0311 09:20:07.554567 4840 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/56c8323a-4163-45af-8e67-2490198805f2-ovsdbserver-sb\") pod \"dnsmasq-dns-7749c44969-mwk5x\" (UID: \"56c8323a-4163-45af-8e67-2490198805f2\") " pod="openstack/dnsmasq-dns-7749c44969-mwk5x" Mar 11 09:20:07 crc kubenswrapper[4840]: I0311 09:20:07.554614 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/56c8323a-4163-45af-8e67-2490198805f2-ovsdbserver-nb\") pod \"dnsmasq-dns-7749c44969-mwk5x\" (UID: \"56c8323a-4163-45af-8e67-2490198805f2\") " pod="openstack/dnsmasq-dns-7749c44969-mwk5x" Mar 11 09:20:07 crc kubenswrapper[4840]: I0311 09:20:07.656984 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56c8323a-4163-45af-8e67-2490198805f2-config\") pod \"dnsmasq-dns-7749c44969-mwk5x\" (UID: \"56c8323a-4163-45af-8e67-2490198805f2\") " pod="openstack/dnsmasq-dns-7749c44969-mwk5x" Mar 11 09:20:07 crc kubenswrapper[4840]: I0311 09:20:07.657053 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/56c8323a-4163-45af-8e67-2490198805f2-ovsdbserver-sb\") pod \"dnsmasq-dns-7749c44969-mwk5x\" (UID: \"56c8323a-4163-45af-8e67-2490198805f2\") " pod="openstack/dnsmasq-dns-7749c44969-mwk5x" Mar 11 09:20:07 crc kubenswrapper[4840]: I0311 09:20:07.657123 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/56c8323a-4163-45af-8e67-2490198805f2-ovsdbserver-nb\") pod \"dnsmasq-dns-7749c44969-mwk5x\" (UID: \"56c8323a-4163-45af-8e67-2490198805f2\") " pod="openstack/dnsmasq-dns-7749c44969-mwk5x" Mar 11 09:20:07 crc kubenswrapper[4840]: I0311 09:20:07.657188 4840 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/56c8323a-4163-45af-8e67-2490198805f2-dns-swift-storage-0\") pod \"dnsmasq-dns-7749c44969-mwk5x\" (UID: \"56c8323a-4163-45af-8e67-2490198805f2\") " pod="openstack/dnsmasq-dns-7749c44969-mwk5x" Mar 11 09:20:07 crc kubenswrapper[4840]: I0311 09:20:07.657249 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56c8323a-4163-45af-8e67-2490198805f2-dns-svc\") pod \"dnsmasq-dns-7749c44969-mwk5x\" (UID: \"56c8323a-4163-45af-8e67-2490198805f2\") " pod="openstack/dnsmasq-dns-7749c44969-mwk5x" Mar 11 09:20:07 crc kubenswrapper[4840]: I0311 09:20:07.657760 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7zl8\" (UniqueName: \"kubernetes.io/projected/56c8323a-4163-45af-8e67-2490198805f2-kube-api-access-w7zl8\") pod \"dnsmasq-dns-7749c44969-mwk5x\" (UID: \"56c8323a-4163-45af-8e67-2490198805f2\") " pod="openstack/dnsmasq-dns-7749c44969-mwk5x" Mar 11 09:20:07 crc kubenswrapper[4840]: I0311 09:20:07.658360 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56c8323a-4163-45af-8e67-2490198805f2-config\") pod \"dnsmasq-dns-7749c44969-mwk5x\" (UID: \"56c8323a-4163-45af-8e67-2490198805f2\") " pod="openstack/dnsmasq-dns-7749c44969-mwk5x" Mar 11 09:20:07 crc kubenswrapper[4840]: I0311 09:20:07.658390 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/56c8323a-4163-45af-8e67-2490198805f2-ovsdbserver-nb\") pod \"dnsmasq-dns-7749c44969-mwk5x\" (UID: \"56c8323a-4163-45af-8e67-2490198805f2\") " pod="openstack/dnsmasq-dns-7749c44969-mwk5x" Mar 11 09:20:07 crc kubenswrapper[4840]: I0311 09:20:07.658327 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/56c8323a-4163-45af-8e67-2490198805f2-dns-svc\") pod \"dnsmasq-dns-7749c44969-mwk5x\" (UID: \"56c8323a-4163-45af-8e67-2490198805f2\") " pod="openstack/dnsmasq-dns-7749c44969-mwk5x" Mar 11 09:20:07 crc kubenswrapper[4840]: I0311 09:20:07.658437 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/56c8323a-4163-45af-8e67-2490198805f2-dns-swift-storage-0\") pod \"dnsmasq-dns-7749c44969-mwk5x\" (UID: \"56c8323a-4163-45af-8e67-2490198805f2\") " pod="openstack/dnsmasq-dns-7749c44969-mwk5x" Mar 11 09:20:07 crc kubenswrapper[4840]: I0311 09:20:07.658636 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/56c8323a-4163-45af-8e67-2490198805f2-ovsdbserver-sb\") pod \"dnsmasq-dns-7749c44969-mwk5x\" (UID: \"56c8323a-4163-45af-8e67-2490198805f2\") " pod="openstack/dnsmasq-dns-7749c44969-mwk5x" Mar 11 09:20:07 crc kubenswrapper[4840]: I0311 09:20:07.680983 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7zl8\" (UniqueName: \"kubernetes.io/projected/56c8323a-4163-45af-8e67-2490198805f2-kube-api-access-w7zl8\") pod \"dnsmasq-dns-7749c44969-mwk5x\" (UID: \"56c8323a-4163-45af-8e67-2490198805f2\") " pod="openstack/dnsmasq-dns-7749c44969-mwk5x" Mar 11 09:20:07 crc kubenswrapper[4840]: I0311 09:20:07.790493 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7749c44969-mwk5x" Mar 11 09:20:08 crc kubenswrapper[4840]: I0311 09:20:08.368695 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7749c44969-mwk5x"] Mar 11 09:20:09 crc kubenswrapper[4840]: I0311 09:20:09.272999 4840 generic.go:334] "Generic (PLEG): container finished" podID="56c8323a-4163-45af-8e67-2490198805f2" containerID="f9500782e434442aa22be0ab97cf2a255b5b9cecd44ab6561484a6263b0f2598" exitCode=0 Mar 11 09:20:09 crc kubenswrapper[4840]: I0311 09:20:09.273096 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7749c44969-mwk5x" event={"ID":"56c8323a-4163-45af-8e67-2490198805f2","Type":"ContainerDied","Data":"f9500782e434442aa22be0ab97cf2a255b5b9cecd44ab6561484a6263b0f2598"} Mar 11 09:20:09 crc kubenswrapper[4840]: I0311 09:20:09.273562 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7749c44969-mwk5x" event={"ID":"56c8323a-4163-45af-8e67-2490198805f2","Type":"ContainerStarted","Data":"9e25ab2894f031fc64d812de265c80ff66737c3823d4eb2abdd9efba72966490"} Mar 11 09:20:09 crc kubenswrapper[4840]: I0311 09:20:09.826871 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 11 09:20:09 crc kubenswrapper[4840]: I0311 09:20:09.829480 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="603351dd-ecf9-45be-9a52-12a3122cc22d" containerName="ceilometer-central-agent" containerID="cri-o://4fdd50a29025e3ac7d01e74196ffff09875be42f1be211da58caeacc78b4b122" gracePeriod=30 Mar 11 09:20:09 crc kubenswrapper[4840]: I0311 09:20:09.829701 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="603351dd-ecf9-45be-9a52-12a3122cc22d" containerName="proxy-httpd" containerID="cri-o://db48f935d34405f41e967d593f45b3f2076a66a8f1733e1908a3fba8f9be5679" gracePeriod=30 Mar 11 09:20:09 crc 
kubenswrapper[4840]: I0311 09:20:09.829789 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="603351dd-ecf9-45be-9a52-12a3122cc22d" containerName="sg-core" containerID="cri-o://ee16bf85ee819ef20795fbe3dd65399c7fbd9b2ca5f3a14eaf194880a9f035dd" gracePeriod=30 Mar 11 09:20:09 crc kubenswrapper[4840]: I0311 09:20:09.829838 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="603351dd-ecf9-45be-9a52-12a3122cc22d" containerName="ceilometer-notification-agent" containerID="cri-o://5d744badc2fbacd8f4fc7558e5c49ff94357c3cc302836da2bf24c2d05699823" gracePeriod=30 Mar 11 09:20:09 crc kubenswrapper[4840]: I0311 09:20:09.849418 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Mar 11 09:20:09 crc kubenswrapper[4840]: I0311 09:20:09.895414 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Mar 11 09:20:10 crc kubenswrapper[4840]: I0311 09:20:10.297995 4840 generic.go:334] "Generic (PLEG): container finished" podID="603351dd-ecf9-45be-9a52-12a3122cc22d" containerID="ee16bf85ee819ef20795fbe3dd65399c7fbd9b2ca5f3a14eaf194880a9f035dd" exitCode=2 Mar 11 09:20:10 crc kubenswrapper[4840]: I0311 09:20:10.298085 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"603351dd-ecf9-45be-9a52-12a3122cc22d","Type":"ContainerDied","Data":"ee16bf85ee819ef20795fbe3dd65399c7fbd9b2ca5f3a14eaf194880a9f035dd"} Mar 11 09:20:10 crc kubenswrapper[4840]: I0311 09:20:10.300091 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7749c44969-mwk5x" event={"ID":"56c8323a-4163-45af-8e67-2490198805f2","Type":"ContainerStarted","Data":"dc66b182d7e7d4162f25d405953a697d877ebcc573fde71e7656196962265a32"} Mar 11 09:20:10 crc kubenswrapper[4840]: I0311 09:20:10.300682 4840 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/dnsmasq-dns-7749c44969-mwk5x" Mar 11 09:20:10 crc kubenswrapper[4840]: I0311 09:20:10.317609 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 11 09:20:10 crc kubenswrapper[4840]: I0311 09:20:10.317823 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="9c92a574-9e37-4fee-8503-0a41b4366a23" containerName="nova-api-log" containerID="cri-o://5530264f80389ce86593fb3e727a9828428c222f56c4978ddd814207d3717db3" gracePeriod=30 Mar 11 09:20:10 crc kubenswrapper[4840]: I0311 09:20:10.317848 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="9c92a574-9e37-4fee-8503-0a41b4366a23" containerName="nova-api-api" containerID="cri-o://1b6c69b08c345a8b70fa35b336b43de7227d1f98fc77b1c77946fa1cab5a1a71" gracePeriod=30 Mar 11 09:20:10 crc kubenswrapper[4840]: I0311 09:20:10.342523 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7749c44969-mwk5x" podStartSLOduration=3.342502512 podStartE2EDuration="3.342502512s" podCreationTimestamp="2026-03-11 09:20:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:20:10.337371423 +0000 UTC m=+1409.003041238" watchObservedRunningTime="2026-03-11 09:20:10.342502512 +0000 UTC m=+1409.008172327" Mar 11 09:20:10 crc kubenswrapper[4840]: I0311 09:20:10.628874 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 11 09:20:10 crc kubenswrapper[4840]: I0311 09:20:10.628941 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 11 09:20:10 crc kubenswrapper[4840]: I0311 09:20:10.958326 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 11 09:20:11 crc kubenswrapper[4840]: I0311 09:20:11.130806 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/603351dd-ecf9-45be-9a52-12a3122cc22d-combined-ca-bundle\") pod \"603351dd-ecf9-45be-9a52-12a3122cc22d\" (UID: \"603351dd-ecf9-45be-9a52-12a3122cc22d\") " Mar 11 09:20:11 crc kubenswrapper[4840]: I0311 09:20:11.131533 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/603351dd-ecf9-45be-9a52-12a3122cc22d-run-httpd\") pod \"603351dd-ecf9-45be-9a52-12a3122cc22d\" (UID: \"603351dd-ecf9-45be-9a52-12a3122cc22d\") " Mar 11 09:20:11 crc kubenswrapper[4840]: I0311 09:20:11.131596 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/603351dd-ecf9-45be-9a52-12a3122cc22d-sg-core-conf-yaml\") pod \"603351dd-ecf9-45be-9a52-12a3122cc22d\" (UID: \"603351dd-ecf9-45be-9a52-12a3122cc22d\") " Mar 11 09:20:11 crc kubenswrapper[4840]: I0311 09:20:11.131646 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/603351dd-ecf9-45be-9a52-12a3122cc22d-scripts\") pod \"603351dd-ecf9-45be-9a52-12a3122cc22d\" (UID: \"603351dd-ecf9-45be-9a52-12a3122cc22d\") " Mar 11 09:20:11 crc kubenswrapper[4840]: I0311 09:20:11.131675 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/603351dd-ecf9-45be-9a52-12a3122cc22d-ceilometer-tls-certs\") pod \"603351dd-ecf9-45be-9a52-12a3122cc22d\" (UID: \"603351dd-ecf9-45be-9a52-12a3122cc22d\") " Mar 11 09:20:11 crc kubenswrapper[4840]: I0311 09:20:11.132201 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/603351dd-ecf9-45be-9a52-12a3122cc22d-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "603351dd-ecf9-45be-9a52-12a3122cc22d" (UID: "603351dd-ecf9-45be-9a52-12a3122cc22d"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:20:11 crc kubenswrapper[4840]: I0311 09:20:11.132417 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/603351dd-ecf9-45be-9a52-12a3122cc22d-log-httpd\") pod \"603351dd-ecf9-45be-9a52-12a3122cc22d\" (UID: \"603351dd-ecf9-45be-9a52-12a3122cc22d\") " Mar 11 09:20:11 crc kubenswrapper[4840]: I0311 09:20:11.132453 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kdlfk\" (UniqueName: \"kubernetes.io/projected/603351dd-ecf9-45be-9a52-12a3122cc22d-kube-api-access-kdlfk\") pod \"603351dd-ecf9-45be-9a52-12a3122cc22d\" (UID: \"603351dd-ecf9-45be-9a52-12a3122cc22d\") " Mar 11 09:20:11 crc kubenswrapper[4840]: I0311 09:20:11.132502 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/603351dd-ecf9-45be-9a52-12a3122cc22d-config-data\") pod \"603351dd-ecf9-45be-9a52-12a3122cc22d\" (UID: \"603351dd-ecf9-45be-9a52-12a3122cc22d\") " Mar 11 09:20:11 crc kubenswrapper[4840]: I0311 09:20:11.133217 4840 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/603351dd-ecf9-45be-9a52-12a3122cc22d-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 11 09:20:11 crc kubenswrapper[4840]: I0311 09:20:11.133943 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/603351dd-ecf9-45be-9a52-12a3122cc22d-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "603351dd-ecf9-45be-9a52-12a3122cc22d" (UID: "603351dd-ecf9-45be-9a52-12a3122cc22d"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:20:11 crc kubenswrapper[4840]: I0311 09:20:11.137942 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/603351dd-ecf9-45be-9a52-12a3122cc22d-scripts" (OuterVolumeSpecName: "scripts") pod "603351dd-ecf9-45be-9a52-12a3122cc22d" (UID: "603351dd-ecf9-45be-9a52-12a3122cc22d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:20:11 crc kubenswrapper[4840]: I0311 09:20:11.139715 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/603351dd-ecf9-45be-9a52-12a3122cc22d-kube-api-access-kdlfk" (OuterVolumeSpecName: "kube-api-access-kdlfk") pod "603351dd-ecf9-45be-9a52-12a3122cc22d" (UID: "603351dd-ecf9-45be-9a52-12a3122cc22d"). InnerVolumeSpecName "kube-api-access-kdlfk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:20:11 crc kubenswrapper[4840]: I0311 09:20:11.168062 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/603351dd-ecf9-45be-9a52-12a3122cc22d-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "603351dd-ecf9-45be-9a52-12a3122cc22d" (UID: "603351dd-ecf9-45be-9a52-12a3122cc22d"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:20:11 crc kubenswrapper[4840]: I0311 09:20:11.204128 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/603351dd-ecf9-45be-9a52-12a3122cc22d-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "603351dd-ecf9-45be-9a52-12a3122cc22d" (UID: "603351dd-ecf9-45be-9a52-12a3122cc22d"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:20:11 crc kubenswrapper[4840]: I0311 09:20:11.236127 4840 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/603351dd-ecf9-45be-9a52-12a3122cc22d-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 11 09:20:11 crc kubenswrapper[4840]: I0311 09:20:11.236177 4840 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/603351dd-ecf9-45be-9a52-12a3122cc22d-scripts\") on node \"crc\" DevicePath \"\"" Mar 11 09:20:11 crc kubenswrapper[4840]: I0311 09:20:11.236193 4840 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/603351dd-ecf9-45be-9a52-12a3122cc22d-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 11 09:20:11 crc kubenswrapper[4840]: I0311 09:20:11.236206 4840 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/603351dd-ecf9-45be-9a52-12a3122cc22d-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 11 09:20:11 crc kubenswrapper[4840]: I0311 09:20:11.236222 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kdlfk\" (UniqueName: \"kubernetes.io/projected/603351dd-ecf9-45be-9a52-12a3122cc22d-kube-api-access-kdlfk\") on node \"crc\" DevicePath \"\"" Mar 11 09:20:11 crc kubenswrapper[4840]: I0311 09:20:11.236403 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/603351dd-ecf9-45be-9a52-12a3122cc22d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "603351dd-ecf9-45be-9a52-12a3122cc22d" (UID: "603351dd-ecf9-45be-9a52-12a3122cc22d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:20:11 crc kubenswrapper[4840]: I0311 09:20:11.261290 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/603351dd-ecf9-45be-9a52-12a3122cc22d-config-data" (OuterVolumeSpecName: "config-data") pod "603351dd-ecf9-45be-9a52-12a3122cc22d" (UID: "603351dd-ecf9-45be-9a52-12a3122cc22d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:20:11 crc kubenswrapper[4840]: I0311 09:20:11.314532 4840 generic.go:334] "Generic (PLEG): container finished" podID="603351dd-ecf9-45be-9a52-12a3122cc22d" containerID="db48f935d34405f41e967d593f45b3f2076a66a8f1733e1908a3fba8f9be5679" exitCode=0 Mar 11 09:20:11 crc kubenswrapper[4840]: I0311 09:20:11.314584 4840 generic.go:334] "Generic (PLEG): container finished" podID="603351dd-ecf9-45be-9a52-12a3122cc22d" containerID="5d744badc2fbacd8f4fc7558e5c49ff94357c3cc302836da2bf24c2d05699823" exitCode=0 Mar 11 09:20:11 crc kubenswrapper[4840]: I0311 09:20:11.314600 4840 generic.go:334] "Generic (PLEG): container finished" podID="603351dd-ecf9-45be-9a52-12a3122cc22d" containerID="4fdd50a29025e3ac7d01e74196ffff09875be42f1be211da58caeacc78b4b122" exitCode=0 Mar 11 09:20:11 crc kubenswrapper[4840]: I0311 09:20:11.314639 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"603351dd-ecf9-45be-9a52-12a3122cc22d","Type":"ContainerDied","Data":"db48f935d34405f41e967d593f45b3f2076a66a8f1733e1908a3fba8f9be5679"} Mar 11 09:20:11 crc kubenswrapper[4840]: I0311 09:20:11.314660 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 11 09:20:11 crc kubenswrapper[4840]: I0311 09:20:11.314722 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"603351dd-ecf9-45be-9a52-12a3122cc22d","Type":"ContainerDied","Data":"5d744badc2fbacd8f4fc7558e5c49ff94357c3cc302836da2bf24c2d05699823"} Mar 11 09:20:11 crc kubenswrapper[4840]: I0311 09:20:11.315194 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"603351dd-ecf9-45be-9a52-12a3122cc22d","Type":"ContainerDied","Data":"4fdd50a29025e3ac7d01e74196ffff09875be42f1be211da58caeacc78b4b122"} Mar 11 09:20:11 crc kubenswrapper[4840]: I0311 09:20:11.315214 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"603351dd-ecf9-45be-9a52-12a3122cc22d","Type":"ContainerDied","Data":"d528300d8b8e8bc2b87a67b6a96c7a659552efde84dd7087b374bb5d15f45410"} Mar 11 09:20:11 crc kubenswrapper[4840]: I0311 09:20:11.314743 4840 scope.go:117] "RemoveContainer" containerID="db48f935d34405f41e967d593f45b3f2076a66a8f1733e1908a3fba8f9be5679" Mar 11 09:20:11 crc kubenswrapper[4840]: I0311 09:20:11.317464 4840 generic.go:334] "Generic (PLEG): container finished" podID="9c92a574-9e37-4fee-8503-0a41b4366a23" containerID="5530264f80389ce86593fb3e727a9828428c222f56c4978ddd814207d3717db3" exitCode=143 Mar 11 09:20:11 crc kubenswrapper[4840]: I0311 09:20:11.317597 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9c92a574-9e37-4fee-8503-0a41b4366a23","Type":"ContainerDied","Data":"5530264f80389ce86593fb3e727a9828428c222f56c4978ddd814207d3717db3"} Mar 11 09:20:11 crc kubenswrapper[4840]: I0311 09:20:11.339186 4840 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/603351dd-ecf9-45be-9a52-12a3122cc22d-config-data\") on node \"crc\" DevicePath \"\"" Mar 11 09:20:11 crc kubenswrapper[4840]: I0311 09:20:11.339216 4840 
reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/603351dd-ecf9-45be-9a52-12a3122cc22d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 09:20:11 crc kubenswrapper[4840]: I0311 09:20:11.358063 4840 scope.go:117] "RemoveContainer" containerID="ee16bf85ee819ef20795fbe3dd65399c7fbd9b2ca5f3a14eaf194880a9f035dd" Mar 11 09:20:11 crc kubenswrapper[4840]: I0311 09:20:11.371317 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 11 09:20:11 crc kubenswrapper[4840]: I0311 09:20:11.384355 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 11 09:20:11 crc kubenswrapper[4840]: I0311 09:20:11.406298 4840 scope.go:117] "RemoveContainer" containerID="5d744badc2fbacd8f4fc7558e5c49ff94357c3cc302836da2bf24c2d05699823" Mar 11 09:20:11 crc kubenswrapper[4840]: I0311 09:20:11.433349 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 11 09:20:11 crc kubenswrapper[4840]: E0311 09:20:11.435533 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="603351dd-ecf9-45be-9a52-12a3122cc22d" containerName="ceilometer-notification-agent" Mar 11 09:20:11 crc kubenswrapper[4840]: I0311 09:20:11.435556 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="603351dd-ecf9-45be-9a52-12a3122cc22d" containerName="ceilometer-notification-agent" Mar 11 09:20:11 crc kubenswrapper[4840]: E0311 09:20:11.435584 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="603351dd-ecf9-45be-9a52-12a3122cc22d" containerName="proxy-httpd" Mar 11 09:20:11 crc kubenswrapper[4840]: I0311 09:20:11.435592 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="603351dd-ecf9-45be-9a52-12a3122cc22d" containerName="proxy-httpd" Mar 11 09:20:11 crc kubenswrapper[4840]: E0311 09:20:11.435602 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="603351dd-ecf9-45be-9a52-12a3122cc22d" 
containerName="ceilometer-central-agent" Mar 11 09:20:11 crc kubenswrapper[4840]: I0311 09:20:11.435608 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="603351dd-ecf9-45be-9a52-12a3122cc22d" containerName="ceilometer-central-agent" Mar 11 09:20:11 crc kubenswrapper[4840]: E0311 09:20:11.435636 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="603351dd-ecf9-45be-9a52-12a3122cc22d" containerName="sg-core" Mar 11 09:20:11 crc kubenswrapper[4840]: I0311 09:20:11.435642 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="603351dd-ecf9-45be-9a52-12a3122cc22d" containerName="sg-core" Mar 11 09:20:11 crc kubenswrapper[4840]: I0311 09:20:11.436047 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="603351dd-ecf9-45be-9a52-12a3122cc22d" containerName="ceilometer-notification-agent" Mar 11 09:20:11 crc kubenswrapper[4840]: I0311 09:20:11.436078 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="603351dd-ecf9-45be-9a52-12a3122cc22d" containerName="ceilometer-central-agent" Mar 11 09:20:11 crc kubenswrapper[4840]: I0311 09:20:11.436095 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="603351dd-ecf9-45be-9a52-12a3122cc22d" containerName="proxy-httpd" Mar 11 09:20:11 crc kubenswrapper[4840]: I0311 09:20:11.436104 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="603351dd-ecf9-45be-9a52-12a3122cc22d" containerName="sg-core" Mar 11 09:20:11 crc kubenswrapper[4840]: I0311 09:20:11.439645 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 11 09:20:11 crc kubenswrapper[4840]: I0311 09:20:11.444399 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 11 09:20:11 crc kubenswrapper[4840]: I0311 09:20:11.446910 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Mar 11 09:20:11 crc kubenswrapper[4840]: I0311 09:20:11.447117 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 11 09:20:11 crc kubenswrapper[4840]: I0311 09:20:11.454854 4840 scope.go:117] "RemoveContainer" containerID="4fdd50a29025e3ac7d01e74196ffff09875be42f1be211da58caeacc78b4b122" Mar 11 09:20:11 crc kubenswrapper[4840]: I0311 09:20:11.466221 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 11 09:20:11 crc kubenswrapper[4840]: I0311 09:20:11.484917 4840 scope.go:117] "RemoveContainer" containerID="db48f935d34405f41e967d593f45b3f2076a66a8f1733e1908a3fba8f9be5679" Mar 11 09:20:11 crc kubenswrapper[4840]: E0311 09:20:11.486253 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db48f935d34405f41e967d593f45b3f2076a66a8f1733e1908a3fba8f9be5679\": container with ID starting with db48f935d34405f41e967d593f45b3f2076a66a8f1733e1908a3fba8f9be5679 not found: ID does not exist" containerID="db48f935d34405f41e967d593f45b3f2076a66a8f1733e1908a3fba8f9be5679" Mar 11 09:20:11 crc kubenswrapper[4840]: I0311 09:20:11.486325 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db48f935d34405f41e967d593f45b3f2076a66a8f1733e1908a3fba8f9be5679"} err="failed to get container status \"db48f935d34405f41e967d593f45b3f2076a66a8f1733e1908a3fba8f9be5679\": rpc error: code = NotFound desc = could not find container \"db48f935d34405f41e967d593f45b3f2076a66a8f1733e1908a3fba8f9be5679\": 
container with ID starting with db48f935d34405f41e967d593f45b3f2076a66a8f1733e1908a3fba8f9be5679 not found: ID does not exist" Mar 11 09:20:11 crc kubenswrapper[4840]: I0311 09:20:11.486366 4840 scope.go:117] "RemoveContainer" containerID="ee16bf85ee819ef20795fbe3dd65399c7fbd9b2ca5f3a14eaf194880a9f035dd" Mar 11 09:20:11 crc kubenswrapper[4840]: E0311 09:20:11.487021 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee16bf85ee819ef20795fbe3dd65399c7fbd9b2ca5f3a14eaf194880a9f035dd\": container with ID starting with ee16bf85ee819ef20795fbe3dd65399c7fbd9b2ca5f3a14eaf194880a9f035dd not found: ID does not exist" containerID="ee16bf85ee819ef20795fbe3dd65399c7fbd9b2ca5f3a14eaf194880a9f035dd" Mar 11 09:20:11 crc kubenswrapper[4840]: I0311 09:20:11.487060 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee16bf85ee819ef20795fbe3dd65399c7fbd9b2ca5f3a14eaf194880a9f035dd"} err="failed to get container status \"ee16bf85ee819ef20795fbe3dd65399c7fbd9b2ca5f3a14eaf194880a9f035dd\": rpc error: code = NotFound desc = could not find container \"ee16bf85ee819ef20795fbe3dd65399c7fbd9b2ca5f3a14eaf194880a9f035dd\": container with ID starting with ee16bf85ee819ef20795fbe3dd65399c7fbd9b2ca5f3a14eaf194880a9f035dd not found: ID does not exist" Mar 11 09:20:11 crc kubenswrapper[4840]: I0311 09:20:11.487091 4840 scope.go:117] "RemoveContainer" containerID="5d744badc2fbacd8f4fc7558e5c49ff94357c3cc302836da2bf24c2d05699823" Mar 11 09:20:11 crc kubenswrapper[4840]: E0311 09:20:11.487520 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d744badc2fbacd8f4fc7558e5c49ff94357c3cc302836da2bf24c2d05699823\": container with ID starting with 5d744badc2fbacd8f4fc7558e5c49ff94357c3cc302836da2bf24c2d05699823 not found: ID does not exist" 
containerID="5d744badc2fbacd8f4fc7558e5c49ff94357c3cc302836da2bf24c2d05699823" Mar 11 09:20:11 crc kubenswrapper[4840]: I0311 09:20:11.487547 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d744badc2fbacd8f4fc7558e5c49ff94357c3cc302836da2bf24c2d05699823"} err="failed to get container status \"5d744badc2fbacd8f4fc7558e5c49ff94357c3cc302836da2bf24c2d05699823\": rpc error: code = NotFound desc = could not find container \"5d744badc2fbacd8f4fc7558e5c49ff94357c3cc302836da2bf24c2d05699823\": container with ID starting with 5d744badc2fbacd8f4fc7558e5c49ff94357c3cc302836da2bf24c2d05699823 not found: ID does not exist" Mar 11 09:20:11 crc kubenswrapper[4840]: I0311 09:20:11.487562 4840 scope.go:117] "RemoveContainer" containerID="4fdd50a29025e3ac7d01e74196ffff09875be42f1be211da58caeacc78b4b122" Mar 11 09:20:11 crc kubenswrapper[4840]: E0311 09:20:11.487852 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4fdd50a29025e3ac7d01e74196ffff09875be42f1be211da58caeacc78b4b122\": container with ID starting with 4fdd50a29025e3ac7d01e74196ffff09875be42f1be211da58caeacc78b4b122 not found: ID does not exist" containerID="4fdd50a29025e3ac7d01e74196ffff09875be42f1be211da58caeacc78b4b122" Mar 11 09:20:11 crc kubenswrapper[4840]: I0311 09:20:11.487892 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4fdd50a29025e3ac7d01e74196ffff09875be42f1be211da58caeacc78b4b122"} err="failed to get container status \"4fdd50a29025e3ac7d01e74196ffff09875be42f1be211da58caeacc78b4b122\": rpc error: code = NotFound desc = could not find container \"4fdd50a29025e3ac7d01e74196ffff09875be42f1be211da58caeacc78b4b122\": container with ID starting with 4fdd50a29025e3ac7d01e74196ffff09875be42f1be211da58caeacc78b4b122 not found: ID does not exist" Mar 11 09:20:11 crc kubenswrapper[4840]: I0311 09:20:11.487907 4840 scope.go:117] 
"RemoveContainer" containerID="db48f935d34405f41e967d593f45b3f2076a66a8f1733e1908a3fba8f9be5679" Mar 11 09:20:11 crc kubenswrapper[4840]: I0311 09:20:11.488236 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db48f935d34405f41e967d593f45b3f2076a66a8f1733e1908a3fba8f9be5679"} err="failed to get container status \"db48f935d34405f41e967d593f45b3f2076a66a8f1733e1908a3fba8f9be5679\": rpc error: code = NotFound desc = could not find container \"db48f935d34405f41e967d593f45b3f2076a66a8f1733e1908a3fba8f9be5679\": container with ID starting with db48f935d34405f41e967d593f45b3f2076a66a8f1733e1908a3fba8f9be5679 not found: ID does not exist" Mar 11 09:20:11 crc kubenswrapper[4840]: I0311 09:20:11.488256 4840 scope.go:117] "RemoveContainer" containerID="ee16bf85ee819ef20795fbe3dd65399c7fbd9b2ca5f3a14eaf194880a9f035dd" Mar 11 09:20:11 crc kubenswrapper[4840]: I0311 09:20:11.488812 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee16bf85ee819ef20795fbe3dd65399c7fbd9b2ca5f3a14eaf194880a9f035dd"} err="failed to get container status \"ee16bf85ee819ef20795fbe3dd65399c7fbd9b2ca5f3a14eaf194880a9f035dd\": rpc error: code = NotFound desc = could not find container \"ee16bf85ee819ef20795fbe3dd65399c7fbd9b2ca5f3a14eaf194880a9f035dd\": container with ID starting with ee16bf85ee819ef20795fbe3dd65399c7fbd9b2ca5f3a14eaf194880a9f035dd not found: ID does not exist" Mar 11 09:20:11 crc kubenswrapper[4840]: I0311 09:20:11.488837 4840 scope.go:117] "RemoveContainer" containerID="5d744badc2fbacd8f4fc7558e5c49ff94357c3cc302836da2bf24c2d05699823" Mar 11 09:20:11 crc kubenswrapper[4840]: I0311 09:20:11.489125 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d744badc2fbacd8f4fc7558e5c49ff94357c3cc302836da2bf24c2d05699823"} err="failed to get container status \"5d744badc2fbacd8f4fc7558e5c49ff94357c3cc302836da2bf24c2d05699823\": rpc error: code = 
NotFound desc = could not find container \"5d744badc2fbacd8f4fc7558e5c49ff94357c3cc302836da2bf24c2d05699823\": container with ID starting with 5d744badc2fbacd8f4fc7558e5c49ff94357c3cc302836da2bf24c2d05699823 not found: ID does not exist" Mar 11 09:20:11 crc kubenswrapper[4840]: I0311 09:20:11.489149 4840 scope.go:117] "RemoveContainer" containerID="4fdd50a29025e3ac7d01e74196ffff09875be42f1be211da58caeacc78b4b122" Mar 11 09:20:11 crc kubenswrapper[4840]: I0311 09:20:11.489426 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4fdd50a29025e3ac7d01e74196ffff09875be42f1be211da58caeacc78b4b122"} err="failed to get container status \"4fdd50a29025e3ac7d01e74196ffff09875be42f1be211da58caeacc78b4b122\": rpc error: code = NotFound desc = could not find container \"4fdd50a29025e3ac7d01e74196ffff09875be42f1be211da58caeacc78b4b122\": container with ID starting with 4fdd50a29025e3ac7d01e74196ffff09875be42f1be211da58caeacc78b4b122 not found: ID does not exist" Mar 11 09:20:11 crc kubenswrapper[4840]: I0311 09:20:11.489450 4840 scope.go:117] "RemoveContainer" containerID="db48f935d34405f41e967d593f45b3f2076a66a8f1733e1908a3fba8f9be5679" Mar 11 09:20:11 crc kubenswrapper[4840]: I0311 09:20:11.490000 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db48f935d34405f41e967d593f45b3f2076a66a8f1733e1908a3fba8f9be5679"} err="failed to get container status \"db48f935d34405f41e967d593f45b3f2076a66a8f1733e1908a3fba8f9be5679\": rpc error: code = NotFound desc = could not find container \"db48f935d34405f41e967d593f45b3f2076a66a8f1733e1908a3fba8f9be5679\": container with ID starting with db48f935d34405f41e967d593f45b3f2076a66a8f1733e1908a3fba8f9be5679 not found: ID does not exist" Mar 11 09:20:11 crc kubenswrapper[4840]: I0311 09:20:11.490019 4840 scope.go:117] "RemoveContainer" containerID="ee16bf85ee819ef20795fbe3dd65399c7fbd9b2ca5f3a14eaf194880a9f035dd" Mar 11 09:20:11 crc 
kubenswrapper[4840]: I0311 09:20:11.490382 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee16bf85ee819ef20795fbe3dd65399c7fbd9b2ca5f3a14eaf194880a9f035dd"} err="failed to get container status \"ee16bf85ee819ef20795fbe3dd65399c7fbd9b2ca5f3a14eaf194880a9f035dd\": rpc error: code = NotFound desc = could not find container \"ee16bf85ee819ef20795fbe3dd65399c7fbd9b2ca5f3a14eaf194880a9f035dd\": container with ID starting with ee16bf85ee819ef20795fbe3dd65399c7fbd9b2ca5f3a14eaf194880a9f035dd not found: ID does not exist" Mar 11 09:20:11 crc kubenswrapper[4840]: I0311 09:20:11.490400 4840 scope.go:117] "RemoveContainer" containerID="5d744badc2fbacd8f4fc7558e5c49ff94357c3cc302836da2bf24c2d05699823" Mar 11 09:20:11 crc kubenswrapper[4840]: I0311 09:20:11.490704 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d744badc2fbacd8f4fc7558e5c49ff94357c3cc302836da2bf24c2d05699823"} err="failed to get container status \"5d744badc2fbacd8f4fc7558e5c49ff94357c3cc302836da2bf24c2d05699823\": rpc error: code = NotFound desc = could not find container \"5d744badc2fbacd8f4fc7558e5c49ff94357c3cc302836da2bf24c2d05699823\": container with ID starting with 5d744badc2fbacd8f4fc7558e5c49ff94357c3cc302836da2bf24c2d05699823 not found: ID does not exist" Mar 11 09:20:11 crc kubenswrapper[4840]: I0311 09:20:11.490722 4840 scope.go:117] "RemoveContainer" containerID="4fdd50a29025e3ac7d01e74196ffff09875be42f1be211da58caeacc78b4b122" Mar 11 09:20:11 crc kubenswrapper[4840]: I0311 09:20:11.491008 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4fdd50a29025e3ac7d01e74196ffff09875be42f1be211da58caeacc78b4b122"} err="failed to get container status \"4fdd50a29025e3ac7d01e74196ffff09875be42f1be211da58caeacc78b4b122\": rpc error: code = NotFound desc = could not find container \"4fdd50a29025e3ac7d01e74196ffff09875be42f1be211da58caeacc78b4b122\": container 
with ID starting with 4fdd50a29025e3ac7d01e74196ffff09875be42f1be211da58caeacc78b4b122 not found: ID does not exist" Mar 11 09:20:11 crc kubenswrapper[4840]: I0311 09:20:11.547212 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/083af501-c709-475c-91ad-b89eabfc0b92-config-data\") pod \"ceilometer-0\" (UID: \"083af501-c709-475c-91ad-b89eabfc0b92\") " pod="openstack/ceilometer-0" Mar 11 09:20:11 crc kubenswrapper[4840]: I0311 09:20:11.547296 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/083af501-c709-475c-91ad-b89eabfc0b92-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"083af501-c709-475c-91ad-b89eabfc0b92\") " pod="openstack/ceilometer-0" Mar 11 09:20:11 crc kubenswrapper[4840]: I0311 09:20:11.547336 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/083af501-c709-475c-91ad-b89eabfc0b92-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"083af501-c709-475c-91ad-b89eabfc0b92\") " pod="openstack/ceilometer-0" Mar 11 09:20:11 crc kubenswrapper[4840]: I0311 09:20:11.547365 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/083af501-c709-475c-91ad-b89eabfc0b92-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"083af501-c709-475c-91ad-b89eabfc0b92\") " pod="openstack/ceilometer-0" Mar 11 09:20:11 crc kubenswrapper[4840]: I0311 09:20:11.547500 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/083af501-c709-475c-91ad-b89eabfc0b92-scripts\") pod \"ceilometer-0\" (UID: \"083af501-c709-475c-91ad-b89eabfc0b92\") " pod="openstack/ceilometer-0" Mar 11 
09:20:11 crc kubenswrapper[4840]: I0311 09:20:11.547572 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/083af501-c709-475c-91ad-b89eabfc0b92-log-httpd\") pod \"ceilometer-0\" (UID: \"083af501-c709-475c-91ad-b89eabfc0b92\") " pod="openstack/ceilometer-0" Mar 11 09:20:11 crc kubenswrapper[4840]: I0311 09:20:11.547799 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qsdfz\" (UniqueName: \"kubernetes.io/projected/083af501-c709-475c-91ad-b89eabfc0b92-kube-api-access-qsdfz\") pod \"ceilometer-0\" (UID: \"083af501-c709-475c-91ad-b89eabfc0b92\") " pod="openstack/ceilometer-0" Mar 11 09:20:11 crc kubenswrapper[4840]: I0311 09:20:11.547934 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/083af501-c709-475c-91ad-b89eabfc0b92-run-httpd\") pod \"ceilometer-0\" (UID: \"083af501-c709-475c-91ad-b89eabfc0b92\") " pod="openstack/ceilometer-0" Mar 11 09:20:11 crc kubenswrapper[4840]: I0311 09:20:11.649499 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qsdfz\" (UniqueName: \"kubernetes.io/projected/083af501-c709-475c-91ad-b89eabfc0b92-kube-api-access-qsdfz\") pod \"ceilometer-0\" (UID: \"083af501-c709-475c-91ad-b89eabfc0b92\") " pod="openstack/ceilometer-0" Mar 11 09:20:11 crc kubenswrapper[4840]: I0311 09:20:11.649598 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/083af501-c709-475c-91ad-b89eabfc0b92-run-httpd\") pod \"ceilometer-0\" (UID: \"083af501-c709-475c-91ad-b89eabfc0b92\") " pod="openstack/ceilometer-0" Mar 11 09:20:11 crc kubenswrapper[4840]: I0311 09:20:11.649646 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/083af501-c709-475c-91ad-b89eabfc0b92-config-data\") pod \"ceilometer-0\" (UID: \"083af501-c709-475c-91ad-b89eabfc0b92\") " pod="openstack/ceilometer-0" Mar 11 09:20:11 crc kubenswrapper[4840]: I0311 09:20:11.649730 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/083af501-c709-475c-91ad-b89eabfc0b92-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"083af501-c709-475c-91ad-b89eabfc0b92\") " pod="openstack/ceilometer-0" Mar 11 09:20:11 crc kubenswrapper[4840]: I0311 09:20:11.649769 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/083af501-c709-475c-91ad-b89eabfc0b92-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"083af501-c709-475c-91ad-b89eabfc0b92\") " pod="openstack/ceilometer-0" Mar 11 09:20:11 crc kubenswrapper[4840]: I0311 09:20:11.649797 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/083af501-c709-475c-91ad-b89eabfc0b92-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"083af501-c709-475c-91ad-b89eabfc0b92\") " pod="openstack/ceilometer-0" Mar 11 09:20:11 crc kubenswrapper[4840]: I0311 09:20:11.649825 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/083af501-c709-475c-91ad-b89eabfc0b92-scripts\") pod \"ceilometer-0\" (UID: \"083af501-c709-475c-91ad-b89eabfc0b92\") " pod="openstack/ceilometer-0" Mar 11 09:20:11 crc kubenswrapper[4840]: I0311 09:20:11.649852 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/083af501-c709-475c-91ad-b89eabfc0b92-log-httpd\") pod \"ceilometer-0\" (UID: \"083af501-c709-475c-91ad-b89eabfc0b92\") " pod="openstack/ceilometer-0" Mar 11 09:20:11 crc 
kubenswrapper[4840]: I0311 09:20:11.650502 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/083af501-c709-475c-91ad-b89eabfc0b92-log-httpd\") pod \"ceilometer-0\" (UID: \"083af501-c709-475c-91ad-b89eabfc0b92\") " pod="openstack/ceilometer-0" Mar 11 09:20:11 crc kubenswrapper[4840]: I0311 09:20:11.653286 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/083af501-c709-475c-91ad-b89eabfc0b92-run-httpd\") pod \"ceilometer-0\" (UID: \"083af501-c709-475c-91ad-b89eabfc0b92\") " pod="openstack/ceilometer-0" Mar 11 09:20:11 crc kubenswrapper[4840]: I0311 09:20:11.659073 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/083af501-c709-475c-91ad-b89eabfc0b92-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"083af501-c709-475c-91ad-b89eabfc0b92\") " pod="openstack/ceilometer-0" Mar 11 09:20:11 crc kubenswrapper[4840]: I0311 09:20:11.664098 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/083af501-c709-475c-91ad-b89eabfc0b92-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"083af501-c709-475c-91ad-b89eabfc0b92\") " pod="openstack/ceilometer-0" Mar 11 09:20:11 crc kubenswrapper[4840]: I0311 09:20:11.674696 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/083af501-c709-475c-91ad-b89eabfc0b92-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"083af501-c709-475c-91ad-b89eabfc0b92\") " pod="openstack/ceilometer-0" Mar 11 09:20:11 crc kubenswrapper[4840]: I0311 09:20:11.675409 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/083af501-c709-475c-91ad-b89eabfc0b92-config-data\") pod \"ceilometer-0\" (UID: 
\"083af501-c709-475c-91ad-b89eabfc0b92\") " pod="openstack/ceilometer-0" Mar 11 09:20:11 crc kubenswrapper[4840]: I0311 09:20:11.676220 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/083af501-c709-475c-91ad-b89eabfc0b92-scripts\") pod \"ceilometer-0\" (UID: \"083af501-c709-475c-91ad-b89eabfc0b92\") " pod="openstack/ceilometer-0" Mar 11 09:20:11 crc kubenswrapper[4840]: I0311 09:20:11.686653 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qsdfz\" (UniqueName: \"kubernetes.io/projected/083af501-c709-475c-91ad-b89eabfc0b92-kube-api-access-qsdfz\") pod \"ceilometer-0\" (UID: \"083af501-c709-475c-91ad-b89eabfc0b92\") " pod="openstack/ceilometer-0" Mar 11 09:20:11 crc kubenswrapper[4840]: I0311 09:20:11.762616 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 11 09:20:12 crc kubenswrapper[4840]: I0311 09:20:12.074092 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="603351dd-ecf9-45be-9a52-12a3122cc22d" path="/var/lib/kubelet/pods/603351dd-ecf9-45be-9a52-12a3122cc22d/volumes" Mar 11 09:20:12 crc kubenswrapper[4840]: I0311 09:20:12.272180 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 11 09:20:12 crc kubenswrapper[4840]: W0311 09:20:12.280369 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod083af501_c709_475c_91ad_b89eabfc0b92.slice/crio-4da5c124aada918f36e1e0813a348dbda422ec4de0fe8f9a2a4b35036424e777 WatchSource:0}: Error finding container 4da5c124aada918f36e1e0813a348dbda422ec4de0fe8f9a2a4b35036424e777: Status 404 returned error can't find the container with id 4da5c124aada918f36e1e0813a348dbda422ec4de0fe8f9a2a4b35036424e777 Mar 11 09:20:12 crc kubenswrapper[4840]: I0311 09:20:12.330529 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"083af501-c709-475c-91ad-b89eabfc0b92","Type":"ContainerStarted","Data":"4da5c124aada918f36e1e0813a348dbda422ec4de0fe8f9a2a4b35036424e777"} Mar 11 09:20:12 crc kubenswrapper[4840]: I0311 09:20:12.423784 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 11 09:20:13 crc kubenswrapper[4840]: I0311 09:20:13.341913 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"083af501-c709-475c-91ad-b89eabfc0b92","Type":"ContainerStarted","Data":"cbdc983bfb8041d43339abed3a3e8729bb4bcfcda8b274b42238651b2b8ccc47"} Mar 11 09:20:14 crc kubenswrapper[4840]: I0311 09:20:14.080074 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 11 09:20:14 crc kubenswrapper[4840]: I0311 09:20:14.203270 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c92a574-9e37-4fee-8503-0a41b4366a23-config-data\") pod \"9c92a574-9e37-4fee-8503-0a41b4366a23\" (UID: \"9c92a574-9e37-4fee-8503-0a41b4366a23\") " Mar 11 09:20:14 crc kubenswrapper[4840]: I0311 09:20:14.204086 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pz9x8\" (UniqueName: \"kubernetes.io/projected/9c92a574-9e37-4fee-8503-0a41b4366a23-kube-api-access-pz9x8\") pod \"9c92a574-9e37-4fee-8503-0a41b4366a23\" (UID: \"9c92a574-9e37-4fee-8503-0a41b4366a23\") " Mar 11 09:20:14 crc kubenswrapper[4840]: I0311 09:20:14.204172 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c92a574-9e37-4fee-8503-0a41b4366a23-logs\") pod \"9c92a574-9e37-4fee-8503-0a41b4366a23\" (UID: \"9c92a574-9e37-4fee-8503-0a41b4366a23\") " Mar 11 09:20:14 crc kubenswrapper[4840]: I0311 09:20:14.204362 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c92a574-9e37-4fee-8503-0a41b4366a23-combined-ca-bundle\") pod \"9c92a574-9e37-4fee-8503-0a41b4366a23\" (UID: \"9c92a574-9e37-4fee-8503-0a41b4366a23\") " Mar 11 09:20:14 crc kubenswrapper[4840]: I0311 09:20:14.208105 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c92a574-9e37-4fee-8503-0a41b4366a23-logs" (OuterVolumeSpecName: "logs") pod "9c92a574-9e37-4fee-8503-0a41b4366a23" (UID: "9c92a574-9e37-4fee-8503-0a41b4366a23"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:20:14 crc kubenswrapper[4840]: I0311 09:20:14.212654 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c92a574-9e37-4fee-8503-0a41b4366a23-kube-api-access-pz9x8" (OuterVolumeSpecName: "kube-api-access-pz9x8") pod "9c92a574-9e37-4fee-8503-0a41b4366a23" (UID: "9c92a574-9e37-4fee-8503-0a41b4366a23"). InnerVolumeSpecName "kube-api-access-pz9x8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:20:14 crc kubenswrapper[4840]: I0311 09:20:14.263977 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c92a574-9e37-4fee-8503-0a41b4366a23-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9c92a574-9e37-4fee-8503-0a41b4366a23" (UID: "9c92a574-9e37-4fee-8503-0a41b4366a23"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:20:14 crc kubenswrapper[4840]: I0311 09:20:14.270652 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c92a574-9e37-4fee-8503-0a41b4366a23-config-data" (OuterVolumeSpecName: "config-data") pod "9c92a574-9e37-4fee-8503-0a41b4366a23" (UID: "9c92a574-9e37-4fee-8503-0a41b4366a23"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:20:14 crc kubenswrapper[4840]: I0311 09:20:14.312729 4840 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c92a574-9e37-4fee-8503-0a41b4366a23-config-data\") on node \"crc\" DevicePath \"\"" Mar 11 09:20:14 crc kubenswrapper[4840]: I0311 09:20:14.312978 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pz9x8\" (UniqueName: \"kubernetes.io/projected/9c92a574-9e37-4fee-8503-0a41b4366a23-kube-api-access-pz9x8\") on node \"crc\" DevicePath \"\"" Mar 11 09:20:14 crc kubenswrapper[4840]: I0311 09:20:14.313096 4840 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c92a574-9e37-4fee-8503-0a41b4366a23-logs\") on node \"crc\" DevicePath \"\"" Mar 11 09:20:14 crc kubenswrapper[4840]: I0311 09:20:14.313193 4840 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c92a574-9e37-4fee-8503-0a41b4366a23-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 09:20:14 crc kubenswrapper[4840]: I0311 09:20:14.355412 4840 generic.go:334] "Generic (PLEG): container finished" podID="9c92a574-9e37-4fee-8503-0a41b4366a23" containerID="1b6c69b08c345a8b70fa35b336b43de7227d1f98fc77b1c77946fa1cab5a1a71" exitCode=0 Mar 11 09:20:14 crc kubenswrapper[4840]: I0311 09:20:14.355658 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9c92a574-9e37-4fee-8503-0a41b4366a23","Type":"ContainerDied","Data":"1b6c69b08c345a8b70fa35b336b43de7227d1f98fc77b1c77946fa1cab5a1a71"} Mar 11 09:20:14 crc kubenswrapper[4840]: I0311 09:20:14.356785 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9c92a574-9e37-4fee-8503-0a41b4366a23","Type":"ContainerDied","Data":"ed7aaf8db8dad28f87bea1f2cc29c5561d85558cdd261ded87065c5ab627c1da"} Mar 11 09:20:14 crc kubenswrapper[4840]: 
I0311 09:20:14.356887 4840 scope.go:117] "RemoveContainer" containerID="1b6c69b08c345a8b70fa35b336b43de7227d1f98fc77b1c77946fa1cab5a1a71" Mar 11 09:20:14 crc kubenswrapper[4840]: I0311 09:20:14.355743 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 11 09:20:14 crc kubenswrapper[4840]: I0311 09:20:14.366804 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"083af501-c709-475c-91ad-b89eabfc0b92","Type":"ContainerStarted","Data":"15913e9a46b9cfb7b0c06f541c849836f1d2ff252b41ff966e4681e81dcfdc0e"} Mar 11 09:20:14 crc kubenswrapper[4840]: I0311 09:20:14.414903 4840 scope.go:117] "RemoveContainer" containerID="5530264f80389ce86593fb3e727a9828428c222f56c4978ddd814207d3717db3" Mar 11 09:20:14 crc kubenswrapper[4840]: I0311 09:20:14.416697 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 11 09:20:14 crc kubenswrapper[4840]: I0311 09:20:14.434539 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Mar 11 09:20:14 crc kubenswrapper[4840]: I0311 09:20:14.449536 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Mar 11 09:20:14 crc kubenswrapper[4840]: E0311 09:20:14.453993 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c92a574-9e37-4fee-8503-0a41b4366a23" containerName="nova-api-api" Mar 11 09:20:14 crc kubenswrapper[4840]: I0311 09:20:14.454035 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c92a574-9e37-4fee-8503-0a41b4366a23" containerName="nova-api-api" Mar 11 09:20:14 crc kubenswrapper[4840]: E0311 09:20:14.454124 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c92a574-9e37-4fee-8503-0a41b4366a23" containerName="nova-api-log" Mar 11 09:20:14 crc kubenswrapper[4840]: I0311 09:20:14.454134 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c92a574-9e37-4fee-8503-0a41b4366a23" 
containerName="nova-api-log" Mar 11 09:20:14 crc kubenswrapper[4840]: I0311 09:20:14.456321 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c92a574-9e37-4fee-8503-0a41b4366a23" containerName="nova-api-log" Mar 11 09:20:14 crc kubenswrapper[4840]: I0311 09:20:14.456363 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c92a574-9e37-4fee-8503-0a41b4366a23" containerName="nova-api-api" Mar 11 09:20:14 crc kubenswrapper[4840]: I0311 09:20:14.461244 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 11 09:20:14 crc kubenswrapper[4840]: I0311 09:20:14.517917 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 11 09:20:14 crc kubenswrapper[4840]: I0311 09:20:14.520484 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59lhx\" (UniqueName: \"kubernetes.io/projected/8b0f3a14-a956-40c2-8aa6-a4ceda179a11-kube-api-access-59lhx\") pod \"nova-api-0\" (UID: \"8b0f3a14-a956-40c2-8aa6-a4ceda179a11\") " pod="openstack/nova-api-0" Mar 11 09:20:14 crc kubenswrapper[4840]: I0311 09:20:14.520529 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b0f3a14-a956-40c2-8aa6-a4ceda179a11-internal-tls-certs\") pod \"nova-api-0\" (UID: \"8b0f3a14-a956-40c2-8aa6-a4ceda179a11\") " pod="openstack/nova-api-0" Mar 11 09:20:14 crc kubenswrapper[4840]: I0311 09:20:14.520563 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b0f3a14-a956-40c2-8aa6-a4ceda179a11-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8b0f3a14-a956-40c2-8aa6-a4ceda179a11\") " pod="openstack/nova-api-0" Mar 11 09:20:14 crc kubenswrapper[4840]: I0311 09:20:14.521198 4840 reflector.go:368] Caches populated for *v1.Secret 
from object-"openstack"/"cert-nova-internal-svc" Mar 11 09:20:14 crc kubenswrapper[4840]: I0311 09:20:14.521299 4840 scope.go:117] "RemoveContainer" containerID="1b6c69b08c345a8b70fa35b336b43de7227d1f98fc77b1c77946fa1cab5a1a71" Mar 11 09:20:14 crc kubenswrapper[4840]: I0311 09:20:14.521336 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b0f3a14-a956-40c2-8aa6-a4ceda179a11-config-data\") pod \"nova-api-0\" (UID: \"8b0f3a14-a956-40c2-8aa6-a4ceda179a11\") " pod="openstack/nova-api-0" Mar 11 09:20:14 crc kubenswrapper[4840]: I0311 09:20:14.521384 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b0f3a14-a956-40c2-8aa6-a4ceda179a11-public-tls-certs\") pod \"nova-api-0\" (UID: \"8b0f3a14-a956-40c2-8aa6-a4ceda179a11\") " pod="openstack/nova-api-0" Mar 11 09:20:14 crc kubenswrapper[4840]: I0311 09:20:14.521644 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b0f3a14-a956-40c2-8aa6-a4ceda179a11-logs\") pod \"nova-api-0\" (UID: \"8b0f3a14-a956-40c2-8aa6-a4ceda179a11\") " pod="openstack/nova-api-0" Mar 11 09:20:14 crc kubenswrapper[4840]: I0311 09:20:14.522537 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Mar 11 09:20:14 crc kubenswrapper[4840]: E0311 09:20:14.527287 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b6c69b08c345a8b70fa35b336b43de7227d1f98fc77b1c77946fa1cab5a1a71\": container with ID starting with 1b6c69b08c345a8b70fa35b336b43de7227d1f98fc77b1c77946fa1cab5a1a71 not found: ID does not exist" containerID="1b6c69b08c345a8b70fa35b336b43de7227d1f98fc77b1c77946fa1cab5a1a71" Mar 11 09:20:14 crc kubenswrapper[4840]: I0311 
09:20:14.527442 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b6c69b08c345a8b70fa35b336b43de7227d1f98fc77b1c77946fa1cab5a1a71"} err="failed to get container status \"1b6c69b08c345a8b70fa35b336b43de7227d1f98fc77b1c77946fa1cab5a1a71\": rpc error: code = NotFound desc = could not find container \"1b6c69b08c345a8b70fa35b336b43de7227d1f98fc77b1c77946fa1cab5a1a71\": container with ID starting with 1b6c69b08c345a8b70fa35b336b43de7227d1f98fc77b1c77946fa1cab5a1a71 not found: ID does not exist" Mar 11 09:20:14 crc kubenswrapper[4840]: I0311 09:20:14.527509 4840 scope.go:117] "RemoveContainer" containerID="5530264f80389ce86593fb3e727a9828428c222f56c4978ddd814207d3717db3" Mar 11 09:20:14 crc kubenswrapper[4840]: E0311 09:20:14.530552 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5530264f80389ce86593fb3e727a9828428c222f56c4978ddd814207d3717db3\": container with ID starting with 5530264f80389ce86593fb3e727a9828428c222f56c4978ddd814207d3717db3 not found: ID does not exist" containerID="5530264f80389ce86593fb3e727a9828428c222f56c4978ddd814207d3717db3" Mar 11 09:20:14 crc kubenswrapper[4840]: I0311 09:20:14.530659 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5530264f80389ce86593fb3e727a9828428c222f56c4978ddd814207d3717db3"} err="failed to get container status \"5530264f80389ce86593fb3e727a9828428c222f56c4978ddd814207d3717db3\": rpc error: code = NotFound desc = could not find container \"5530264f80389ce86593fb3e727a9828428c222f56c4978ddd814207d3717db3\": container with ID starting with 5530264f80389ce86593fb3e727a9828428c222f56c4978ddd814207d3717db3 not found: ID does not exist" Mar 11 09:20:14 crc kubenswrapper[4840]: I0311 09:20:14.533969 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Mar 11 09:20:14 crc kubenswrapper[4840]: I0311 
09:20:14.623863 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b0f3a14-a956-40c2-8aa6-a4ceda179a11-public-tls-certs\") pod \"nova-api-0\" (UID: \"8b0f3a14-a956-40c2-8aa6-a4ceda179a11\") " pod="openstack/nova-api-0" Mar 11 09:20:14 crc kubenswrapper[4840]: I0311 09:20:14.624451 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b0f3a14-a956-40c2-8aa6-a4ceda179a11-logs\") pod \"nova-api-0\" (UID: \"8b0f3a14-a956-40c2-8aa6-a4ceda179a11\") " pod="openstack/nova-api-0" Mar 11 09:20:14 crc kubenswrapper[4840]: I0311 09:20:14.624548 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-59lhx\" (UniqueName: \"kubernetes.io/projected/8b0f3a14-a956-40c2-8aa6-a4ceda179a11-kube-api-access-59lhx\") pod \"nova-api-0\" (UID: \"8b0f3a14-a956-40c2-8aa6-a4ceda179a11\") " pod="openstack/nova-api-0" Mar 11 09:20:14 crc kubenswrapper[4840]: I0311 09:20:14.624576 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b0f3a14-a956-40c2-8aa6-a4ceda179a11-internal-tls-certs\") pod \"nova-api-0\" (UID: \"8b0f3a14-a956-40c2-8aa6-a4ceda179a11\") " pod="openstack/nova-api-0" Mar 11 09:20:14 crc kubenswrapper[4840]: I0311 09:20:14.624599 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b0f3a14-a956-40c2-8aa6-a4ceda179a11-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8b0f3a14-a956-40c2-8aa6-a4ceda179a11\") " pod="openstack/nova-api-0" Mar 11 09:20:14 crc kubenswrapper[4840]: I0311 09:20:14.624660 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b0f3a14-a956-40c2-8aa6-a4ceda179a11-config-data\") pod \"nova-api-0\" (UID: 
\"8b0f3a14-a956-40c2-8aa6-a4ceda179a11\") " pod="openstack/nova-api-0" Mar 11 09:20:14 crc kubenswrapper[4840]: I0311 09:20:14.628776 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b0f3a14-a956-40c2-8aa6-a4ceda179a11-config-data\") pod \"nova-api-0\" (UID: \"8b0f3a14-a956-40c2-8aa6-a4ceda179a11\") " pod="openstack/nova-api-0" Mar 11 09:20:14 crc kubenswrapper[4840]: I0311 09:20:14.628948 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b0f3a14-a956-40c2-8aa6-a4ceda179a11-logs\") pod \"nova-api-0\" (UID: \"8b0f3a14-a956-40c2-8aa6-a4ceda179a11\") " pod="openstack/nova-api-0" Mar 11 09:20:14 crc kubenswrapper[4840]: I0311 09:20:14.634158 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b0f3a14-a956-40c2-8aa6-a4ceda179a11-public-tls-certs\") pod \"nova-api-0\" (UID: \"8b0f3a14-a956-40c2-8aa6-a4ceda179a11\") " pod="openstack/nova-api-0" Mar 11 09:20:14 crc kubenswrapper[4840]: I0311 09:20:14.641403 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b0f3a14-a956-40c2-8aa6-a4ceda179a11-internal-tls-certs\") pod \"nova-api-0\" (UID: \"8b0f3a14-a956-40c2-8aa6-a4ceda179a11\") " pod="openstack/nova-api-0" Mar 11 09:20:14 crc kubenswrapper[4840]: I0311 09:20:14.644139 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b0f3a14-a956-40c2-8aa6-a4ceda179a11-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8b0f3a14-a956-40c2-8aa6-a4ceda179a11\") " pod="openstack/nova-api-0" Mar 11 09:20:14 crc kubenswrapper[4840]: I0311 09:20:14.648200 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-59lhx\" (UniqueName: 
\"kubernetes.io/projected/8b0f3a14-a956-40c2-8aa6-a4ceda179a11-kube-api-access-59lhx\") pod \"nova-api-0\" (UID: \"8b0f3a14-a956-40c2-8aa6-a4ceda179a11\") " pod="openstack/nova-api-0" Mar 11 09:20:14 crc kubenswrapper[4840]: I0311 09:20:14.861118 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 11 09:20:14 crc kubenswrapper[4840]: I0311 09:20:14.895183 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Mar 11 09:20:14 crc kubenswrapper[4840]: I0311 09:20:14.925373 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Mar 11 09:20:15 crc kubenswrapper[4840]: I0311 09:20:15.382867 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"083af501-c709-475c-91ad-b89eabfc0b92","Type":"ContainerStarted","Data":"b18fe8e64228774ed5de9809cfdebb2981c21af04fe1a99a766b3b1adf9a4984"} Mar 11 09:20:15 crc kubenswrapper[4840]: I0311 09:20:15.414723 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Mar 11 09:20:15 crc kubenswrapper[4840]: I0311 09:20:15.444138 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 11 09:20:15 crc kubenswrapper[4840]: I0311 09:20:15.629140 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Mar 11 09:20:15 crc kubenswrapper[4840]: I0311 09:20:15.629454 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Mar 11 09:20:15 crc kubenswrapper[4840]: I0311 09:20:15.727942 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-7xxk4"] Mar 11 09:20:15 crc kubenswrapper[4840]: I0311 09:20:15.729449 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-7xxk4" Mar 11 09:20:15 crc kubenswrapper[4840]: I0311 09:20:15.736786 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Mar 11 09:20:15 crc kubenswrapper[4840]: I0311 09:20:15.736972 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Mar 11 09:20:15 crc kubenswrapper[4840]: I0311 09:20:15.752430 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/233b9a6c-cafa-4ec8-aa40-6ef1b0dfc535-config-data\") pod \"nova-cell1-cell-mapping-7xxk4\" (UID: \"233b9a6c-cafa-4ec8-aa40-6ef1b0dfc535\") " pod="openstack/nova-cell1-cell-mapping-7xxk4" Mar 11 09:20:15 crc kubenswrapper[4840]: I0311 09:20:15.752500 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnt7q\" (UniqueName: \"kubernetes.io/projected/233b9a6c-cafa-4ec8-aa40-6ef1b0dfc535-kube-api-access-dnt7q\") pod \"nova-cell1-cell-mapping-7xxk4\" (UID: \"233b9a6c-cafa-4ec8-aa40-6ef1b0dfc535\") " pod="openstack/nova-cell1-cell-mapping-7xxk4" Mar 11 09:20:15 crc kubenswrapper[4840]: I0311 09:20:15.752584 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/233b9a6c-cafa-4ec8-aa40-6ef1b0dfc535-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-7xxk4\" (UID: \"233b9a6c-cafa-4ec8-aa40-6ef1b0dfc535\") " pod="openstack/nova-cell1-cell-mapping-7xxk4" Mar 11 09:20:15 crc kubenswrapper[4840]: I0311 09:20:15.752617 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/233b9a6c-cafa-4ec8-aa40-6ef1b0dfc535-scripts\") pod \"nova-cell1-cell-mapping-7xxk4\" (UID: \"233b9a6c-cafa-4ec8-aa40-6ef1b0dfc535\") 
" pod="openstack/nova-cell1-cell-mapping-7xxk4" Mar 11 09:20:15 crc kubenswrapper[4840]: I0311 09:20:15.781226 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-7xxk4"] Mar 11 09:20:15 crc kubenswrapper[4840]: I0311 09:20:15.854958 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/233b9a6c-cafa-4ec8-aa40-6ef1b0dfc535-config-data\") pod \"nova-cell1-cell-mapping-7xxk4\" (UID: \"233b9a6c-cafa-4ec8-aa40-6ef1b0dfc535\") " pod="openstack/nova-cell1-cell-mapping-7xxk4" Mar 11 09:20:15 crc kubenswrapper[4840]: I0311 09:20:15.856531 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnt7q\" (UniqueName: \"kubernetes.io/projected/233b9a6c-cafa-4ec8-aa40-6ef1b0dfc535-kube-api-access-dnt7q\") pod \"nova-cell1-cell-mapping-7xxk4\" (UID: \"233b9a6c-cafa-4ec8-aa40-6ef1b0dfc535\") " pod="openstack/nova-cell1-cell-mapping-7xxk4" Mar 11 09:20:15 crc kubenswrapper[4840]: I0311 09:20:15.856907 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/233b9a6c-cafa-4ec8-aa40-6ef1b0dfc535-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-7xxk4\" (UID: \"233b9a6c-cafa-4ec8-aa40-6ef1b0dfc535\") " pod="openstack/nova-cell1-cell-mapping-7xxk4" Mar 11 09:20:15 crc kubenswrapper[4840]: I0311 09:20:15.857122 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/233b9a6c-cafa-4ec8-aa40-6ef1b0dfc535-scripts\") pod \"nova-cell1-cell-mapping-7xxk4\" (UID: \"233b9a6c-cafa-4ec8-aa40-6ef1b0dfc535\") " pod="openstack/nova-cell1-cell-mapping-7xxk4" Mar 11 09:20:15 crc kubenswrapper[4840]: I0311 09:20:15.860981 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/233b9a6c-cafa-4ec8-aa40-6ef1b0dfc535-scripts\") pod \"nova-cell1-cell-mapping-7xxk4\" (UID: \"233b9a6c-cafa-4ec8-aa40-6ef1b0dfc535\") " pod="openstack/nova-cell1-cell-mapping-7xxk4" Mar 11 09:20:15 crc kubenswrapper[4840]: I0311 09:20:15.861729 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/233b9a6c-cafa-4ec8-aa40-6ef1b0dfc535-config-data\") pod \"nova-cell1-cell-mapping-7xxk4\" (UID: \"233b9a6c-cafa-4ec8-aa40-6ef1b0dfc535\") " pod="openstack/nova-cell1-cell-mapping-7xxk4" Mar 11 09:20:15 crc kubenswrapper[4840]: I0311 09:20:15.877133 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnt7q\" (UniqueName: \"kubernetes.io/projected/233b9a6c-cafa-4ec8-aa40-6ef1b0dfc535-kube-api-access-dnt7q\") pod \"nova-cell1-cell-mapping-7xxk4\" (UID: \"233b9a6c-cafa-4ec8-aa40-6ef1b0dfc535\") " pod="openstack/nova-cell1-cell-mapping-7xxk4" Mar 11 09:20:15 crc kubenswrapper[4840]: I0311 09:20:15.879505 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/233b9a6c-cafa-4ec8-aa40-6ef1b0dfc535-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-7xxk4\" (UID: \"233b9a6c-cafa-4ec8-aa40-6ef1b0dfc535\") " pod="openstack/nova-cell1-cell-mapping-7xxk4" Mar 11 09:20:15 crc kubenswrapper[4840]: I0311 09:20:15.950429 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-7xxk4" Mar 11 09:20:16 crc kubenswrapper[4840]: I0311 09:20:16.073389 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c92a574-9e37-4fee-8503-0a41b4366a23" path="/var/lib/kubelet/pods/9c92a574-9e37-4fee-8503-0a41b4366a23/volumes" Mar 11 09:20:16 crc kubenswrapper[4840]: I0311 09:20:16.447113 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8b0f3a14-a956-40c2-8aa6-a4ceda179a11","Type":"ContainerStarted","Data":"dafc27aee1cad152676078bb638ce833823be214f8fe5adc3a7f4d1e1fcf1c69"} Mar 11 09:20:16 crc kubenswrapper[4840]: I0311 09:20:16.447195 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8b0f3a14-a956-40c2-8aa6-a4ceda179a11","Type":"ContainerStarted","Data":"db852a4879b287ab6ce624ffbb68178c60e1dba37c3ccee6543671ba22b66c3b"} Mar 11 09:20:16 crc kubenswrapper[4840]: I0311 09:20:16.447212 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8b0f3a14-a956-40c2-8aa6-a4ceda179a11","Type":"ContainerStarted","Data":"e865ed57bb069703893269f080f456c17c71a8aa85256889fdb3c3de2a1236a6"} Mar 11 09:20:16 crc kubenswrapper[4840]: I0311 09:20:16.503201 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.503170179 podStartE2EDuration="2.503170179s" podCreationTimestamp="2026-03-11 09:20:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:20:16.48612404 +0000 UTC m=+1415.151793855" watchObservedRunningTime="2026-03-11 09:20:16.503170179 +0000 UTC m=+1415.168839994" Mar 11 09:20:16 crc kubenswrapper[4840]: I0311 09:20:16.559344 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-7xxk4"] Mar 11 09:20:16 crc kubenswrapper[4840]: I0311 09:20:16.710550 4840 
prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="7a4033f1-ae55-4d94-90d1-397d031770ac" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.205:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 11 09:20:16 crc kubenswrapper[4840]: I0311 09:20:16.710842 4840 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="7a4033f1-ae55-4d94-90d1-397d031770ac" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.205:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 11 09:20:17 crc kubenswrapper[4840]: I0311 09:20:17.464077 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-7xxk4" event={"ID":"233b9a6c-cafa-4ec8-aa40-6ef1b0dfc535","Type":"ContainerStarted","Data":"79d6c360cf6670ce989cf158051911b571a246bf0cfafff3518f20c7eac6b15f"} Mar 11 09:20:17 crc kubenswrapper[4840]: I0311 09:20:17.464848 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-7xxk4" event={"ID":"233b9a6c-cafa-4ec8-aa40-6ef1b0dfc535","Type":"ContainerStarted","Data":"6b4a9fe03c099c8eb2f26c153c4dbe0d4d8287487e6d8cb23d841b6838c30de0"} Mar 11 09:20:17 crc kubenswrapper[4840]: I0311 09:20:17.474597 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="083af501-c709-475c-91ad-b89eabfc0b92" containerName="ceilometer-central-agent" containerID="cri-o://cbdc983bfb8041d43339abed3a3e8729bb4bcfcda8b274b42238651b2b8ccc47" gracePeriod=30 Mar 11 09:20:17 crc kubenswrapper[4840]: I0311 09:20:17.474948 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"083af501-c709-475c-91ad-b89eabfc0b92","Type":"ContainerStarted","Data":"7d100dd6bbfb352b51d5ad590936707a9eb4d686c58576b70a142a42be312f6e"} Mar 11 09:20:17 crc 
kubenswrapper[4840]: I0311 09:20:17.475013 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 11 09:20:17 crc kubenswrapper[4840]: I0311 09:20:17.475069 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="083af501-c709-475c-91ad-b89eabfc0b92" containerName="proxy-httpd" containerID="cri-o://7d100dd6bbfb352b51d5ad590936707a9eb4d686c58576b70a142a42be312f6e" gracePeriod=30 Mar 11 09:20:17 crc kubenswrapper[4840]: I0311 09:20:17.475136 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="083af501-c709-475c-91ad-b89eabfc0b92" containerName="sg-core" containerID="cri-o://b18fe8e64228774ed5de9809cfdebb2981c21af04fe1a99a766b3b1adf9a4984" gracePeriod=30 Mar 11 09:20:17 crc kubenswrapper[4840]: I0311 09:20:17.475193 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="083af501-c709-475c-91ad-b89eabfc0b92" containerName="ceilometer-notification-agent" containerID="cri-o://15913e9a46b9cfb7b0c06f541c849836f1d2ff252b41ff966e4681e81dcfdc0e" gracePeriod=30 Mar 11 09:20:17 crc kubenswrapper[4840]: I0311 09:20:17.530774 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.013091809 podStartE2EDuration="6.530742312s" podCreationTimestamp="2026-03-11 09:20:11 +0000 UTC" firstStartedPulling="2026-03-11 09:20:12.283060486 +0000 UTC m=+1410.948730301" lastFinishedPulling="2026-03-11 09:20:16.800710989 +0000 UTC m=+1415.466380804" observedRunningTime="2026-03-11 09:20:17.52272377 +0000 UTC m=+1416.188393595" watchObservedRunningTime="2026-03-11 09:20:17.530742312 +0000 UTC m=+1416.196412127" Mar 11 09:20:17 crc kubenswrapper[4840]: I0311 09:20:17.537495 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-7xxk4" 
podStartSLOduration=2.53742995 podStartE2EDuration="2.53742995s" podCreationTimestamp="2026-03-11 09:20:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:20:17.493875095 +0000 UTC m=+1416.159544920" watchObservedRunningTime="2026-03-11 09:20:17.53742995 +0000 UTC m=+1416.203099765" Mar 11 09:20:17 crc kubenswrapper[4840]: I0311 09:20:17.791715 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7749c44969-mwk5x" Mar 11 09:20:17 crc kubenswrapper[4840]: I0311 09:20:17.861555 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7bd5679c8c-g7rsc"] Mar 11 09:20:17 crc kubenswrapper[4840]: I0311 09:20:17.861912 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7bd5679c8c-g7rsc" podUID="df0f59d5-d70d-49d0-aa71-9265ea29995b" containerName="dnsmasq-dns" containerID="cri-o://5711d8729ac8b979c9fd3f48c06225e869b5494ff41f422a71510c13b667f9ef" gracePeriod=10 Mar 11 09:20:18 crc kubenswrapper[4840]: I0311 09:20:18.493316 4840 generic.go:334] "Generic (PLEG): container finished" podID="083af501-c709-475c-91ad-b89eabfc0b92" containerID="7d100dd6bbfb352b51d5ad590936707a9eb4d686c58576b70a142a42be312f6e" exitCode=0 Mar 11 09:20:18 crc kubenswrapper[4840]: I0311 09:20:18.493737 4840 generic.go:334] "Generic (PLEG): container finished" podID="083af501-c709-475c-91ad-b89eabfc0b92" containerID="b18fe8e64228774ed5de9809cfdebb2981c21af04fe1a99a766b3b1adf9a4984" exitCode=2 Mar 11 09:20:18 crc kubenswrapper[4840]: I0311 09:20:18.493752 4840 generic.go:334] "Generic (PLEG): container finished" podID="083af501-c709-475c-91ad-b89eabfc0b92" containerID="15913e9a46b9cfb7b0c06f541c849836f1d2ff252b41ff966e4681e81dcfdc0e" exitCode=0 Mar 11 09:20:18 crc kubenswrapper[4840]: I0311 09:20:18.493515 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"083af501-c709-475c-91ad-b89eabfc0b92","Type":"ContainerDied","Data":"7d100dd6bbfb352b51d5ad590936707a9eb4d686c58576b70a142a42be312f6e"} Mar 11 09:20:18 crc kubenswrapper[4840]: I0311 09:20:18.493839 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"083af501-c709-475c-91ad-b89eabfc0b92","Type":"ContainerDied","Data":"b18fe8e64228774ed5de9809cfdebb2981c21af04fe1a99a766b3b1adf9a4984"} Mar 11 09:20:18 crc kubenswrapper[4840]: I0311 09:20:18.493857 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"083af501-c709-475c-91ad-b89eabfc0b92","Type":"ContainerDied","Data":"15913e9a46b9cfb7b0c06f541c849836f1d2ff252b41ff966e4681e81dcfdc0e"} Mar 11 09:20:18 crc kubenswrapper[4840]: I0311 09:20:18.512799 4840 generic.go:334] "Generic (PLEG): container finished" podID="df0f59d5-d70d-49d0-aa71-9265ea29995b" containerID="5711d8729ac8b979c9fd3f48c06225e869b5494ff41f422a71510c13b667f9ef" exitCode=0 Mar 11 09:20:18 crc kubenswrapper[4840]: I0311 09:20:18.513853 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bd5679c8c-g7rsc" event={"ID":"df0f59d5-d70d-49d0-aa71-9265ea29995b","Type":"ContainerDied","Data":"5711d8729ac8b979c9fd3f48c06225e869b5494ff41f422a71510c13b667f9ef"} Mar 11 09:20:18 crc kubenswrapper[4840]: I0311 09:20:18.558579 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7bd5679c8c-g7rsc" Mar 11 09:20:18 crc kubenswrapper[4840]: I0311 09:20:18.653182 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/df0f59d5-d70d-49d0-aa71-9265ea29995b-dns-swift-storage-0\") pod \"df0f59d5-d70d-49d0-aa71-9265ea29995b\" (UID: \"df0f59d5-d70d-49d0-aa71-9265ea29995b\") " Mar 11 09:20:18 crc kubenswrapper[4840]: I0311 09:20:18.653284 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/df0f59d5-d70d-49d0-aa71-9265ea29995b-dns-svc\") pod \"df0f59d5-d70d-49d0-aa71-9265ea29995b\" (UID: \"df0f59d5-d70d-49d0-aa71-9265ea29995b\") " Mar 11 09:20:18 crc kubenswrapper[4840]: I0311 09:20:18.653325 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df0f59d5-d70d-49d0-aa71-9265ea29995b-config\") pod \"df0f59d5-d70d-49d0-aa71-9265ea29995b\" (UID: \"df0f59d5-d70d-49d0-aa71-9265ea29995b\") " Mar 11 09:20:18 crc kubenswrapper[4840]: I0311 09:20:18.653352 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/df0f59d5-d70d-49d0-aa71-9265ea29995b-ovsdbserver-sb\") pod \"df0f59d5-d70d-49d0-aa71-9265ea29995b\" (UID: \"df0f59d5-d70d-49d0-aa71-9265ea29995b\") " Mar 11 09:20:18 crc kubenswrapper[4840]: I0311 09:20:18.653411 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-stv7k\" (UniqueName: \"kubernetes.io/projected/df0f59d5-d70d-49d0-aa71-9265ea29995b-kube-api-access-stv7k\") pod \"df0f59d5-d70d-49d0-aa71-9265ea29995b\" (UID: \"df0f59d5-d70d-49d0-aa71-9265ea29995b\") " Mar 11 09:20:18 crc kubenswrapper[4840]: I0311 09:20:18.654335 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/df0f59d5-d70d-49d0-aa71-9265ea29995b-ovsdbserver-nb\") pod \"df0f59d5-d70d-49d0-aa71-9265ea29995b\" (UID: \"df0f59d5-d70d-49d0-aa71-9265ea29995b\") " Mar 11 09:20:18 crc kubenswrapper[4840]: I0311 09:20:18.663731 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df0f59d5-d70d-49d0-aa71-9265ea29995b-kube-api-access-stv7k" (OuterVolumeSpecName: "kube-api-access-stv7k") pod "df0f59d5-d70d-49d0-aa71-9265ea29995b" (UID: "df0f59d5-d70d-49d0-aa71-9265ea29995b"). InnerVolumeSpecName "kube-api-access-stv7k". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:20:18 crc kubenswrapper[4840]: I0311 09:20:18.743560 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df0f59d5-d70d-49d0-aa71-9265ea29995b-config" (OuterVolumeSpecName: "config") pod "df0f59d5-d70d-49d0-aa71-9265ea29995b" (UID: "df0f59d5-d70d-49d0-aa71-9265ea29995b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:20:18 crc kubenswrapper[4840]: I0311 09:20:18.752949 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df0f59d5-d70d-49d0-aa71-9265ea29995b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "df0f59d5-d70d-49d0-aa71-9265ea29995b" (UID: "df0f59d5-d70d-49d0-aa71-9265ea29995b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:20:18 crc kubenswrapper[4840]: I0311 09:20:18.753034 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df0f59d5-d70d-49d0-aa71-9265ea29995b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "df0f59d5-d70d-49d0-aa71-9265ea29995b" (UID: "df0f59d5-d70d-49d0-aa71-9265ea29995b"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:20:18 crc kubenswrapper[4840]: I0311 09:20:18.757232 4840 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/df0f59d5-d70d-49d0-aa71-9265ea29995b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 11 09:20:18 crc kubenswrapper[4840]: I0311 09:20:18.757274 4840 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df0f59d5-d70d-49d0-aa71-9265ea29995b-config\") on node \"crc\" DevicePath \"\"" Mar 11 09:20:18 crc kubenswrapper[4840]: I0311 09:20:18.757285 4840 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/df0f59d5-d70d-49d0-aa71-9265ea29995b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 11 09:20:18 crc kubenswrapper[4840]: I0311 09:20:18.757295 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-stv7k\" (UniqueName: \"kubernetes.io/projected/df0f59d5-d70d-49d0-aa71-9265ea29995b-kube-api-access-stv7k\") on node \"crc\" DevicePath \"\"" Mar 11 09:20:18 crc kubenswrapper[4840]: I0311 09:20:18.760919 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df0f59d5-d70d-49d0-aa71-9265ea29995b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "df0f59d5-d70d-49d0-aa71-9265ea29995b" (UID: "df0f59d5-d70d-49d0-aa71-9265ea29995b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:20:18 crc kubenswrapper[4840]: I0311 09:20:18.767733 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df0f59d5-d70d-49d0-aa71-9265ea29995b-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "df0f59d5-d70d-49d0-aa71-9265ea29995b" (UID: "df0f59d5-d70d-49d0-aa71-9265ea29995b"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:20:18 crc kubenswrapper[4840]: I0311 09:20:18.859496 4840 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/df0f59d5-d70d-49d0-aa71-9265ea29995b-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 11 09:20:18 crc kubenswrapper[4840]: I0311 09:20:18.859542 4840 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/df0f59d5-d70d-49d0-aa71-9265ea29995b-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 11 09:20:19 crc kubenswrapper[4840]: I0311 09:20:19.525358 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bd5679c8c-g7rsc" event={"ID":"df0f59d5-d70d-49d0-aa71-9265ea29995b","Type":"ContainerDied","Data":"6c0846901006e72b1dced1d3713e43d4082e775944e0fb467c77a9a96c9fa677"} Mar 11 09:20:19 crc kubenswrapper[4840]: I0311 09:20:19.525439 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7bd5679c8c-g7rsc" Mar 11 09:20:19 crc kubenswrapper[4840]: I0311 09:20:19.525990 4840 scope.go:117] "RemoveContainer" containerID="5711d8729ac8b979c9fd3f48c06225e869b5494ff41f422a71510c13b667f9ef" Mar 11 09:20:19 crc kubenswrapper[4840]: I0311 09:20:19.559854 4840 scope.go:117] "RemoveContainer" containerID="9151fca0a30fbd98856e0676ce8c461e2164057414cba8383e689bf0cfda6ec8" Mar 11 09:20:19 crc kubenswrapper[4840]: I0311 09:20:19.578288 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7bd5679c8c-g7rsc"] Mar 11 09:20:19 crc kubenswrapper[4840]: I0311 09:20:19.590611 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7bd5679c8c-g7rsc"] Mar 11 09:20:20 crc kubenswrapper[4840]: I0311 09:20:20.074114 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df0f59d5-d70d-49d0-aa71-9265ea29995b" path="/var/lib/kubelet/pods/df0f59d5-d70d-49d0-aa71-9265ea29995b/volumes" Mar 11 09:20:21 crc kubenswrapper[4840]: I0311 09:20:21.573039 4840 generic.go:334] "Generic (PLEG): container finished" podID="083af501-c709-475c-91ad-b89eabfc0b92" containerID="cbdc983bfb8041d43339abed3a3e8729bb4bcfcda8b274b42238651b2b8ccc47" exitCode=0 Mar 11 09:20:21 crc kubenswrapper[4840]: I0311 09:20:21.573246 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"083af501-c709-475c-91ad-b89eabfc0b92","Type":"ContainerDied","Data":"cbdc983bfb8041d43339abed3a3e8729bb4bcfcda8b274b42238651b2b8ccc47"} Mar 11 09:20:21 crc kubenswrapper[4840]: I0311 09:20:21.920906 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 11 09:20:21 crc kubenswrapper[4840]: I0311 09:20:21.924708 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/083af501-c709-475c-91ad-b89eabfc0b92-ceilometer-tls-certs\") pod \"083af501-c709-475c-91ad-b89eabfc0b92\" (UID: \"083af501-c709-475c-91ad-b89eabfc0b92\") " Mar 11 09:20:21 crc kubenswrapper[4840]: I0311 09:20:21.924771 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/083af501-c709-475c-91ad-b89eabfc0b92-log-httpd\") pod \"083af501-c709-475c-91ad-b89eabfc0b92\" (UID: \"083af501-c709-475c-91ad-b89eabfc0b92\") " Mar 11 09:20:21 crc kubenswrapper[4840]: I0311 09:20:21.924974 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/083af501-c709-475c-91ad-b89eabfc0b92-combined-ca-bundle\") pod \"083af501-c709-475c-91ad-b89eabfc0b92\" (UID: \"083af501-c709-475c-91ad-b89eabfc0b92\") " Mar 11 09:20:21 crc kubenswrapper[4840]: I0311 09:20:21.925015 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qsdfz\" (UniqueName: \"kubernetes.io/projected/083af501-c709-475c-91ad-b89eabfc0b92-kube-api-access-qsdfz\") pod \"083af501-c709-475c-91ad-b89eabfc0b92\" (UID: \"083af501-c709-475c-91ad-b89eabfc0b92\") " Mar 11 09:20:21 crc kubenswrapper[4840]: I0311 09:20:21.925084 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/083af501-c709-475c-91ad-b89eabfc0b92-run-httpd\") pod \"083af501-c709-475c-91ad-b89eabfc0b92\" (UID: \"083af501-c709-475c-91ad-b89eabfc0b92\") " Mar 11 09:20:21 crc kubenswrapper[4840]: I0311 09:20:21.925132 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/083af501-c709-475c-91ad-b89eabfc0b92-config-data\") pod \"083af501-c709-475c-91ad-b89eabfc0b92\" (UID: \"083af501-c709-475c-91ad-b89eabfc0b92\") " Mar 11 09:20:21 crc kubenswrapper[4840]: I0311 09:20:21.925175 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/083af501-c709-475c-91ad-b89eabfc0b92-scripts\") pod \"083af501-c709-475c-91ad-b89eabfc0b92\" (UID: \"083af501-c709-475c-91ad-b89eabfc0b92\") " Mar 11 09:20:21 crc kubenswrapper[4840]: I0311 09:20:21.925197 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/083af501-c709-475c-91ad-b89eabfc0b92-sg-core-conf-yaml\") pod \"083af501-c709-475c-91ad-b89eabfc0b92\" (UID: \"083af501-c709-475c-91ad-b89eabfc0b92\") " Mar 11 09:20:21 crc kubenswrapper[4840]: I0311 09:20:21.927960 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/083af501-c709-475c-91ad-b89eabfc0b92-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "083af501-c709-475c-91ad-b89eabfc0b92" (UID: "083af501-c709-475c-91ad-b89eabfc0b92"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:20:21 crc kubenswrapper[4840]: I0311 09:20:21.928901 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/083af501-c709-475c-91ad-b89eabfc0b92-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "083af501-c709-475c-91ad-b89eabfc0b92" (UID: "083af501-c709-475c-91ad-b89eabfc0b92"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:20:21 crc kubenswrapper[4840]: I0311 09:20:21.934730 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/083af501-c709-475c-91ad-b89eabfc0b92-scripts" (OuterVolumeSpecName: "scripts") pod "083af501-c709-475c-91ad-b89eabfc0b92" (UID: "083af501-c709-475c-91ad-b89eabfc0b92"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:20:21 crc kubenswrapper[4840]: I0311 09:20:21.934989 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/083af501-c709-475c-91ad-b89eabfc0b92-kube-api-access-qsdfz" (OuterVolumeSpecName: "kube-api-access-qsdfz") pod "083af501-c709-475c-91ad-b89eabfc0b92" (UID: "083af501-c709-475c-91ad-b89eabfc0b92"). InnerVolumeSpecName "kube-api-access-qsdfz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:20:21 crc kubenswrapper[4840]: I0311 09:20:21.975949 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/083af501-c709-475c-91ad-b89eabfc0b92-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "083af501-c709-475c-91ad-b89eabfc0b92" (UID: "083af501-c709-475c-91ad-b89eabfc0b92"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:20:22 crc kubenswrapper[4840]: I0311 09:20:22.012584 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/083af501-c709-475c-91ad-b89eabfc0b92-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "083af501-c709-475c-91ad-b89eabfc0b92" (UID: "083af501-c709-475c-91ad-b89eabfc0b92"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:20:22 crc kubenswrapper[4840]: I0311 09:20:22.035063 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qsdfz\" (UniqueName: \"kubernetes.io/projected/083af501-c709-475c-91ad-b89eabfc0b92-kube-api-access-qsdfz\") on node \"crc\" DevicePath \"\"" Mar 11 09:20:22 crc kubenswrapper[4840]: I0311 09:20:22.035100 4840 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/083af501-c709-475c-91ad-b89eabfc0b92-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 11 09:20:22 crc kubenswrapper[4840]: I0311 09:20:22.035112 4840 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/083af501-c709-475c-91ad-b89eabfc0b92-scripts\") on node \"crc\" DevicePath \"\"" Mar 11 09:20:22 crc kubenswrapper[4840]: I0311 09:20:22.035128 4840 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/083af501-c709-475c-91ad-b89eabfc0b92-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 11 09:20:22 crc kubenswrapper[4840]: I0311 09:20:22.035138 4840 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/083af501-c709-475c-91ad-b89eabfc0b92-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 11 09:20:22 crc kubenswrapper[4840]: I0311 09:20:22.035146 4840 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/083af501-c709-475c-91ad-b89eabfc0b92-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 11 09:20:22 crc kubenswrapper[4840]: I0311 09:20:22.037275 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/083af501-c709-475c-91ad-b89eabfc0b92-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "083af501-c709-475c-91ad-b89eabfc0b92" (UID: 
"083af501-c709-475c-91ad-b89eabfc0b92"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:20:22 crc kubenswrapper[4840]: I0311 09:20:22.071937 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/083af501-c709-475c-91ad-b89eabfc0b92-config-data" (OuterVolumeSpecName: "config-data") pod "083af501-c709-475c-91ad-b89eabfc0b92" (UID: "083af501-c709-475c-91ad-b89eabfc0b92"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:20:22 crc kubenswrapper[4840]: I0311 09:20:22.185093 4840 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/083af501-c709-475c-91ad-b89eabfc0b92-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 09:20:22 crc kubenswrapper[4840]: I0311 09:20:22.185130 4840 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/083af501-c709-475c-91ad-b89eabfc0b92-config-data\") on node \"crc\" DevicePath \"\"" Mar 11 09:20:22 crc kubenswrapper[4840]: I0311 09:20:22.587360 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"083af501-c709-475c-91ad-b89eabfc0b92","Type":"ContainerDied","Data":"4da5c124aada918f36e1e0813a348dbda422ec4de0fe8f9a2a4b35036424e777"} Mar 11 09:20:22 crc kubenswrapper[4840]: I0311 09:20:22.587400 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 11 09:20:22 crc kubenswrapper[4840]: I0311 09:20:22.587853 4840 scope.go:117] "RemoveContainer" containerID="7d100dd6bbfb352b51d5ad590936707a9eb4d686c58576b70a142a42be312f6e" Mar 11 09:20:22 crc kubenswrapper[4840]: I0311 09:20:22.629042 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 11 09:20:22 crc kubenswrapper[4840]: I0311 09:20:22.636035 4840 scope.go:117] "RemoveContainer" containerID="b18fe8e64228774ed5de9809cfdebb2981c21af04fe1a99a766b3b1adf9a4984" Mar 11 09:20:22 crc kubenswrapper[4840]: I0311 09:20:22.648619 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 11 09:20:22 crc kubenswrapper[4840]: I0311 09:20:22.671075 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 11 09:20:22 crc kubenswrapper[4840]: E0311 09:20:22.671756 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df0f59d5-d70d-49d0-aa71-9265ea29995b" containerName="init" Mar 11 09:20:22 crc kubenswrapper[4840]: I0311 09:20:22.671780 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="df0f59d5-d70d-49d0-aa71-9265ea29995b" containerName="init" Mar 11 09:20:22 crc kubenswrapper[4840]: E0311 09:20:22.671810 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df0f59d5-d70d-49d0-aa71-9265ea29995b" containerName="dnsmasq-dns" Mar 11 09:20:22 crc kubenswrapper[4840]: I0311 09:20:22.671818 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="df0f59d5-d70d-49d0-aa71-9265ea29995b" containerName="dnsmasq-dns" Mar 11 09:20:22 crc kubenswrapper[4840]: E0311 09:20:22.671839 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="083af501-c709-475c-91ad-b89eabfc0b92" containerName="sg-core" Mar 11 09:20:22 crc kubenswrapper[4840]: I0311 09:20:22.671845 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="083af501-c709-475c-91ad-b89eabfc0b92" containerName="sg-core" 
Mar 11 09:20:22 crc kubenswrapper[4840]: E0311 09:20:22.671860 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="083af501-c709-475c-91ad-b89eabfc0b92" containerName="proxy-httpd" Mar 11 09:20:22 crc kubenswrapper[4840]: I0311 09:20:22.671869 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="083af501-c709-475c-91ad-b89eabfc0b92" containerName="proxy-httpd" Mar 11 09:20:22 crc kubenswrapper[4840]: E0311 09:20:22.671891 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="083af501-c709-475c-91ad-b89eabfc0b92" containerName="ceilometer-notification-agent" Mar 11 09:20:22 crc kubenswrapper[4840]: I0311 09:20:22.671898 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="083af501-c709-475c-91ad-b89eabfc0b92" containerName="ceilometer-notification-agent" Mar 11 09:20:22 crc kubenswrapper[4840]: E0311 09:20:22.671922 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="083af501-c709-475c-91ad-b89eabfc0b92" containerName="ceilometer-central-agent" Mar 11 09:20:22 crc kubenswrapper[4840]: I0311 09:20:22.671929 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="083af501-c709-475c-91ad-b89eabfc0b92" containerName="ceilometer-central-agent" Mar 11 09:20:22 crc kubenswrapper[4840]: I0311 09:20:22.672125 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="083af501-c709-475c-91ad-b89eabfc0b92" containerName="ceilometer-notification-agent" Mar 11 09:20:22 crc kubenswrapper[4840]: I0311 09:20:22.672142 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="083af501-c709-475c-91ad-b89eabfc0b92" containerName="ceilometer-central-agent" Mar 11 09:20:22 crc kubenswrapper[4840]: I0311 09:20:22.672160 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="083af501-c709-475c-91ad-b89eabfc0b92" containerName="sg-core" Mar 11 09:20:22 crc kubenswrapper[4840]: I0311 09:20:22.672172 4840 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="083af501-c709-475c-91ad-b89eabfc0b92" containerName="proxy-httpd" Mar 11 09:20:22 crc kubenswrapper[4840]: I0311 09:20:22.672185 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="df0f59d5-d70d-49d0-aa71-9265ea29995b" containerName="dnsmasq-dns" Mar 11 09:20:22 crc kubenswrapper[4840]: I0311 09:20:22.674264 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 11 09:20:22 crc kubenswrapper[4840]: I0311 09:20:22.676455 4840 scope.go:117] "RemoveContainer" containerID="15913e9a46b9cfb7b0c06f541c849836f1d2ff252b41ff966e4681e81dcfdc0e" Mar 11 09:20:22 crc kubenswrapper[4840]: I0311 09:20:22.677538 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 11 09:20:22 crc kubenswrapper[4840]: I0311 09:20:22.677857 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 11 09:20:22 crc kubenswrapper[4840]: I0311 09:20:22.679444 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Mar 11 09:20:22 crc kubenswrapper[4840]: I0311 09:20:22.698236 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f5fc86d0-4547-42ed-a880-f58c9e29d8a4-scripts\") pod \"ceilometer-0\" (UID: \"f5fc86d0-4547-42ed-a880-f58c9e29d8a4\") " pod="openstack/ceilometer-0" Mar 11 09:20:22 crc kubenswrapper[4840]: I0311 09:20:22.698390 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xx6wx\" (UniqueName: \"kubernetes.io/projected/f5fc86d0-4547-42ed-a880-f58c9e29d8a4-kube-api-access-xx6wx\") pod \"ceilometer-0\" (UID: \"f5fc86d0-4547-42ed-a880-f58c9e29d8a4\") " pod="openstack/ceilometer-0" Mar 11 09:20:22 crc kubenswrapper[4840]: I0311 09:20:22.698493 4840 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5fc86d0-4547-42ed-a880-f58c9e29d8a4-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f5fc86d0-4547-42ed-a880-f58c9e29d8a4\") " pod="openstack/ceilometer-0" Mar 11 09:20:22 crc kubenswrapper[4840]: I0311 09:20:22.698524 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f5fc86d0-4547-42ed-a880-f58c9e29d8a4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f5fc86d0-4547-42ed-a880-f58c9e29d8a4\") " pod="openstack/ceilometer-0" Mar 11 09:20:22 crc kubenswrapper[4840]: I0311 09:20:22.698578 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f5fc86d0-4547-42ed-a880-f58c9e29d8a4-run-httpd\") pod \"ceilometer-0\" (UID: \"f5fc86d0-4547-42ed-a880-f58c9e29d8a4\") " pod="openstack/ceilometer-0" Mar 11 09:20:22 crc kubenswrapper[4840]: I0311 09:20:22.698728 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5fc86d0-4547-42ed-a880-f58c9e29d8a4-config-data\") pod \"ceilometer-0\" (UID: \"f5fc86d0-4547-42ed-a880-f58c9e29d8a4\") " pod="openstack/ceilometer-0" Mar 11 09:20:22 crc kubenswrapper[4840]: I0311 09:20:22.698965 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5fc86d0-4547-42ed-a880-f58c9e29d8a4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f5fc86d0-4547-42ed-a880-f58c9e29d8a4\") " pod="openstack/ceilometer-0" Mar 11 09:20:22 crc kubenswrapper[4840]: I0311 09:20:22.699029 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/f5fc86d0-4547-42ed-a880-f58c9e29d8a4-log-httpd\") pod \"ceilometer-0\" (UID: \"f5fc86d0-4547-42ed-a880-f58c9e29d8a4\") " pod="openstack/ceilometer-0" Mar 11 09:20:22 crc kubenswrapper[4840]: I0311 09:20:22.712339 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 11 09:20:22 crc kubenswrapper[4840]: I0311 09:20:22.723586 4840 scope.go:117] "RemoveContainer" containerID="cbdc983bfb8041d43339abed3a3e8729bb4bcfcda8b274b42238651b2b8ccc47" Mar 11 09:20:22 crc kubenswrapper[4840]: I0311 09:20:22.799609 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xx6wx\" (UniqueName: \"kubernetes.io/projected/f5fc86d0-4547-42ed-a880-f58c9e29d8a4-kube-api-access-xx6wx\") pod \"ceilometer-0\" (UID: \"f5fc86d0-4547-42ed-a880-f58c9e29d8a4\") " pod="openstack/ceilometer-0" Mar 11 09:20:22 crc kubenswrapper[4840]: I0311 09:20:22.799659 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5fc86d0-4547-42ed-a880-f58c9e29d8a4-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f5fc86d0-4547-42ed-a880-f58c9e29d8a4\") " pod="openstack/ceilometer-0" Mar 11 09:20:22 crc kubenswrapper[4840]: I0311 09:20:22.799699 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f5fc86d0-4547-42ed-a880-f58c9e29d8a4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f5fc86d0-4547-42ed-a880-f58c9e29d8a4\") " pod="openstack/ceilometer-0" Mar 11 09:20:22 crc kubenswrapper[4840]: I0311 09:20:22.799725 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f5fc86d0-4547-42ed-a880-f58c9e29d8a4-run-httpd\") pod \"ceilometer-0\" (UID: \"f5fc86d0-4547-42ed-a880-f58c9e29d8a4\") " pod="openstack/ceilometer-0" Mar 11 09:20:22 crc kubenswrapper[4840]: I0311 
09:20:22.800554 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5fc86d0-4547-42ed-a880-f58c9e29d8a4-config-data\") pod \"ceilometer-0\" (UID: \"f5fc86d0-4547-42ed-a880-f58c9e29d8a4\") " pod="openstack/ceilometer-0" Mar 11 09:20:22 crc kubenswrapper[4840]: I0311 09:20:22.800634 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5fc86d0-4547-42ed-a880-f58c9e29d8a4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f5fc86d0-4547-42ed-a880-f58c9e29d8a4\") " pod="openstack/ceilometer-0" Mar 11 09:20:22 crc kubenswrapper[4840]: I0311 09:20:22.800663 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f5fc86d0-4547-42ed-a880-f58c9e29d8a4-log-httpd\") pod \"ceilometer-0\" (UID: \"f5fc86d0-4547-42ed-a880-f58c9e29d8a4\") " pod="openstack/ceilometer-0" Mar 11 09:20:22 crc kubenswrapper[4840]: I0311 09:20:22.800691 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f5fc86d0-4547-42ed-a880-f58c9e29d8a4-scripts\") pod \"ceilometer-0\" (UID: \"f5fc86d0-4547-42ed-a880-f58c9e29d8a4\") " pod="openstack/ceilometer-0" Mar 11 09:20:22 crc kubenswrapper[4840]: I0311 09:20:22.800656 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f5fc86d0-4547-42ed-a880-f58c9e29d8a4-run-httpd\") pod \"ceilometer-0\" (UID: \"f5fc86d0-4547-42ed-a880-f58c9e29d8a4\") " pod="openstack/ceilometer-0" Mar 11 09:20:22 crc kubenswrapper[4840]: I0311 09:20:22.802348 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f5fc86d0-4547-42ed-a880-f58c9e29d8a4-log-httpd\") pod \"ceilometer-0\" (UID: \"f5fc86d0-4547-42ed-a880-f58c9e29d8a4\") " 
pod="openstack/ceilometer-0" Mar 11 09:20:22 crc kubenswrapper[4840]: I0311 09:20:22.806118 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f5fc86d0-4547-42ed-a880-f58c9e29d8a4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f5fc86d0-4547-42ed-a880-f58c9e29d8a4\") " pod="openstack/ceilometer-0" Mar 11 09:20:22 crc kubenswrapper[4840]: I0311 09:20:22.806196 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5fc86d0-4547-42ed-a880-f58c9e29d8a4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f5fc86d0-4547-42ed-a880-f58c9e29d8a4\") " pod="openstack/ceilometer-0" Mar 11 09:20:22 crc kubenswrapper[4840]: I0311 09:20:22.806213 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5fc86d0-4547-42ed-a880-f58c9e29d8a4-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f5fc86d0-4547-42ed-a880-f58c9e29d8a4\") " pod="openstack/ceilometer-0" Mar 11 09:20:22 crc kubenswrapper[4840]: I0311 09:20:22.807225 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5fc86d0-4547-42ed-a880-f58c9e29d8a4-config-data\") pod \"ceilometer-0\" (UID: \"f5fc86d0-4547-42ed-a880-f58c9e29d8a4\") " pod="openstack/ceilometer-0" Mar 11 09:20:22 crc kubenswrapper[4840]: I0311 09:20:22.809379 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f5fc86d0-4547-42ed-a880-f58c9e29d8a4-scripts\") pod \"ceilometer-0\" (UID: \"f5fc86d0-4547-42ed-a880-f58c9e29d8a4\") " pod="openstack/ceilometer-0" Mar 11 09:20:22 crc kubenswrapper[4840]: I0311 09:20:22.819574 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xx6wx\" (UniqueName: 
\"kubernetes.io/projected/f5fc86d0-4547-42ed-a880-f58c9e29d8a4-kube-api-access-xx6wx\") pod \"ceilometer-0\" (UID: \"f5fc86d0-4547-42ed-a880-f58c9e29d8a4\") " pod="openstack/ceilometer-0" Mar 11 09:20:23 crc kubenswrapper[4840]: I0311 09:20:23.021192 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 11 09:20:23 crc kubenswrapper[4840]: I0311 09:20:23.441732 4840 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7bd5679c8c-g7rsc" podUID="df0f59d5-d70d-49d0-aa71-9265ea29995b" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.196:5353: i/o timeout" Mar 11 09:20:23 crc kubenswrapper[4840]: I0311 09:20:23.511610 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 11 09:20:23 crc kubenswrapper[4840]: W0311 09:20:23.512093 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf5fc86d0_4547_42ed_a880_f58c9e29d8a4.slice/crio-05764fbcdcbd0463ee21c6466f7617c4ecaee82251dc219e10e55e67e393bc9a WatchSource:0}: Error finding container 05764fbcdcbd0463ee21c6466f7617c4ecaee82251dc219e10e55e67e393bc9a: Status 404 returned error can't find the container with id 05764fbcdcbd0463ee21c6466f7617c4ecaee82251dc219e10e55e67e393bc9a Mar 11 09:20:23 crc kubenswrapper[4840]: I0311 09:20:23.601264 4840 generic.go:334] "Generic (PLEG): container finished" podID="233b9a6c-cafa-4ec8-aa40-6ef1b0dfc535" containerID="79d6c360cf6670ce989cf158051911b571a246bf0cfafff3518f20c7eac6b15f" exitCode=0 Mar 11 09:20:23 crc kubenswrapper[4840]: I0311 09:20:23.601328 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-7xxk4" event={"ID":"233b9a6c-cafa-4ec8-aa40-6ef1b0dfc535","Type":"ContainerDied","Data":"79d6c360cf6670ce989cf158051911b571a246bf0cfafff3518f20c7eac6b15f"} Mar 11 09:20:23 crc kubenswrapper[4840]: I0311 09:20:23.605204 4840 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f5fc86d0-4547-42ed-a880-f58c9e29d8a4","Type":"ContainerStarted","Data":"05764fbcdcbd0463ee21c6466f7617c4ecaee82251dc219e10e55e67e393bc9a"} Mar 11 09:20:24 crc kubenswrapper[4840]: I0311 09:20:24.073074 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="083af501-c709-475c-91ad-b89eabfc0b92" path="/var/lib/kubelet/pods/083af501-c709-475c-91ad-b89eabfc0b92/volumes" Mar 11 09:20:24 crc kubenswrapper[4840]: I0311 09:20:24.869226 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 11 09:20:24 crc kubenswrapper[4840]: I0311 09:20:24.870092 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 11 09:20:25 crc kubenswrapper[4840]: I0311 09:20:25.084460 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-7xxk4" Mar 11 09:20:25 crc kubenswrapper[4840]: I0311 09:20:25.156356 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/233b9a6c-cafa-4ec8-aa40-6ef1b0dfc535-combined-ca-bundle\") pod \"233b9a6c-cafa-4ec8-aa40-6ef1b0dfc535\" (UID: \"233b9a6c-cafa-4ec8-aa40-6ef1b0dfc535\") " Mar 11 09:20:25 crc kubenswrapper[4840]: I0311 09:20:25.156541 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/233b9a6c-cafa-4ec8-aa40-6ef1b0dfc535-scripts\") pod \"233b9a6c-cafa-4ec8-aa40-6ef1b0dfc535\" (UID: \"233b9a6c-cafa-4ec8-aa40-6ef1b0dfc535\") " Mar 11 09:20:25 crc kubenswrapper[4840]: I0311 09:20:25.156572 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/233b9a6c-cafa-4ec8-aa40-6ef1b0dfc535-config-data\") pod \"233b9a6c-cafa-4ec8-aa40-6ef1b0dfc535\" 
(UID: \"233b9a6c-cafa-4ec8-aa40-6ef1b0dfc535\") " Mar 11 09:20:25 crc kubenswrapper[4840]: I0311 09:20:25.156693 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dnt7q\" (UniqueName: \"kubernetes.io/projected/233b9a6c-cafa-4ec8-aa40-6ef1b0dfc535-kube-api-access-dnt7q\") pod \"233b9a6c-cafa-4ec8-aa40-6ef1b0dfc535\" (UID: \"233b9a6c-cafa-4ec8-aa40-6ef1b0dfc535\") " Mar 11 09:20:25 crc kubenswrapper[4840]: I0311 09:20:25.163747 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/233b9a6c-cafa-4ec8-aa40-6ef1b0dfc535-scripts" (OuterVolumeSpecName: "scripts") pod "233b9a6c-cafa-4ec8-aa40-6ef1b0dfc535" (UID: "233b9a6c-cafa-4ec8-aa40-6ef1b0dfc535"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:20:25 crc kubenswrapper[4840]: I0311 09:20:25.165292 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/233b9a6c-cafa-4ec8-aa40-6ef1b0dfc535-kube-api-access-dnt7q" (OuterVolumeSpecName: "kube-api-access-dnt7q") pod "233b9a6c-cafa-4ec8-aa40-6ef1b0dfc535" (UID: "233b9a6c-cafa-4ec8-aa40-6ef1b0dfc535"). InnerVolumeSpecName "kube-api-access-dnt7q". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:20:25 crc kubenswrapper[4840]: I0311 09:20:25.191784 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/233b9a6c-cafa-4ec8-aa40-6ef1b0dfc535-config-data" (OuterVolumeSpecName: "config-data") pod "233b9a6c-cafa-4ec8-aa40-6ef1b0dfc535" (UID: "233b9a6c-cafa-4ec8-aa40-6ef1b0dfc535"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:20:25 crc kubenswrapper[4840]: I0311 09:20:25.195174 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/233b9a6c-cafa-4ec8-aa40-6ef1b0dfc535-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "233b9a6c-cafa-4ec8-aa40-6ef1b0dfc535" (UID: "233b9a6c-cafa-4ec8-aa40-6ef1b0dfc535"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:20:25 crc kubenswrapper[4840]: I0311 09:20:25.259144 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dnt7q\" (UniqueName: \"kubernetes.io/projected/233b9a6c-cafa-4ec8-aa40-6ef1b0dfc535-kube-api-access-dnt7q\") on node \"crc\" DevicePath \"\"" Mar 11 09:20:25 crc kubenswrapper[4840]: I0311 09:20:25.259196 4840 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/233b9a6c-cafa-4ec8-aa40-6ef1b0dfc535-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 09:20:25 crc kubenswrapper[4840]: I0311 09:20:25.259209 4840 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/233b9a6c-cafa-4ec8-aa40-6ef1b0dfc535-scripts\") on node \"crc\" DevicePath \"\"" Mar 11 09:20:25 crc kubenswrapper[4840]: I0311 09:20:25.259221 4840 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/233b9a6c-cafa-4ec8-aa40-6ef1b0dfc535-config-data\") on node \"crc\" DevicePath \"\"" Mar 11 09:20:25 crc kubenswrapper[4840]: I0311 09:20:25.628698 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-7xxk4" Mar 11 09:20:25 crc kubenswrapper[4840]: I0311 09:20:25.628753 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-7xxk4" event={"ID":"233b9a6c-cafa-4ec8-aa40-6ef1b0dfc535","Type":"ContainerDied","Data":"6b4a9fe03c099c8eb2f26c153c4dbe0d4d8287487e6d8cb23d841b6838c30de0"} Mar 11 09:20:25 crc kubenswrapper[4840]: I0311 09:20:25.628878 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b4a9fe03c099c8eb2f26c153c4dbe0d4d8287487e6d8cb23d841b6838c30de0" Mar 11 09:20:25 crc kubenswrapper[4840]: I0311 09:20:25.630860 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f5fc86d0-4547-42ed-a880-f58c9e29d8a4","Type":"ContainerStarted","Data":"34979c436634b8f0e018557d27d62f341c687de81ef20f3c5d4a9fa769f1e4fb"} Mar 11 09:20:25 crc kubenswrapper[4840]: I0311 09:20:25.644164 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Mar 11 09:20:25 crc kubenswrapper[4840]: I0311 09:20:25.651487 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Mar 11 09:20:25 crc kubenswrapper[4840]: I0311 09:20:25.655343 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Mar 11 09:20:25 crc kubenswrapper[4840]: I0311 09:20:25.778204 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 11 09:20:25 crc kubenswrapper[4840]: I0311 09:20:25.811582 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 11 09:20:25 crc kubenswrapper[4840]: I0311 09:20:25.826761 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Mar 11 09:20:25 crc kubenswrapper[4840]: I0311 09:20:25.827530 4840 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/nova-scheduler-0" podUID="830558b7-e864-46ad-a88c-be9b9633dd15" containerName="nova-scheduler-scheduler" containerID="cri-o://0dbe0d58941c729ef7ebd94f230541b9933fd06d0c4e7dcd3cb845a93a422943" gracePeriod=30 Mar 11 09:20:25 crc kubenswrapper[4840]: I0311 09:20:25.891849 4840 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="8b0f3a14-a956-40c2-8aa6-a4ceda179a11" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.208:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 11 09:20:25 crc kubenswrapper[4840]: I0311 09:20:25.891872 4840 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="8b0f3a14-a956-40c2-8aa6-a4ceda179a11" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.208:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 11 09:20:26 crc kubenswrapper[4840]: I0311 09:20:26.648119 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="8b0f3a14-a956-40c2-8aa6-a4ceda179a11" containerName="nova-api-log" containerID="cri-o://db852a4879b287ab6ce624ffbb68178c60e1dba37c3ccee6543671ba22b66c3b" gracePeriod=30 Mar 11 09:20:26 crc kubenswrapper[4840]: I0311 09:20:26.648543 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f5fc86d0-4547-42ed-a880-f58c9e29d8a4","Type":"ContainerStarted","Data":"786131d91770a763753e005b03e25ce929741941d3ba83d0e07a62fe71b27388"} Mar 11 09:20:26 crc kubenswrapper[4840]: I0311 09:20:26.651116 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="8b0f3a14-a956-40c2-8aa6-a4ceda179a11" containerName="nova-api-api" containerID="cri-o://dafc27aee1cad152676078bb638ce833823be214f8fe5adc3a7f4d1e1fcf1c69" gracePeriod=30 Mar 11 09:20:26 crc kubenswrapper[4840]: I0311 09:20:26.665073 4840 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Mar 11 09:20:26 crc kubenswrapper[4840]: E0311 09:20:26.765193 4840 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0dbe0d58941c729ef7ebd94f230541b9933fd06d0c4e7dcd3cb845a93a422943" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 11 09:20:26 crc kubenswrapper[4840]: E0311 09:20:26.786841 4840 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0dbe0d58941c729ef7ebd94f230541b9933fd06d0c4e7dcd3cb845a93a422943" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 11 09:20:26 crc kubenswrapper[4840]: E0311 09:20:26.798576 4840 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0dbe0d58941c729ef7ebd94f230541b9933fd06d0c4e7dcd3cb845a93a422943" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 11 09:20:26 crc kubenswrapper[4840]: E0311 09:20:26.798652 4840 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="830558b7-e864-46ad-a88c-be9b9633dd15" containerName="nova-scheduler-scheduler" Mar 11 09:20:27 crc kubenswrapper[4840]: I0311 09:20:27.661316 4840 generic.go:334] "Generic (PLEG): container finished" podID="8b0f3a14-a956-40c2-8aa6-a4ceda179a11" containerID="db852a4879b287ab6ce624ffbb68178c60e1dba37c3ccee6543671ba22b66c3b" exitCode=143 Mar 11 09:20:27 crc kubenswrapper[4840]: I0311 09:20:27.661397 4840 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8b0f3a14-a956-40c2-8aa6-a4ceda179a11","Type":"ContainerDied","Data":"db852a4879b287ab6ce624ffbb68178c60e1dba37c3ccee6543671ba22b66c3b"} Mar 11 09:20:27 crc kubenswrapper[4840]: I0311 09:20:27.667339 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="7a4033f1-ae55-4d94-90d1-397d031770ac" containerName="nova-metadata-log" containerID="cri-o://f9bfa7fdef9acd43337efefd3e57d9971ad8d57b63e03a3f021b13e132073ec7" gracePeriod=30 Mar 11 09:20:27 crc kubenswrapper[4840]: I0311 09:20:27.667735 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f5fc86d0-4547-42ed-a880-f58c9e29d8a4","Type":"ContainerStarted","Data":"c1366968afef948e7ccf78e045f5f679b4fcb3ebe863e591eb2d9cc24c2f872b"} Mar 11 09:20:27 crc kubenswrapper[4840]: I0311 09:20:27.667851 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="7a4033f1-ae55-4d94-90d1-397d031770ac" containerName="nova-metadata-metadata" containerID="cri-o://1ea70b8c18c89fcbd1f7150b59f5078da643e9b3a43500d999a04f2556454bf1" gracePeriod=30 Mar 11 09:20:28 crc kubenswrapper[4840]: I0311 09:20:28.681057 4840 generic.go:334] "Generic (PLEG): container finished" podID="7a4033f1-ae55-4d94-90d1-397d031770ac" containerID="f9bfa7fdef9acd43337efefd3e57d9971ad8d57b63e03a3f021b13e132073ec7" exitCode=143 Mar 11 09:20:28 crc kubenswrapper[4840]: I0311 09:20:28.681138 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7a4033f1-ae55-4d94-90d1-397d031770ac","Type":"ContainerDied","Data":"f9bfa7fdef9acd43337efefd3e57d9971ad8d57b63e03a3f021b13e132073ec7"} Mar 11 09:20:29 crc kubenswrapper[4840]: I0311 09:20:29.695572 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"f5fc86d0-4547-42ed-a880-f58c9e29d8a4","Type":"ContainerStarted","Data":"f68b76db653f501b6c2cade1e67b940b2a3ee35bd560abe70b7979ccd257edaf"} Mar 11 09:20:29 crc kubenswrapper[4840]: I0311 09:20:29.695973 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 11 09:20:29 crc kubenswrapper[4840]: I0311 09:20:29.720259 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.683645364 podStartE2EDuration="7.720230272s" podCreationTimestamp="2026-03-11 09:20:22 +0000 UTC" firstStartedPulling="2026-03-11 09:20:23.515387004 +0000 UTC m=+1422.181056819" lastFinishedPulling="2026-03-11 09:20:28.551971912 +0000 UTC m=+1427.217641727" observedRunningTime="2026-03-11 09:20:29.719329119 +0000 UTC m=+1428.384998954" watchObservedRunningTime="2026-03-11 09:20:29.720230272 +0000 UTC m=+1428.385900087" Mar 11 09:20:30 crc kubenswrapper[4840]: I0311 09:20:30.860107 4840 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="7a4033f1-ae55-4d94-90d1-397d031770ac" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.205:8775/\": read tcp 10.217.0.2:37732->10.217.0.205:8775: read: connection reset by peer" Mar 11 09:20:30 crc kubenswrapper[4840]: I0311 09:20:30.860183 4840 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="7a4033f1-ae55-4d94-90d1-397d031770ac" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.205:8775/\": read tcp 10.217.0.2:37746->10.217.0.205:8775: read: connection reset by peer" Mar 11 09:20:32 crc kubenswrapper[4840]: I0311 09:20:31.347116 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 11 09:20:32 crc kubenswrapper[4840]: I0311 09:20:31.515568 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a4033f1-ae55-4d94-90d1-397d031770ac-config-data\") pod \"7a4033f1-ae55-4d94-90d1-397d031770ac\" (UID: \"7a4033f1-ae55-4d94-90d1-397d031770ac\") " Mar 11 09:20:32 crc kubenswrapper[4840]: I0311 09:20:31.515677 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7a4033f1-ae55-4d94-90d1-397d031770ac-logs\") pod \"7a4033f1-ae55-4d94-90d1-397d031770ac\" (UID: \"7a4033f1-ae55-4d94-90d1-397d031770ac\") " Mar 11 09:20:32 crc kubenswrapper[4840]: I0311 09:20:31.515702 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q47wp\" (UniqueName: \"kubernetes.io/projected/7a4033f1-ae55-4d94-90d1-397d031770ac-kube-api-access-q47wp\") pod \"7a4033f1-ae55-4d94-90d1-397d031770ac\" (UID: \"7a4033f1-ae55-4d94-90d1-397d031770ac\") " Mar 11 09:20:32 crc kubenswrapper[4840]: I0311 09:20:31.515829 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a4033f1-ae55-4d94-90d1-397d031770ac-combined-ca-bundle\") pod \"7a4033f1-ae55-4d94-90d1-397d031770ac\" (UID: \"7a4033f1-ae55-4d94-90d1-397d031770ac\") " Mar 11 09:20:32 crc kubenswrapper[4840]: I0311 09:20:31.515914 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a4033f1-ae55-4d94-90d1-397d031770ac-nova-metadata-tls-certs\") pod \"7a4033f1-ae55-4d94-90d1-397d031770ac\" (UID: \"7a4033f1-ae55-4d94-90d1-397d031770ac\") " Mar 11 09:20:32 crc kubenswrapper[4840]: I0311 09:20:31.516279 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/7a4033f1-ae55-4d94-90d1-397d031770ac-logs" (OuterVolumeSpecName: "logs") pod "7a4033f1-ae55-4d94-90d1-397d031770ac" (UID: "7a4033f1-ae55-4d94-90d1-397d031770ac"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:20:32 crc kubenswrapper[4840]: I0311 09:20:31.516454 4840 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7a4033f1-ae55-4d94-90d1-397d031770ac-logs\") on node \"crc\" DevicePath \"\"" Mar 11 09:20:32 crc kubenswrapper[4840]: I0311 09:20:31.527646 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a4033f1-ae55-4d94-90d1-397d031770ac-kube-api-access-q47wp" (OuterVolumeSpecName: "kube-api-access-q47wp") pod "7a4033f1-ae55-4d94-90d1-397d031770ac" (UID: "7a4033f1-ae55-4d94-90d1-397d031770ac"). InnerVolumeSpecName "kube-api-access-q47wp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:20:32 crc kubenswrapper[4840]: I0311 09:20:31.565793 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a4033f1-ae55-4d94-90d1-397d031770ac-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7a4033f1-ae55-4d94-90d1-397d031770ac" (UID: "7a4033f1-ae55-4d94-90d1-397d031770ac"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:20:32 crc kubenswrapper[4840]: I0311 09:20:31.570729 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a4033f1-ae55-4d94-90d1-397d031770ac-config-data" (OuterVolumeSpecName: "config-data") pod "7a4033f1-ae55-4d94-90d1-397d031770ac" (UID: "7a4033f1-ae55-4d94-90d1-397d031770ac"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:20:32 crc kubenswrapper[4840]: I0311 09:20:31.584288 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a4033f1-ae55-4d94-90d1-397d031770ac-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "7a4033f1-ae55-4d94-90d1-397d031770ac" (UID: "7a4033f1-ae55-4d94-90d1-397d031770ac"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:20:32 crc kubenswrapper[4840]: I0311 09:20:31.619689 4840 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a4033f1-ae55-4d94-90d1-397d031770ac-config-data\") on node \"crc\" DevicePath \"\"" Mar 11 09:20:32 crc kubenswrapper[4840]: I0311 09:20:31.619740 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q47wp\" (UniqueName: \"kubernetes.io/projected/7a4033f1-ae55-4d94-90d1-397d031770ac-kube-api-access-q47wp\") on node \"crc\" DevicePath \"\"" Mar 11 09:20:32 crc kubenswrapper[4840]: I0311 09:20:31.619750 4840 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a4033f1-ae55-4d94-90d1-397d031770ac-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 09:20:32 crc kubenswrapper[4840]: I0311 09:20:31.619761 4840 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a4033f1-ae55-4d94-90d1-397d031770ac-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 11 09:20:32 crc kubenswrapper[4840]: E0311 09:20:31.724059 4840 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 0dbe0d58941c729ef7ebd94f230541b9933fd06d0c4e7dcd3cb845a93a422943 is running failed: container process not found" 
containerID="0dbe0d58941c729ef7ebd94f230541b9933fd06d0c4e7dcd3cb845a93a422943" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 11 09:20:32 crc kubenswrapper[4840]: E0311 09:20:31.724820 4840 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 0dbe0d58941c729ef7ebd94f230541b9933fd06d0c4e7dcd3cb845a93a422943 is running failed: container process not found" containerID="0dbe0d58941c729ef7ebd94f230541b9933fd06d0c4e7dcd3cb845a93a422943" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 11 09:20:32 crc kubenswrapper[4840]: E0311 09:20:31.725130 4840 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 0dbe0d58941c729ef7ebd94f230541b9933fd06d0c4e7dcd3cb845a93a422943 is running failed: container process not found" containerID="0dbe0d58941c729ef7ebd94f230541b9933fd06d0c4e7dcd3cb845a93a422943" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 11 09:20:32 crc kubenswrapper[4840]: E0311 09:20:31.725153 4840 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 0dbe0d58941c729ef7ebd94f230541b9933fd06d0c4e7dcd3cb845a93a422943 is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="830558b7-e864-46ad-a88c-be9b9633dd15" containerName="nova-scheduler-scheduler" Mar 11 09:20:32 crc kubenswrapper[4840]: I0311 09:20:31.726115 4840 generic.go:334] "Generic (PLEG): container finished" podID="830558b7-e864-46ad-a88c-be9b9633dd15" containerID="0dbe0d58941c729ef7ebd94f230541b9933fd06d0c4e7dcd3cb845a93a422943" exitCode=0 Mar 11 09:20:32 crc kubenswrapper[4840]: I0311 09:20:31.726164 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"830558b7-e864-46ad-a88c-be9b9633dd15","Type":"ContainerDied","Data":"0dbe0d58941c729ef7ebd94f230541b9933fd06d0c4e7dcd3cb845a93a422943"} Mar 11 09:20:32 crc kubenswrapper[4840]: I0311 09:20:31.729688 4840 generic.go:334] "Generic (PLEG): container finished" podID="7a4033f1-ae55-4d94-90d1-397d031770ac" containerID="1ea70b8c18c89fcbd1f7150b59f5078da643e9b3a43500d999a04f2556454bf1" exitCode=0 Mar 11 09:20:32 crc kubenswrapper[4840]: I0311 09:20:31.729731 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7a4033f1-ae55-4d94-90d1-397d031770ac","Type":"ContainerDied","Data":"1ea70b8c18c89fcbd1f7150b59f5078da643e9b3a43500d999a04f2556454bf1"} Mar 11 09:20:32 crc kubenswrapper[4840]: I0311 09:20:31.729753 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 11 09:20:32 crc kubenswrapper[4840]: I0311 09:20:31.729761 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7a4033f1-ae55-4d94-90d1-397d031770ac","Type":"ContainerDied","Data":"cf486f95735557f40cb1ab341055c949501b19648c449c75b59f4b668a065ba5"} Mar 11 09:20:32 crc kubenswrapper[4840]: I0311 09:20:31.729784 4840 scope.go:117] "RemoveContainer" containerID="1ea70b8c18c89fcbd1f7150b59f5078da643e9b3a43500d999a04f2556454bf1" Mar 11 09:20:32 crc kubenswrapper[4840]: I0311 09:20:31.763272 4840 scope.go:117] "RemoveContainer" containerID="f9bfa7fdef9acd43337efefd3e57d9971ad8d57b63e03a3f021b13e132073ec7" Mar 11 09:20:32 crc kubenswrapper[4840]: I0311 09:20:31.778639 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 11 09:20:32 crc kubenswrapper[4840]: I0311 09:20:31.822097 4840 scope.go:117] "RemoveContainer" containerID="1ea70b8c18c89fcbd1f7150b59f5078da643e9b3a43500d999a04f2556454bf1" Mar 11 09:20:32 crc kubenswrapper[4840]: E0311 09:20:31.823630 4840 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"1ea70b8c18c89fcbd1f7150b59f5078da643e9b3a43500d999a04f2556454bf1\": container with ID starting with 1ea70b8c18c89fcbd1f7150b59f5078da643e9b3a43500d999a04f2556454bf1 not found: ID does not exist" containerID="1ea70b8c18c89fcbd1f7150b59f5078da643e9b3a43500d999a04f2556454bf1" Mar 11 09:20:32 crc kubenswrapper[4840]: I0311 09:20:31.823704 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ea70b8c18c89fcbd1f7150b59f5078da643e9b3a43500d999a04f2556454bf1"} err="failed to get container status \"1ea70b8c18c89fcbd1f7150b59f5078da643e9b3a43500d999a04f2556454bf1\": rpc error: code = NotFound desc = could not find container \"1ea70b8c18c89fcbd1f7150b59f5078da643e9b3a43500d999a04f2556454bf1\": container with ID starting with 1ea70b8c18c89fcbd1f7150b59f5078da643e9b3a43500d999a04f2556454bf1 not found: ID does not exist" Mar 11 09:20:32 crc kubenswrapper[4840]: I0311 09:20:31.823726 4840 scope.go:117] "RemoveContainer" containerID="f9bfa7fdef9acd43337efefd3e57d9971ad8d57b63e03a3f021b13e132073ec7" Mar 11 09:20:32 crc kubenswrapper[4840]: I0311 09:20:31.823785 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Mar 11 09:20:32 crc kubenswrapper[4840]: E0311 09:20:31.824184 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f9bfa7fdef9acd43337efefd3e57d9971ad8d57b63e03a3f021b13e132073ec7\": container with ID starting with f9bfa7fdef9acd43337efefd3e57d9971ad8d57b63e03a3f021b13e132073ec7 not found: ID does not exist" containerID="f9bfa7fdef9acd43337efefd3e57d9971ad8d57b63e03a3f021b13e132073ec7" Mar 11 09:20:32 crc kubenswrapper[4840]: I0311 09:20:31.824235 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9bfa7fdef9acd43337efefd3e57d9971ad8d57b63e03a3f021b13e132073ec7"} err="failed to get container status 
\"f9bfa7fdef9acd43337efefd3e57d9971ad8d57b63e03a3f021b13e132073ec7\": rpc error: code = NotFound desc = could not find container \"f9bfa7fdef9acd43337efefd3e57d9971ad8d57b63e03a3f021b13e132073ec7\": container with ID starting with f9bfa7fdef9acd43337efefd3e57d9971ad8d57b63e03a3f021b13e132073ec7 not found: ID does not exist" Mar 11 09:20:32 crc kubenswrapper[4840]: I0311 09:20:31.845180 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Mar 11 09:20:32 crc kubenswrapper[4840]: E0311 09:20:31.850502 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a4033f1-ae55-4d94-90d1-397d031770ac" containerName="nova-metadata-log" Mar 11 09:20:32 crc kubenswrapper[4840]: I0311 09:20:31.850530 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a4033f1-ae55-4d94-90d1-397d031770ac" containerName="nova-metadata-log" Mar 11 09:20:32 crc kubenswrapper[4840]: E0311 09:20:31.850547 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="233b9a6c-cafa-4ec8-aa40-6ef1b0dfc535" containerName="nova-manage" Mar 11 09:20:32 crc kubenswrapper[4840]: I0311 09:20:31.850553 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="233b9a6c-cafa-4ec8-aa40-6ef1b0dfc535" containerName="nova-manage" Mar 11 09:20:32 crc kubenswrapper[4840]: E0311 09:20:31.850595 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a4033f1-ae55-4d94-90d1-397d031770ac" containerName="nova-metadata-metadata" Mar 11 09:20:32 crc kubenswrapper[4840]: I0311 09:20:31.850602 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a4033f1-ae55-4d94-90d1-397d031770ac" containerName="nova-metadata-metadata" Mar 11 09:20:32 crc kubenswrapper[4840]: I0311 09:20:31.850820 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="233b9a6c-cafa-4ec8-aa40-6ef1b0dfc535" containerName="nova-manage" Mar 11 09:20:32 crc kubenswrapper[4840]: I0311 09:20:31.850832 4840 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="7a4033f1-ae55-4d94-90d1-397d031770ac" containerName="nova-metadata-metadata" Mar 11 09:20:32 crc kubenswrapper[4840]: I0311 09:20:31.850857 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a4033f1-ae55-4d94-90d1-397d031770ac" containerName="nova-metadata-log" Mar 11 09:20:32 crc kubenswrapper[4840]: I0311 09:20:31.851866 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 11 09:20:32 crc kubenswrapper[4840]: I0311 09:20:31.854425 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Mar 11 09:20:32 crc kubenswrapper[4840]: I0311 09:20:31.854484 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Mar 11 09:20:32 crc kubenswrapper[4840]: I0311 09:20:31.854517 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 11 09:20:32 crc kubenswrapper[4840]: I0311 09:20:32.026837 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/915955ff-c1d8-4f99-a621-f28d463c512f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"915955ff-c1d8-4f99-a621-f28d463c512f\") " pod="openstack/nova-metadata-0" Mar 11 09:20:32 crc kubenswrapper[4840]: I0311 09:20:32.026886 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/915955ff-c1d8-4f99-a621-f28d463c512f-logs\") pod \"nova-metadata-0\" (UID: \"915955ff-c1d8-4f99-a621-f28d463c512f\") " pod="openstack/nova-metadata-0" Mar 11 09:20:32 crc kubenswrapper[4840]: I0311 09:20:32.026910 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/915955ff-c1d8-4f99-a621-f28d463c512f-config-data\") pod \"nova-metadata-0\" 
(UID: \"915955ff-c1d8-4f99-a621-f28d463c512f\") " pod="openstack/nova-metadata-0" Mar 11 09:20:32 crc kubenswrapper[4840]: I0311 09:20:32.027066 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/915955ff-c1d8-4f99-a621-f28d463c512f-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"915955ff-c1d8-4f99-a621-f28d463c512f\") " pod="openstack/nova-metadata-0" Mar 11 09:20:32 crc kubenswrapper[4840]: I0311 09:20:32.027099 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52m5l\" (UniqueName: \"kubernetes.io/projected/915955ff-c1d8-4f99-a621-f28d463c512f-kube-api-access-52m5l\") pod \"nova-metadata-0\" (UID: \"915955ff-c1d8-4f99-a621-f28d463c512f\") " pod="openstack/nova-metadata-0" Mar 11 09:20:32 crc kubenswrapper[4840]: I0311 09:20:32.078840 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a4033f1-ae55-4d94-90d1-397d031770ac" path="/var/lib/kubelet/pods/7a4033f1-ae55-4d94-90d1-397d031770ac/volumes" Mar 11 09:20:32 crc kubenswrapper[4840]: I0311 09:20:32.153979 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/915955ff-c1d8-4f99-a621-f28d463c512f-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"915955ff-c1d8-4f99-a621-f28d463c512f\") " pod="openstack/nova-metadata-0" Mar 11 09:20:32 crc kubenswrapper[4840]: I0311 09:20:32.154024 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52m5l\" (UniqueName: \"kubernetes.io/projected/915955ff-c1d8-4f99-a621-f28d463c512f-kube-api-access-52m5l\") pod \"nova-metadata-0\" (UID: \"915955ff-c1d8-4f99-a621-f28d463c512f\") " pod="openstack/nova-metadata-0" Mar 11 09:20:32 crc kubenswrapper[4840]: I0311 09:20:32.154120 4840 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/915955ff-c1d8-4f99-a621-f28d463c512f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"915955ff-c1d8-4f99-a621-f28d463c512f\") " pod="openstack/nova-metadata-0" Mar 11 09:20:32 crc kubenswrapper[4840]: I0311 09:20:32.154141 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/915955ff-c1d8-4f99-a621-f28d463c512f-logs\") pod \"nova-metadata-0\" (UID: \"915955ff-c1d8-4f99-a621-f28d463c512f\") " pod="openstack/nova-metadata-0" Mar 11 09:20:32 crc kubenswrapper[4840]: I0311 09:20:32.154157 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/915955ff-c1d8-4f99-a621-f28d463c512f-config-data\") pod \"nova-metadata-0\" (UID: \"915955ff-c1d8-4f99-a621-f28d463c512f\") " pod="openstack/nova-metadata-0" Mar 11 09:20:32 crc kubenswrapper[4840]: I0311 09:20:32.163189 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/915955ff-c1d8-4f99-a621-f28d463c512f-logs\") pod \"nova-metadata-0\" (UID: \"915955ff-c1d8-4f99-a621-f28d463c512f\") " pod="openstack/nova-metadata-0" Mar 11 09:20:32 crc kubenswrapper[4840]: I0311 09:20:32.171796 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/915955ff-c1d8-4f99-a621-f28d463c512f-config-data\") pod \"nova-metadata-0\" (UID: \"915955ff-c1d8-4f99-a621-f28d463c512f\") " pod="openstack/nova-metadata-0" Mar 11 09:20:32 crc kubenswrapper[4840]: I0311 09:20:32.174154 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/915955ff-c1d8-4f99-a621-f28d463c512f-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"915955ff-c1d8-4f99-a621-f28d463c512f\") " 
pod="openstack/nova-metadata-0" Mar 11 09:20:32 crc kubenswrapper[4840]: I0311 09:20:32.186157 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/915955ff-c1d8-4f99-a621-f28d463c512f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"915955ff-c1d8-4f99-a621-f28d463c512f\") " pod="openstack/nova-metadata-0" Mar 11 09:20:32 crc kubenswrapper[4840]: I0311 09:20:32.205775 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52m5l\" (UniqueName: \"kubernetes.io/projected/915955ff-c1d8-4f99-a621-f28d463c512f-kube-api-access-52m5l\") pod \"nova-metadata-0\" (UID: \"915955ff-c1d8-4f99-a621-f28d463c512f\") " pod="openstack/nova-metadata-0" Mar 11 09:20:32 crc kubenswrapper[4840]: I0311 09:20:32.218058 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 11 09:20:32 crc kubenswrapper[4840]: I0311 09:20:32.536893 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Mar 11 09:20:32 crc kubenswrapper[4840]: I0311 09:20:32.669582 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jt8xv\" (UniqueName: \"kubernetes.io/projected/830558b7-e864-46ad-a88c-be9b9633dd15-kube-api-access-jt8xv\") pod \"830558b7-e864-46ad-a88c-be9b9633dd15\" (UID: \"830558b7-e864-46ad-a88c-be9b9633dd15\") " Mar 11 09:20:32 crc kubenswrapper[4840]: I0311 09:20:32.669808 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/830558b7-e864-46ad-a88c-be9b9633dd15-config-data\") pod \"830558b7-e864-46ad-a88c-be9b9633dd15\" (UID: \"830558b7-e864-46ad-a88c-be9b9633dd15\") " Mar 11 09:20:32 crc kubenswrapper[4840]: I0311 09:20:32.669882 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/830558b7-e864-46ad-a88c-be9b9633dd15-combined-ca-bundle\") pod \"830558b7-e864-46ad-a88c-be9b9633dd15\" (UID: \"830558b7-e864-46ad-a88c-be9b9633dd15\") " Mar 11 09:20:32 crc kubenswrapper[4840]: I0311 09:20:32.677687 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/830558b7-e864-46ad-a88c-be9b9633dd15-kube-api-access-jt8xv" (OuterVolumeSpecName: "kube-api-access-jt8xv") pod "830558b7-e864-46ad-a88c-be9b9633dd15" (UID: "830558b7-e864-46ad-a88c-be9b9633dd15"). InnerVolumeSpecName "kube-api-access-jt8xv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:20:32 crc kubenswrapper[4840]: I0311 09:20:32.726157 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/830558b7-e864-46ad-a88c-be9b9633dd15-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "830558b7-e864-46ad-a88c-be9b9633dd15" (UID: "830558b7-e864-46ad-a88c-be9b9633dd15"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:20:32 crc kubenswrapper[4840]: I0311 09:20:32.728634 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/830558b7-e864-46ad-a88c-be9b9633dd15-config-data" (OuterVolumeSpecName: "config-data") pod "830558b7-e864-46ad-a88c-be9b9633dd15" (UID: "830558b7-e864-46ad-a88c-be9b9633dd15"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:20:32 crc kubenswrapper[4840]: I0311 09:20:32.750444 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"830558b7-e864-46ad-a88c-be9b9633dd15","Type":"ContainerDied","Data":"2c0b71cd2553d3768482fd0537833fd7f7f85cc32a37b5702df5f767479c9b7a"} Mar 11 09:20:32 crc kubenswrapper[4840]: I0311 09:20:32.750513 4840 scope.go:117] "RemoveContainer" containerID="0dbe0d58941c729ef7ebd94f230541b9933fd06d0c4e7dcd3cb845a93a422943" Mar 11 09:20:32 crc kubenswrapper[4840]: I0311 09:20:32.750635 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Mar 11 09:20:32 crc kubenswrapper[4840]: I0311 09:20:32.757337 4840 generic.go:334] "Generic (PLEG): container finished" podID="8b0f3a14-a956-40c2-8aa6-a4ceda179a11" containerID="dafc27aee1cad152676078bb638ce833823be214f8fe5adc3a7f4d1e1fcf1c69" exitCode=0 Mar 11 09:20:32 crc kubenswrapper[4840]: I0311 09:20:32.757388 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8b0f3a14-a956-40c2-8aa6-a4ceda179a11","Type":"ContainerDied","Data":"dafc27aee1cad152676078bb638ce833823be214f8fe5adc3a7f4d1e1fcf1c69"} Mar 11 09:20:32 crc kubenswrapper[4840]: I0311 09:20:32.771688 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jt8xv\" (UniqueName: \"kubernetes.io/projected/830558b7-e864-46ad-a88c-be9b9633dd15-kube-api-access-jt8xv\") on node \"crc\" DevicePath \"\"" Mar 11 09:20:32 crc kubenswrapper[4840]: I0311 09:20:32.771747 4840 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/830558b7-e864-46ad-a88c-be9b9633dd15-config-data\") on node \"crc\" DevicePath \"\"" Mar 11 09:20:32 crc kubenswrapper[4840]: I0311 09:20:32.771761 4840 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/830558b7-e864-46ad-a88c-be9b9633dd15-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 09:20:32 crc kubenswrapper[4840]: I0311 09:20:32.788489 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Mar 11 09:20:32 crc kubenswrapper[4840]: I0311 09:20:32.809016 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Mar 11 09:20:32 crc kubenswrapper[4840]: I0311 09:20:32.826273 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Mar 11 09:20:32 crc kubenswrapper[4840]: E0311 09:20:32.826968 4840 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="830558b7-e864-46ad-a88c-be9b9633dd15" containerName="nova-scheduler-scheduler" Mar 11 09:20:32 crc kubenswrapper[4840]: I0311 09:20:32.826993 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="830558b7-e864-46ad-a88c-be9b9633dd15" containerName="nova-scheduler-scheduler" Mar 11 09:20:32 crc kubenswrapper[4840]: I0311 09:20:32.827240 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="830558b7-e864-46ad-a88c-be9b9633dd15" containerName="nova-scheduler-scheduler" Mar 11 09:20:32 crc kubenswrapper[4840]: I0311 09:20:32.828175 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 11 09:20:32 crc kubenswrapper[4840]: I0311 09:20:32.834954 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Mar 11 09:20:32 crc kubenswrapper[4840]: I0311 09:20:32.837460 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 11 09:20:32 crc kubenswrapper[4840]: I0311 09:20:32.863253 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 11 09:20:32 crc kubenswrapper[4840]: I0311 09:20:32.921482 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4m6s\" (UniqueName: \"kubernetes.io/projected/ada053fd-c71a-4425-8220-b950f0cab229-kube-api-access-f4m6s\") pod \"nova-scheduler-0\" (UID: \"ada053fd-c71a-4425-8220-b950f0cab229\") " pod="openstack/nova-scheduler-0" Mar 11 09:20:32 crc kubenswrapper[4840]: I0311 09:20:32.921556 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ada053fd-c71a-4425-8220-b950f0cab229-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ada053fd-c71a-4425-8220-b950f0cab229\") " pod="openstack/nova-scheduler-0" Mar 11 09:20:32 crc kubenswrapper[4840]: I0311 09:20:32.921757 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ada053fd-c71a-4425-8220-b950f0cab229-config-data\") pod \"nova-scheduler-0\" (UID: \"ada053fd-c71a-4425-8220-b950f0cab229\") " pod="openstack/nova-scheduler-0" Mar 11 09:20:33 crc kubenswrapper[4840]: I0311 09:20:33.006759 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 11 09:20:33 crc kubenswrapper[4840]: W0311 09:20:33.009046 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod915955ff_c1d8_4f99_a621_f28d463c512f.slice/crio-219e8ccbb2a92e75b89fe3b31edf67b7d0cebff0f30f5179b7f0e500621611de WatchSource:0}: Error finding container 219e8ccbb2a92e75b89fe3b31edf67b7d0cebff0f30f5179b7f0e500621611de: Status 404 returned error can't find the container with id 219e8ccbb2a92e75b89fe3b31edf67b7d0cebff0f30f5179b7f0e500621611de Mar 11 09:20:33 crc kubenswrapper[4840]: I0311 09:20:33.022757 4840 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b0f3a14-a956-40c2-8aa6-a4ceda179a11-internal-tls-certs\") pod \"8b0f3a14-a956-40c2-8aa6-a4ceda179a11\" (UID: \"8b0f3a14-a956-40c2-8aa6-a4ceda179a11\") " Mar 11 09:20:33 crc kubenswrapper[4840]: I0311 09:20:33.022822 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b0f3a14-a956-40c2-8aa6-a4ceda179a11-public-tls-certs\") pod \"8b0f3a14-a956-40c2-8aa6-a4ceda179a11\" (UID: \"8b0f3a14-a956-40c2-8aa6-a4ceda179a11\") " Mar 11 09:20:33 crc kubenswrapper[4840]: I0311 09:20:33.022913 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b0f3a14-a956-40c2-8aa6-a4ceda179a11-config-data\") pod \"8b0f3a14-a956-40c2-8aa6-a4ceda179a11\" (UID: \"8b0f3a14-a956-40c2-8aa6-a4ceda179a11\") " Mar 11 09:20:33 crc kubenswrapper[4840]: I0311 09:20:33.023121 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b0f3a14-a956-40c2-8aa6-a4ceda179a11-combined-ca-bundle\") pod \"8b0f3a14-a956-40c2-8aa6-a4ceda179a11\" (UID: \"8b0f3a14-a956-40c2-8aa6-a4ceda179a11\") " Mar 11 09:20:33 crc kubenswrapper[4840]: I0311 09:20:33.023164 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b0f3a14-a956-40c2-8aa6-a4ceda179a11-logs\") pod \"8b0f3a14-a956-40c2-8aa6-a4ceda179a11\" (UID: \"8b0f3a14-a956-40c2-8aa6-a4ceda179a11\") " Mar 11 09:20:33 crc kubenswrapper[4840]: I0311 09:20:33.023193 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-59lhx\" (UniqueName: \"kubernetes.io/projected/8b0f3a14-a956-40c2-8aa6-a4ceda179a11-kube-api-access-59lhx\") pod \"8b0f3a14-a956-40c2-8aa6-a4ceda179a11\" (UID: 
\"8b0f3a14-a956-40c2-8aa6-a4ceda179a11\") " Mar 11 09:20:33 crc kubenswrapper[4840]: I0311 09:20:33.023525 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4m6s\" (UniqueName: \"kubernetes.io/projected/ada053fd-c71a-4425-8220-b950f0cab229-kube-api-access-f4m6s\") pod \"nova-scheduler-0\" (UID: \"ada053fd-c71a-4425-8220-b950f0cab229\") " pod="openstack/nova-scheduler-0" Mar 11 09:20:33 crc kubenswrapper[4840]: I0311 09:20:33.023558 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ada053fd-c71a-4425-8220-b950f0cab229-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ada053fd-c71a-4425-8220-b950f0cab229\") " pod="openstack/nova-scheduler-0" Mar 11 09:20:33 crc kubenswrapper[4840]: I0311 09:20:33.023621 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ada053fd-c71a-4425-8220-b950f0cab229-config-data\") pod \"nova-scheduler-0\" (UID: \"ada053fd-c71a-4425-8220-b950f0cab229\") " pod="openstack/nova-scheduler-0" Mar 11 09:20:33 crc kubenswrapper[4840]: I0311 09:20:33.024013 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b0f3a14-a956-40c2-8aa6-a4ceda179a11-logs" (OuterVolumeSpecName: "logs") pod "8b0f3a14-a956-40c2-8aa6-a4ceda179a11" (UID: "8b0f3a14-a956-40c2-8aa6-a4ceda179a11"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:20:33 crc kubenswrapper[4840]: I0311 09:20:33.028746 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b0f3a14-a956-40c2-8aa6-a4ceda179a11-kube-api-access-59lhx" (OuterVolumeSpecName: "kube-api-access-59lhx") pod "8b0f3a14-a956-40c2-8aa6-a4ceda179a11" (UID: "8b0f3a14-a956-40c2-8aa6-a4ceda179a11"). InnerVolumeSpecName "kube-api-access-59lhx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:20:33 crc kubenswrapper[4840]: I0311 09:20:33.029266 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ada053fd-c71a-4425-8220-b950f0cab229-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ada053fd-c71a-4425-8220-b950f0cab229\") " pod="openstack/nova-scheduler-0" Mar 11 09:20:33 crc kubenswrapper[4840]: I0311 09:20:33.030202 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ada053fd-c71a-4425-8220-b950f0cab229-config-data\") pod \"nova-scheduler-0\" (UID: \"ada053fd-c71a-4425-8220-b950f0cab229\") " pod="openstack/nova-scheduler-0" Mar 11 09:20:33 crc kubenswrapper[4840]: I0311 09:20:33.041599 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4m6s\" (UniqueName: \"kubernetes.io/projected/ada053fd-c71a-4425-8220-b950f0cab229-kube-api-access-f4m6s\") pod \"nova-scheduler-0\" (UID: \"ada053fd-c71a-4425-8220-b950f0cab229\") " pod="openstack/nova-scheduler-0" Mar 11 09:20:33 crc kubenswrapper[4840]: I0311 09:20:33.052289 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b0f3a14-a956-40c2-8aa6-a4ceda179a11-config-data" (OuterVolumeSpecName: "config-data") pod "8b0f3a14-a956-40c2-8aa6-a4ceda179a11" (UID: "8b0f3a14-a956-40c2-8aa6-a4ceda179a11"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:20:33 crc kubenswrapper[4840]: I0311 09:20:33.058014 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b0f3a14-a956-40c2-8aa6-a4ceda179a11-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8b0f3a14-a956-40c2-8aa6-a4ceda179a11" (UID: "8b0f3a14-a956-40c2-8aa6-a4ceda179a11"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:20:33 crc kubenswrapper[4840]: I0311 09:20:33.085276 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b0f3a14-a956-40c2-8aa6-a4ceda179a11-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "8b0f3a14-a956-40c2-8aa6-a4ceda179a11" (UID: "8b0f3a14-a956-40c2-8aa6-a4ceda179a11"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:20:33 crc kubenswrapper[4840]: I0311 09:20:33.087219 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b0f3a14-a956-40c2-8aa6-a4ceda179a11-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "8b0f3a14-a956-40c2-8aa6-a4ceda179a11" (UID: "8b0f3a14-a956-40c2-8aa6-a4ceda179a11"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:20:33 crc kubenswrapper[4840]: I0311 09:20:33.125264 4840 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b0f3a14-a956-40c2-8aa6-a4ceda179a11-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 11 09:20:33 crc kubenswrapper[4840]: I0311 09:20:33.125307 4840 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b0f3a14-a956-40c2-8aa6-a4ceda179a11-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 11 09:20:33 crc kubenswrapper[4840]: I0311 09:20:33.125324 4840 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b0f3a14-a956-40c2-8aa6-a4ceda179a11-config-data\") on node \"crc\" DevicePath \"\"" Mar 11 09:20:33 crc kubenswrapper[4840]: I0311 09:20:33.125340 4840 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b0f3a14-a956-40c2-8aa6-a4ceda179a11-combined-ca-bundle\") on node \"crc\" 
DevicePath \"\"" Mar 11 09:20:33 crc kubenswrapper[4840]: I0311 09:20:33.125354 4840 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b0f3a14-a956-40c2-8aa6-a4ceda179a11-logs\") on node \"crc\" DevicePath \"\"" Mar 11 09:20:33 crc kubenswrapper[4840]: I0311 09:20:33.125366 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-59lhx\" (UniqueName: \"kubernetes.io/projected/8b0f3a14-a956-40c2-8aa6-a4ceda179a11-kube-api-access-59lhx\") on node \"crc\" DevicePath \"\"" Mar 11 09:20:33 crc kubenswrapper[4840]: I0311 09:20:33.160091 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 11 09:20:33 crc kubenswrapper[4840]: I0311 09:20:33.663534 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 11 09:20:33 crc kubenswrapper[4840]: W0311 09:20:33.666415 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podada053fd_c71a_4425_8220_b950f0cab229.slice/crio-da52197a8f497123829fe04146eb74a2412a70d31de945d59a3faaeee05b8f39 WatchSource:0}: Error finding container da52197a8f497123829fe04146eb74a2412a70d31de945d59a3faaeee05b8f39: Status 404 returned error can't find the container with id da52197a8f497123829fe04146eb74a2412a70d31de945d59a3faaeee05b8f39 Mar 11 09:20:33 crc kubenswrapper[4840]: I0311 09:20:33.787354 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8b0f3a14-a956-40c2-8aa6-a4ceda179a11","Type":"ContainerDied","Data":"e865ed57bb069703893269f080f456c17c71a8aa85256889fdb3c3de2a1236a6"} Mar 11 09:20:33 crc kubenswrapper[4840]: I0311 09:20:33.787446 4840 scope.go:117] "RemoveContainer" containerID="dafc27aee1cad152676078bb638ce833823be214f8fe5adc3a7f4d1e1fcf1c69" Mar 11 09:20:33 crc kubenswrapper[4840]: I0311 09:20:33.787596 4840 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openstack/nova-api-0" Mar 11 09:20:33 crc kubenswrapper[4840]: I0311 09:20:33.793554 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ada053fd-c71a-4425-8220-b950f0cab229","Type":"ContainerStarted","Data":"da52197a8f497123829fe04146eb74a2412a70d31de945d59a3faaeee05b8f39"} Mar 11 09:20:33 crc kubenswrapper[4840]: I0311 09:20:33.801356 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"915955ff-c1d8-4f99-a621-f28d463c512f","Type":"ContainerStarted","Data":"904fce62c27e75c0afd1b04ee9c4e1dd4f36346fdb9943a482344076f39797f2"} Mar 11 09:20:33 crc kubenswrapper[4840]: I0311 09:20:33.802182 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"915955ff-c1d8-4f99-a621-f28d463c512f","Type":"ContainerStarted","Data":"a0aea92352d16543560402efd876df16008f6e3e1715ebf0bc7c85603335be96"} Mar 11 09:20:33 crc kubenswrapper[4840]: I0311 09:20:33.802258 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"915955ff-c1d8-4f99-a621-f28d463c512f","Type":"ContainerStarted","Data":"219e8ccbb2a92e75b89fe3b31edf67b7d0cebff0f30f5179b7f0e500621611de"} Mar 11 09:20:33 crc kubenswrapper[4840]: I0311 09:20:33.818100 4840 scope.go:117] "RemoveContainer" containerID="db852a4879b287ab6ce624ffbb68178c60e1dba37c3ccee6543671ba22b66c3b" Mar 11 09:20:33 crc kubenswrapper[4840]: I0311 09:20:33.852358 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.852316047 podStartE2EDuration="2.852316047s" podCreationTimestamp="2026-03-11 09:20:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:20:33.828540399 +0000 UTC m=+1432.494210214" watchObservedRunningTime="2026-03-11 09:20:33.852316047 +0000 UTC 
m=+1432.517985862" Mar 11 09:20:33 crc kubenswrapper[4840]: I0311 09:20:33.865896 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 11 09:20:33 crc kubenswrapper[4840]: I0311 09:20:33.890488 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Mar 11 09:20:33 crc kubenswrapper[4840]: I0311 09:20:33.904157 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Mar 11 09:20:33 crc kubenswrapper[4840]: E0311 09:20:33.904734 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b0f3a14-a956-40c2-8aa6-a4ceda179a11" containerName="nova-api-log" Mar 11 09:20:33 crc kubenswrapper[4840]: I0311 09:20:33.904758 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b0f3a14-a956-40c2-8aa6-a4ceda179a11" containerName="nova-api-log" Mar 11 09:20:33 crc kubenswrapper[4840]: E0311 09:20:33.904773 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b0f3a14-a956-40c2-8aa6-a4ceda179a11" containerName="nova-api-api" Mar 11 09:20:33 crc kubenswrapper[4840]: I0311 09:20:33.904781 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b0f3a14-a956-40c2-8aa6-a4ceda179a11" containerName="nova-api-api" Mar 11 09:20:33 crc kubenswrapper[4840]: I0311 09:20:33.905049 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b0f3a14-a956-40c2-8aa6-a4ceda179a11" containerName="nova-api-api" Mar 11 09:20:33 crc kubenswrapper[4840]: I0311 09:20:33.905070 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b0f3a14-a956-40c2-8aa6-a4ceda179a11" containerName="nova-api-log" Mar 11 09:20:33 crc kubenswrapper[4840]: I0311 09:20:33.906340 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 11 09:20:33 crc kubenswrapper[4840]: I0311 09:20:33.910930 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Mar 11 09:20:33 crc kubenswrapper[4840]: I0311 09:20:33.911106 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Mar 11 09:20:33 crc kubenswrapper[4840]: I0311 09:20:33.911206 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Mar 11 09:20:33 crc kubenswrapper[4840]: I0311 09:20:33.936408 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 11 09:20:34 crc kubenswrapper[4840]: I0311 09:20:34.040451 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/629115e9-6bcf-45e8-a0da-d7c06386b7b7-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"629115e9-6bcf-45e8-a0da-d7c06386b7b7\") " pod="openstack/nova-api-0" Mar 11 09:20:34 crc kubenswrapper[4840]: I0311 09:20:34.040602 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/629115e9-6bcf-45e8-a0da-d7c06386b7b7-internal-tls-certs\") pod \"nova-api-0\" (UID: \"629115e9-6bcf-45e8-a0da-d7c06386b7b7\") " pod="openstack/nova-api-0" Mar 11 09:20:34 crc kubenswrapper[4840]: I0311 09:20:34.040642 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/629115e9-6bcf-45e8-a0da-d7c06386b7b7-config-data\") pod \"nova-api-0\" (UID: \"629115e9-6bcf-45e8-a0da-d7c06386b7b7\") " pod="openstack/nova-api-0" Mar 11 09:20:34 crc kubenswrapper[4840]: I0311 09:20:34.040673 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-pm5m6\" (UniqueName: \"kubernetes.io/projected/629115e9-6bcf-45e8-a0da-d7c06386b7b7-kube-api-access-pm5m6\") pod \"nova-api-0\" (UID: \"629115e9-6bcf-45e8-a0da-d7c06386b7b7\") " pod="openstack/nova-api-0" Mar 11 09:20:34 crc kubenswrapper[4840]: I0311 09:20:34.040866 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/629115e9-6bcf-45e8-a0da-d7c06386b7b7-public-tls-certs\") pod \"nova-api-0\" (UID: \"629115e9-6bcf-45e8-a0da-d7c06386b7b7\") " pod="openstack/nova-api-0" Mar 11 09:20:34 crc kubenswrapper[4840]: I0311 09:20:34.041042 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/629115e9-6bcf-45e8-a0da-d7c06386b7b7-logs\") pod \"nova-api-0\" (UID: \"629115e9-6bcf-45e8-a0da-d7c06386b7b7\") " pod="openstack/nova-api-0" Mar 11 09:20:34 crc kubenswrapper[4840]: I0311 09:20:34.072211 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="830558b7-e864-46ad-a88c-be9b9633dd15" path="/var/lib/kubelet/pods/830558b7-e864-46ad-a88c-be9b9633dd15/volumes" Mar 11 09:20:34 crc kubenswrapper[4840]: I0311 09:20:34.072878 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b0f3a14-a956-40c2-8aa6-a4ceda179a11" path="/var/lib/kubelet/pods/8b0f3a14-a956-40c2-8aa6-a4ceda179a11/volumes" Mar 11 09:20:34 crc kubenswrapper[4840]: I0311 09:20:34.143190 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/629115e9-6bcf-45e8-a0da-d7c06386b7b7-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"629115e9-6bcf-45e8-a0da-d7c06386b7b7\") " pod="openstack/nova-api-0" Mar 11 09:20:34 crc kubenswrapper[4840]: I0311 09:20:34.143279 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/629115e9-6bcf-45e8-a0da-d7c06386b7b7-internal-tls-certs\") pod \"nova-api-0\" (UID: \"629115e9-6bcf-45e8-a0da-d7c06386b7b7\") " pod="openstack/nova-api-0" Mar 11 09:20:34 crc kubenswrapper[4840]: I0311 09:20:34.143314 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/629115e9-6bcf-45e8-a0da-d7c06386b7b7-config-data\") pod \"nova-api-0\" (UID: \"629115e9-6bcf-45e8-a0da-d7c06386b7b7\") " pod="openstack/nova-api-0" Mar 11 09:20:34 crc kubenswrapper[4840]: I0311 09:20:34.143335 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pm5m6\" (UniqueName: \"kubernetes.io/projected/629115e9-6bcf-45e8-a0da-d7c06386b7b7-kube-api-access-pm5m6\") pod \"nova-api-0\" (UID: \"629115e9-6bcf-45e8-a0da-d7c06386b7b7\") " pod="openstack/nova-api-0" Mar 11 09:20:34 crc kubenswrapper[4840]: I0311 09:20:34.143380 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/629115e9-6bcf-45e8-a0da-d7c06386b7b7-public-tls-certs\") pod \"nova-api-0\" (UID: \"629115e9-6bcf-45e8-a0da-d7c06386b7b7\") " pod="openstack/nova-api-0" Mar 11 09:20:34 crc kubenswrapper[4840]: I0311 09:20:34.143420 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/629115e9-6bcf-45e8-a0da-d7c06386b7b7-logs\") pod \"nova-api-0\" (UID: \"629115e9-6bcf-45e8-a0da-d7c06386b7b7\") " pod="openstack/nova-api-0" Mar 11 09:20:34 crc kubenswrapper[4840]: I0311 09:20:34.143894 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/629115e9-6bcf-45e8-a0da-d7c06386b7b7-logs\") pod \"nova-api-0\" (UID: \"629115e9-6bcf-45e8-a0da-d7c06386b7b7\") " pod="openstack/nova-api-0" Mar 11 09:20:34 crc kubenswrapper[4840]: I0311 09:20:34.148272 4840 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/629115e9-6bcf-45e8-a0da-d7c06386b7b7-internal-tls-certs\") pod \"nova-api-0\" (UID: \"629115e9-6bcf-45e8-a0da-d7c06386b7b7\") " pod="openstack/nova-api-0" Mar 11 09:20:34 crc kubenswrapper[4840]: I0311 09:20:34.148610 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/629115e9-6bcf-45e8-a0da-d7c06386b7b7-config-data\") pod \"nova-api-0\" (UID: \"629115e9-6bcf-45e8-a0da-d7c06386b7b7\") " pod="openstack/nova-api-0" Mar 11 09:20:34 crc kubenswrapper[4840]: I0311 09:20:34.148609 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/629115e9-6bcf-45e8-a0da-d7c06386b7b7-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"629115e9-6bcf-45e8-a0da-d7c06386b7b7\") " pod="openstack/nova-api-0" Mar 11 09:20:34 crc kubenswrapper[4840]: I0311 09:20:34.149182 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/629115e9-6bcf-45e8-a0da-d7c06386b7b7-public-tls-certs\") pod \"nova-api-0\" (UID: \"629115e9-6bcf-45e8-a0da-d7c06386b7b7\") " pod="openstack/nova-api-0" Mar 11 09:20:34 crc kubenswrapper[4840]: I0311 09:20:34.162319 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pm5m6\" (UniqueName: \"kubernetes.io/projected/629115e9-6bcf-45e8-a0da-d7c06386b7b7-kube-api-access-pm5m6\") pod \"nova-api-0\" (UID: \"629115e9-6bcf-45e8-a0da-d7c06386b7b7\") " pod="openstack/nova-api-0" Mar 11 09:20:34 crc kubenswrapper[4840]: I0311 09:20:34.288669 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 11 09:20:34 crc kubenswrapper[4840]: I0311 09:20:34.775202 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 11 09:20:34 crc kubenswrapper[4840]: W0311 09:20:34.776360 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod629115e9_6bcf_45e8_a0da_d7c06386b7b7.slice/crio-450f2669f5a2991854ed4736969f267b82b59ce6ac9a94181b523317942841f7 WatchSource:0}: Error finding container 450f2669f5a2991854ed4736969f267b82b59ce6ac9a94181b523317942841f7: Status 404 returned error can't find the container with id 450f2669f5a2991854ed4736969f267b82b59ce6ac9a94181b523317942841f7 Mar 11 09:20:34 crc kubenswrapper[4840]: I0311 09:20:34.812685 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"629115e9-6bcf-45e8-a0da-d7c06386b7b7","Type":"ContainerStarted","Data":"450f2669f5a2991854ed4736969f267b82b59ce6ac9a94181b523317942841f7"} Mar 11 09:20:34 crc kubenswrapper[4840]: I0311 09:20:34.814890 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ada053fd-c71a-4425-8220-b950f0cab229","Type":"ContainerStarted","Data":"344c0fde0e6a0013eefcf0f63c44c10fef7fcbc6d796ca335364063dd3641c28"} Mar 11 09:20:34 crc kubenswrapper[4840]: I0311 09:20:34.844895 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.844874343 podStartE2EDuration="2.844874343s" podCreationTimestamp="2026-03-11 09:20:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:20:34.842540034 +0000 UTC m=+1433.508209859" watchObservedRunningTime="2026-03-11 09:20:34.844874343 +0000 UTC m=+1433.510544168" Mar 11 09:20:35 crc kubenswrapper[4840]: I0311 09:20:35.829863 4840 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/nova-api-0" event={"ID":"629115e9-6bcf-45e8-a0da-d7c06386b7b7","Type":"ContainerStarted","Data":"9895b030ee05b1c30d40171df5aaa90b27098c8597a3d0999e257cf13cec7e67"} Mar 11 09:20:35 crc kubenswrapper[4840]: I0311 09:20:35.830223 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"629115e9-6bcf-45e8-a0da-d7c06386b7b7","Type":"ContainerStarted","Data":"964ebeaba64a80f1be2bba1d82ca1a7e7dffe0224141e942622675fc8b28aeb6"} Mar 11 09:20:35 crc kubenswrapper[4840]: I0311 09:20:35.852299 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.8522792519999998 podStartE2EDuration="2.852279252s" podCreationTimestamp="2026-03-11 09:20:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 09:20:35.851654477 +0000 UTC m=+1434.517324282" watchObservedRunningTime="2026-03-11 09:20:35.852279252 +0000 UTC m=+1434.517949067" Mar 11 09:20:37 crc kubenswrapper[4840]: I0311 09:20:37.218988 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 11 09:20:37 crc kubenswrapper[4840]: I0311 09:20:37.219350 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 11 09:20:38 crc kubenswrapper[4840]: I0311 09:20:38.162938 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Mar 11 09:20:42 crc kubenswrapper[4840]: I0311 09:20:42.219228 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Mar 11 09:20:42 crc kubenswrapper[4840]: I0311 09:20:42.219932 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Mar 11 09:20:43 crc kubenswrapper[4840]: I0311 09:20:43.161182 4840 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Mar 11 09:20:43 crc kubenswrapper[4840]: I0311 09:20:43.191957 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Mar 11 09:20:43 crc kubenswrapper[4840]: I0311 09:20:43.237695 4840 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="915955ff-c1d8-4f99-a621-f28d463c512f" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.211:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 11 09:20:43 crc kubenswrapper[4840]: I0311 09:20:43.237695 4840 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="915955ff-c1d8-4f99-a621-f28d463c512f" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.211:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 11 09:20:43 crc kubenswrapper[4840]: I0311 09:20:43.961962 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Mar 11 09:20:44 crc kubenswrapper[4840]: I0311 09:20:44.290070 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Mar 11 09:20:44 crc kubenswrapper[4840]: I0311 09:20:44.290113 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Mar 11 09:20:45 crc kubenswrapper[4840]: I0311 09:20:45.308646 4840 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="629115e9-6bcf-45e8-a0da-d7c06386b7b7" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.213:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 11 09:20:45 crc kubenswrapper[4840]: I0311 09:20:45.308725 4840 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="629115e9-6bcf-45e8-a0da-d7c06386b7b7" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.213:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 11 09:20:52 crc kubenswrapper[4840]: I0311 09:20:52.226104 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Mar 11 09:20:52 crc kubenswrapper[4840]: I0311 09:20:52.228682 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Mar 11 09:20:52 crc kubenswrapper[4840]: I0311 09:20:52.233192 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Mar 11 09:20:53 crc kubenswrapper[4840]: I0311 09:20:53.028515 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Mar 11 09:20:53 crc kubenswrapper[4840]: I0311 09:20:53.033024 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Mar 11 09:20:54 crc kubenswrapper[4840]: I0311 09:20:54.304013 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Mar 11 09:20:54 crc kubenswrapper[4840]: I0311 09:20:54.304707 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Mar 11 09:20:54 crc kubenswrapper[4840]: I0311 09:20:54.308085 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Mar 11 09:20:54 crc kubenswrapper[4840]: I0311 09:20:54.312499 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Mar 11 09:20:55 crc kubenswrapper[4840]: I0311 09:20:55.047692 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Mar 11 09:20:55 crc kubenswrapper[4840]: I0311 09:20:55.054840 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Mar 11 09:21:06 crc kubenswrapper[4840]: I0311 09:21:06.222443 4840 scope.go:117] "RemoveContainer" containerID="afb3af5f23f2ecb925672e6812c11a09a9f8523eb61ce4ca8277b8f0ad16e3b3"
Mar 11 09:21:14 crc kubenswrapper[4840]: I0311 09:21:14.900313 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"]
Mar 11 09:21:14 crc kubenswrapper[4840]: I0311 09:21:14.901513 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="794eb074-baf3-46dc-8f9d-8a92fc9240fd" containerName="openstackclient" containerID="cri-o://037417d9a07e2291989661f88eef7dd45b3d6fb5f3c8e6af6e31e40973d67031" gracePeriod=2
Mar 11 09:21:14 crc kubenswrapper[4840]: I0311 09:21:14.941651 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"]
Mar 11 09:21:14 crc kubenswrapper[4840]: I0311 09:21:14.981563 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-8dnn2"]
Mar 11 09:21:15 crc kubenswrapper[4840]: I0311 09:21:15.009577 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-8dnn2"]
Mar 11 09:21:15 crc kubenswrapper[4840]: I0311 09:21:15.034389 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-q29dz"]
Mar 11 09:21:15 crc kubenswrapper[4840]: E0311 09:21:15.034922 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="794eb074-baf3-46dc-8f9d-8a92fc9240fd" containerName="openstackclient"
Mar 11 09:21:15 crc kubenswrapper[4840]: I0311 09:21:15.034937 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="794eb074-baf3-46dc-8f9d-8a92fc9240fd" containerName="openstackclient"
Mar 11 09:21:15 crc kubenswrapper[4840]: I0311 09:21:15.035138 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="794eb074-baf3-46dc-8f9d-8a92fc9240fd" containerName="openstackclient"
Mar 11 09:21:15 crc kubenswrapper[4840]: I0311 09:21:15.035845 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-q29dz"
Mar 11 09:21:15 crc kubenswrapper[4840]: I0311 09:21:15.069012 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret"
Mar 11 09:21:15 crc kubenswrapper[4840]: I0311 09:21:15.110892 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-q29dz"]
Mar 11 09:21:15 crc kubenswrapper[4840]: I0311 09:21:15.157367 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa334d0b-a179-4905-a660-05bbc12e5c02-operator-scripts\") pod \"root-account-create-update-q29dz\" (UID: \"aa334d0b-a179-4905-a660-05bbc12e5c02\") " pod="openstack/root-account-create-update-q29dz"
Mar 11 09:21:15 crc kubenswrapper[4840]: I0311 09:21:15.157567 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5bwt\" (UniqueName: \"kubernetes.io/projected/aa334d0b-a179-4905-a660-05bbc12e5c02-kube-api-access-l5bwt\") pod \"root-account-create-update-q29dz\" (UID: \"aa334d0b-a179-4905-a660-05bbc12e5c02\") " pod="openstack/root-account-create-update-q29dz"
Mar 11 09:21:15 crc kubenswrapper[4840]: I0311 09:21:15.232535 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"]
Mar 11 09:21:15 crc kubenswrapper[4840]: I0311 09:21:15.232828 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="1b0d4c8e-fe4e-4ca7-8cfe-1d4fb97c3ac6" containerName="ovn-northd" containerID="cri-o://7adedc06633a9d0c8638d09c8c1e960d90f82138b050dce1fb05836daea4ad71" gracePeriod=30
Mar 11 09:21:15 crc kubenswrapper[4840]: I0311 09:21:15.233302 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="1b0d4c8e-fe4e-4ca7-8cfe-1d4fb97c3ac6" containerName="openstack-network-exporter" containerID="cri-o://a22560a02062c00ff0e4a3b9e873bd27ab71b2325c47df15e235ac0992d9117c" gracePeriod=30
Mar 11 09:21:15 crc kubenswrapper[4840]: I0311 09:21:15.261997 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa334d0b-a179-4905-a660-05bbc12e5c02-operator-scripts\") pod \"root-account-create-update-q29dz\" (UID: \"aa334d0b-a179-4905-a660-05bbc12e5c02\") " pod="openstack/root-account-create-update-q29dz"
Mar 11 09:21:15 crc kubenswrapper[4840]: I0311 09:21:15.262174 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5bwt\" (UniqueName: \"kubernetes.io/projected/aa334d0b-a179-4905-a660-05bbc12e5c02-kube-api-access-l5bwt\") pod \"root-account-create-update-q29dz\" (UID: \"aa334d0b-a179-4905-a660-05bbc12e5c02\") " pod="openstack/root-account-create-update-q29dz"
Mar 11 09:21:15 crc kubenswrapper[4840]: I0311 09:21:15.263524 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa334d0b-a179-4905-a660-05bbc12e5c02-operator-scripts\") pod \"root-account-create-update-q29dz\" (UID: \"aa334d0b-a179-4905-a660-05bbc12e5c02\") " pod="openstack/root-account-create-update-q29dz"
Mar 11 09:21:15 crc kubenswrapper[4840]: I0311 09:21:15.288197 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-bc41-account-create-update-kjhwj"]
Mar 11 09:21:15 crc kubenswrapper[4840]: I0311 09:21:15.331123 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-bc41-account-create-update-kjhwj"
Mar 11 09:21:15 crc kubenswrapper[4840]: I0311 09:21:15.339271 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Mar 11 09:21:15 crc kubenswrapper[4840]: I0311 09:21:15.379003 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5bwt\" (UniqueName: \"kubernetes.io/projected/aa334d0b-a179-4905-a660-05bbc12e5c02-kube-api-access-l5bwt\") pod \"root-account-create-update-q29dz\" (UID: \"aa334d0b-a179-4905-a660-05bbc12e5c02\") " pod="openstack/root-account-create-update-q29dz"
Mar 11 09:21:15 crc kubenswrapper[4840]: I0311 09:21:15.391962 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret"
Mar 11 09:21:15 crc kubenswrapper[4840]: I0311 09:21:15.472352 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc871f48-4882-49a3-be1f-80d95c2548a9-operator-scripts\") pod \"barbican-bc41-account-create-update-kjhwj\" (UID: \"fc871f48-4882-49a3-be1f-80d95c2548a9\") " pod="openstack/barbican-bc41-account-create-update-kjhwj"
Mar 11 09:21:15 crc kubenswrapper[4840]: I0311 09:21:15.472489 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xbdt\" (UniqueName: \"kubernetes.io/projected/fc871f48-4882-49a3-be1f-80d95c2548a9-kube-api-access-6xbdt\") pod \"barbican-bc41-account-create-update-kjhwj\" (UID: \"fc871f48-4882-49a3-be1f-80d95c2548a9\") " pod="openstack/barbican-bc41-account-create-update-kjhwj"
Mar 11 09:21:15 crc kubenswrapper[4840]: I0311 09:21:15.475453 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-q29dz"
Mar 11 09:21:15 crc kubenswrapper[4840]: I0311 09:21:15.528923 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-bc41-account-create-update-cpkrn"]
Mar 11 09:21:15 crc kubenswrapper[4840]: I0311 09:21:15.582559 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-bc41-account-create-update-cpkrn"]
Mar 11 09:21:15 crc kubenswrapper[4840]: I0311 09:21:15.584208 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc871f48-4882-49a3-be1f-80d95c2548a9-operator-scripts\") pod \"barbican-bc41-account-create-update-kjhwj\" (UID: \"fc871f48-4882-49a3-be1f-80d95c2548a9\") " pod="openstack/barbican-bc41-account-create-update-kjhwj"
Mar 11 09:21:15 crc kubenswrapper[4840]: I0311 09:21:15.584375 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xbdt\" (UniqueName: \"kubernetes.io/projected/fc871f48-4882-49a3-be1f-80d95c2548a9-kube-api-access-6xbdt\") pod \"barbican-bc41-account-create-update-kjhwj\" (UID: \"fc871f48-4882-49a3-be1f-80d95c2548a9\") " pod="openstack/barbican-bc41-account-create-update-kjhwj"
Mar 11 09:21:15 crc kubenswrapper[4840]: I0311 09:21:15.584727 4840 generic.go:334] "Generic (PLEG): container finished" podID="1b0d4c8e-fe4e-4ca7-8cfe-1d4fb97c3ac6" containerID="a22560a02062c00ff0e4a3b9e873bd27ab71b2325c47df15e235ac0992d9117c" exitCode=2
Mar 11 09:21:15 crc kubenswrapper[4840]: I0311 09:21:15.584782 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"1b0d4c8e-fe4e-4ca7-8cfe-1d4fb97c3ac6","Type":"ContainerDied","Data":"a22560a02062c00ff0e4a3b9e873bd27ab71b2325c47df15e235ac0992d9117c"}
Mar 11 09:21:15 crc kubenswrapper[4840]: I0311 09:21:15.585915 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc871f48-4882-49a3-be1f-80d95c2548a9-operator-scripts\") pod \"barbican-bc41-account-create-update-kjhwj\" (UID: \"fc871f48-4882-49a3-be1f-80d95c2548a9\") " pod="openstack/barbican-bc41-account-create-update-kjhwj"
Mar 11 09:21:15 crc kubenswrapper[4840]: I0311 09:21:15.612527 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-bc41-account-create-update-kjhwj"]
Mar 11 09:21:15 crc kubenswrapper[4840]: I0311 09:21:15.659612 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Mar 11 09:21:15 crc kubenswrapper[4840]: I0311 09:21:15.660460 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="b48aab55-15cb-42c4-a97b-692dbadf3353" containerName="openstack-network-exporter" containerID="cri-o://10e0544be31ad9957ce1bc9464d8fd6efda2ed36676a4427c1dd1344bc4e1a58" gracePeriod=300
Mar 11 09:21:15 crc kubenswrapper[4840]: I0311 09:21:15.695995 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xbdt\" (UniqueName: \"kubernetes.io/projected/fc871f48-4882-49a3-be1f-80d95c2548a9-kube-api-access-6xbdt\") pod \"barbican-bc41-account-create-update-kjhwj\" (UID: \"fc871f48-4882-49a3-be1f-80d95c2548a9\") " pod="openstack/barbican-bc41-account-create-update-kjhwj"
Mar 11 09:21:15 crc kubenswrapper[4840]: I0311 09:21:15.781443 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="b48aab55-15cb-42c4-a97b-692dbadf3353" containerName="ovsdbserver-sb" containerID="cri-o://c7605e78bc0c06e2ec26eebd5becb94c457edb41d3f28ee05f105bd2a7e5ac3f" gracePeriod=300
Mar 11 09:21:15 crc kubenswrapper[4840]: I0311 09:21:15.781535 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-28ac-account-create-update-4b8kv"]
Mar 11 09:21:15 crc kubenswrapper[4840]: I0311 09:21:15.782960 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-28ac-account-create-update-4b8kv"
Mar 11 09:21:15 crc kubenswrapper[4840]: I0311 09:21:15.855323 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-bc41-account-create-update-kjhwj"
Mar 11 09:21:15 crc kubenswrapper[4840]: I0311 09:21:15.856798 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-3c72-account-create-update-2dtm6"]
Mar 11 09:21:15 crc kubenswrapper[4840]: I0311 09:21:15.858696 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-3c72-account-create-update-2dtm6"
Mar 11 09:21:15 crc kubenswrapper[4840]: I0311 09:21:15.909076 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/822a8132-1b8a-4f19-b42b-b6acd65e7743-operator-scripts\") pod \"nova-api-28ac-account-create-update-4b8kv\" (UID: \"822a8132-1b8a-4f19-b42b-b6acd65e7743\") " pod="openstack/nova-api-28ac-account-create-update-4b8kv"
Mar 11 09:21:15 crc kubenswrapper[4840]: I0311 09:21:15.909150 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rhcz\" (UniqueName: \"kubernetes.io/projected/822a8132-1b8a-4f19-b42b-b6acd65e7743-kube-api-access-6rhcz\") pod \"nova-api-28ac-account-create-update-4b8kv\" (UID: \"822a8132-1b8a-4f19-b42b-b6acd65e7743\") " pod="openstack/nova-api-28ac-account-create-update-4b8kv"
Mar 11 09:21:15 crc kubenswrapper[4840]: I0311 09:21:15.917053 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret"
Mar 11 09:21:15 crc kubenswrapper[4840]: I0311 09:21:15.941590 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-28ac-account-create-update-4b8kv"]
Mar 11 09:21:16 crc kubenswrapper[4840]: I0311 09:21:16.004944 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret"
Mar 11 09:21:16 crc kubenswrapper[4840]: I0311 09:21:16.011529 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jb2lx\" (UniqueName: \"kubernetes.io/projected/40f2a324-9d85-4e2e-ac26-0b710dc379b2-kube-api-access-jb2lx\") pod \"glance-3c72-account-create-update-2dtm6\" (UID: \"40f2a324-9d85-4e2e-ac26-0b710dc379b2\") " pod="openstack/glance-3c72-account-create-update-2dtm6"
Mar 11 09:21:16 crc kubenswrapper[4840]: I0311 09:21:16.011604 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/822a8132-1b8a-4f19-b42b-b6acd65e7743-operator-scripts\") pod \"nova-api-28ac-account-create-update-4b8kv\" (UID: \"822a8132-1b8a-4f19-b42b-b6acd65e7743\") " pod="openstack/nova-api-28ac-account-create-update-4b8kv"
Mar 11 09:21:16 crc kubenswrapper[4840]: I0311 09:21:16.011653 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rhcz\" (UniqueName: \"kubernetes.io/projected/822a8132-1b8a-4f19-b42b-b6acd65e7743-kube-api-access-6rhcz\") pod \"nova-api-28ac-account-create-update-4b8kv\" (UID: \"822a8132-1b8a-4f19-b42b-b6acd65e7743\") " pod="openstack/nova-api-28ac-account-create-update-4b8kv"
Mar 11 09:21:16 crc kubenswrapper[4840]: I0311 09:21:16.011712 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/40f2a324-9d85-4e2e-ac26-0b710dc379b2-operator-scripts\") pod \"glance-3c72-account-create-update-2dtm6\" (UID: \"40f2a324-9d85-4e2e-ac26-0b710dc379b2\") " pod="openstack/glance-3c72-account-create-update-2dtm6"
Mar 11 09:21:16 crc kubenswrapper[4840]: I0311 09:21:16.028497 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-3c72-account-create-update-2dtm6"]
Mar 11 09:21:16 crc kubenswrapper[4840]: I0311 09:21:16.029820 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/822a8132-1b8a-4f19-b42b-b6acd65e7743-operator-scripts\") pod \"nova-api-28ac-account-create-update-4b8kv\" (UID: \"822a8132-1b8a-4f19-b42b-b6acd65e7743\") " pod="openstack/nova-api-28ac-account-create-update-4b8kv"
Mar 11 09:21:16 crc kubenswrapper[4840]: I0311 09:21:16.119714 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/40f2a324-9d85-4e2e-ac26-0b710dc379b2-operator-scripts\") pod \"glance-3c72-account-create-update-2dtm6\" (UID: \"40f2a324-9d85-4e2e-ac26-0b710dc379b2\") " pod="openstack/glance-3c72-account-create-update-2dtm6"
Mar 11 09:21:16 crc kubenswrapper[4840]: I0311 09:21:16.120107 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jb2lx\" (UniqueName: \"kubernetes.io/projected/40f2a324-9d85-4e2e-ac26-0b710dc379b2-kube-api-access-jb2lx\") pod \"glance-3c72-account-create-update-2dtm6\" (UID: \"40f2a324-9d85-4e2e-ac26-0b710dc379b2\") " pod="openstack/glance-3c72-account-create-update-2dtm6"
Mar 11 09:21:16 crc kubenswrapper[4840]: I0311 09:21:16.121106 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/40f2a324-9d85-4e2e-ac26-0b710dc379b2-operator-scripts\") pod \"glance-3c72-account-create-update-2dtm6\" (UID: \"40f2a324-9d85-4e2e-ac26-0b710dc379b2\") " pod="openstack/glance-3c72-account-create-update-2dtm6"
Mar 11 09:21:16 crc kubenswrapper[4840]: I0311 09:21:16.185671 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rhcz\" (UniqueName: \"kubernetes.io/projected/822a8132-1b8a-4f19-b42b-b6acd65e7743-kube-api-access-6rhcz\") pod \"nova-api-28ac-account-create-update-4b8kv\" (UID: \"822a8132-1b8a-4f19-b42b-b6acd65e7743\") " pod="openstack/nova-api-28ac-account-create-update-4b8kv"
Mar 11 09:21:16 crc kubenswrapper[4840]: I0311 09:21:16.216139 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0aed3b47-186d-4af2-b83e-c04526a43095" path="/var/lib/kubelet/pods/0aed3b47-186d-4af2-b83e-c04526a43095/volumes"
Mar 11 09:21:16 crc kubenswrapper[4840]: I0311 09:21:16.218984 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jb2lx\" (UniqueName: \"kubernetes.io/projected/40f2a324-9d85-4e2e-ac26-0b710dc379b2-kube-api-access-jb2lx\") pod \"glance-3c72-account-create-update-2dtm6\" (UID: \"40f2a324-9d85-4e2e-ac26-0b710dc379b2\") " pod="openstack/glance-3c72-account-create-update-2dtm6"
Mar 11 09:21:16 crc kubenswrapper[4840]: I0311 09:21:16.242355 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66831303-1e03-43d2-aabb-d98617b373a0" path="/var/lib/kubelet/pods/66831303-1e03-43d2-aabb-d98617b373a0/volumes"
Mar 11 09:21:16 crc kubenswrapper[4840]: I0311 09:21:16.243116 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-f7z67"]
Mar 11 09:21:16 crc kubenswrapper[4840]: I0311 09:21:16.254044 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Mar 11 09:21:16 crc kubenswrapper[4840]: E0311 09:21:16.248031 4840 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c7605e78bc0c06e2ec26eebd5becb94c457edb41d3f28ee05f105bd2a7e5ac3f is running failed: container process not found" containerID="c7605e78bc0c06e2ec26eebd5becb94c457edb41d3f28ee05f105bd2a7e5ac3f" cmd=["/usr/bin/pidof","ovsdb-server"]
Mar 11 09:21:16 crc kubenswrapper[4840]: I0311 09:21:16.255068 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="0d076df9-9280-425c-9b61-bf84751f11c1" containerName="openstack-network-exporter" containerID="cri-o://5df6036e4383bcc32cc40a5083113e13895062a09a380000d0d1fb0f5b2f30bb" gracePeriod=300
Mar 11 09:21:16 crc kubenswrapper[4840]: I0311 09:21:16.277519 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-f7z67"]
Mar 11 09:21:16 crc kubenswrapper[4840]: E0311 09:21:16.281011 4840 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c7605e78bc0c06e2ec26eebd5becb94c457edb41d3f28ee05f105bd2a7e5ac3f is running failed: container process not found" containerID="c7605e78bc0c06e2ec26eebd5becb94c457edb41d3f28ee05f105bd2a7e5ac3f" cmd=["/usr/bin/pidof","ovsdb-server"]
Mar 11 09:21:16 crc kubenswrapper[4840]: E0311 09:21:16.311227 4840 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c7605e78bc0c06e2ec26eebd5becb94c457edb41d3f28ee05f105bd2a7e5ac3f is running failed: container process not found" containerID="c7605e78bc0c06e2ec26eebd5becb94c457edb41d3f28ee05f105bd2a7e5ac3f" cmd=["/usr/bin/pidof","ovsdb-server"]
Mar 11 09:21:16 crc kubenswrapper[4840]: E0311 09:21:16.311321 4840 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c7605e78bc0c06e2ec26eebd5becb94c457edb41d3f28ee05f105bd2a7e5ac3f is running failed: container process not found" probeType="Readiness" pod="openstack/ovsdbserver-sb-0" podUID="b48aab55-15cb-42c4-a97b-692dbadf3353" containerName="ovsdbserver-sb"
Mar 11 09:21:16 crc kubenswrapper[4840]: I0311 09:21:16.313967 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-28ac-account-create-update-4b8kv"
Mar 11 09:21:16 crc kubenswrapper[4840]: I0311 09:21:16.351702 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-0f23-account-create-update-mbjxd"]
Mar 11 09:21:16 crc kubenswrapper[4840]: I0311 09:21:16.362497 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-0f23-account-create-update-mbjxd"
Mar 11 09:21:16 crc kubenswrapper[4840]: I0311 09:21:16.410790 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-3c72-account-create-update-2dtm6"
Mar 11 09:21:16 crc kubenswrapper[4840]: I0311 09:21:16.415200 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret"
Mar 11 09:21:16 crc kubenswrapper[4840]: I0311 09:21:16.428731 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2l9l9\" (UniqueName: \"kubernetes.io/projected/b4873de8-8c37-49b7-a8ba-b352a2cf0320-kube-api-access-2l9l9\") pod \"nova-cell0-0f23-account-create-update-mbjxd\" (UID: \"b4873de8-8c37-49b7-a8ba-b352a2cf0320\") " pod="openstack/nova-cell0-0f23-account-create-update-mbjxd"
Mar 11 09:21:16 crc kubenswrapper[4840]: I0311 09:21:16.465275 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-0f23-account-create-update-mbjxd"]
Mar 11 09:21:16 crc kubenswrapper[4840]: I0311 09:21:16.468185 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4873de8-8c37-49b7-a8ba-b352a2cf0320-operator-scripts\") pod \"nova-cell0-0f23-account-create-update-mbjxd\" (UID: \"b4873de8-8c37-49b7-a8ba-b352a2cf0320\") " pod="openstack/nova-cell0-0f23-account-create-update-mbjxd"
Mar 11 09:21:16 crc kubenswrapper[4840]: I0311 09:21:16.534247 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="0d076df9-9280-425c-9b61-bf84751f11c1" containerName="ovsdbserver-nb" containerID="cri-o://3959aad58a8318dc9102553ae85a912f06348c456f581330b23a50a7f00d5ade" gracePeriod=300
Mar 11 09:21:16 crc kubenswrapper[4840]: I0311 09:21:16.571003 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-28ac-account-create-update-9lzd8"]
Mar 11 09:21:16 crc kubenswrapper[4840]: I0311 09:21:16.594778 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4873de8-8c37-49b7-a8ba-b352a2cf0320-operator-scripts\") pod \"nova-cell0-0f23-account-create-update-mbjxd\" (UID: \"b4873de8-8c37-49b7-a8ba-b352a2cf0320\") " pod="openstack/nova-cell0-0f23-account-create-update-mbjxd"
Mar 11 09:21:16 crc kubenswrapper[4840]: I0311 09:21:16.594909 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2l9l9\" (UniqueName: \"kubernetes.io/projected/b4873de8-8c37-49b7-a8ba-b352a2cf0320-kube-api-access-2l9l9\") pod \"nova-cell0-0f23-account-create-update-mbjxd\" (UID: \"b4873de8-8c37-49b7-a8ba-b352a2cf0320\") " pod="openstack/nova-cell0-0f23-account-create-update-mbjxd"
Mar 11 09:21:16 crc kubenswrapper[4840]: I0311 09:21:16.596011 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4873de8-8c37-49b7-a8ba-b352a2cf0320-operator-scripts\") pod \"nova-cell0-0f23-account-create-update-mbjxd\" (UID: \"b4873de8-8c37-49b7-a8ba-b352a2cf0320\") " pod="openstack/nova-cell0-0f23-account-create-update-mbjxd"
Mar 11 09:21:16 crc kubenswrapper[4840]: I0311 09:21:16.654412 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-3c72-account-create-update-dbmd9"]
Mar 11 09:21:16 crc kubenswrapper[4840]: I0311 09:21:16.673547 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2l9l9\" (UniqueName: \"kubernetes.io/projected/b4873de8-8c37-49b7-a8ba-b352a2cf0320-kube-api-access-2l9l9\") pod \"nova-cell0-0f23-account-create-update-mbjxd\" (UID: \"b4873de8-8c37-49b7-a8ba-b352a2cf0320\") " pod="openstack/nova-cell0-0f23-account-create-update-mbjxd"
Mar 11 09:21:16 crc kubenswrapper[4840]: I0311 09:21:16.732612 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-28ac-account-create-update-9lzd8"]
Mar 11 09:21:16 crc kubenswrapper[4840]: I0311 09:21:16.741809 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-3c72-account-create-update-dbmd9"]
Mar 11 09:21:16 crc kubenswrapper[4840]: I0311 09:21:16.745009 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_b48aab55-15cb-42c4-a97b-692dbadf3353/ovsdbserver-sb/0.log"
Mar 11 09:21:16 crc kubenswrapper[4840]: I0311 09:21:16.745056 4840 generic.go:334] "Generic (PLEG): container finished" podID="b48aab55-15cb-42c4-a97b-692dbadf3353" containerID="10e0544be31ad9957ce1bc9464d8fd6efda2ed36676a4427c1dd1344bc4e1a58" exitCode=2
Mar 11 09:21:16 crc kubenswrapper[4840]: I0311 09:21:16.745083 4840 generic.go:334] "Generic (PLEG): container finished" podID="b48aab55-15cb-42c4-a97b-692dbadf3353" containerID="c7605e78bc0c06e2ec26eebd5becb94c457edb41d3f28ee05f105bd2a7e5ac3f" exitCode=143
Mar 11 09:21:16 crc kubenswrapper[4840]: I0311 09:21:16.745104 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"b48aab55-15cb-42c4-a97b-692dbadf3353","Type":"ContainerDied","Data":"10e0544be31ad9957ce1bc9464d8fd6efda2ed36676a4427c1dd1344bc4e1a58"}
Mar 11 09:21:16 crc kubenswrapper[4840]: I0311 09:21:16.745132 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"b48aab55-15cb-42c4-a97b-692dbadf3353","Type":"ContainerDied","Data":"c7605e78bc0c06e2ec26eebd5becb94c457edb41d3f28ee05f105bd2a7e5ac3f"}
Mar 11 09:21:16 crc kubenswrapper[4840]: I0311 09:21:16.792438 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-k7tgg"]
Mar 11 09:21:16 crc kubenswrapper[4840]: I0311 09:21:16.822840 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-k7tgg"]
Mar 11 09:21:16 crc kubenswrapper[4840]: I0311 09:21:16.879981 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-0f23-account-create-update-mbjxd"
Mar 11 09:21:16 crc kubenswrapper[4840]: E0311 09:21:16.892553 4840 kuberuntime_manager.go:1274] "Unhandled Error" err=<
Mar 11 09:21:16 crc kubenswrapper[4840]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:763d1f1e8a1cf877c151c59609960fd2fa29e7e50001f8818122a2d51878befa,Command:[/bin/sh -c #!/bin/bash
Mar 11 09:21:16 crc kubenswrapper[4840]:
Mar 11 09:21:16 crc kubenswrapper[4840]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh
Mar 11 09:21:16 crc kubenswrapper[4840]:
Mar 11 09:21:16 crc kubenswrapper[4840]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."}
Mar 11 09:21:16 crc kubenswrapper[4840]:
Mar 11 09:21:16 crc kubenswrapper[4840]: MYSQL_CMD="mysql -h -u root -P 3306"
Mar 11 09:21:16 crc kubenswrapper[4840]:
Mar 11 09:21:16 crc kubenswrapper[4840]: if [ -n "" ]; then
Mar 11 09:21:16 crc kubenswrapper[4840]: GRANT_DATABASE=""
Mar 11 09:21:16 crc kubenswrapper[4840]: else
Mar 11 09:21:16 crc kubenswrapper[4840]: GRANT_DATABASE="*"
Mar 11 09:21:16 crc kubenswrapper[4840]: fi
Mar 11 09:21:16 crc kubenswrapper[4840]:
Mar 11 09:21:16 crc kubenswrapper[4840]: # going for maximum compatibility here:
Mar 11 09:21:16 crc kubenswrapper[4840]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used
Mar 11 09:21:16 crc kubenswrapper[4840]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not
Mar 11 09:21:16 crc kubenswrapper[4840]: # 3. create user with CREATE but then do all password and TLS with ALTER to
Mar 11 09:21:16 crc kubenswrapper[4840]: # support updates
Mar 11 09:21:16 crc kubenswrapper[4840]:
Mar 11 09:21:16 crc kubenswrapper[4840]: $MYSQL_CMD < logger="UnhandledError"
Mar 11 09:21:16 crc kubenswrapper[4840]: I0311 09:21:16.899514 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-298f-account-create-update-hn765"]
Mar 11 09:21:16 crc kubenswrapper[4840]: I0311 09:21:16.900998 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-298f-account-create-update-hn765"
Mar 11 09:21:16 crc kubenswrapper[4840]: E0311 09:21:16.906707 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"openstack-cell1-mariadb-root-db-secret\\\" not found\"" pod="openstack/root-account-create-update-q29dz" podUID="aa334d0b-a179-4905-a660-05bbc12e5c02"
Mar 11 09:21:16 crc kubenswrapper[4840]: I0311 09:21:16.940698 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret"
Mar 11 09:21:17 crc kubenswrapper[4840]: I0311 09:21:17.021589 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/13c22be3-bd7d-45e0-8948-4574e00507c0-operator-scripts\") pod \"nova-cell1-298f-account-create-update-hn765\" (UID: \"13c22be3-bd7d-45e0-8948-4574e00507c0\") " pod="openstack/nova-cell1-298f-account-create-update-hn765"
Mar 11 09:21:17 crc kubenswrapper[4840]: I0311 09:21:17.021762 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2657p\" (UniqueName: \"kubernetes.io/projected/13c22be3-bd7d-45e0-8948-4574e00507c0-kube-api-access-2657p\") pod \"nova-cell1-298f-account-create-update-hn765\" (UID: \"13c22be3-bd7d-45e0-8948-4574e00507c0\") " pod="openstack/nova-cell1-298f-account-create-update-hn765"
Mar 11 09:21:17 crc kubenswrapper[4840]: I0311 09:21:17.130384 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-298f-account-create-update-hn765"]
Mar 11 09:21:17 crc kubenswrapper[4840]: E0311 09:21:17.152283 4840 projected.go:194] Error preparing data for projected volume kube-api-access-2657p for pod openstack/nova-cell1-298f-account-create-update-hn765: failed to fetch token: serviceaccounts "galera-openstack-cell1" not found
Mar 11 09:21:17 crc kubenswrapper[4840]: E0311 09:21:17.152367 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/13c22be3-bd7d-45e0-8948-4574e00507c0-kube-api-access-2657p podName:13c22be3-bd7d-45e0-8948-4574e00507c0 nodeName:}" failed. No retries permitted until 2026-03-11 09:21:17.652339869 +0000 UTC m=+1476.318009684 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-2657p" (UniqueName: "kubernetes.io/projected/13c22be3-bd7d-45e0-8948-4574e00507c0-kube-api-access-2657p") pod "nova-cell1-298f-account-create-update-hn765" (UID: "13c22be3-bd7d-45e0-8948-4574e00507c0") : failed to fetch token: serviceaccounts "galera-openstack-cell1" not found
Mar 11 09:21:17 crc kubenswrapper[4840]: I0311 09:21:17.168568 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2657p\" (UniqueName: \"kubernetes.io/projected/13c22be3-bd7d-45e0-8948-4574e00507c0-kube-api-access-2657p\") pod \"nova-cell1-298f-account-create-update-hn765\" (UID: \"13c22be3-bd7d-45e0-8948-4574e00507c0\") " pod="openstack/nova-cell1-298f-account-create-update-hn765"
Mar 11 09:21:17 crc kubenswrapper[4840]: I0311 09:21:17.169309 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/13c22be3-bd7d-45e0-8948-4574e00507c0-operator-scripts\") pod \"nova-cell1-298f-account-create-update-hn765\" (UID: \"13c22be3-bd7d-45e0-8948-4574e00507c0\") " pod="openstack/nova-cell1-298f-account-create-update-hn765"
Mar 11 09:21:17 crc kubenswrapper[4840]: E0311 09:21:17.169888 4840 configmap.go:193] Couldn't get configMap openstack/openstack-cell1-scripts: configmap "openstack-cell1-scripts" not found
Mar 11 09:21:17 crc kubenswrapper[4840]: E0311 09:21:17.170038 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13c22be3-bd7d-45e0-8948-4574e00507c0-operator-scripts podName:13c22be3-bd7d-45e0-8948-4574e00507c0 nodeName:}" failed. No retries permitted until 2026-03-11 09:21:17.670009604 +0000 UTC m=+1476.335679429 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/13c22be3-bd7d-45e0-8948-4574e00507c0-operator-scripts") pod "nova-cell1-298f-account-create-update-hn765" (UID: "13c22be3-bd7d-45e0-8948-4574e00507c0") : configmap "openstack-cell1-scripts" not found Mar 11 09:21:17 crc kubenswrapper[4840]: E0311 09:21:17.203540 4840 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3959aad58a8318dc9102553ae85a912f06348c456f581330b23a50a7f00d5ade is running failed: container process not found" containerID="3959aad58a8318dc9102553ae85a912f06348c456f581330b23a50a7f00d5ade" cmd=["/usr/bin/pidof","ovsdb-server"] Mar 11 09:21:17 crc kubenswrapper[4840]: E0311 09:21:17.205187 4840 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3959aad58a8318dc9102553ae85a912f06348c456f581330b23a50a7f00d5ade is running failed: container process not found" containerID="3959aad58a8318dc9102553ae85a912f06348c456f581330b23a50a7f00d5ade" cmd=["/usr/bin/pidof","ovsdb-server"] Mar 11 09:21:17 crc kubenswrapper[4840]: E0311 09:21:17.212221 4840 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3959aad58a8318dc9102553ae85a912f06348c456f581330b23a50a7f00d5ade is running failed: container process not found" containerID="3959aad58a8318dc9102553ae85a912f06348c456f581330b23a50a7f00d5ade" cmd=["/usr/bin/pidof","ovsdb-server"] Mar 11 09:21:17 crc kubenswrapper[4840]: E0311 09:21:17.212348 4840 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3959aad58a8318dc9102553ae85a912f06348c456f581330b23a50a7f00d5ade is running failed: container process not found" probeType="Readiness" 
pod="openstack/ovsdbserver-nb-0" podUID="0d076df9-9280-425c-9b61-bf84751f11c1" containerName="ovsdbserver-nb" Mar 11 09:21:17 crc kubenswrapper[4840]: I0311 09:21:17.222903 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 11 09:21:17 crc kubenswrapper[4840]: I0311 09:21:17.338625 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-dghqs"] Mar 11 09:21:17 crc kubenswrapper[4840]: I0311 09:21:17.359121 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-dghqs"] Mar 11 09:21:17 crc kubenswrapper[4840]: I0311 09:21:17.372388 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-0f23-account-create-update-5wkdk"] Mar 11 09:21:17 crc kubenswrapper[4840]: E0311 09:21:17.384430 4840 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Mar 11 09:21:17 crc kubenswrapper[4840]: E0311 09:21:17.384507 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f31748d2-64a9-4839-ac55-691d9682ee8e-config-data podName:f31748d2-64a9-4839-ac55-691d9682ee8e nodeName:}" failed. No retries permitted until 2026-03-11 09:21:17.884491533 +0000 UTC m=+1476.550161348 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/f31748d2-64a9-4839-ac55-691d9682ee8e-config-data") pod "rabbitmq-server-0" (UID: "f31748d2-64a9-4839-ac55-691d9682ee8e") : configmap "rabbitmq-config-data" not found Mar 11 09:21:17 crc kubenswrapper[4840]: I0311 09:21:17.389096 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-0f23-account-create-update-5wkdk"] Mar 11 09:21:17 crc kubenswrapper[4840]: I0311 09:21:17.408260 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-298f-account-create-update-bgx4w"] Mar 11 09:21:17 crc kubenswrapper[4840]: I0311 09:21:17.422882 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-298f-account-create-update-bgx4w"] Mar 11 09:21:17 crc kubenswrapper[4840]: I0311 09:21:17.438332 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7749c44969-mwk5x"] Mar 11 09:21:17 crc kubenswrapper[4840]: I0311 09:21:17.438669 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7749c44969-mwk5x" podUID="56c8323a-4163-45af-8e67-2490198805f2" containerName="dnsmasq-dns" containerID="cri-o://dc66b182d7e7d4162f25d405953a697d877ebcc573fde71e7656196962265a32" gracePeriod=10 Mar 11 09:21:17 crc kubenswrapper[4840]: I0311 09:21:17.449792 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-r5wvc"] Mar 11 09:21:17 crc kubenswrapper[4840]: I0311 09:21:17.462982 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-d6pcw"] Mar 11 09:21:17 crc kubenswrapper[4840]: I0311 09:21:17.482992 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-r5wvc"] Mar 11 09:21:17 crc kubenswrapper[4840]: I0311 09:21:17.496856 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-d6pcw"] Mar 11 09:21:17 crc kubenswrapper[4840]: I0311 
09:21:17.509533 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-5bc9bbdff8-zczrp"] Mar 11 09:21:17 crc kubenswrapper[4840]: I0311 09:21:17.510045 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-5bc9bbdff8-zczrp" podUID="f415e24c-207c-4dc7-b68c-14180ac09391" containerName="placement-log" containerID="cri-o://b9fdfd8bf27758ee9513c698619fd2a58d3ef811e44b476641b6d47eafc6a701" gracePeriod=30 Mar 11 09:21:17 crc kubenswrapper[4840]: I0311 09:21:17.510381 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-5bc9bbdff8-zczrp" podUID="f415e24c-207c-4dc7-b68c-14180ac09391" containerName="placement-api" containerID="cri-o://9c0b5e241b0b51a690dd9f346219c8bee2d0c8e7039ad56b281f6ca277463616" gracePeriod=30 Mar 11 09:21:17 crc kubenswrapper[4840]: I0311 09:21:17.520737 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-metrics-6x72v"] Mar 11 09:21:17 crc kubenswrapper[4840]: I0311 09:21:17.520963 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-metrics-6x72v" podUID="f0b554eb-06b2-4670-99df-9b4fcfc6a42f" containerName="openstack-network-exporter" containerID="cri-o://663786306d4268a4dc766ed8fb9c44b26423f4bc146d98039618216faacd06ee" gracePeriod=30 Mar 11 09:21:17 crc kubenswrapper[4840]: I0311 09:21:17.529031 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-qtcdv"] Mar 11 09:21:17 crc kubenswrapper[4840]: I0311 09:21:17.592343 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-g2p7c"] Mar 11 09:21:17 crc kubenswrapper[4840]: I0311 09:21:17.639383 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-h7w9f"] Mar 11 09:21:17 crc kubenswrapper[4840]: I0311 09:21:17.666896 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/nova-cell0-cell-mapping-h7w9f"] Mar 11 09:21:17 crc kubenswrapper[4840]: I0311 09:21:17.692539 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-7xxk4"] Mar 11 09:21:17 crc kubenswrapper[4840]: I0311 09:21:17.694542 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2657p\" (UniqueName: \"kubernetes.io/projected/13c22be3-bd7d-45e0-8948-4574e00507c0-kube-api-access-2657p\") pod \"nova-cell1-298f-account-create-update-hn765\" (UID: \"13c22be3-bd7d-45e0-8948-4574e00507c0\") " pod="openstack/nova-cell1-298f-account-create-update-hn765" Mar 11 09:21:17 crc kubenswrapper[4840]: I0311 09:21:17.694780 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/13c22be3-bd7d-45e0-8948-4574e00507c0-operator-scripts\") pod \"nova-cell1-298f-account-create-update-hn765\" (UID: \"13c22be3-bd7d-45e0-8948-4574e00507c0\") " pod="openstack/nova-cell1-298f-account-create-update-hn765" Mar 11 09:21:17 crc kubenswrapper[4840]: E0311 09:21:17.694962 4840 configmap.go:193] Couldn't get configMap openstack/openstack-cell1-scripts: configmap "openstack-cell1-scripts" not found Mar 11 09:21:17 crc kubenswrapper[4840]: E0311 09:21:17.695023 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13c22be3-bd7d-45e0-8948-4574e00507c0-operator-scripts podName:13c22be3-bd7d-45e0-8948-4574e00507c0 nodeName:}" failed. No retries permitted until 2026-03-11 09:21:18.69500431 +0000 UTC m=+1477.360674125 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/13c22be3-bd7d-45e0-8948-4574e00507c0-operator-scripts") pod "nova-cell1-298f-account-create-update-hn765" (UID: "13c22be3-bd7d-45e0-8948-4574e00507c0") : configmap "openstack-cell1-scripts" not found Mar 11 09:21:17 crc kubenswrapper[4840]: E0311 09:21:17.702165 4840 projected.go:194] Error preparing data for projected volume kube-api-access-2657p for pod openstack/nova-cell1-298f-account-create-update-hn765: failed to fetch token: serviceaccounts "galera-openstack-cell1" not found Mar 11 09:21:17 crc kubenswrapper[4840]: E0311 09:21:17.702247 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/13c22be3-bd7d-45e0-8948-4574e00507c0-kube-api-access-2657p podName:13c22be3-bd7d-45e0-8948-4574e00507c0 nodeName:}" failed. No retries permitted until 2026-03-11 09:21:18.702225502 +0000 UTC m=+1477.367895317 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-2657p" (UniqueName: "kubernetes.io/projected/13c22be3-bd7d-45e0-8948-4574e00507c0-kube-api-access-2657p") pod "nova-cell1-298f-account-create-update-hn765" (UID: "13c22be3-bd7d-45e0-8948-4574e00507c0") : failed to fetch token: serviceaccounts "galera-openstack-cell1" not found Mar 11 09:21:17 crc kubenswrapper[4840]: I0311 09:21:17.717625 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Mar 11 09:21:17 crc kubenswrapper[4840]: I0311 09:21:17.718000 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="86c17dbf-d890-4de3-bf5d-29e0aea4d968" containerName="cinder-api-log" containerID="cri-o://de2608ad4638642b2ed36b2182d3eea8ca8dc51168010aa67653e3ba968f01af" gracePeriod=30 Mar 11 09:21:17 crc kubenswrapper[4840]: I0311 09:21:17.718434 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" 
podUID="86c17dbf-d890-4de3-bf5d-29e0aea4d968" containerName="cinder-api" containerID="cri-o://97908e7657b276e714bdd7983d1b6b792bc1fff3b99535e851314e2428338b75" gracePeriod=30 Mar 11 09:21:17 crc kubenswrapper[4840]: I0311 09:21:17.742678 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-7xxk4"] Mar 11 09:21:17 crc kubenswrapper[4840]: I0311 09:21:17.787342 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-6x72v_f0b554eb-06b2-4670-99df-9b4fcfc6a42f/openstack-network-exporter/0.log" Mar 11 09:21:17 crc kubenswrapper[4840]: I0311 09:21:17.787407 4840 generic.go:334] "Generic (PLEG): container finished" podID="f0b554eb-06b2-4670-99df-9b4fcfc6a42f" containerID="663786306d4268a4dc766ed8fb9c44b26423f4bc146d98039618216faacd06ee" exitCode=2 Mar 11 09:21:17 crc kubenswrapper[4840]: I0311 09:21:17.787494 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-6x72v" event={"ID":"f0b554eb-06b2-4670-99df-9b4fcfc6a42f","Type":"ContainerDied","Data":"663786306d4268a4dc766ed8fb9c44b26423f4bc146d98039618216faacd06ee"} Mar 11 09:21:17 crc kubenswrapper[4840]: I0311 09:21:17.791209 4840 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7749c44969-mwk5x" podUID="56c8323a-4163-45af-8e67-2490198805f2" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.206:5353: connect: connection refused" Mar 11 09:21:17 crc kubenswrapper[4840]: I0311 09:21:17.802731 4840 generic.go:334] "Generic (PLEG): container finished" podID="f415e24c-207c-4dc7-b68c-14180ac09391" containerID="b9fdfd8bf27758ee9513c698619fd2a58d3ef811e44b476641b6d47eafc6a701" exitCode=143 Mar 11 09:21:17 crc kubenswrapper[4840]: I0311 09:21:17.802805 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5bc9bbdff8-zczrp" 
event={"ID":"f415e24c-207c-4dc7-b68c-14180ac09391","Type":"ContainerDied","Data":"b9fdfd8bf27758ee9513c698619fd2a58d3ef811e44b476641b6d47eafc6a701"} Mar 11 09:21:17 crc kubenswrapper[4840]: I0311 09:21:17.808635 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-5280-account-create-update-8r5vm"] Mar 11 09:21:17 crc kubenswrapper[4840]: I0311 09:21:17.818432 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_0d076df9-9280-425c-9b61-bf84751f11c1/ovsdbserver-nb/0.log" Mar 11 09:21:17 crc kubenswrapper[4840]: I0311 09:21:17.818626 4840 generic.go:334] "Generic (PLEG): container finished" podID="0d076df9-9280-425c-9b61-bf84751f11c1" containerID="5df6036e4383bcc32cc40a5083113e13895062a09a380000d0d1fb0f5b2f30bb" exitCode=2 Mar 11 09:21:17 crc kubenswrapper[4840]: I0311 09:21:17.818652 4840 generic.go:334] "Generic (PLEG): container finished" podID="0d076df9-9280-425c-9b61-bf84751f11c1" containerID="3959aad58a8318dc9102553ae85a912f06348c456f581330b23a50a7f00d5ade" exitCode=143 Mar 11 09:21:17 crc kubenswrapper[4840]: I0311 09:21:17.818754 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"0d076df9-9280-425c-9b61-bf84751f11c1","Type":"ContainerDied","Data":"5df6036e4383bcc32cc40a5083113e13895062a09a380000d0d1fb0f5b2f30bb"} Mar 11 09:21:17 crc kubenswrapper[4840]: I0311 09:21:17.818796 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"0d076df9-9280-425c-9b61-bf84751f11c1","Type":"ContainerDied","Data":"3959aad58a8318dc9102553ae85a912f06348c456f581330b23a50a7f00d5ade"} Mar 11 09:21:17 crc kubenswrapper[4840]: I0311 09:21:17.850402 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-58769b4545-5q2fv"] Mar 11 09:21:17 crc kubenswrapper[4840]: I0311 09:21:17.851282 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-58769b4545-5q2fv" 
podUID="550bab70-eacb-4c56-98fd-460c20f22dcc" containerName="neutron-api" containerID="cri-o://990342c797ffb42187c174e55be3c0f82e87b7deb7f1882a35e7f82eeaf9dc84" gracePeriod=30 Mar 11 09:21:17 crc kubenswrapper[4840]: I0311 09:21:17.852146 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-58769b4545-5q2fv" podUID="550bab70-eacb-4c56-98fd-460c20f22dcc" containerName="neutron-httpd" containerID="cri-o://de14b2ab95a92c30b8e51cfa0b41eeb1219f8858689935e5b0e0d5e56fbb4fc9" gracePeriod=30 Mar 11 09:21:17 crc kubenswrapper[4840]: I0311 09:21:17.860715 4840 generic.go:334] "Generic (PLEG): container finished" podID="56c8323a-4163-45af-8e67-2490198805f2" containerID="dc66b182d7e7d4162f25d405953a697d877ebcc573fde71e7656196962265a32" exitCode=0 Mar 11 09:21:17 crc kubenswrapper[4840]: I0311 09:21:17.860807 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7749c44969-mwk5x" event={"ID":"56c8323a-4163-45af-8e67-2490198805f2","Type":"ContainerDied","Data":"dc66b182d7e7d4162f25d405953a697d877ebcc573fde71e7656196962265a32"} Mar 11 09:21:17 crc kubenswrapper[4840]: I0311 09:21:17.876908 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 11 09:21:17 crc kubenswrapper[4840]: I0311 09:21:17.877344 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="fd8b15e6-0bb6-4d79-99aa-765ded51af1d" containerName="cinder-scheduler" containerID="cri-o://7bb1063b1c325078ccba724db26480966f82fe2b0184c38551e103081dc02f90" gracePeriod=30 Mar 11 09:21:17 crc kubenswrapper[4840]: I0311 09:21:17.877538 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="fd8b15e6-0bb6-4d79-99aa-765ded51af1d" containerName="probe" containerID="cri-o://8d27b219acdbf1ed0f6cb152b86c9b44e8ed6f976378d28441b06575714a48ed" gracePeriod=30 Mar 11 09:21:17 crc kubenswrapper[4840]: I0311 
09:21:17.881149 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-q29dz" event={"ID":"aa334d0b-a179-4905-a660-05bbc12e5c02","Type":"ContainerStarted","Data":"bf892370d3ab9e2560925e61ce95e7cd5a97d693a5fea6dd88c30dc813e4e125"} Mar 11 09:21:17 crc kubenswrapper[4840]: I0311 09:21:17.881918 4840 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="openstack/root-account-create-update-q29dz" secret="" err="secret \"galera-openstack-cell1-dockercfg-ntlhs\" not found" Mar 11 09:21:17 crc kubenswrapper[4840]: E0311 09:21:17.885664 4840 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 11 09:21:17 crc kubenswrapper[4840]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:763d1f1e8a1cf877c151c59609960fd2fa29e7e50001f8818122a2d51878befa,Command:[/bin/sh -c #!/bin/bash Mar 11 09:21:17 crc kubenswrapper[4840]: Mar 11 09:21:17 crc kubenswrapper[4840]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Mar 11 09:21:17 crc kubenswrapper[4840]: Mar 11 09:21:17 crc kubenswrapper[4840]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Mar 11 09:21:17 crc kubenswrapper[4840]: Mar 11 09:21:17 crc kubenswrapper[4840]: MYSQL_CMD="mysql -h -u root -P 3306" Mar 11 09:21:17 crc kubenswrapper[4840]: Mar 11 09:21:17 crc kubenswrapper[4840]: if [ -n "" ]; then Mar 11 09:21:17 crc kubenswrapper[4840]: GRANT_DATABASE="" Mar 11 09:21:17 crc kubenswrapper[4840]: else Mar 11 09:21:17 crc kubenswrapper[4840]: GRANT_DATABASE="*" Mar 11 09:21:17 crc kubenswrapper[4840]: fi Mar 11 09:21:17 crc kubenswrapper[4840]: Mar 11 09:21:17 crc kubenswrapper[4840]: # going for maximum compatibility here: Mar 11 09:21:17 crc kubenswrapper[4840]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Mar 11 09:21:17 crc kubenswrapper[4840]: # 2. 
MariaDB has "CREATE OR REPLACE", but MySQL does not Mar 11 09:21:17 crc kubenswrapper[4840]: # 3. create user with CREATE but then do all password and TLS with ALTER to Mar 11 09:21:17 crc kubenswrapper[4840]: # support updates Mar 11 09:21:17 crc kubenswrapper[4840]: Mar 11 09:21:17 crc kubenswrapper[4840]: $MYSQL_CMD < logger="UnhandledError" Mar 11 09:21:17 crc kubenswrapper[4840]: E0311 09:21:17.886914 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"openstack-cell1-mariadb-root-db-secret\\\" not found\"" pod="openstack/root-account-create-update-q29dz" podUID="aa334d0b-a179-4905-a660-05bbc12e5c02" Mar 11 09:21:17 crc kubenswrapper[4840]: I0311 09:21:17.895376 4840 generic.go:334] "Generic (PLEG): container finished" podID="794eb074-baf3-46dc-8f9d-8a92fc9240fd" containerID="037417d9a07e2291989661f88eef7dd45b3d6fb5f3c8e6af6e31e40973d67031" exitCode=137 Mar 11 09:21:17 crc kubenswrapper[4840]: E0311 09:21:17.904931 4840 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Mar 11 09:21:17 crc kubenswrapper[4840]: E0311 09:21:17.905007 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f31748d2-64a9-4839-ac55-691d9682ee8e-config-data podName:f31748d2-64a9-4839-ac55-691d9682ee8e nodeName:}" failed. No retries permitted until 2026-03-11 09:21:18.904990296 +0000 UTC m=+1477.570660111 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/f31748d2-64a9-4839-ac55-691d9682ee8e-config-data") pod "rabbitmq-server-0" (UID: "f31748d2-64a9-4839-ac55-691d9682ee8e") : configmap "rabbitmq-config-data" not found Mar 11 09:21:17 crc kubenswrapper[4840]: E0311 09:21:17.905049 4840 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 11 09:21:17 crc kubenswrapper[4840]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:763d1f1e8a1cf877c151c59609960fd2fa29e7e50001f8818122a2d51878befa,Command:[/bin/sh -c #!/bin/bash Mar 11 09:21:17 crc kubenswrapper[4840]: Mar 11 09:21:17 crc kubenswrapper[4840]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Mar 11 09:21:17 crc kubenswrapper[4840]: Mar 11 09:21:17 crc kubenswrapper[4840]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Mar 11 09:21:17 crc kubenswrapper[4840]: Mar 11 09:21:17 crc kubenswrapper[4840]: MYSQL_CMD="mysql -h -u root -P 3306" Mar 11 09:21:17 crc kubenswrapper[4840]: Mar 11 09:21:17 crc kubenswrapper[4840]: if [ -n "barbican" ]; then Mar 11 09:21:17 crc kubenswrapper[4840]: GRANT_DATABASE="barbican" Mar 11 09:21:17 crc kubenswrapper[4840]: else Mar 11 09:21:17 crc kubenswrapper[4840]: GRANT_DATABASE="*" Mar 11 09:21:17 crc kubenswrapper[4840]: fi Mar 11 09:21:17 crc kubenswrapper[4840]: Mar 11 09:21:17 crc kubenswrapper[4840]: # going for maximum compatibility here: Mar 11 09:21:17 crc kubenswrapper[4840]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Mar 11 09:21:17 crc kubenswrapper[4840]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Mar 11 09:21:17 crc kubenswrapper[4840]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Mar 11 09:21:17 crc kubenswrapper[4840]: # support updates Mar 11 09:21:17 crc kubenswrapper[4840]: Mar 11 09:21:17 crc kubenswrapper[4840]: $MYSQL_CMD < logger="UnhandledError" Mar 11 09:21:17 crc kubenswrapper[4840]: E0311 09:21:17.909426 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"barbican-db-secret\\\" not found\"" pod="openstack/barbican-bc41-account-create-update-kjhwj" podUID="fc871f48-4882-49a3-be1f-80d95c2548a9" Mar 11 09:21:17 crc kubenswrapper[4840]: I0311 09:21:17.911427 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-5280-account-create-update-8r5vm"] Mar 11 09:21:17 crc kubenswrapper[4840]: I0311 09:21:17.917517 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_b48aab55-15cb-42c4-a97b-692dbadf3353/ovsdbserver-sb/0.log" Mar 11 09:21:17 crc kubenswrapper[4840]: I0311 09:21:17.917607 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Mar 11 09:21:17 crc kubenswrapper[4840]: I0311 09:21:17.940623 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Mar 11 09:21:17 crc kubenswrapper[4840]: I0311 09:21:17.947345 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-p8m6d"] Mar 11 09:21:17 crc kubenswrapper[4840]: I0311 09:21:17.958802 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-cc8xs"] Mar 11 09:21:17 crc kubenswrapper[4840]: I0311 09:21:17.972888 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-storage-0"] Mar 11 09:21:17 crc kubenswrapper[4840]: I0311 09:21:17.973576 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="f1c7d7f4-dc60-4703-b6c3-6cd626db11af" containerName="account-server" containerID="cri-o://1d4b8fe0d33d601094bd4c0c808ede9155728eb0e8cb7eb8ed2a56feaa32a3ad" gracePeriod=30 Mar 11 09:21:17 crc kubenswrapper[4840]: I0311 09:21:17.973746 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="f1c7d7f4-dc60-4703-b6c3-6cd626db11af" containerName="container-updater" containerID="cri-o://22f7a187c7f01516652e793a521148fe2ac2ab6befee1d8ed2d686c2bc8b8adc" gracePeriod=30 Mar 11 09:21:17 crc kubenswrapper[4840]: I0311 09:21:17.973769 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="f1c7d7f4-dc60-4703-b6c3-6cd626db11af" containerName="object-auditor" containerID="cri-o://e1983bac8c82d7bd2cb300b8c29f24106251022a1b8393dbcfa2f98ffc26c55b" gracePeriod=30 Mar 11 09:21:17 crc kubenswrapper[4840]: I0311 09:21:17.973878 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="f1c7d7f4-dc60-4703-b6c3-6cd626db11af" containerName="container-auditor" containerID="cri-o://25eb7124241e743ab0f23cf5b6fd16ff25372ee1ef90290032d988d844df4427" gracePeriod=30 Mar 11 09:21:17 crc kubenswrapper[4840]: I0311 09:21:17.973940 4840 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="f1c7d7f4-dc60-4703-b6c3-6cd626db11af" containerName="container-replicator" containerID="cri-o://9b3746cc36c2137ab5787473b5942125bb1e7796da2df7d082d9e9313eaffba4" gracePeriod=30
Mar 11 09:21:17 crc kubenswrapper[4840]: I0311 09:21:17.973998 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="f1c7d7f4-dc60-4703-b6c3-6cd626db11af" containerName="container-server" containerID="cri-o://7ce263946b4448b3a0388e295bd5ae37d6a5148f4b13708343302829cff90b62" gracePeriod=30
Mar 11 09:21:17 crc kubenswrapper[4840]: I0311 09:21:17.974064 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="f1c7d7f4-dc60-4703-b6c3-6cd626db11af" containerName="account-reaper" containerID="cri-o://d57ba4418c21d87822207480d1742dfe663f600ae09365578d4cc8cf912a7fa3" gracePeriod=30
Mar 11 09:21:17 crc kubenswrapper[4840]: I0311 09:21:17.974117 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="f1c7d7f4-dc60-4703-b6c3-6cd626db11af" containerName="account-auditor" containerID="cri-o://804f1935b7ae1994c956dc341f3afb51d2d0d59c2f35239a216e61bc6f1d79d2" gracePeriod=30
Mar 11 09:21:17 crc kubenswrapper[4840]: I0311 09:21:17.974166 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="f1c7d7f4-dc60-4703-b6c3-6cd626db11af" containerName="account-replicator" containerID="cri-o://7c83bfc113dd757cc74c4935f44d862430d391d2b34e5c7877464152ad5f1bdd" gracePeriod=30
Mar 11 09:21:17 crc kubenswrapper[4840]: I0311 09:21:17.973612 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="f1c7d7f4-dc60-4703-b6c3-6cd626db11af" containerName="object-expirer" containerID="cri-o://9feba492b523c8dcc3c35b13e0ce09e4fe030b4986a39633f2601ebb6e23baa2" gracePeriod=30
Mar 11 09:21:17 crc kubenswrapper[4840]: I0311 09:21:17.974362 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="f1c7d7f4-dc60-4703-b6c3-6cd626db11af" containerName="object-updater" containerID="cri-o://dd4d20030dddae38ba6950210cd3d087c5fd1596090ecd0351b94eeaa971448c" gracePeriod=30
Mar 11 09:21:17 crc kubenswrapper[4840]: I0311 09:21:17.974379 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="f1c7d7f4-dc60-4703-b6c3-6cd626db11af" containerName="swift-recon-cron" containerID="cri-o://d1ae76c881c421b02784a6adef0131cd437bfae62ec031f6665865845df2bf74" gracePeriod=30
Mar 11 09:21:17 crc kubenswrapper[4840]: I0311 09:21:17.974394 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="f1c7d7f4-dc60-4703-b6c3-6cd626db11af" containerName="rsync" containerID="cri-o://905763d5f4df2194269413da5305c2cade78aefa23c40527af321dc5de2fb39b" gracePeriod=30
Mar 11 09:21:17 crc kubenswrapper[4840]: I0311 09:21:17.974416 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="f1c7d7f4-dc60-4703-b6c3-6cd626db11af" containerName="object-replicator" containerID="cri-o://51312dd1b22898c3fcfef3be786a86dd0bc450bace061ed340d18f1903ce9f72" gracePeriod=30
Mar 11 09:21:17 crc kubenswrapper[4840]: I0311 09:21:17.975407 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="f1c7d7f4-dc60-4703-b6c3-6cd626db11af" containerName="object-server" containerID="cri-o://322dd29ccdd62a25f4af6457497d29a55dbd6bb69515090d89cfac3caafffe97" gracePeriod=30
Mar 11 09:21:17 crc kubenswrapper[4840]: I0311 09:21:17.992180 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-p8m6d"]
Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.010837 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b48aab55-15cb-42c4-a97b-692dbadf3353-ovsdbserver-sb-tls-certs\") pod \"b48aab55-15cb-42c4-a97b-692dbadf3353\" (UID: \"b48aab55-15cb-42c4-a97b-692dbadf3353\") "
Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.011730 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-sb-etc-ovn\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"b48aab55-15cb-42c4-a97b-692dbadf3353\" (UID: \"b48aab55-15cb-42c4-a97b-692dbadf3353\") "
Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.011870 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/794eb074-baf3-46dc-8f9d-8a92fc9240fd-combined-ca-bundle\") pod \"794eb074-baf3-46dc-8f9d-8a92fc9240fd\" (UID: \"794eb074-baf3-46dc-8f9d-8a92fc9240fd\") "
Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.012011 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f9kq9\" (UniqueName: \"kubernetes.io/projected/b48aab55-15cb-42c4-a97b-692dbadf3353-kube-api-access-f9kq9\") pod \"b48aab55-15cb-42c4-a97b-692dbadf3353\" (UID: \"b48aab55-15cb-42c4-a97b-692dbadf3353\") "
Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.012143 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b48aab55-15cb-42c4-a97b-692dbadf3353-metrics-certs-tls-certs\") pod \"b48aab55-15cb-42c4-a97b-692dbadf3353\" (UID: \"b48aab55-15cb-42c4-a97b-692dbadf3353\") "
Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.013425 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7v6h4\" (UniqueName: \"kubernetes.io/projected/794eb074-baf3-46dc-8f9d-8a92fc9240fd-kube-api-access-7v6h4\") pod \"794eb074-baf3-46dc-8f9d-8a92fc9240fd\" (UID: \"794eb074-baf3-46dc-8f9d-8a92fc9240fd\") "
Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.013909 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b48aab55-15cb-42c4-a97b-692dbadf3353-combined-ca-bundle\") pod \"b48aab55-15cb-42c4-a97b-692dbadf3353\" (UID: \"b48aab55-15cb-42c4-a97b-692dbadf3353\") "
Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.014120 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b48aab55-15cb-42c4-a97b-692dbadf3353-scripts\") pod \"b48aab55-15cb-42c4-a97b-692dbadf3353\" (UID: \"b48aab55-15cb-42c4-a97b-692dbadf3353\") "
Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.014861 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b48aab55-15cb-42c4-a97b-692dbadf3353-config\") pod \"b48aab55-15cb-42c4-a97b-692dbadf3353\" (UID: \"b48aab55-15cb-42c4-a97b-692dbadf3353\") "
Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.014986 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/794eb074-baf3-46dc-8f9d-8a92fc9240fd-openstack-config\") pod \"794eb074-baf3-46dc-8f9d-8a92fc9240fd\" (UID: \"794eb074-baf3-46dc-8f9d-8a92fc9240fd\") "
Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.015159 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/794eb074-baf3-46dc-8f9d-8a92fc9240fd-openstack-config-secret\") pod \"794eb074-baf3-46dc-8f9d-8a92fc9240fd\" (UID: \"794eb074-baf3-46dc-8f9d-8a92fc9240fd\") "
Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.015368 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b48aab55-15cb-42c4-a97b-692dbadf3353-ovsdb-rundir\") pod \"b48aab55-15cb-42c4-a97b-692dbadf3353\" (UID: \"b48aab55-15cb-42c4-a97b-692dbadf3353\") "
Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.022309 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b48aab55-15cb-42c4-a97b-692dbadf3353-scripts" (OuterVolumeSpecName: "scripts") pod "b48aab55-15cb-42c4-a97b-692dbadf3353" (UID: "b48aab55-15cb-42c4-a97b-692dbadf3353"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 11 09:21:18 crc kubenswrapper[4840]: E0311 09:21:18.023720 4840 configmap.go:193] Couldn't get configMap openstack/openstack-cell1-scripts: configmap "openstack-cell1-scripts" not found
Mar 11 09:21:18 crc kubenswrapper[4840]: E0311 09:21:18.023887 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/aa334d0b-a179-4905-a660-05bbc12e5c02-operator-scripts podName:aa334d0b-a179-4905-a660-05bbc12e5c02 nodeName:}" failed. No retries permitted until 2026-03-11 09:21:18.523845358 +0000 UTC m=+1477.189515173 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/aa334d0b-a179-4905-a660-05bbc12e5c02-operator-scripts") pod "root-account-create-update-q29dz" (UID: "aa334d0b-a179-4905-a660-05bbc12e5c02") : configmap "openstack-cell1-scripts" not found
Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.023745 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b48aab55-15cb-42c4-a97b-692dbadf3353-config" (OuterVolumeSpecName: "config") pod "b48aab55-15cb-42c4-a97b-692dbadf3353" (UID: "b48aab55-15cb-42c4-a97b-692dbadf3353"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.024524 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b48aab55-15cb-42c4-a97b-692dbadf3353-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "b48aab55-15cb-42c4-a97b-692dbadf3353" (UID: "b48aab55-15cb-42c4-a97b-692dbadf3353"). InnerVolumeSpecName "ovsdb-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.031183 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-c443-account-create-update-r8t8k"]
Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.032601 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "ovndbcluster-sb-etc-ovn") pod "b48aab55-15cb-42c4-a97b-692dbadf3353" (UID: "b48aab55-15cb-42c4-a97b-692dbadf3353"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.041892 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/794eb074-baf3-46dc-8f9d-8a92fc9240fd-kube-api-access-7v6h4" (OuterVolumeSpecName: "kube-api-access-7v6h4") pod "794eb074-baf3-46dc-8f9d-8a92fc9240fd" (UID: "794eb074-baf3-46dc-8f9d-8a92fc9240fd"). InnerVolumeSpecName "kube-api-access-7v6h4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.048184 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b48aab55-15cb-42c4-a97b-692dbadf3353-kube-api-access-f9kq9" (OuterVolumeSpecName: "kube-api-access-f9kq9") pod "b48aab55-15cb-42c4-a97b-692dbadf3353" (UID: "b48aab55-15cb-42c4-a97b-692dbadf3353"). InnerVolumeSpecName "kube-api-access-f9kq9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.095611 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d383441-a8df-429d-9b04-65b01f06c46e" path="/var/lib/kubelet/pods/0d383441-a8df-429d-9b04-65b01f06c46e/volumes"
Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.096404 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="233b9a6c-cafa-4ec8-aa40-6ef1b0dfc535" path="/var/lib/kubelet/pods/233b9a6c-cafa-4ec8-aa40-6ef1b0dfc535/volumes"
Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.097273 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51c6e6d6-9fb9-4042-a8c7-d14ad4067afb" path="/var/lib/kubelet/pods/51c6e6d6-9fb9-4042-a8c7-d14ad4067afb/volumes"
Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.099238 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="573b362e-582b-43f0-afae-c038cf95f625" path="/var/lib/kubelet/pods/573b362e-582b-43f0-afae-c038cf95f625/volumes"
Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.100389 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a3982dc-cf14-417d-8fac-b8e811843798" path="/var/lib/kubelet/pods/5a3982dc-cf14-417d-8fac-b8e811843798/volumes"
Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.101553 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68dbceea-1597-4236-b2d6-ca071bf86d5a" path="/var/lib/kubelet/pods/68dbceea-1597-4236-b2d6-ca071bf86d5a/volumes"
Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.103027 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8141eccc-ba41-4cfc-ba72-ffeae8858902" path="/var/lib/kubelet/pods/8141eccc-ba41-4cfc-ba72-ffeae8858902/volumes"
Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.103763 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85f1dc85-62df-44f4-a549-390aeb2709be" path="/var/lib/kubelet/pods/85f1dc85-62df-44f4-a549-390aeb2709be/volumes"
Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.111361 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="adf5efde-cb19-4870-9c8e-e7e139523238" path="/var/lib/kubelet/pods/adf5efde-cb19-4870-9c8e-e7e139523238/volumes"
Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.118942 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c155188a-ed86-457a-88cb-42db8a510bf7" path="/var/lib/kubelet/pods/c155188a-ed86-457a-88cb-42db8a510bf7/volumes"
Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.120692 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df6db5cc-aad1-4c53-a726-61432206dd4c" path="/var/lib/kubelet/pods/df6db5cc-aad1-4c53-a726-61432206dd4c/volumes"
Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.125162 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/794eb074-baf3-46dc-8f9d-8a92fc9240fd-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "794eb074-baf3-46dc-8f9d-8a92fc9240fd" (UID: "794eb074-baf3-46dc-8f9d-8a92fc9240fd"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.125695 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/794eb074-baf3-46dc-8f9d-8a92fc9240fd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "794eb074-baf3-46dc-8f9d-8a92fc9240fd" (UID: "794eb074-baf3-46dc-8f9d-8a92fc9240fd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.127799 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb9a61ad-a8fb-4968-b32a-ff5756add27b" path="/var/lib/kubelet/pods/eb9a61ad-a8fb-4968-b32a-ff5756add27b/volumes"
Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.144735 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/794eb074-baf3-46dc-8f9d-8a92fc9240fd-combined-ca-bundle\") pod \"794eb074-baf3-46dc-8f9d-8a92fc9240fd\" (UID: \"794eb074-baf3-46dc-8f9d-8a92fc9240fd\") "
Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.149755 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b48aab55-15cb-42c4-a97b-692dbadf3353-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b48aab55-15cb-42c4-a97b-692dbadf3353" (UID: "b48aab55-15cb-42c4-a97b-692dbadf3353"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.150872 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd038b3a-9412-48d6-8aa9-f455897bcfb2" path="/var/lib/kubelet/pods/fd038b3a-9412-48d6-8aa9-f455897bcfb2/volumes"
Mar 11 09:21:18 crc kubenswrapper[4840]: W0311 09:21:18.152811 4840 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/794eb074-baf3-46dc-8f9d-8a92fc9240fd/volumes/kubernetes.io~secret/combined-ca-bundle
Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.152845 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/794eb074-baf3-46dc-8f9d-8a92fc9240fd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "794eb074-baf3-46dc-8f9d-8a92fc9240fd" (UID: "794eb074-baf3-46dc-8f9d-8a92fc9240fd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.157380 4840 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b48aab55-15cb-42c4-a97b-692dbadf3353-scripts\") on node \"crc\" DevicePath \"\""
Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.157421 4840 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b48aab55-15cb-42c4-a97b-692dbadf3353-config\") on node \"crc\" DevicePath \"\""
Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.157438 4840 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/794eb074-baf3-46dc-8f9d-8a92fc9240fd-openstack-config\") on node \"crc\" DevicePath \"\""
Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.157452 4840 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b48aab55-15cb-42c4-a97b-692dbadf3353-ovsdb-rundir\") on node \"crc\" DevicePath \"\""
Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.157517 4840 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" "
Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.157533 4840 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/794eb074-baf3-46dc-8f9d-8a92fc9240fd-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.157550 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f9kq9\" (UniqueName: \"kubernetes.io/projected/b48aab55-15cb-42c4-a97b-692dbadf3353-kube-api-access-f9kq9\") on node \"crc\" DevicePath \"\""
Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.157569 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7v6h4\" (UniqueName: \"kubernetes.io/projected/794eb074-baf3-46dc-8f9d-8a92fc9240fd-kube-api-access-7v6h4\") on node \"crc\" DevicePath \"\""
Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.183036 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-c443-account-create-update-r8t8k"]
Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.183355 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-cc8xs"]
Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.183481 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-j24h9"]
Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.183601 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-j24h9"]
Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.183735 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-q29dz"]
Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.185928 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/794eb074-baf3-46dc-8f9d-8a92fc9240fd-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "794eb074-baf3-46dc-8f9d-8a92fc9240fd" (UID: "794eb074-baf3-46dc-8f9d-8a92fc9240fd"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.202560 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.203006 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="d3ccfc13-7a62-4923-95ab-c68cb93aa03c" containerName="glance-log" containerID="cri-o://fa51fa6846a6391fd4d41433bac4cdd3a55817184d7eb6184af7110475d61e48" gracePeriod=30
Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.203383 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="d3ccfc13-7a62-4923-95ab-c68cb93aa03c" containerName="glance-httpd" containerID="cri-o://1443dcc20b53c34d6b5983e69576991044f6bc08da4320cceeb036e8ad539edf" gracePeriod=30
Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.228177 4840 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc"
Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.236527 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-8daa-account-create-update-rhmhk"]
Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.259285 4840 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\""
Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.259315 4840 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b48aab55-15cb-42c4-a97b-692dbadf3353-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.259326 4840 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/794eb074-baf3-46dc-8f9d-8a92fc9240fd-openstack-config-secret\") on node \"crc\" DevicePath \"\""
Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.268128 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b48aab55-15cb-42c4-a97b-692dbadf3353-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "b48aab55-15cb-42c4-a97b-692dbadf3353" (UID: "b48aab55-15cb-42c4-a97b-692dbadf3353"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.268426 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-qtcdv" podUID="b58b63e6-0eb4-444e-be2e-dca6bf37030e" containerName="ovs-vswitchd" containerID="cri-o://dbb5712b558407a1f6f6d13982b8b5ac8e7a4f375b85b99c3ab24d6067d8124d" gracePeriod=30
Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.300625 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-px5zj"]
Mar 11 09:21:18 crc kubenswrapper[4840]: E0311 09:21:18.306039 4840 handlers.go:78] "Exec lifecycle hook for Container in Pod failed" err=<
Mar 11 09:21:18 crc kubenswrapper[4840]: 	command '/usr/local/bin/container-scripts/stop-ovsdb-server.sh' exited with 137: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh
Mar 11 09:21:18 crc kubenswrapper[4840]: 	+ source /usr/local/bin/container-scripts/functions
Mar 11 09:21:18 crc kubenswrapper[4840]: 	++ OVNBridge=br-int
Mar 11 09:21:18 crc kubenswrapper[4840]: 	++ OVNRemote=tcp:localhost:6642
Mar 11 09:21:18 crc kubenswrapper[4840]: 	++ OVNEncapType=geneve
Mar 11 09:21:18 crc kubenswrapper[4840]: 	++ OVNAvailabilityZones=
Mar 11 09:21:18 crc kubenswrapper[4840]: 	++ EnableChassisAsGateway=true
Mar 11 09:21:18 crc kubenswrapper[4840]: 	++ PhysicalNetworks=
Mar 11 09:21:18 crc kubenswrapper[4840]: 	++ OVNHostName=
Mar 11 09:21:18 crc kubenswrapper[4840]: 	++ DB_FILE=/etc/openvswitch/conf.db
Mar 11 09:21:18 crc kubenswrapper[4840]: 	++ ovs_dir=/var/lib/openvswitch
Mar 11 09:21:18 crc kubenswrapper[4840]: 	++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script
Mar 11 09:21:18 crc kubenswrapper[4840]: 	++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows
Mar 11 09:21:18 crc kubenswrapper[4840]: 	++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server
Mar 11 09:21:18 crc kubenswrapper[4840]: 	+ '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']'
Mar 11 09:21:18 crc kubenswrapper[4840]: 	+ sleep 0.5
Mar 11 09:21:18 crc kubenswrapper[4840]: 	+ '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']'
Mar 11 09:21:18 crc kubenswrapper[4840]: 	+ cleanup_ovsdb_server_semaphore
Mar 11 09:21:18 crc kubenswrapper[4840]: 	+ rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server
Mar 11 09:21:18 crc kubenswrapper[4840]: 	+ /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd
Mar 11 09:21:18 crc kubenswrapper[4840]:  > execCommand=["/usr/local/bin/container-scripts/stop-ovsdb-server.sh"] containerName="ovsdb-server" pod="openstack/ovn-controller-ovs-qtcdv" message=<
Mar 11 09:21:18 crc kubenswrapper[4840]: 	Exiting ovsdb-server (5) [ OK ]
Mar 11 09:21:18 crc kubenswrapper[4840]: 	++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh
Mar 11 09:21:18 crc kubenswrapper[4840]: 	+ source /usr/local/bin/container-scripts/functions
Mar 11 09:21:18 crc kubenswrapper[4840]: 	++ OVNBridge=br-int
Mar 11 09:21:18 crc kubenswrapper[4840]: 	++ OVNRemote=tcp:localhost:6642
Mar 11 09:21:18 crc kubenswrapper[4840]: 	++ OVNEncapType=geneve
Mar 11 09:21:18 crc kubenswrapper[4840]: 	++ OVNAvailabilityZones=
Mar 11 09:21:18 crc kubenswrapper[4840]: 	++ EnableChassisAsGateway=true
Mar 11 09:21:18 crc kubenswrapper[4840]: 	++ PhysicalNetworks=
Mar 11 09:21:18 crc kubenswrapper[4840]: 	++ OVNHostName=
Mar 11 09:21:18 crc kubenswrapper[4840]: 	++ DB_FILE=/etc/openvswitch/conf.db
Mar 11 09:21:18 crc kubenswrapper[4840]: 	++ ovs_dir=/var/lib/openvswitch
Mar 11 09:21:18 crc kubenswrapper[4840]: 	++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script
Mar 11 09:21:18 crc kubenswrapper[4840]: 	++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows
Mar 11 09:21:18 crc kubenswrapper[4840]: 	++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server
Mar 11 09:21:18 crc kubenswrapper[4840]: 	+ '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']'
Mar 11 09:21:18 crc kubenswrapper[4840]: 	+ sleep 0.5
Mar 11 09:21:18 crc kubenswrapper[4840]: 	+ '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']'
Mar 11 09:21:18 crc kubenswrapper[4840]: 	+ cleanup_ovsdb_server_semaphore
Mar 11 09:21:18 crc kubenswrapper[4840]: 	+ rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server
Mar 11 09:21:18 crc kubenswrapper[4840]: 	+ /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd
Mar 11 09:21:18 crc kubenswrapper[4840]:  >
Mar 11 09:21:18 crc kubenswrapper[4840]: E0311 09:21:18.306094 4840 kuberuntime_container.go:691] "PreStop hook failed" err=<
Mar 11 09:21:18 crc kubenswrapper[4840]: 	command '/usr/local/bin/container-scripts/stop-ovsdb-server.sh' exited with 137: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh
Mar 11 09:21:18 crc kubenswrapper[4840]: 	+ source /usr/local/bin/container-scripts/functions
Mar 11 09:21:18 crc kubenswrapper[4840]: 	++ OVNBridge=br-int
Mar 11 09:21:18 crc kubenswrapper[4840]: 	++ OVNRemote=tcp:localhost:6642
Mar 11 09:21:18 crc kubenswrapper[4840]: 	++ OVNEncapType=geneve
Mar 11 09:21:18 crc kubenswrapper[4840]: 	++ OVNAvailabilityZones=
Mar 11 09:21:18 crc kubenswrapper[4840]: 	++ EnableChassisAsGateway=true
Mar 11 09:21:18 crc kubenswrapper[4840]: 	++ PhysicalNetworks=
Mar 11 09:21:18 crc kubenswrapper[4840]: 	++ OVNHostName=
Mar 11 09:21:18 crc kubenswrapper[4840]: 	++ DB_FILE=/etc/openvswitch/conf.db
Mar 11 09:21:18 crc kubenswrapper[4840]: 	++ ovs_dir=/var/lib/openvswitch
Mar 11 09:21:18 crc kubenswrapper[4840]: 	++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script
Mar 11 09:21:18 crc kubenswrapper[4840]: 	++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows
Mar 11 09:21:18 crc kubenswrapper[4840]: 	++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server
Mar 11 09:21:18 crc kubenswrapper[4840]: 	+ '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']'
Mar 11 09:21:18 crc kubenswrapper[4840]: 	+ sleep 0.5
Mar 11 09:21:18 crc kubenswrapper[4840]: 	+ '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']'
Mar 11 09:21:18 crc kubenswrapper[4840]: 	+ cleanup_ovsdb_server_semaphore
Mar 11 09:21:18 crc kubenswrapper[4840]: 	+ rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server
Mar 11 09:21:18 crc kubenswrapper[4840]: 	+ /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd
Mar 11 09:21:18 crc kubenswrapper[4840]:  > pod="openstack/ovn-controller-ovs-qtcdv" podUID="b58b63e6-0eb4-444e-be2e-dca6bf37030e" containerName="ovsdb-server" containerID="cri-o://ab8019a8eaec3a085cd72641d3e46b29afc065b94fc74ebd4f5ad09f0611dd09"
Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.306137 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-qtcdv" podUID="b58b63e6-0eb4-444e-be2e-dca6bf37030e" containerName="ovsdb-server" containerID="cri-o://ab8019a8eaec3a085cd72641d3e46b29afc065b94fc74ebd4f5ad09f0611dd09" gracePeriod=30
Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.349417 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_0d076df9-9280-425c-9b61-bf84751f11c1/ovsdbserver-nb/0.log"
Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.349559 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.361185 4840 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b48aab55-15cb-42c4-a97b-692dbadf3353-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\""
Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.383956 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b48aab55-15cb-42c4-a97b-692dbadf3353-ovsdbserver-sb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-sb-tls-certs") pod "b48aab55-15cb-42c4-a97b-692dbadf3353" (UID: "b48aab55-15cb-42c4-a97b-692dbadf3353"). InnerVolumeSpecName "ovsdbserver-sb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.391005 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-8daa-account-create-update-rhmhk"]
Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.419578 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-px5zj"]
Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.432583 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.432869 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="71aaf352-8b91-4846-8ce4-1d83303ac203" containerName="glance-log" containerID="cri-o://8c20b7cb7d072af1e8fc8505f85ae53009e95d00f58e75523c006bc2e4ffcfc3" gracePeriod=30
Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.433656 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="71aaf352-8b91-4846-8ce4-1d83303ac203" containerName="glance-httpd" containerID="cri-o://7d5a479df92438b43deb38719eb65c2cb14faa128400d6d01ccbd757dae47f94" gracePeriod=30
Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.462227 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7r7w9\" (UniqueName: \"kubernetes.io/projected/0d076df9-9280-425c-9b61-bf84751f11c1-kube-api-access-7r7w9\") pod \"0d076df9-9280-425c-9b61-bf84751f11c1\" (UID: \"0d076df9-9280-425c-9b61-bf84751f11c1\") "
Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.462274 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d076df9-9280-425c-9b61-bf84751f11c1-ovsdbserver-nb-tls-certs\") pod \"0d076df9-9280-425c-9b61-bf84751f11c1\" (UID: \"0d076df9-9280-425c-9b61-bf84751f11c1\") "
Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.462357 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d076df9-9280-425c-9b61-bf84751f11c1-combined-ca-bundle\") pod \"0d076df9-9280-425c-9b61-bf84751f11c1\" (UID: \"0d076df9-9280-425c-9b61-bf84751f11c1\") "
Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.462673 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d076df9-9280-425c-9b61-bf84751f11c1-config\") pod \"0d076df9-9280-425c-9b61-bf84751f11c1\" (UID: \"0d076df9-9280-425c-9b61-bf84751f11c1\") "
Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.462716 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0d076df9-9280-425c-9b61-bf84751f11c1-scripts\") pod \"0d076df9-9280-425c-9b61-bf84751f11c1\" (UID: \"0d076df9-9280-425c-9b61-bf84751f11c1\") "
Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.462775 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d076df9-9280-425c-9b61-bf84751f11c1-metrics-certs-tls-certs\") pod \"0d076df9-9280-425c-9b61-bf84751f11c1\" (UID: \"0d076df9-9280-425c-9b61-bf84751f11c1\") "
Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.462797 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-nb-etc-ovn\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"0d076df9-9280-425c-9b61-bf84751f11c1\" (UID: \"0d076df9-9280-425c-9b61-bf84751f11c1\") "
Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.462854 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0d076df9-9280-425c-9b61-bf84751f11c1-ovsdb-rundir\") pod \"0d076df9-9280-425c-9b61-bf84751f11c1\" (UID: \"0d076df9-9280-425c-9b61-bf84751f11c1\") "
Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.463430 4840 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b48aab55-15cb-42c4-a97b-692dbadf3353-ovsdbserver-sb-tls-certs\") on node \"crc\" DevicePath \"\""
Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.468190 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d076df9-9280-425c-9b61-bf84751f11c1-scripts" (OuterVolumeSpecName: "scripts") pod "0d076df9-9280-425c-9b61-bf84751f11c1" (UID: "0d076df9-9280-425c-9b61-bf84751f11c1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.469548 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d076df9-9280-425c-9b61-bf84751f11c1-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "0d076df9-9280-425c-9b61-bf84751f11c1" (UID: "0d076df9-9280-425c-9b61-bf84751f11c1"). InnerVolumeSpecName "ovsdb-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.472056 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d076df9-9280-425c-9b61-bf84751f11c1-config" (OuterVolumeSpecName: "config") pod "0d076df9-9280-425c-9b61-bf84751f11c1" (UID: "0d076df9-9280-425c-9b61-bf84751f11c1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.481551 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-9c9b9"]
Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.484367 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d076df9-9280-425c-9b61-bf84751f11c1-kube-api-access-7r7w9" (OuterVolumeSpecName: "kube-api-access-7r7w9") pod "0d076df9-9280-425c-9b61-bf84751f11c1" (UID: "0d076df9-9280-425c-9b61-bf84751f11c1"). InnerVolumeSpecName "kube-api-access-7r7w9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.494575 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-9c9b9"]
Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.499329 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "ovndbcluster-nb-etc-ovn") pod "0d076df9-9280-425c-9b61-bf84751f11c1" (UID: "0d076df9-9280-425c-9b61-bf84751f11c1"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.505807 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.555558 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.564756 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d076df9-9280-425c-9b61-bf84751f11c1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0d076df9-9280-425c-9b61-bf84751f11c1" (UID: "0d076df9-9280-425c-9b61-bf84751f11c1"). InnerVolumeSpecName "combined-ca-bundle".
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.604847 4840 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d076df9-9280-425c-9b61-bf84751f11c1-config\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:18 crc kubenswrapper[4840]: E0311 09:21:18.604940 4840 configmap.go:193] Couldn't get configMap openstack/openstack-cell1-scripts: configmap "openstack-cell1-scripts" not found Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.604958 4840 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0d076df9-9280-425c-9b61-bf84751f11c1-scripts\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.604986 4840 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Mar 11 09:21:18 crc kubenswrapper[4840]: E0311 09:21:18.606639 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/aa334d0b-a179-4905-a660-05bbc12e5c02-operator-scripts podName:aa334d0b-a179-4905-a660-05bbc12e5c02 nodeName:}" failed. No retries permitted until 2026-03-11 09:21:19.606605347 +0000 UTC m=+1478.272275162 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/aa334d0b-a179-4905-a660-05bbc12e5c02-operator-scripts") pod "root-account-create-update-q29dz" (UID: "aa334d0b-a179-4905-a660-05bbc12e5c02") : configmap "openstack-cell1-scripts" not found Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.607156 4840 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0d076df9-9280-425c-9b61-bf84751f11c1-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.607177 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7r7w9\" (UniqueName: \"kubernetes.io/projected/0d076df9-9280-425c-9b61-bf84751f11c1-kube-api-access-7r7w9\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.607188 4840 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d076df9-9280-425c-9b61-bf84751f11c1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.623240 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-bc41-account-create-update-kjhwj"] Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.633186 4840 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.636735 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="3a964c24-3c53-4a29-98fb-ceaca467c372" containerName="rabbitmq" containerID="cri-o://a744999686b40927e99b68155086c1bd24cdc20d1ac6a31619c39f4421dcb680" gracePeriod=604800 Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.678824 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/nova-metadata-0"] Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.679174 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="915955ff-c1d8-4f99-a621-f28d463c512f" containerName="nova-metadata-log" containerID="cri-o://a0aea92352d16543560402efd876df16008f6e3e1715ebf0bc7c85603335be96" gracePeriod=30 Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.679742 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="915955ff-c1d8-4f99-a621-f28d463c512f" containerName="nova-metadata-metadata" containerID="cri-o://904fce62c27e75c0afd1b04ee9c4e1dd4f36346fdb9943a482344076f39797f2" gracePeriod=30 Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.692976 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d076df9-9280-425c-9b61-bf84751f11c1-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "0d076df9-9280-425c-9b61-bf84751f11c1" (UID: "0d076df9-9280-425c-9b61-bf84751f11c1"). InnerVolumeSpecName "metrics-certs-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.708845 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2657p\" (UniqueName: \"kubernetes.io/projected/13c22be3-bd7d-45e0-8948-4574e00507c0-kube-api-access-2657p\") pod \"nova-cell1-298f-account-create-update-hn765\" (UID: \"13c22be3-bd7d-45e0-8948-4574e00507c0\") " pod="openstack/nova-cell1-298f-account-create-update-hn765" Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.709018 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/13c22be3-bd7d-45e0-8948-4574e00507c0-operator-scripts\") pod \"nova-cell1-298f-account-create-update-hn765\" (UID: \"13c22be3-bd7d-45e0-8948-4574e00507c0\") " pod="openstack/nova-cell1-298f-account-create-update-hn765" Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.709092 4840 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d076df9-9280-425c-9b61-bf84751f11c1-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.709107 4840 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:18 crc kubenswrapper[4840]: E0311 09:21:18.709182 4840 configmap.go:193] Couldn't get configMap openstack/openstack-cell1-scripts: configmap "openstack-cell1-scripts" not found Mar 11 09:21:18 crc kubenswrapper[4840]: E0311 09:21:18.709231 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13c22be3-bd7d-45e0-8948-4574e00507c0-operator-scripts podName:13c22be3-bd7d-45e0-8948-4574e00507c0 nodeName:}" failed. 
No retries permitted until 2026-03-11 09:21:20.7092169 +0000 UTC m=+1479.374886715 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/13c22be3-bd7d-45e0-8948-4574e00507c0-operator-scripts") pod "nova-cell1-298f-account-create-update-hn765" (UID: "13c22be3-bd7d-45e0-8948-4574e00507c0") : configmap "openstack-cell1-scripts" not found Mar 11 09:21:18 crc kubenswrapper[4840]: E0311 09:21:18.714926 4840 projected.go:194] Error preparing data for projected volume kube-api-access-2657p for pod openstack/nova-cell1-298f-account-create-update-hn765: failed to fetch token: serviceaccounts "galera-openstack-cell1" not found Mar 11 09:21:18 crc kubenswrapper[4840]: E0311 09:21:18.715154 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/13c22be3-bd7d-45e0-8948-4574e00507c0-kube-api-access-2657p podName:13c22be3-bd7d-45e0-8948-4574e00507c0 nodeName:}" failed. No retries permitted until 2026-03-11 09:21:20.715130909 +0000 UTC m=+1479.380800724 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-2657p" (UniqueName: "kubernetes.io/projected/13c22be3-bd7d-45e0-8948-4574e00507c0-kube-api-access-2657p") pod "nova-cell1-298f-account-create-update-hn765" (UID: "13c22be3-bd7d-45e0-8948-4574e00507c0") : failed to fetch token: serviceaccounts "galera-openstack-cell1" not found Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.718256 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-tv2rh"] Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.747794 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-tv2rh"] Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.763937 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-nhlzp"] Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.764005 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-3c72-account-create-update-2dtm6"] Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.766682 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-6x72v_f0b554eb-06b2-4670-99df-9b4fcfc6a42f/openstack-network-exporter/0.log" Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.766751 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-6x72v" Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.771703 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-nhlzp"] Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.779783 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-db9f7b9c-cdnm7"] Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.780155 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-db9f7b9c-cdnm7" podUID="0ab14b76-8d70-44f2-b986-b3d600c73b60" containerName="barbican-worker-log" containerID="cri-o://0a86a4d07fba011f02807e2ed6cc060d80c18aeab359b46ed4a060d9bc9539eb" gracePeriod=30 Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.780263 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-db9f7b9c-cdnm7" podUID="0ab14b76-8d70-44f2-b986-b3d600c73b60" containerName="barbican-worker" containerID="cri-o://403197cd6e2f799a8e5d55b29a90f95a1d072287400393d0ce075725778b4fe2" gracePeriod=30 Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.789229 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7749c44969-mwk5x" Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.798606 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-84c78b97c8-frfs9"] Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.799487 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-84c78b97c8-frfs9" podUID="d7acca3e-61e4-495b-adbf-36c435b4a7d2" containerName="barbican-keystone-listener-log" containerID="cri-o://33e49e14fd2afa823c560d68be5b5e4ad273e3e82868e5aba42b52ba56655a7e" gracePeriod=30 Mar 11 09:21:18 crc kubenswrapper[4840]: E0311 09:21:18.799843 4840 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 11 09:21:18 crc kubenswrapper[4840]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:763d1f1e8a1cf877c151c59609960fd2fa29e7e50001f8818122a2d51878befa,Command:[/bin/sh -c #!/bin/bash Mar 11 09:21:18 crc kubenswrapper[4840]: Mar 11 09:21:18 crc kubenswrapper[4840]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Mar 11 09:21:18 crc kubenswrapper[4840]: Mar 11 09:21:18 crc kubenswrapper[4840]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Mar 11 09:21:18 crc kubenswrapper[4840]: Mar 11 09:21:18 crc kubenswrapper[4840]: MYSQL_CMD="mysql -h -u root -P 3306" Mar 11 09:21:18 crc kubenswrapper[4840]: Mar 11 09:21:18 crc kubenswrapper[4840]: if [ -n "nova_api" ]; then Mar 11 09:21:18 crc kubenswrapper[4840]: GRANT_DATABASE="nova_api" Mar 11 09:21:18 crc kubenswrapper[4840]: else Mar 11 09:21:18 crc kubenswrapper[4840]: GRANT_DATABASE="*" Mar 11 09:21:18 crc kubenswrapper[4840]: fi Mar 11 09:21:18 crc kubenswrapper[4840]: Mar 11 09:21:18 crc kubenswrapper[4840]: # going for maximum compatibility here: Mar 11 09:21:18 crc kubenswrapper[4840]: # 1. 
MySQL 8 no longer allows implicit create user when GRANT is used Mar 11 09:21:18 crc kubenswrapper[4840]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Mar 11 09:21:18 crc kubenswrapper[4840]: # 3. create user with CREATE but then do all password and TLS with ALTER to Mar 11 09:21:18 crc kubenswrapper[4840]: # support updates Mar 11 09:21:18 crc kubenswrapper[4840]: Mar 11 09:21:18 crc kubenswrapper[4840]: $MYSQL_CMD < logger="UnhandledError" Mar 11 09:21:18 crc kubenswrapper[4840]: E0311 09:21:18.801106 4840 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 11 09:21:18 crc kubenswrapper[4840]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:763d1f1e8a1cf877c151c59609960fd2fa29e7e50001f8818122a2d51878befa,Command:[/bin/sh -c #!/bin/bash Mar 11 09:21:18 crc kubenswrapper[4840]: Mar 11 09:21:18 crc kubenswrapper[4840]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Mar 11 09:21:18 crc kubenswrapper[4840]: Mar 11 09:21:18 crc kubenswrapper[4840]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Mar 11 09:21:18 crc kubenswrapper[4840]: Mar 11 09:21:18 crc kubenswrapper[4840]: MYSQL_CMD="mysql -h -u root -P 3306" Mar 11 09:21:18 crc kubenswrapper[4840]: Mar 11 09:21:18 crc kubenswrapper[4840]: if [ -n "glance" ]; then Mar 11 09:21:18 crc kubenswrapper[4840]: GRANT_DATABASE="glance" Mar 11 09:21:18 crc kubenswrapper[4840]: else Mar 11 09:21:18 crc kubenswrapper[4840]: GRANT_DATABASE="*" Mar 11 09:21:18 crc kubenswrapper[4840]: fi Mar 11 09:21:18 crc kubenswrapper[4840]: Mar 11 09:21:18 crc kubenswrapper[4840]: # going for maximum compatibility here: Mar 11 09:21:18 crc kubenswrapper[4840]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Mar 11 09:21:18 crc kubenswrapper[4840]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Mar 11 09:21:18 crc kubenswrapper[4840]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Mar 11 09:21:18 crc kubenswrapper[4840]: # support updates Mar 11 09:21:18 crc kubenswrapper[4840]: Mar 11 09:21:18 crc kubenswrapper[4840]: $MYSQL_CMD < logger="UnhandledError" Mar 11 09:21:18 crc kubenswrapper[4840]: E0311 09:21:18.801227 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"nova-api-db-secret\\\" not found\"" pod="openstack/nova-api-28ac-account-create-update-4b8kv" podUID="822a8132-1b8a-4f19-b42b-b6acd65e7743" Mar 11 09:21:18 crc kubenswrapper[4840]: E0311 09:21:18.803082 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"glance-db-secret\\\" not found\"" pod="openstack/glance-3c72-account-create-update-2dtm6" podUID="40f2a324-9d85-4e2e-ac26-0b710dc379b2" Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.802388 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-84c78b97c8-frfs9" podUID="d7acca3e-61e4-495b-adbf-36c435b4a7d2" containerName="barbican-keystone-listener" containerID="cri-o://7ee049965601ca58aa215725536bdf7c36fb49e30446ec4221604a56f4d368d1" gracePeriod=30 Mar 11 09:21:18 crc kubenswrapper[4840]: W0311 09:21:18.813413 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb4873de8_8c37_49b7_a8ba_b352a2cf0320.slice/crio-cc3143ee5795b8b5fc103fc2a51c7b5e869ce56a236d9a328707bb7efcec05f2 WatchSource:0}: Error finding container cc3143ee5795b8b5fc103fc2a51c7b5e869ce56a236d9a328707bb7efcec05f2: Status 404 returned error can't find the container with id cc3143ee5795b8b5fc103fc2a51c7b5e869ce56a236d9a328707bb7efcec05f2 Mar 11 09:21:18 crc kubenswrapper[4840]: E0311 09:21:18.821077 4840 
kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 11 09:21:18 crc kubenswrapper[4840]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:763d1f1e8a1cf877c151c59609960fd2fa29e7e50001f8818122a2d51878befa,Command:[/bin/sh -c #!/bin/bash Mar 11 09:21:18 crc kubenswrapper[4840]: Mar 11 09:21:18 crc kubenswrapper[4840]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Mar 11 09:21:18 crc kubenswrapper[4840]: Mar 11 09:21:18 crc kubenswrapper[4840]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Mar 11 09:21:18 crc kubenswrapper[4840]: Mar 11 09:21:18 crc kubenswrapper[4840]: MYSQL_CMD="mysql -h -u root -P 3306" Mar 11 09:21:18 crc kubenswrapper[4840]: Mar 11 09:21:18 crc kubenswrapper[4840]: if [ -n "nova_cell0" ]; then Mar 11 09:21:18 crc kubenswrapper[4840]: GRANT_DATABASE="nova_cell0" Mar 11 09:21:18 crc kubenswrapper[4840]: else Mar 11 09:21:18 crc kubenswrapper[4840]: GRANT_DATABASE="*" Mar 11 09:21:18 crc kubenswrapper[4840]: fi Mar 11 09:21:18 crc kubenswrapper[4840]: Mar 11 09:21:18 crc kubenswrapper[4840]: # going for maximum compatibility here: Mar 11 09:21:18 crc kubenswrapper[4840]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Mar 11 09:21:18 crc kubenswrapper[4840]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Mar 11 09:21:18 crc kubenswrapper[4840]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Mar 11 09:21:18 crc kubenswrapper[4840]: # support updates Mar 11 09:21:18 crc kubenswrapper[4840]: Mar 11 09:21:18 crc kubenswrapper[4840]: $MYSQL_CMD < logger="UnhandledError" Mar 11 09:21:18 crc kubenswrapper[4840]: E0311 09:21:18.827271 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"nova-cell0-db-secret\\\" not found\"" pod="openstack/nova-cell0-0f23-account-create-update-mbjxd" podUID="b4873de8-8c37-49b7-a8ba-b352a2cf0320" Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.832943 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d076df9-9280-425c-9b61-bf84751f11c1-ovsdbserver-nb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-nb-tls-certs") pod "0d076df9-9280-425c-9b61-bf84751f11c1" (UID: "0d076df9-9280-425c-9b61-bf84751f11c1"). InnerVolumeSpecName "ovsdbserver-nb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.846839 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7dcc97bc9b-l7z2g"] Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.847628 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7dcc97bc9b-l7z2g" podUID="15a50bea-c32e-4aed-8fd2-7289e1694f6e" containerName="barbican-api" containerID="cri-o://6915adfd19dd7500fc5f1e3d97669f01dd9e52bc0540e0cbfa6ee88e4556faeb" gracePeriod=30 Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.847753 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7dcc97bc9b-l7z2g" podUID="15a50bea-c32e-4aed-8fd2-7289e1694f6e" containerName="barbican-api-log" containerID="cri-o://70a8af150a750503de4de737257241b8eae85fdd8f6880b4e54352c0730d7435" gracePeriod=30 Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.881411 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-298f-account-create-update-hn765"] Mar 11 09:21:18 crc kubenswrapper[4840]: E0311 09:21:18.883000 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-2657p operator-scripts], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/nova-cell1-298f-account-create-update-hn765" podUID="13c22be3-bd7d-45e0-8948-4574e00507c0" Mar 11 09:21:18 crc kubenswrapper[4840]: E0311 09:21:18.898728 4840 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0ab14b76_8d70_44f2_b986_b3d600c73b60.slice/crio-0a86a4d07fba011f02807e2ed6cc060d80c18aeab359b46ed4a060d9bc9539eb.scope\": RecentStats: unable to find data in memory cache]" Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.913829 4840 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/f0b554eb-06b2-4670-99df-9b4fcfc6a42f-ovn-rundir\") pod \"f0b554eb-06b2-4670-99df-9b4fcfc6a42f\" (UID: \"f0b554eb-06b2-4670-99df-9b4fcfc6a42f\") " Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.913897 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/56c8323a-4163-45af-8e67-2490198805f2-ovsdbserver-sb\") pod \"56c8323a-4163-45af-8e67-2490198805f2\" (UID: \"56c8323a-4163-45af-8e67-2490198805f2\") " Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.913968 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0b554eb-06b2-4670-99df-9b4fcfc6a42f-config\") pod \"f0b554eb-06b2-4670-99df-9b4fcfc6a42f\" (UID: \"f0b554eb-06b2-4670-99df-9b4fcfc6a42f\") " Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.914000 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56c8323a-4163-45af-8e67-2490198805f2-dns-svc\") pod \"56c8323a-4163-45af-8e67-2490198805f2\" (UID: \"56c8323a-4163-45af-8e67-2490198805f2\") " Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.914207 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56c8323a-4163-45af-8e67-2490198805f2-config\") pod \"56c8323a-4163-45af-8e67-2490198805f2\" (UID: \"56c8323a-4163-45af-8e67-2490198805f2\") " Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.914269 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4dc89\" (UniqueName: \"kubernetes.io/projected/f0b554eb-06b2-4670-99df-9b4fcfc6a42f-kube-api-access-4dc89\") pod \"f0b554eb-06b2-4670-99df-9b4fcfc6a42f\" (UID: \"f0b554eb-06b2-4670-99df-9b4fcfc6a42f\") " Mar 11 
09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.914309 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/56c8323a-4163-45af-8e67-2490198805f2-dns-swift-storage-0\") pod \"56c8323a-4163-45af-8e67-2490198805f2\" (UID: \"56c8323a-4163-45af-8e67-2490198805f2\") " Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.914350 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0b554eb-06b2-4670-99df-9b4fcfc6a42f-metrics-certs-tls-certs\") pod \"f0b554eb-06b2-4670-99df-9b4fcfc6a42f\" (UID: \"f0b554eb-06b2-4670-99df-9b4fcfc6a42f\") " Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.914397 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7zl8\" (UniqueName: \"kubernetes.io/projected/56c8323a-4163-45af-8e67-2490198805f2-kube-api-access-w7zl8\") pod \"56c8323a-4163-45af-8e67-2490198805f2\" (UID: \"56c8323a-4163-45af-8e67-2490198805f2\") " Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.914423 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/f0b554eb-06b2-4670-99df-9b4fcfc6a42f-ovs-rundir\") pod \"f0b554eb-06b2-4670-99df-9b4fcfc6a42f\" (UID: \"f0b554eb-06b2-4670-99df-9b4fcfc6a42f\") " Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.914486 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0b554eb-06b2-4670-99df-9b4fcfc6a42f-combined-ca-bundle\") pod \"f0b554eb-06b2-4670-99df-9b4fcfc6a42f\" (UID: \"f0b554eb-06b2-4670-99df-9b4fcfc6a42f\") " Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.914523 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/56c8323a-4163-45af-8e67-2490198805f2-ovsdbserver-nb\") pod \"56c8323a-4163-45af-8e67-2490198805f2\" (UID: \"56c8323a-4163-45af-8e67-2490198805f2\") " Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.915776 4840 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d076df9-9280-425c-9b61-bf84751f11c1-ovsdbserver-nb-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:18 crc kubenswrapper[4840]: E0311 09:21:18.915882 4840 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Mar 11 09:21:18 crc kubenswrapper[4840]: E0311 09:21:18.915948 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f31748d2-64a9-4839-ac55-691d9682ee8e-config-data podName:f31748d2-64a9-4839-ac55-691d9682ee8e nodeName:}" failed. No retries permitted until 2026-03-11 09:21:20.915929954 +0000 UTC m=+1479.581599769 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/f31748d2-64a9-4839-ac55-691d9682ee8e-config-data") pod "rabbitmq-server-0" (UID: "f31748d2-64a9-4839-ac55-691d9682ee8e") : configmap "rabbitmq-config-data" not found Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.917072 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f0b554eb-06b2-4670-99df-9b4fcfc6a42f-ovs-rundir" (OuterVolumeSpecName: "ovs-rundir") pod "f0b554eb-06b2-4670-99df-9b4fcfc6a42f" (UID: "f0b554eb-06b2-4670-99df-9b4fcfc6a42f"). InnerVolumeSpecName "ovs-rundir". 
PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.918302 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-6x72v_f0b554eb-06b2-4670-99df-9b4fcfc6a42f/openstack-network-exporter/0.log"
Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.918398 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-6x72v" event={"ID":"f0b554eb-06b2-4670-99df-9b4fcfc6a42f","Type":"ContainerDied","Data":"0c663ab073f92da3912363efc0522d911b62b6a9415959263b09dbb8ca419de0"}
Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.918441 4840 scope.go:117] "RemoveContainer" containerID="663786306d4268a4dc766ed8fb9c44b26423f4bc146d98039618216faacd06ee"
Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.918619 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-6x72v"
Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.920111 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0b554eb-06b2-4670-99df-9b4fcfc6a42f-config" (OuterVolumeSpecName: "config") pod "f0b554eb-06b2-4670-99df-9b4fcfc6a42f" (UID: "f0b554eb-06b2-4670-99df-9b4fcfc6a42f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.920114 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0b554eb-06b2-4670-99df-9b4fcfc6a42f-kube-api-access-4dc89" (OuterVolumeSpecName: "kube-api-access-4dc89") pod "f0b554eb-06b2-4670-99df-9b4fcfc6a42f" (UID: "f0b554eb-06b2-4670-99df-9b4fcfc6a42f"). InnerVolumeSpecName "kube-api-access-4dc89". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.920200 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f0b554eb-06b2-4670-99df-9b4fcfc6a42f-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "f0b554eb-06b2-4670-99df-9b4fcfc6a42f" (UID: "f0b554eb-06b2-4670-99df-9b4fcfc6a42f"). InnerVolumeSpecName "ovn-rundir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.931484 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56c8323a-4163-45af-8e67-2490198805f2-kube-api-access-w7zl8" (OuterVolumeSpecName: "kube-api-access-w7zl8") pod "56c8323a-4163-45af-8e67-2490198805f2" (UID: "56c8323a-4163-45af-8e67-2490198805f2"). InnerVolumeSpecName "kube-api-access-w7zl8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.932243 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.932595 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="629115e9-6bcf-45e8-a0da-d7c06386b7b7" containerName="nova-api-log" containerID="cri-o://964ebeaba64a80f1be2bba1d82ca1a7e7dffe0224141e942622675fc8b28aeb6" gracePeriod=30
Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.933068 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="629115e9-6bcf-45e8-a0da-d7c06386b7b7" containerName="nova-api-api" containerID="cri-o://9895b030ee05b1c30d40171df5aaa90b27098c8597a3d0999e257cf13cec7e67" gracePeriod=30
Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.938454 4840 generic.go:334] "Generic (PLEG): container finished" podID="b58b63e6-0eb4-444e-be2e-dca6bf37030e" containerID="ab8019a8eaec3a085cd72641d3e46b29afc065b94fc74ebd4f5ad09f0611dd09" exitCode=0
Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.938563 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-qtcdv" event={"ID":"b58b63e6-0eb4-444e-be2e-dca6bf37030e","Type":"ContainerDied","Data":"ab8019a8eaec3a085cd72641d3e46b29afc065b94fc74ebd4f5ad09f0611dd09"}
Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.946493 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-f2mtr"]
Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.948115 4840 generic.go:334] "Generic (PLEG): container finished" podID="915955ff-c1d8-4f99-a621-f28d463c512f" containerID="a0aea92352d16543560402efd876df16008f6e3e1715ebf0bc7c85603335be96" exitCode=143
Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.948186 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"915955ff-c1d8-4f99-a621-f28d463c512f","Type":"ContainerDied","Data":"a0aea92352d16543560402efd876df16008f6e3e1715ebf0bc7c85603335be96"}
Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.960872 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-f2mtr"]
Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.961662 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-bc41-account-create-update-kjhwj" event={"ID":"fc871f48-4882-49a3-be1f-80d95c2548a9","Type":"ContainerStarted","Data":"66861f25312767fc6c9602f4dec035324e5c608987cf8541e97f10a0ccd1580e"}
Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.980183 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.980435 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="666b69a9-5d21-4f9c-83ce-49e6e132e8e9"
containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://bc0ab41733673759705d9fad6da2ab073f93f91b5041cad5c966fdd3db1c9c9b" gracePeriod=30
Mar 11 09:21:18 crc kubenswrapper[4840]: E0311 09:21:18.993417 4840 kuberuntime_manager.go:1274] "Unhandled Error" err=<
Mar 11 09:21:18 crc kubenswrapper[4840]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:763d1f1e8a1cf877c151c59609960fd2fa29e7e50001f8818122a2d51878befa,Command:[/bin/sh -c #!/bin/bash
Mar 11 09:21:18 crc kubenswrapper[4840]: 
Mar 11 09:21:18 crc kubenswrapper[4840]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh
Mar 11 09:21:18 crc kubenswrapper[4840]: 
Mar 11 09:21:18 crc kubenswrapper[4840]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."}
Mar 11 09:21:18 crc kubenswrapper[4840]: 
Mar 11 09:21:18 crc kubenswrapper[4840]: MYSQL_CMD="mysql -h -u root -P 3306"
Mar 11 09:21:18 crc kubenswrapper[4840]: 
Mar 11 09:21:18 crc kubenswrapper[4840]: if [ -n "barbican" ]; then
Mar 11 09:21:18 crc kubenswrapper[4840]: GRANT_DATABASE="barbican"
Mar 11 09:21:18 crc kubenswrapper[4840]: else
Mar 11 09:21:18 crc kubenswrapper[4840]: GRANT_DATABASE="*"
Mar 11 09:21:18 crc kubenswrapper[4840]: fi
Mar 11 09:21:18 crc kubenswrapper[4840]: 
Mar 11 09:21:18 crc kubenswrapper[4840]: # going for maximum compatibility here:
Mar 11 09:21:18 crc kubenswrapper[4840]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used
Mar 11 09:21:18 crc kubenswrapper[4840]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not
Mar 11 09:21:18 crc kubenswrapper[4840]: # 3. create user with CREATE but then do all password and TLS with ALTER to
Mar 11 09:21:18 crc kubenswrapper[4840]: # support updates
Mar 11 09:21:18 crc kubenswrapper[4840]: 
Mar 11 09:21:18 crc kubenswrapper[4840]: $MYSQL_CMD < logger="UnhandledError"
Mar 11 09:21:18 crc kubenswrapper[4840]: I0311 09:21:18.993548 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-0f23-account-create-update-mbjxd"]
Mar 11 09:21:18 crc kubenswrapper[4840]: E0311 09:21:18.994965 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"barbican-db-secret\\\" not found\"" pod="openstack/barbican-bc41-account-create-update-kjhwj" podUID="fc871f48-4882-49a3-be1f-80d95c2548a9"
Mar 11 09:21:19 crc kubenswrapper[4840]: I0311 09:21:19.004821 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_0d076df9-9280-425c-9b61-bf84751f11c1/ovsdbserver-nb/0.log"
Mar 11 09:21:19 crc kubenswrapper[4840]: I0311 09:21:19.004957 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"0d076df9-9280-425c-9b61-bf84751f11c1","Type":"ContainerDied","Data":"80a97e0a6c473d74e6991b8d2dda489c250850997d57966126038a715c457f2e"}
Mar 11 09:21:19 crc kubenswrapper[4840]: I0311 09:21:19.005096 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Mar 11 09:21:19 crc kubenswrapper[4840]: I0311 09:21:19.012059 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Mar 11 09:21:19 crc kubenswrapper[4840]: I0311 09:21:19.015330 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="ada053fd-c71a-4425-8220-b950f0cab229" containerName="nova-scheduler-scheduler" containerID="cri-o://344c0fde0e6a0013eefcf0f63c44c10fef7fcbc6d796ca335364063dd3641c28" gracePeriod=30
Mar 11 09:21:19 crc kubenswrapper[4840]: I0311 09:21:19.021671 4840 reconciler_common.go:293] "Volume detached for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/f0b554eb-06b2-4670-99df-9b4fcfc6a42f-ovs-rundir\") on node \"crc\" DevicePath \"\""
Mar 11 09:21:19 crc kubenswrapper[4840]: I0311 09:21:19.021716 4840 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/f0b554eb-06b2-4670-99df-9b4fcfc6a42f-ovn-rundir\") on node \"crc\" DevicePath \"\""
Mar 11 09:21:19 crc kubenswrapper[4840]: I0311 09:21:19.021727 4840 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0b554eb-06b2-4670-99df-9b4fcfc6a42f-config\") on node \"crc\" DevicePath \"\""
Mar 11 09:21:19 crc kubenswrapper[4840]: I0311 09:21:19.021737 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4dc89\" (UniqueName: \"kubernetes.io/projected/f0b554eb-06b2-4670-99df-9b4fcfc6a42f-kube-api-access-4dc89\") on node \"crc\" DevicePath \"\""
Mar 11 09:21:19 crc kubenswrapper[4840]: I0311 09:21:19.021751 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7zl8\" (UniqueName: \"kubernetes.io/projected/56c8323a-4163-45af-8e67-2490198805f2-kube-api-access-w7zl8\") on node \"crc\" DevicePath \"\""
Mar 11 09:21:19 crc kubenswrapper[4840]: I0311 09:21:19.035098 4840 kubelet.go:2453]
"SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7749c44969-mwk5x" event={"ID":"56c8323a-4163-45af-8e67-2490198805f2","Type":"ContainerDied","Data":"9e25ab2894f031fc64d812de265c80ff66737c3823d4eb2abdd9efba72966490"}
Mar 11 09:21:19 crc kubenswrapper[4840]: I0311 09:21:19.035240 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7749c44969-mwk5x"
Mar 11 09:21:19 crc kubenswrapper[4840]: I0311 09:21:19.048687 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-3c72-account-create-update-2dtm6" event={"ID":"40f2a324-9d85-4e2e-ac26-0b710dc379b2","Type":"ContainerStarted","Data":"de8ba4d64066ed5fb1035013d4407c1e52a142bdcbb26aff3582ce57ddf56b42"}
Mar 11 09:21:19 crc kubenswrapper[4840]: I0311 09:21:19.059768 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56c8323a-4163-45af-8e67-2490198805f2-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "56c8323a-4163-45af-8e67-2490198805f2" (UID: "56c8323a-4163-45af-8e67-2490198805f2"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 11 09:21:19 crc kubenswrapper[4840]: I0311 09:21:19.062510 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-wpn58"]
Mar 11 09:21:19 crc kubenswrapper[4840]: I0311 09:21:19.064748 4840 scope.go:117] "RemoveContainer" containerID="5df6036e4383bcc32cc40a5083113e13895062a09a380000d0d1fb0f5b2f30bb"
Mar 11 09:21:19 crc kubenswrapper[4840]: I0311 09:21:19.073264 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Mar 11 09:21:19 crc kubenswrapper[4840]: I0311 09:21:19.085170 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-28ac-account-create-update-4b8kv"]
Mar 11 09:21:19 crc kubenswrapper[4840]: I0311 09:21:19.103055 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_b48aab55-15cb-42c4-a97b-692dbadf3353/ovsdbserver-sb/0.log"
Mar 11 09:21:19 crc kubenswrapper[4840]: I0311 09:21:19.103156 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"b48aab55-15cb-42c4-a97b-692dbadf3353","Type":"ContainerDied","Data":"1e8db8b381dcff5fcd0c36a76fecd838f5ed132d06427072768d78d9b7dd8517"}
Mar 11 09:21:19 crc kubenswrapper[4840]: I0311 09:21:19.103295 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Mar 11 09:21:19 crc kubenswrapper[4840]: I0311 09:21:19.116541 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-wpn58"]
Mar 11 09:21:19 crc kubenswrapper[4840]: I0311 09:21:19.119920 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56c8323a-4163-45af-8e67-2490198805f2-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "56c8323a-4163-45af-8e67-2490198805f2" (UID: "56c8323a-4163-45af-8e67-2490198805f2"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 11 09:21:19 crc kubenswrapper[4840]: I0311 09:21:19.132023 4840 generic.go:334] "Generic (PLEG): container finished" podID="71aaf352-8b91-4846-8ce4-1d83303ac203" containerID="8c20b7cb7d072af1e8fc8505f85ae53009e95d00f58e75523c006bc2e4ffcfc3" exitCode=143
Mar 11 09:21:19 crc kubenswrapper[4840]: I0311 09:21:19.132103 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"71aaf352-8b91-4846-8ce4-1d83303ac203","Type":"ContainerDied","Data":"8c20b7cb7d072af1e8fc8505f85ae53009e95d00f58e75523c006bc2e4ffcfc3"}
Mar 11 09:21:19 crc kubenswrapper[4840]: I0311 09:21:19.181094 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-cell1-galera-0" podUID="e93bc954-4d65-4b5f-8070-2b9800ff3db2" containerName="galera" containerID="cri-o://b052e1bd7a58acb7f7eff7c22cb37c2b87847e0c994e60dd660321fe2b51b6c8" gracePeriod=30
Mar 11 09:21:19 crc kubenswrapper[4840]: I0311 09:21:19.181286 4840 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/56c8323a-4163-45af-8e67-2490198805f2-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Mar 11 09:21:19 crc kubenswrapper[4840]: I0311 09:21:19.181399 4840 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/56c8323a-4163-45af-8e67-2490198805f2-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Mar 11 09:21:19 crc kubenswrapper[4840]: I0311 09:21:19.185782 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-q29dz"]
Mar 11 09:21:19 crc kubenswrapper[4840]: I0311 09:21:19.186126 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-28ac-account-create-update-4b8kv"
event={"ID":"822a8132-1b8a-4f19-b42b-b6acd65e7743","Type":"ContainerStarted","Data":"2d5037fea20c401e1fd5a095ce410babc2e9dd24b321d3a5cda44a0b3eb92280"}
Mar 11 09:21:19 crc kubenswrapper[4840]: E0311 09:21:19.210687 4840 kuberuntime_manager.go:1274] "Unhandled Error" err=<
Mar 11 09:21:19 crc kubenswrapper[4840]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:763d1f1e8a1cf877c151c59609960fd2fa29e7e50001f8818122a2d51878befa,Command:[/bin/sh -c #!/bin/bash
Mar 11 09:21:19 crc kubenswrapper[4840]: 
Mar 11 09:21:19 crc kubenswrapper[4840]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh
Mar 11 09:21:19 crc kubenswrapper[4840]: 
Mar 11 09:21:19 crc kubenswrapper[4840]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."}
Mar 11 09:21:19 crc kubenswrapper[4840]: 
Mar 11 09:21:19 crc kubenswrapper[4840]: MYSQL_CMD="mysql -h -u root -P 3306"
Mar 11 09:21:19 crc kubenswrapper[4840]: 
Mar 11 09:21:19 crc kubenswrapper[4840]: if [ -n "nova_api" ]; then
Mar 11 09:21:19 crc kubenswrapper[4840]: GRANT_DATABASE="nova_api"
Mar 11 09:21:19 crc kubenswrapper[4840]: else
Mar 11 09:21:19 crc kubenswrapper[4840]: GRANT_DATABASE="*"
Mar 11 09:21:19 crc kubenswrapper[4840]: fi
Mar 11 09:21:19 crc kubenswrapper[4840]: 
Mar 11 09:21:19 crc kubenswrapper[4840]: # going for maximum compatibility here:
Mar 11 09:21:19 crc kubenswrapper[4840]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used
Mar 11 09:21:19 crc kubenswrapper[4840]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not
Mar 11 09:21:19 crc kubenswrapper[4840]: # 3. create user with CREATE but then do all password and TLS with ALTER to
Mar 11 09:21:19 crc kubenswrapper[4840]: # support updates
Mar 11 09:21:19 crc kubenswrapper[4840]: 
Mar 11 09:21:19 crc kubenswrapper[4840]: $MYSQL_CMD < logger="UnhandledError"
Mar 11 09:21:19 crc kubenswrapper[4840]: E0311 09:21:19.212379 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"nova-api-db-secret\\\" not found\"" pod="openstack/nova-api-28ac-account-create-update-4b8kv" podUID="822a8132-1b8a-4f19-b42b-b6acd65e7743"
Mar 11 09:21:19 crc kubenswrapper[4840]: I0311 09:21:19.220238 4840 generic.go:334] "Generic (PLEG): container finished" podID="0ab14b76-8d70-44f2-b986-b3d600c73b60" containerID="0a86a4d07fba011f02807e2ed6cc060d80c18aeab359b46ed4a060d9bc9539eb" exitCode=143
Mar 11 09:21:19 crc kubenswrapper[4840]: I0311 09:21:19.220366 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-db9f7b9c-cdnm7" event={"ID":"0ab14b76-8d70-44f2-b986-b3d600c73b60","Type":"ContainerDied","Data":"0a86a4d07fba011f02807e2ed6cc060d80c18aeab359b46ed4a060d9bc9539eb"}
Mar 11 09:21:19 crc kubenswrapper[4840]: I0311 09:21:19.225768 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0b554eb-06b2-4670-99df-9b4fcfc6a42f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f0b554eb-06b2-4670-99df-9b4fcfc6a42f" (UID: "f0b554eb-06b2-4670-99df-9b4fcfc6a42f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 11 09:21:19 crc kubenswrapper[4840]: I0311 09:21:19.239213 4840 scope.go:117] "RemoveContainer" containerID="3959aad58a8318dc9102553ae85a912f06348c456f581330b23a50a7f00d5ade"
Mar 11 09:21:19 crc kubenswrapper[4840]: I0311 09:21:19.239676 4840 generic.go:334] "Generic (PLEG): container finished" podID="86c17dbf-d890-4de3-bf5d-29e0aea4d968" containerID="de2608ad4638642b2ed36b2182d3eea8ca8dc51168010aa67653e3ba968f01af" exitCode=143
Mar 11 09:21:19 crc kubenswrapper[4840]: I0311 09:21:19.239886 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"86c17dbf-d890-4de3-bf5d-29e0aea4d968","Type":"ContainerDied","Data":"de2608ad4638642b2ed36b2182d3eea8ca8dc51168010aa67653e3ba968f01af"}
Mar 11 09:21:19 crc kubenswrapper[4840]: I0311 09:21:19.252349 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56c8323a-4163-45af-8e67-2490198805f2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "56c8323a-4163-45af-8e67-2490198805f2" (UID: "56c8323a-4163-45af-8e67-2490198805f2"). InnerVolumeSpecName "dns-svc".
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 11 09:21:19 crc kubenswrapper[4840]: I0311 09:21:19.265092 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-w9dg6"]
Mar 11 09:21:19 crc kubenswrapper[4840]: I0311 09:21:19.274080 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Mar 11 09:21:19 crc kubenswrapper[4840]: I0311 09:21:19.274302 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-conductor-0" podUID="9699b913-55db-46fe-9831-1e1ac94ca609" containerName="nova-cell1-conductor-conductor" containerID="cri-o://cfec390aae2b33c2baf59838d2300ba819a1ecdc9f653835cc711873fa787a85" gracePeriod=30
Mar 11 09:21:19 crc kubenswrapper[4840]: I0311 09:21:19.285169 4840 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0b554eb-06b2-4670-99df-9b4fcfc6a42f-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 11 09:21:19 crc kubenswrapper[4840]: I0311 09:21:19.285815 4840 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/56c8323a-4163-45af-8e67-2490198805f2-dns-svc\") on node \"crc\" DevicePath \"\""
Mar 11 09:21:19 crc kubenswrapper[4840]: I0311 09:21:19.294756 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56c8323a-4163-45af-8e67-2490198805f2-config" (OuterVolumeSpecName: "config") pod "56c8323a-4163-45af-8e67-2490198805f2" (UID: "56c8323a-4163-45af-8e67-2490198805f2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 11 09:21:19 crc kubenswrapper[4840]: I0311 09:21:19.298259 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56c8323a-4163-45af-8e67-2490198805f2-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "56c8323a-4163-45af-8e67-2490198805f2" (UID: "56c8323a-4163-45af-8e67-2490198805f2"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 11 09:21:19 crc kubenswrapper[4840]: I0311 09:21:19.307437 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-w9dg6"]
Mar 11 09:21:19 crc kubenswrapper[4840]: I0311 09:21:19.338123 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-mdkd2"]
Mar 11 09:21:19 crc kubenswrapper[4840]: I0311 09:21:19.342677 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Mar 11 09:21:19 crc kubenswrapper[4840]: I0311 09:21:19.342986 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="95c08694-92ed-44cb-8ca3-92a47b5571d4" containerName="nova-cell0-conductor-conductor" containerID="cri-o://a8b0caed3a79d6574e838b5ca30ae9f7dc0d2e8987551e440fb3bb935fcc2b90" gracePeriod=30
Mar 11 09:21:19 crc kubenswrapper[4840]: I0311 09:21:19.363581 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-mdkd2"]
Mar 11 09:21:19 crc kubenswrapper[4840]: I0311 09:21:19.372612 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"]
Mar 11 09:21:19 crc kubenswrapper[4840]: I0311 09:21:19.387123 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0b554eb-06b2-4670-99df-9b4fcfc6a42f-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "f0b554eb-06b2-4670-99df-9b4fcfc6a42f" (UID: "f0b554eb-06b2-4670-99df-9b4fcfc6a42f"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 11 09:21:19 crc kubenswrapper[4840]: I0311 09:21:19.388702 4840 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56c8323a-4163-45af-8e67-2490198805f2-config\") on node \"crc\" DevicePath \"\""
Mar 11 09:21:19 crc kubenswrapper[4840]: I0311 09:21:19.389042 4840 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0b554eb-06b2-4670-99df-9b4fcfc6a42f-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\""
Mar 11 09:21:19 crc kubenswrapper[4840]: I0311 09:21:19.389180 4840 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/56c8323a-4163-45af-8e67-2490198805f2-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Mar 11 09:21:19 crc kubenswrapper[4840]: I0311 09:21:19.393833 4840 scope.go:117] "RemoveContainer" containerID="dc66b182d7e7d4162f25d405953a697d877ebcc573fde71e7656196962265a32"
Mar 11 09:21:19 crc kubenswrapper[4840]: I0311 09:21:19.394197 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-bc41-account-create-update-kjhwj"]
Mar 11 09:21:19 crc kubenswrapper[4840]: I0311 09:21:19.415898 4840 generic.go:334] "Generic (PLEG): container finished" podID="f1c7d7f4-dc60-4703-b6c3-6cd626db11af" containerID="905763d5f4df2194269413da5305c2cade78aefa23c40527af321dc5de2fb39b" exitCode=0
Mar 11 09:21:19 crc kubenswrapper[4840]: I0311 09:21:19.415926 4840 generic.go:334] "Generic (PLEG): container finished" podID="f1c7d7f4-dc60-4703-b6c3-6cd626db11af" containerID="9feba492b523c8dcc3c35b13e0ce09e4fe030b4986a39633f2601ebb6e23baa2" exitCode=0
Mar 11 09:21:19 crc kubenswrapper[4840]: I0311 09:21:19.415934 4840 generic.go:334] "Generic (PLEG): container finished"
podID="f1c7d7f4-dc60-4703-b6c3-6cd626db11af" containerID="dd4d20030dddae38ba6950210cd3d087c5fd1596090ecd0351b94eeaa971448c" exitCode=0
Mar 11 09:21:19 crc kubenswrapper[4840]: I0311 09:21:19.415943 4840 generic.go:334] "Generic (PLEG): container finished" podID="f1c7d7f4-dc60-4703-b6c3-6cd626db11af" containerID="e1983bac8c82d7bd2cb300b8c29f24106251022a1b8393dbcfa2f98ffc26c55b" exitCode=0
Mar 11 09:21:19 crc kubenswrapper[4840]: I0311 09:21:19.415950 4840 generic.go:334] "Generic (PLEG): container finished" podID="f1c7d7f4-dc60-4703-b6c3-6cd626db11af" containerID="51312dd1b22898c3fcfef3be786a86dd0bc450bace061ed340d18f1903ce9f72" exitCode=0
Mar 11 09:21:19 crc kubenswrapper[4840]: I0311 09:21:19.415958 4840 generic.go:334] "Generic (PLEG): container finished" podID="f1c7d7f4-dc60-4703-b6c3-6cd626db11af" containerID="322dd29ccdd62a25f4af6457497d29a55dbd6bb69515090d89cfac3caafffe97" exitCode=0
Mar 11 09:21:19 crc kubenswrapper[4840]: I0311 09:21:19.415964 4840 generic.go:334] "Generic (PLEG): container finished" podID="f1c7d7f4-dc60-4703-b6c3-6cd626db11af" containerID="22f7a187c7f01516652e793a521148fe2ac2ab6befee1d8ed2d686c2bc8b8adc" exitCode=0
Mar 11 09:21:19 crc kubenswrapper[4840]: I0311 09:21:19.415971 4840 generic.go:334] "Generic (PLEG): container finished" podID="f1c7d7f4-dc60-4703-b6c3-6cd626db11af" containerID="25eb7124241e743ab0f23cf5b6fd16ff25372ee1ef90290032d988d844df4427" exitCode=0
Mar 11 09:21:19 crc kubenswrapper[4840]: I0311 09:21:19.415979 4840 generic.go:334] "Generic (PLEG): container finished" podID="f1c7d7f4-dc60-4703-b6c3-6cd626db11af" containerID="9b3746cc36c2137ab5787473b5942125bb1e7796da2df7d082d9e9313eaffba4" exitCode=0
Mar 11 09:21:19 crc kubenswrapper[4840]: I0311 09:21:19.415985 4840 generic.go:334] "Generic (PLEG): container finished" podID="f1c7d7f4-dc60-4703-b6c3-6cd626db11af" containerID="7ce263946b4448b3a0388e295bd5ae37d6a5148f4b13708343302829cff90b62" exitCode=0
Mar 11 09:21:19 crc kubenswrapper[4840]: I0311 09:21:19.415992 4840 generic.go:334] "Generic (PLEG): container finished" podID="f1c7d7f4-dc60-4703-b6c3-6cd626db11af" containerID="d57ba4418c21d87822207480d1742dfe663f600ae09365578d4cc8cf912a7fa3" exitCode=0
Mar 11 09:21:19 crc kubenswrapper[4840]: I0311 09:21:19.415999 4840 generic.go:334] "Generic (PLEG): container finished" podID="f1c7d7f4-dc60-4703-b6c3-6cd626db11af" containerID="804f1935b7ae1994c956dc341f3afb51d2d0d59c2f35239a216e61bc6f1d79d2" exitCode=0
Mar 11 09:21:19 crc kubenswrapper[4840]: I0311 09:21:19.416006 4840 generic.go:334] "Generic (PLEG): container finished" podID="f1c7d7f4-dc60-4703-b6c3-6cd626db11af" containerID="7c83bfc113dd757cc74c4935f44d862430d391d2b34e5c7877464152ad5f1bdd" exitCode=0
Mar 11 09:21:19 crc kubenswrapper[4840]: I0311 09:21:19.416013 4840 generic.go:334] "Generic (PLEG): container finished" podID="f1c7d7f4-dc60-4703-b6c3-6cd626db11af" containerID="1d4b8fe0d33d601094bd4c0c808ede9155728eb0e8cb7eb8ed2a56feaa32a3ad" exitCode=0
Mar 11 09:21:19 crc kubenswrapper[4840]: I0311 09:21:19.416094 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f1c7d7f4-dc60-4703-b6c3-6cd626db11af","Type":"ContainerDied","Data":"905763d5f4df2194269413da5305c2cade78aefa23c40527af321dc5de2fb39b"}
Mar 11 09:21:19 crc kubenswrapper[4840]: I0311 09:21:19.416141 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f1c7d7f4-dc60-4703-b6c3-6cd626db11af","Type":"ContainerDied","Data":"9feba492b523c8dcc3c35b13e0ce09e4fe030b4986a39633f2601ebb6e23baa2"}
Mar 11 09:21:19 crc kubenswrapper[4840]: I0311 09:21:19.416157 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f1c7d7f4-dc60-4703-b6c3-6cd626db11af","Type":"ContainerDied","Data":"dd4d20030dddae38ba6950210cd3d087c5fd1596090ecd0351b94eeaa971448c"}
Mar 11 09:21:19 crc kubenswrapper[4840]: I0311 09:21:19.416168 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f1c7d7f4-dc60-4703-b6c3-6cd626db11af","Type":"ContainerDied","Data":"e1983bac8c82d7bd2cb300b8c29f24106251022a1b8393dbcfa2f98ffc26c55b"}
Mar 11 09:21:19 crc kubenswrapper[4840]: I0311 09:21:19.416179 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f1c7d7f4-dc60-4703-b6c3-6cd626db11af","Type":"ContainerDied","Data":"51312dd1b22898c3fcfef3be786a86dd0bc450bace061ed340d18f1903ce9f72"}
Mar 11 09:21:19 crc kubenswrapper[4840]: I0311 09:21:19.416190 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f1c7d7f4-dc60-4703-b6c3-6cd626db11af","Type":"ContainerDied","Data":"322dd29ccdd62a25f4af6457497d29a55dbd6bb69515090d89cfac3caafffe97"}
Mar 11 09:21:19 crc kubenswrapper[4840]: I0311 09:21:19.416200 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f1c7d7f4-dc60-4703-b6c3-6cd626db11af","Type":"ContainerDied","Data":"22f7a187c7f01516652e793a521148fe2ac2ab6befee1d8ed2d686c2bc8b8adc"}
Mar 11 09:21:19 crc kubenswrapper[4840]: I0311 09:21:19.416211 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f1c7d7f4-dc60-4703-b6c3-6cd626db11af","Type":"ContainerDied","Data":"25eb7124241e743ab0f23cf5b6fd16ff25372ee1ef90290032d988d844df4427"}
Mar 11 09:21:19 crc kubenswrapper[4840]: I0311 09:21:19.416221 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f1c7d7f4-dc60-4703-b6c3-6cd626db11af","Type":"ContainerDied","Data":"9b3746cc36c2137ab5787473b5942125bb1e7796da2df7d082d9e9313eaffba4"}
Mar 11 09:21:19 crc kubenswrapper[4840]: I0311 09:21:19.416233 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f1c7d7f4-dc60-4703-b6c3-6cd626db11af","Type":"ContainerDied","Data":"7ce263946b4448b3a0388e295bd5ae37d6a5148f4b13708343302829cff90b62"}
Mar 11 09:21:19 crc kubenswrapper[4840]: I0311 09:21:19.416246 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f1c7d7f4-dc60-4703-b6c3-6cd626db11af","Type":"ContainerDied","Data":"d57ba4418c21d87822207480d1742dfe663f600ae09365578d4cc8cf912a7fa3"}
Mar 11 09:21:19 crc kubenswrapper[4840]: I0311 09:21:19.416254 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f1c7d7f4-dc60-4703-b6c3-6cd626db11af","Type":"ContainerDied","Data":"804f1935b7ae1994c956dc341f3afb51d2d0d59c2f35239a216e61bc6f1d79d2"}
Mar 11 09:21:19 crc kubenswrapper[4840]: I0311 09:21:19.416262 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f1c7d7f4-dc60-4703-b6c3-6cd626db11af","Type":"ContainerDied","Data":"7c83bfc113dd757cc74c4935f44d862430d391d2b34e5c7877464152ad5f1bdd"}
Mar 11 09:21:19 crc kubenswrapper[4840]: I0311 09:21:19.416272 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f1c7d7f4-dc60-4703-b6c3-6cd626db11af","Type":"ContainerDied","Data":"1d4b8fe0d33d601094bd4c0c808ede9155728eb0e8cb7eb8ed2a56feaa32a3ad"}
Mar 11 09:21:19 crc kubenswrapper[4840]: I0311 09:21:19.425052 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-0f23-account-create-update-mbjxd" event={"ID":"b4873de8-8c37-49b7-a8ba-b352a2cf0320","Type":"ContainerStarted","Data":"cc3143ee5795b8b5fc103fc2a51c7b5e869ce56a236d9a328707bb7efcec05f2"}
Mar 11 09:21:19 crc kubenswrapper[4840]: E0311 09:21:19.430234 4840 kuberuntime_manager.go:1274] "Unhandled Error" err=<
Mar 11 09:21:19 crc kubenswrapper[4840]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:763d1f1e8a1cf877c151c59609960fd2fa29e7e50001f8818122a2d51878befa,Command:[/bin/sh -c #!/bin/bash
Mar 11 09:21:19 crc kubenswrapper[4840]: 
Mar 11 09:21:19 crc kubenswrapper[4840]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh
Mar 11 09:21:19 crc kubenswrapper[4840]: 
Mar 11 09:21:19 crc kubenswrapper[4840]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."}
Mar 11 09:21:19 crc kubenswrapper[4840]: 
Mar 11 09:21:19 crc kubenswrapper[4840]: MYSQL_CMD="mysql -h -u root -P 3306"
Mar 11 09:21:19 crc kubenswrapper[4840]: 
Mar 11 09:21:19 crc kubenswrapper[4840]: if [ -n "nova_cell0" ]; then
Mar 11 09:21:19 crc kubenswrapper[4840]: GRANT_DATABASE="nova_cell0"
Mar 11 09:21:19 crc kubenswrapper[4840]: else
Mar 11 09:21:19 crc kubenswrapper[4840]: GRANT_DATABASE="*"
Mar 11 09:21:19 crc kubenswrapper[4840]: fi
Mar 11 09:21:19 crc kubenswrapper[4840]: 
Mar 11 09:21:19 crc kubenswrapper[4840]: # going for maximum compatibility here:
Mar 11 09:21:19 crc kubenswrapper[4840]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used
Mar 11 09:21:19 crc kubenswrapper[4840]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not
Mar 11 09:21:19 crc kubenswrapper[4840]: # 3. create user with CREATE but then do all password and TLS with ALTER to
Mar 11 09:21:19 crc kubenswrapper[4840]: # support updates
Mar 11 09:21:19 crc kubenswrapper[4840]: 
Mar 11 09:21:19 crc kubenswrapper[4840]: $MYSQL_CMD < logger="UnhandledError"
Mar 11 09:21:19 crc kubenswrapper[4840]: E0311 09:21:19.436235 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"nova-cell0-db-secret\\\" not found\"" pod="openstack/nova-cell0-0f23-account-create-update-mbjxd" podUID="b4873de8-8c37-49b7-a8ba-b352a2cf0320"
Mar 11 09:21:19 crc kubenswrapper[4840]: I0311 09:21:19.455539 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-28ac-account-create-update-4b8kv"]
Mar 11 09:21:19 crc kubenswrapper[4840]: I0311 09:21:19.465061 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="f31748d2-64a9-4839-ac55-691d9682ee8e" containerName="rabbitmq" containerID="cri-o://0e215c93773db23bc670a8e1e1523ba89c86e46f9708b9592117120dd8a29424" gracePeriod=604800
Mar 11 09:21:19 crc kubenswrapper[4840]: I0311 09:21:19.465539 4840 generic.go:334] "Generic (PLEG): container finished" podID="d3ccfc13-7a62-4923-95ab-c68cb93aa03c" containerID="fa51fa6846a6391fd4d41433bac4cdd3a55817184d7eb6184af7110475d61e48" exitCode=143
Mar 11 09:21:19 crc kubenswrapper[4840]: I0311 09:21:19.465621 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d3ccfc13-7a62-4923-95ab-c68cb93aa03c","Type":"ContainerDied","Data":"fa51fa6846a6391fd4d41433bac4cdd3a55817184d7eb6184af7110475d61e48"}
Mar 11 09:21:19 crc kubenswrapper[4840]: I0311 09:21:19.466945 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-3c72-account-create-update-2dtm6"]
Mar 11 09:21:19 crc kubenswrapper[4840]: I0311 09:21:19.529792 4840 kubelet.go:2437] "SyncLoop
DELETE" source="api" pods=["openstack/nova-cell0-0f23-account-create-update-mbjxd"] Mar 11 09:21:19 crc kubenswrapper[4840]: I0311 09:21:19.530990 4840 generic.go:334] "Generic (PLEG): container finished" podID="550bab70-eacb-4c56-98fd-460c20f22dcc" containerID="de14b2ab95a92c30b8e51cfa0b41eeb1219f8858689935e5b0e0d5e56fbb4fc9" exitCode=0 Mar 11 09:21:19 crc kubenswrapper[4840]: I0311 09:21:19.531660 4840 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="openstack/root-account-create-update-q29dz" secret="" err="secret \"galera-openstack-cell1-dockercfg-ntlhs\" not found" Mar 11 09:21:19 crc kubenswrapper[4840]: I0311 09:21:19.532108 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-58769b4545-5q2fv" event={"ID":"550bab70-eacb-4c56-98fd-460c20f22dcc","Type":"ContainerDied","Data":"de14b2ab95a92c30b8e51cfa0b41eeb1219f8858689935e5b0e0d5e56fbb4fc9"} Mar 11 09:21:19 crc kubenswrapper[4840]: I0311 09:21:19.532164 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-298f-account-create-update-hn765" Mar 11 09:21:19 crc kubenswrapper[4840]: I0311 09:21:19.541661 4840 scope.go:117] "RemoveContainer" containerID="f9500782e434442aa22be0ab97cf2a255b5b9cecd44ab6561484a6263b0f2598" Mar 11 09:21:19 crc kubenswrapper[4840]: E0311 09:21:19.550867 4840 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 11 09:21:19 crc kubenswrapper[4840]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:763d1f1e8a1cf877c151c59609960fd2fa29e7e50001f8818122a2d51878befa,Command:[/bin/sh -c #!/bin/bash Mar 11 09:21:19 crc kubenswrapper[4840]: Mar 11 09:21:19 crc kubenswrapper[4840]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Mar 11 09:21:19 crc kubenswrapper[4840]: Mar 11 09:21:19 crc kubenswrapper[4840]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Mar 11 09:21:19 crc kubenswrapper[4840]: Mar 11 09:21:19 crc kubenswrapper[4840]: MYSQL_CMD="mysql -h -u root -P 3306" Mar 11 09:21:19 crc kubenswrapper[4840]: Mar 11 09:21:19 crc kubenswrapper[4840]: if [ -n "" ]; then Mar 11 09:21:19 crc kubenswrapper[4840]: GRANT_DATABASE="" Mar 11 09:21:19 crc kubenswrapper[4840]: else Mar 11 09:21:19 crc kubenswrapper[4840]: GRANT_DATABASE="*" Mar 11 09:21:19 crc kubenswrapper[4840]: fi Mar 11 09:21:19 crc kubenswrapper[4840]: Mar 11 09:21:19 crc kubenswrapper[4840]: # going for maximum compatibility here: Mar 11 09:21:19 crc kubenswrapper[4840]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Mar 11 09:21:19 crc kubenswrapper[4840]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Mar 11 09:21:19 crc kubenswrapper[4840]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Mar 11 09:21:19 crc kubenswrapper[4840]: # support updates Mar 11 09:21:19 crc kubenswrapper[4840]: Mar 11 09:21:19 crc kubenswrapper[4840]: $MYSQL_CMD < logger="UnhandledError" Mar 11 09:21:19 crc kubenswrapper[4840]: E0311 09:21:19.554403 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"openstack-cell1-mariadb-root-db-secret\\\" not found\"" pod="openstack/root-account-create-update-q29dz" podUID="aa334d0b-a179-4905-a660-05bbc12e5c02" Mar 11 09:21:19 crc kubenswrapper[4840]: I0311 09:21:19.560938 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-298f-account-create-update-hn765" Mar 11 09:21:19 crc kubenswrapper[4840]: I0311 09:21:19.643036 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"] Mar 11 09:21:19 crc kubenswrapper[4840]: I0311 09:21:19.653195 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-nb-0"] Mar 11 09:21:19 crc kubenswrapper[4840]: I0311 09:21:19.662709 4840 scope.go:117] "RemoveContainer" containerID="037417d9a07e2291989661f88eef7dd45b3d6fb5f3c8e6af6e31e40973d67031" Mar 11 09:21:19 crc kubenswrapper[4840]: I0311 09:21:19.704262 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"] Mar 11 09:21:19 crc kubenswrapper[4840]: E0311 09:21:19.704975 4840 configmap.go:193] Couldn't get configMap openstack/openstack-cell1-scripts: configmap "openstack-cell1-scripts" not found Mar 11 09:21:19 crc kubenswrapper[4840]: E0311 09:21:19.705043 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/aa334d0b-a179-4905-a660-05bbc12e5c02-operator-scripts podName:aa334d0b-a179-4905-a660-05bbc12e5c02 nodeName:}" failed. 
No retries permitted until 2026-03-11 09:21:21.705026977 +0000 UTC m=+1480.370696792 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/aa334d0b-a179-4905-a660-05bbc12e5c02-operator-scripts") pod "root-account-create-update-q29dz" (UID: "aa334d0b-a179-4905-a660-05bbc12e5c02") : configmap "openstack-cell1-scripts" not found Mar 11 09:21:19 crc kubenswrapper[4840]: I0311 09:21:19.715631 4840 scope.go:117] "RemoveContainer" containerID="10e0544be31ad9957ce1bc9464d8fd6efda2ed36676a4427c1dd1344bc4e1a58" Mar 11 09:21:19 crc kubenswrapper[4840]: I0311 09:21:19.715631 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-sb-0"] Mar 11 09:21:19 crc kubenswrapper[4840]: I0311 09:21:19.728004 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7749c44969-mwk5x"] Mar 11 09:21:19 crc kubenswrapper[4840]: I0311 09:21:19.744555 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7749c44969-mwk5x"] Mar 11 09:21:19 crc kubenswrapper[4840]: I0311 09:21:19.764264 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-97696577-2mh8q"] Mar 11 09:21:19 crc kubenswrapper[4840]: I0311 09:21:19.766628 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-97696577-2mh8q" podUID="f10cd3a6-0a55-4957-b861-678d9af3c338" containerName="proxy-server" containerID="cri-o://e59d16a9086fcb2441c89141dd63cbc53bf993e3952c1f25bf481d63fd3390fa" gracePeriod=30 Mar 11 09:21:19 crc kubenswrapper[4840]: I0311 09:21:19.766856 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-97696577-2mh8q" podUID="f10cd3a6-0a55-4957-b861-678d9af3c338" containerName="proxy-httpd" containerID="cri-o://7ec25a62f5da025330f39922a9766d18ca04cc7a4cbe3b4c61c31b0ef51d49f5" gracePeriod=30 Mar 11 09:21:19 crc kubenswrapper[4840]: I0311 09:21:19.774135 
4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-metrics-6x72v"] Mar 11 09:21:19 crc kubenswrapper[4840]: I0311 09:21:19.780608 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-metrics-6x72v"] Mar 11 09:21:19 crc kubenswrapper[4840]: I0311 09:21:19.895381 4840 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-cell1-novncproxy-0" podUID="666b69a9-5d21-4f9c-83ce-49e6e132e8e9" containerName="nova-cell1-novncproxy-novncproxy" probeResult="failure" output="Get \"https://10.217.0.204:6080/vnc_lite.html\": dial tcp 10.217.0.204:6080: connect: connection refused" Mar 11 09:21:19 crc kubenswrapper[4840]: I0311 09:21:19.983708 4840 scope.go:117] "RemoveContainer" containerID="c7605e78bc0c06e2ec26eebd5becb94c457edb41d3f28ee05f105bd2a7e5ac3f" Mar 11 09:21:20 crc kubenswrapper[4840]: I0311 09:21:20.012345 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-3c72-account-create-update-2dtm6" Mar 11 09:21:20 crc kubenswrapper[4840]: I0311 09:21:20.031515 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-9sj79"] Mar 11 09:21:20 crc kubenswrapper[4840]: E0311 09:21:20.031913 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b48aab55-15cb-42c4-a97b-692dbadf3353" containerName="ovsdbserver-sb" Mar 11 09:21:20 crc kubenswrapper[4840]: I0311 09:21:20.031927 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="b48aab55-15cb-42c4-a97b-692dbadf3353" containerName="ovsdbserver-sb" Mar 11 09:21:20 crc kubenswrapper[4840]: E0311 09:21:20.031940 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d076df9-9280-425c-9b61-bf84751f11c1" containerName="openstack-network-exporter" Mar 11 09:21:20 crc kubenswrapper[4840]: I0311 09:21:20.031946 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d076df9-9280-425c-9b61-bf84751f11c1" 
containerName="openstack-network-exporter" Mar 11 09:21:20 crc kubenswrapper[4840]: E0311 09:21:20.031958 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d076df9-9280-425c-9b61-bf84751f11c1" containerName="ovsdbserver-nb" Mar 11 09:21:20 crc kubenswrapper[4840]: I0311 09:21:20.031964 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d076df9-9280-425c-9b61-bf84751f11c1" containerName="ovsdbserver-nb" Mar 11 09:21:20 crc kubenswrapper[4840]: E0311 09:21:20.031989 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56c8323a-4163-45af-8e67-2490198805f2" containerName="init" Mar 11 09:21:20 crc kubenswrapper[4840]: I0311 09:21:20.031995 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="56c8323a-4163-45af-8e67-2490198805f2" containerName="init" Mar 11 09:21:20 crc kubenswrapper[4840]: E0311 09:21:20.032029 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56c8323a-4163-45af-8e67-2490198805f2" containerName="dnsmasq-dns" Mar 11 09:21:20 crc kubenswrapper[4840]: I0311 09:21:20.032036 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="56c8323a-4163-45af-8e67-2490198805f2" containerName="dnsmasq-dns" Mar 11 09:21:20 crc kubenswrapper[4840]: E0311 09:21:20.032058 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b48aab55-15cb-42c4-a97b-692dbadf3353" containerName="openstack-network-exporter" Mar 11 09:21:20 crc kubenswrapper[4840]: I0311 09:21:20.032067 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="b48aab55-15cb-42c4-a97b-692dbadf3353" containerName="openstack-network-exporter" Mar 11 09:21:20 crc kubenswrapper[4840]: E0311 09:21:20.032081 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0b554eb-06b2-4670-99df-9b4fcfc6a42f" containerName="openstack-network-exporter" Mar 11 09:21:20 crc kubenswrapper[4840]: I0311 09:21:20.032089 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0b554eb-06b2-4670-99df-9b4fcfc6a42f" 
containerName="openstack-network-exporter" Mar 11 09:21:20 crc kubenswrapper[4840]: I0311 09:21:20.032283 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="b48aab55-15cb-42c4-a97b-692dbadf3353" containerName="openstack-network-exporter" Mar 11 09:21:20 crc kubenswrapper[4840]: I0311 09:21:20.032301 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="56c8323a-4163-45af-8e67-2490198805f2" containerName="dnsmasq-dns" Mar 11 09:21:20 crc kubenswrapper[4840]: I0311 09:21:20.032313 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="b48aab55-15cb-42c4-a97b-692dbadf3353" containerName="ovsdbserver-sb" Mar 11 09:21:20 crc kubenswrapper[4840]: I0311 09:21:20.032324 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d076df9-9280-425c-9b61-bf84751f11c1" containerName="ovsdbserver-nb" Mar 11 09:21:20 crc kubenswrapper[4840]: I0311 09:21:20.032338 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0b554eb-06b2-4670-99df-9b4fcfc6a42f" containerName="openstack-network-exporter" Mar 11 09:21:20 crc kubenswrapper[4840]: I0311 09:21:20.032350 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d076df9-9280-425c-9b61-bf84751f11c1" containerName="openstack-network-exporter" Mar 11 09:21:20 crc kubenswrapper[4840]: I0311 09:21:20.033280 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-9sj79" Mar 11 09:21:20 crc kubenswrapper[4840]: I0311 09:21:20.036491 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Mar 11 09:21:20 crc kubenswrapper[4840]: I0311 09:21:20.039347 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-9sj79"] Mar 11 09:21:20 crc kubenswrapper[4840]: I0311 09:21:20.072272 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ba90e38-377f-42d7-91dd-cb92889e0cbf" path="/var/lib/kubelet/pods/0ba90e38-377f-42d7-91dd-cb92889e0cbf/volumes" Mar 11 09:21:20 crc kubenswrapper[4840]: I0311 09:21:20.079741 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d076df9-9280-425c-9b61-bf84751f11c1" path="/var/lib/kubelet/pods/0d076df9-9280-425c-9b61-bf84751f11c1/volumes" Mar 11 09:21:20 crc kubenswrapper[4840]: I0311 09:21:20.080841 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16f6c965-932a-45fa-9ad7-608779e0bf25" path="/var/lib/kubelet/pods/16f6c965-932a-45fa-9ad7-608779e0bf25/volumes" Mar 11 09:21:20 crc kubenswrapper[4840]: I0311 09:21:20.081874 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e078769-708a-4d94-8d56-cc2e673edbd2" path="/var/lib/kubelet/pods/1e078769-708a-4d94-8d56-cc2e673edbd2/volumes" Mar 11 09:21:20 crc kubenswrapper[4840]: I0311 09:21:20.083454 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f3b6268-b59e-449e-a7d4-ecb9e26e1f39" path="/var/lib/kubelet/pods/1f3b6268-b59e-449e-a7d4-ecb9e26e1f39/volumes" Mar 11 09:21:20 crc kubenswrapper[4840]: I0311 09:21:20.084811 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="206be006-d33b-43d1-8fae-ecc49291d0a4" path="/var/lib/kubelet/pods/206be006-d33b-43d1-8fae-ecc49291d0a4/volumes" Mar 11 09:21:20 crc kubenswrapper[4840]: I0311 09:21:20.085434 4840 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b934acf-2bd8-420b-803b-dd9c6e993fe7" path="/var/lib/kubelet/pods/2b934acf-2bd8-420b-803b-dd9c6e993fe7/volumes" Mar 11 09:21:20 crc kubenswrapper[4840]: I0311 09:21:20.087210 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37f195c4-316b-428e-9213-ee66b1fcfd9f" path="/var/lib/kubelet/pods/37f195c4-316b-428e-9213-ee66b1fcfd9f/volumes" Mar 11 09:21:20 crc kubenswrapper[4840]: I0311 09:21:20.088040 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56c8323a-4163-45af-8e67-2490198805f2" path="/var/lib/kubelet/pods/56c8323a-4163-45af-8e67-2490198805f2/volumes" Mar 11 09:21:20 crc kubenswrapper[4840]: I0311 09:21:20.088862 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="757b3a3d-0655-4517-a752-b944899642c9" path="/var/lib/kubelet/pods/757b3a3d-0655-4517-a752-b944899642c9/volumes" Mar 11 09:21:20 crc kubenswrapper[4840]: I0311 09:21:20.090235 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="794eb074-baf3-46dc-8f9d-8a92fc9240fd" path="/var/lib/kubelet/pods/794eb074-baf3-46dc-8f9d-8a92fc9240fd/volumes" Mar 11 09:21:20 crc kubenswrapper[4840]: I0311 09:21:20.090864 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8627bd78-c61f-4140-88b3-09cd6091afb4" path="/var/lib/kubelet/pods/8627bd78-c61f-4140-88b3-09cd6091afb4/volumes" Mar 11 09:21:20 crc kubenswrapper[4840]: I0311 09:21:20.091434 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2ca037a-561b-4d4b-b85e-5c0fcc9b1cb4" path="/var/lib/kubelet/pods/a2ca037a-561b-4d4b-b85e-5c0fcc9b1cb4/volumes" Mar 11 09:21:20 crc kubenswrapper[4840]: I0311 09:21:20.092788 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b48aab55-15cb-42c4-a97b-692dbadf3353" path="/var/lib/kubelet/pods/b48aab55-15cb-42c4-a97b-692dbadf3353/volumes" Mar 11 09:21:20 crc kubenswrapper[4840]: I0311 09:21:20.093447 4840 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cebd38d4-d574-424d-8472-b60da948f8d1" path="/var/lib/kubelet/pods/cebd38d4-d574-424d-8472-b60da948f8d1/volumes" Mar 11 09:21:20 crc kubenswrapper[4840]: I0311 09:21:20.094213 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2c581ad-e4bb-40c7-aa81-8937e2aab87b" path="/var/lib/kubelet/pods/e2c581ad-e4bb-40c7-aa81-8937e2aab87b/volumes" Mar 11 09:21:20 crc kubenswrapper[4840]: I0311 09:21:20.094818 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0b554eb-06b2-4670-99df-9b4fcfc6a42f" path="/var/lib/kubelet/pods/f0b554eb-06b2-4670-99df-9b4fcfc6a42f/volumes" Mar 11 09:21:20 crc kubenswrapper[4840]: I0311 09:21:20.139633 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/40f2a324-9d85-4e2e-ac26-0b710dc379b2-operator-scripts\") pod \"40f2a324-9d85-4e2e-ac26-0b710dc379b2\" (UID: \"40f2a324-9d85-4e2e-ac26-0b710dc379b2\") " Mar 11 09:21:20 crc kubenswrapper[4840]: I0311 09:21:20.139827 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jb2lx\" (UniqueName: \"kubernetes.io/projected/40f2a324-9d85-4e2e-ac26-0b710dc379b2-kube-api-access-jb2lx\") pod \"40f2a324-9d85-4e2e-ac26-0b710dc379b2\" (UID: \"40f2a324-9d85-4e2e-ac26-0b710dc379b2\") " Mar 11 09:21:20 crc kubenswrapper[4840]: I0311 09:21:20.140093 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/882e0084-6891-4852-aecb-2951f5763800-operator-scripts\") pod \"root-account-create-update-9sj79\" (UID: \"882e0084-6891-4852-aecb-2951f5763800\") " pod="openstack/root-account-create-update-9sj79" Mar 11 09:21:20 crc kubenswrapper[4840]: I0311 09:21:20.140158 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-xrffb\" (UniqueName: \"kubernetes.io/projected/882e0084-6891-4852-aecb-2951f5763800-kube-api-access-xrffb\") pod \"root-account-create-update-9sj79\" (UID: \"882e0084-6891-4852-aecb-2951f5763800\") " pod="openstack/root-account-create-update-9sj79" Mar 11 09:21:20 crc kubenswrapper[4840]: I0311 09:21:20.140431 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40f2a324-9d85-4e2e-ac26-0b710dc379b2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "40f2a324-9d85-4e2e-ac26-0b710dc379b2" (UID: "40f2a324-9d85-4e2e-ac26-0b710dc379b2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:21:20 crc kubenswrapper[4840]: I0311 09:21:20.147806 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40f2a324-9d85-4e2e-ac26-0b710dc379b2-kube-api-access-jb2lx" (OuterVolumeSpecName: "kube-api-access-jb2lx") pod "40f2a324-9d85-4e2e-ac26-0b710dc379b2" (UID: "40f2a324-9d85-4e2e-ac26-0b710dc379b2"). InnerVolumeSpecName "kube-api-access-jb2lx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:21:20 crc kubenswrapper[4840]: I0311 09:21:20.241985 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/882e0084-6891-4852-aecb-2951f5763800-operator-scripts\") pod \"root-account-create-update-9sj79\" (UID: \"882e0084-6891-4852-aecb-2951f5763800\") " pod="openstack/root-account-create-update-9sj79" Mar 11 09:21:20 crc kubenswrapper[4840]: I0311 09:21:20.242458 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrffb\" (UniqueName: \"kubernetes.io/projected/882e0084-6891-4852-aecb-2951f5763800-kube-api-access-xrffb\") pod \"root-account-create-update-9sj79\" (UID: \"882e0084-6891-4852-aecb-2951f5763800\") " pod="openstack/root-account-create-update-9sj79" Mar 11 09:21:20 crc kubenswrapper[4840]: I0311 09:21:20.242703 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jb2lx\" (UniqueName: \"kubernetes.io/projected/40f2a324-9d85-4e2e-ac26-0b710dc379b2-kube-api-access-jb2lx\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:20 crc kubenswrapper[4840]: I0311 09:21:20.242718 4840 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/40f2a324-9d85-4e2e-ac26-0b710dc379b2-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:20 crc kubenswrapper[4840]: I0311 09:21:20.244154 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/882e0084-6891-4852-aecb-2951f5763800-operator-scripts\") pod \"root-account-create-update-9sj79\" (UID: \"882e0084-6891-4852-aecb-2951f5763800\") " pod="openstack/root-account-create-update-9sj79" Mar 11 09:21:20 crc kubenswrapper[4840]: I0311 09:21:20.261591 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrffb\" (UniqueName: 
\"kubernetes.io/projected/882e0084-6891-4852-aecb-2951f5763800-kube-api-access-xrffb\") pod \"root-account-create-update-9sj79\" (UID: \"882e0084-6891-4852-aecb-2951f5763800\") " pod="openstack/root-account-create-update-9sj79" Mar 11 09:21:20 crc kubenswrapper[4840]: I0311 09:21:20.287998 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-db9f7b9c-cdnm7" Mar 11 09:21:20 crc kubenswrapper[4840]: I0311 09:21:20.343388 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ab14b76-8d70-44f2-b986-b3d600c73b60-combined-ca-bundle\") pod \"0ab14b76-8d70-44f2-b986-b3d600c73b60\" (UID: \"0ab14b76-8d70-44f2-b986-b3d600c73b60\") " Mar 11 09:21:20 crc kubenswrapper[4840]: I0311 09:21:20.343448 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0ab14b76-8d70-44f2-b986-b3d600c73b60-logs\") pod \"0ab14b76-8d70-44f2-b986-b3d600c73b60\" (UID: \"0ab14b76-8d70-44f2-b986-b3d600c73b60\") " Mar 11 09:21:20 crc kubenswrapper[4840]: I0311 09:21:20.343634 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0ab14b76-8d70-44f2-b986-b3d600c73b60-config-data-custom\") pod \"0ab14b76-8d70-44f2-b986-b3d600c73b60\" (UID: \"0ab14b76-8d70-44f2-b986-b3d600c73b60\") " Mar 11 09:21:20 crc kubenswrapper[4840]: I0311 09:21:20.343683 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ab14b76-8d70-44f2-b986-b3d600c73b60-config-data\") pod \"0ab14b76-8d70-44f2-b986-b3d600c73b60\" (UID: \"0ab14b76-8d70-44f2-b986-b3d600c73b60\") " Mar 11 09:21:20 crc kubenswrapper[4840]: I0311 09:21:20.343720 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jphpq\" 
(UniqueName: \"kubernetes.io/projected/0ab14b76-8d70-44f2-b986-b3d600c73b60-kube-api-access-jphpq\") pod \"0ab14b76-8d70-44f2-b986-b3d600c73b60\" (UID: \"0ab14b76-8d70-44f2-b986-b3d600c73b60\") " Mar 11 09:21:20 crc kubenswrapper[4840]: I0311 09:21:20.355655 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ab14b76-8d70-44f2-b986-b3d600c73b60-logs" (OuterVolumeSpecName: "logs") pod "0ab14b76-8d70-44f2-b986-b3d600c73b60" (UID: "0ab14b76-8d70-44f2-b986-b3d600c73b60"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:21:20 crc kubenswrapper[4840]: I0311 09:21:20.360061 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ab14b76-8d70-44f2-b986-b3d600c73b60-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "0ab14b76-8d70-44f2-b986-b3d600c73b60" (UID: "0ab14b76-8d70-44f2-b986-b3d600c73b60"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:21:20 crc kubenswrapper[4840]: I0311 09:21:20.385429 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-9sj79" Mar 11 09:21:20 crc kubenswrapper[4840]: I0311 09:21:20.418008 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ab14b76-8d70-44f2-b986-b3d600c73b60-kube-api-access-jphpq" (OuterVolumeSpecName: "kube-api-access-jphpq") pod "0ab14b76-8d70-44f2-b986-b3d600c73b60" (UID: "0ab14b76-8d70-44f2-b986-b3d600c73b60"). InnerVolumeSpecName "kube-api-access-jphpq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:21:20 crc kubenswrapper[4840]: I0311 09:21:20.432126 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ab14b76-8d70-44f2-b986-b3d600c73b60-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0ab14b76-8d70-44f2-b986-b3d600c73b60" (UID: "0ab14b76-8d70-44f2-b986-b3d600c73b60"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:21:20 crc kubenswrapper[4840]: I0311 09:21:20.438780 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ab14b76-8d70-44f2-b986-b3d600c73b60-config-data" (OuterVolumeSpecName: "config-data") pod "0ab14b76-8d70-44f2-b986-b3d600c73b60" (UID: "0ab14b76-8d70-44f2-b986-b3d600c73b60"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:21:20 crc kubenswrapper[4840]: I0311 09:21:20.457707 4840 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ab14b76-8d70-44f2-b986-b3d600c73b60-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:20 crc kubenswrapper[4840]: I0311 09:21:20.457742 4840 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0ab14b76-8d70-44f2-b986-b3d600c73b60-logs\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:20 crc kubenswrapper[4840]: I0311 09:21:20.457753 4840 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0ab14b76-8d70-44f2-b986-b3d600c73b60-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:20 crc kubenswrapper[4840]: I0311 09:21:20.457762 4840 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ab14b76-8d70-44f2-b986-b3d600c73b60-config-data\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:20 
crc kubenswrapper[4840]: I0311 09:21:20.457772 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jphpq\" (UniqueName: \"kubernetes.io/projected/0ab14b76-8d70-44f2-b986-b3d600c73b60-kube-api-access-jphpq\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:20 crc kubenswrapper[4840]: I0311 09:21:20.604974 4840 generic.go:334] "Generic (PLEG): container finished" podID="629115e9-6bcf-45e8-a0da-d7c06386b7b7" containerID="964ebeaba64a80f1be2bba1d82ca1a7e7dffe0224141e942622675fc8b28aeb6" exitCode=143 Mar 11 09:21:20 crc kubenswrapper[4840]: I0311 09:21:20.605537 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"629115e9-6bcf-45e8-a0da-d7c06386b7b7","Type":"ContainerDied","Data":"964ebeaba64a80f1be2bba1d82ca1a7e7dffe0224141e942622675fc8b28aeb6"} Mar 11 09:21:20 crc kubenswrapper[4840]: I0311 09:21:20.628631 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-3c72-account-create-update-2dtm6" event={"ID":"40f2a324-9d85-4e2e-ac26-0b710dc379b2","Type":"ContainerDied","Data":"de8ba4d64066ed5fb1035013d4407c1e52a142bdcbb26aff3582ce57ddf56b42"} Mar 11 09:21:20 crc kubenswrapper[4840]: I0311 09:21:20.628669 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-3c72-account-create-update-2dtm6"
Mar 11 09:21:20 crc kubenswrapper[4840]: I0311 09:21:20.634624 4840 generic.go:334] "Generic (PLEG): container finished" podID="e93bc954-4d65-4b5f-8070-2b9800ff3db2" containerID="b052e1bd7a58acb7f7eff7c22cb37c2b87847e0c994e60dd660321fe2b51b6c8" exitCode=0
Mar 11 09:21:20 crc kubenswrapper[4840]: I0311 09:21:20.634713 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"e93bc954-4d65-4b5f-8070-2b9800ff3db2","Type":"ContainerDied","Data":"b052e1bd7a58acb7f7eff7c22cb37c2b87847e0c994e60dd660321fe2b51b6c8"}
Mar 11 09:21:20 crc kubenswrapper[4840]: I0311 09:21:20.641424 4840 generic.go:334] "Generic (PLEG): container finished" podID="f10cd3a6-0a55-4957-b861-678d9af3c338" containerID="e59d16a9086fcb2441c89141dd63cbc53bf993e3952c1f25bf481d63fd3390fa" exitCode=0
Mar 11 09:21:20 crc kubenswrapper[4840]: I0311 09:21:20.641489 4840 generic.go:334] "Generic (PLEG): container finished" podID="f10cd3a6-0a55-4957-b861-678d9af3c338" containerID="7ec25a62f5da025330f39922a9766d18ca04cc7a4cbe3b4c61c31b0ef51d49f5" exitCode=0
Mar 11 09:21:20 crc kubenswrapper[4840]: I0311 09:21:20.641525 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-97696577-2mh8q" event={"ID":"f10cd3a6-0a55-4957-b861-678d9af3c338","Type":"ContainerDied","Data":"e59d16a9086fcb2441c89141dd63cbc53bf993e3952c1f25bf481d63fd3390fa"}
Mar 11 09:21:20 crc kubenswrapper[4840]: I0311 09:21:20.641591 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-97696577-2mh8q" event={"ID":"f10cd3a6-0a55-4957-b861-678d9af3c338","Type":"ContainerDied","Data":"7ec25a62f5da025330f39922a9766d18ca04cc7a4cbe3b4c61c31b0ef51d49f5"}
Mar 11 09:21:20 crc kubenswrapper[4840]: I0311 09:21:20.646843 4840 generic.go:334] "Generic (PLEG): container finished" podID="15a50bea-c32e-4aed-8fd2-7289e1694f6e" containerID="70a8af150a750503de4de737257241b8eae85fdd8f6880b4e54352c0730d7435" exitCode=143
Mar 11 09:21:20 crc kubenswrapper[4840]: I0311 09:21:20.646938 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7dcc97bc9b-l7z2g" event={"ID":"15a50bea-c32e-4aed-8fd2-7289e1694f6e","Type":"ContainerDied","Data":"70a8af150a750503de4de737257241b8eae85fdd8f6880b4e54352c0730d7435"}
Mar 11 09:21:20 crc kubenswrapper[4840]: I0311 09:21:20.672334 4840 generic.go:334] "Generic (PLEG): container finished" podID="d7acca3e-61e4-495b-adbf-36c435b4a7d2" containerID="7ee049965601ca58aa215725536bdf7c36fb49e30446ec4221604a56f4d368d1" exitCode=0
Mar 11 09:21:20 crc kubenswrapper[4840]: I0311 09:21:20.672389 4840 generic.go:334] "Generic (PLEG): container finished" podID="d7acca3e-61e4-495b-adbf-36c435b4a7d2" containerID="33e49e14fd2afa823c560d68be5b5e4ad273e3e82868e5aba42b52ba56655a7e" exitCode=143
Mar 11 09:21:20 crc kubenswrapper[4840]: I0311 09:21:20.672975 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-84c78b97c8-frfs9" event={"ID":"d7acca3e-61e4-495b-adbf-36c435b4a7d2","Type":"ContainerDied","Data":"7ee049965601ca58aa215725536bdf7c36fb49e30446ec4221604a56f4d368d1"}
Mar 11 09:21:20 crc kubenswrapper[4840]: I0311 09:21:20.673048 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-84c78b97c8-frfs9" event={"ID":"d7acca3e-61e4-495b-adbf-36c435b4a7d2","Type":"ContainerDied","Data":"33e49e14fd2afa823c560d68be5b5e4ad273e3e82868e5aba42b52ba56655a7e"}
Mar 11 09:21:20 crc kubenswrapper[4840]: I0311 09:21:20.684722 4840 generic.go:334] "Generic (PLEG): container finished" podID="fd8b15e6-0bb6-4d79-99aa-765ded51af1d" containerID="8d27b219acdbf1ed0f6cb152b86c9b44e8ed6f976378d28441b06575714a48ed" exitCode=0
Mar 11 09:21:20 crc kubenswrapper[4840]: I0311 09:21:20.684816 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"fd8b15e6-0bb6-4d79-99aa-765ded51af1d","Type":"ContainerDied","Data":"8d27b219acdbf1ed0f6cb152b86c9b44e8ed6f976378d28441b06575714a48ed"}
Mar 11 09:21:20 crc kubenswrapper[4840]: I0311 09:21:20.691271 4840 generic.go:334] "Generic (PLEG): container finished" podID="0ab14b76-8d70-44f2-b986-b3d600c73b60" containerID="403197cd6e2f799a8e5d55b29a90f95a1d072287400393d0ce075725778b4fe2" exitCode=0
Mar 11 09:21:20 crc kubenswrapper[4840]: I0311 09:21:20.691381 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-db9f7b9c-cdnm7"
Mar 11 09:21:20 crc kubenswrapper[4840]: I0311 09:21:20.691383 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-db9f7b9c-cdnm7" event={"ID":"0ab14b76-8d70-44f2-b986-b3d600c73b60","Type":"ContainerDied","Data":"403197cd6e2f799a8e5d55b29a90f95a1d072287400393d0ce075725778b4fe2"}
Mar 11 09:21:20 crc kubenswrapper[4840]: I0311 09:21:20.691451 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-db9f7b9c-cdnm7" event={"ID":"0ab14b76-8d70-44f2-b986-b3d600c73b60","Type":"ContainerDied","Data":"07de301082e0a2bae7b73eca53225371682f43e11308a0ffea360e54fd0fc0a5"}
Mar 11 09:21:20 crc kubenswrapper[4840]: I0311 09:21:20.691488 4840 scope.go:117] "RemoveContainer" containerID="403197cd6e2f799a8e5d55b29a90f95a1d072287400393d0ce075725778b4fe2"
Mar 11 09:21:20 crc kubenswrapper[4840]: I0311 09:21:20.709539 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Mar 11 09:21:20 crc kubenswrapper[4840]: I0311 09:21:20.732261 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-84c78b97c8-frfs9"
Mar 11 09:21:20 crc kubenswrapper[4840]: I0311 09:21:20.737797 4840 generic.go:334] "Generic (PLEG): container finished" podID="666b69a9-5d21-4f9c-83ce-49e6e132e8e9" containerID="bc0ab41733673759705d9fad6da2ab073f93f91b5041cad5c966fdd3db1c9c9b" exitCode=0
Mar 11 09:21:20 crc kubenswrapper[4840]: I0311 09:21:20.737872 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"666b69a9-5d21-4f9c-83ce-49e6e132e8e9","Type":"ContainerDied","Data":"bc0ab41733673759705d9fad6da2ab073f93f91b5041cad5c966fdd3db1c9c9b"}
Mar 11 09:21:20 crc kubenswrapper[4840]: I0311 09:21:20.737948 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Mar 11 09:21:20 crc kubenswrapper[4840]: I0311 09:21:20.744244 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-298f-account-create-update-hn765"
Mar 11 09:21:20 crc kubenswrapper[4840]: I0311 09:21:20.749950 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-3c72-account-create-update-2dtm6"]
Mar 11 09:21:20 crc kubenswrapper[4840]: I0311 09:21:20.755588 4840 scope.go:117] "RemoveContainer" containerID="0a86a4d07fba011f02807e2ed6cc060d80c18aeab359b46ed4a060d9bc9539eb"
Mar 11 09:21:20 crc kubenswrapper[4840]: I0311 09:21:20.761109 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-3c72-account-create-update-2dtm6"]
Mar 11 09:21:20 crc kubenswrapper[4840]: I0311 09:21:20.767744 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/666b69a9-5d21-4f9c-83ce-49e6e132e8e9-combined-ca-bundle\") pod \"666b69a9-5d21-4f9c-83ce-49e6e132e8e9\" (UID: \"666b69a9-5d21-4f9c-83ce-49e6e132e8e9\") "
Mar 11 09:21:20 crc kubenswrapper[4840]: I0311 09:21:20.767857 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6hb29\" (UniqueName: \"kubernetes.io/projected/d7acca3e-61e4-495b-adbf-36c435b4a7d2-kube-api-access-6hb29\") pod \"d7acca3e-61e4-495b-adbf-36c435b4a7d2\" (UID: \"d7acca3e-61e4-495b-adbf-36c435b4a7d2\") "
Mar 11 09:21:20 crc kubenswrapper[4840]: I0311 09:21:20.767914 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d7acca3e-61e4-495b-adbf-36c435b4a7d2-config-data-custom\") pod \"d7acca3e-61e4-495b-adbf-36c435b4a7d2\" (UID: \"d7acca3e-61e4-495b-adbf-36c435b4a7d2\") "
Mar 11 09:21:20 crc kubenswrapper[4840]: I0311 09:21:20.767941 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2stz\" (UniqueName: \"kubernetes.io/projected/666b69a9-5d21-4f9c-83ce-49e6e132e8e9-kube-api-access-x2stz\") pod \"666b69a9-5d21-4f9c-83ce-49e6e132e8e9\" (UID: \"666b69a9-5d21-4f9c-83ce-49e6e132e8e9\") "
Mar 11 09:21:20 crc kubenswrapper[4840]: I0311 09:21:20.767991 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/666b69a9-5d21-4f9c-83ce-49e6e132e8e9-config-data\") pod \"666b69a9-5d21-4f9c-83ce-49e6e132e8e9\" (UID: \"666b69a9-5d21-4f9c-83ce-49e6e132e8e9\") "
Mar 11 09:21:20 crc kubenswrapper[4840]: I0311 09:21:20.768029 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/666b69a9-5d21-4f9c-83ce-49e6e132e8e9-nova-novncproxy-tls-certs\") pod \"666b69a9-5d21-4f9c-83ce-49e6e132e8e9\" (UID: \"666b69a9-5d21-4f9c-83ce-49e6e132e8e9\") "
Mar 11 09:21:20 crc kubenswrapper[4840]: I0311 09:21:20.768078 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/666b69a9-5d21-4f9c-83ce-49e6e132e8e9-vencrypt-tls-certs\") pod \"666b69a9-5d21-4f9c-83ce-49e6e132e8e9\" (UID: \"666b69a9-5d21-4f9c-83ce-49e6e132e8e9\") "
Mar 11 09:21:20 crc kubenswrapper[4840]: I0311 09:21:20.768099 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d7acca3e-61e4-495b-adbf-36c435b4a7d2-logs\") pod \"d7acca3e-61e4-495b-adbf-36c435b4a7d2\" (UID: \"d7acca3e-61e4-495b-adbf-36c435b4a7d2\") "
Mar 11 09:21:20 crc kubenswrapper[4840]: I0311 09:21:20.768139 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7acca3e-61e4-495b-adbf-36c435b4a7d2-config-data\") pod \"d7acca3e-61e4-495b-adbf-36c435b4a7d2\" (UID: \"d7acca3e-61e4-495b-adbf-36c435b4a7d2\") "
Mar 11 09:21:20 crc kubenswrapper[4840]: I0311 09:21:20.768154 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7acca3e-61e4-495b-adbf-36c435b4a7d2-combined-ca-bundle\") pod \"d7acca3e-61e4-495b-adbf-36c435b4a7d2\" (UID: \"d7acca3e-61e4-495b-adbf-36c435b4a7d2\") "
Mar 11 09:21:20 crc kubenswrapper[4840]: I0311 09:21:20.772484 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7acca3e-61e4-495b-adbf-36c435b4a7d2-logs" (OuterVolumeSpecName: "logs") pod "d7acca3e-61e4-495b-adbf-36c435b4a7d2" (UID: "d7acca3e-61e4-495b-adbf-36c435b4a7d2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 11 09:21:20 crc kubenswrapper[4840]: I0311 09:21:20.776845 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/13c22be3-bd7d-45e0-8948-4574e00507c0-operator-scripts\") pod \"nova-cell1-298f-account-create-update-hn765\" (UID: \"13c22be3-bd7d-45e0-8948-4574e00507c0\") " pod="openstack/nova-cell1-298f-account-create-update-hn765"
Mar 11 09:21:20 crc kubenswrapper[4840]: I0311 09:21:20.777000 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2657p\" (UniqueName: \"kubernetes.io/projected/13c22be3-bd7d-45e0-8948-4574e00507c0-kube-api-access-2657p\") pod \"nova-cell1-298f-account-create-update-hn765\" (UID: \"13c22be3-bd7d-45e0-8948-4574e00507c0\") " pod="openstack/nova-cell1-298f-account-create-update-hn765"
Mar 11 09:21:20 crc kubenswrapper[4840]: I0311 09:21:20.777070 4840 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d7acca3e-61e4-495b-adbf-36c435b4a7d2-logs\") on node \"crc\" DevicePath \"\""
Mar 11 09:21:20 crc kubenswrapper[4840]: E0311 09:21:20.778129 4840 configmap.go:193] Couldn't get configMap openstack/openstack-cell1-scripts: configmap "openstack-cell1-scripts" not found
Mar 11 09:21:20 crc kubenswrapper[4840]: E0311 09:21:20.778288 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13c22be3-bd7d-45e0-8948-4574e00507c0-operator-scripts podName:13c22be3-bd7d-45e0-8948-4574e00507c0 nodeName:}" failed. No retries permitted until 2026-03-11 09:21:24.778262793 +0000 UTC m=+1483.443932668 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/13c22be3-bd7d-45e0-8948-4574e00507c0-operator-scripts") pod "nova-cell1-298f-account-create-update-hn765" (UID: "13c22be3-bd7d-45e0-8948-4574e00507c0") : configmap "openstack-cell1-scripts" not found
Mar 11 09:21:20 crc kubenswrapper[4840]: I0311 09:21:20.786709 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7acca3e-61e4-495b-adbf-36c435b4a7d2-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "d7acca3e-61e4-495b-adbf-36c435b4a7d2" (UID: "d7acca3e-61e4-495b-adbf-36c435b4a7d2"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 11 09:21:20 crc kubenswrapper[4840]: E0311 09:21:20.789039 4840 projected.go:194] Error preparing data for projected volume kube-api-access-2657p for pod openstack/nova-cell1-298f-account-create-update-hn765: failed to fetch token: serviceaccounts "galera-openstack-cell1" not found
Mar 11 09:21:20 crc kubenswrapper[4840]: E0311 09:21:20.789120 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/13c22be3-bd7d-45e0-8948-4574e00507c0-kube-api-access-2657p podName:13c22be3-bd7d-45e0-8948-4574e00507c0 nodeName:}" failed. No retries permitted until 2026-03-11 09:21:24.789093736 +0000 UTC m=+1483.454763551 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-2657p" (UniqueName: "kubernetes.io/projected/13c22be3-bd7d-45e0-8948-4574e00507c0-kube-api-access-2657p") pod "nova-cell1-298f-account-create-update-hn765" (UID: "13c22be3-bd7d-45e0-8948-4574e00507c0") : failed to fetch token: serviceaccounts "galera-openstack-cell1" not found
Mar 11 09:21:20 crc kubenswrapper[4840]: I0311 09:21:20.790355 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7acca3e-61e4-495b-adbf-36c435b4a7d2-kube-api-access-6hb29" (OuterVolumeSpecName: "kube-api-access-6hb29") pod "d7acca3e-61e4-495b-adbf-36c435b4a7d2" (UID: "d7acca3e-61e4-495b-adbf-36c435b4a7d2"). InnerVolumeSpecName "kube-api-access-6hb29". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 11 09:21:20 crc kubenswrapper[4840]: I0311 09:21:20.806789 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/666b69a9-5d21-4f9c-83ce-49e6e132e8e9-kube-api-access-x2stz" (OuterVolumeSpecName: "kube-api-access-x2stz") pod "666b69a9-5d21-4f9c-83ce-49e6e132e8e9" (UID: "666b69a9-5d21-4f9c-83ce-49e6e132e8e9"). InnerVolumeSpecName "kube-api-access-x2stz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 11 09:21:20 crc kubenswrapper[4840]: I0311 09:21:20.822709 4840 scope.go:117] "RemoveContainer" containerID="403197cd6e2f799a8e5d55b29a90f95a1d072287400393d0ce075725778b4fe2"
Mar 11 09:21:20 crc kubenswrapper[4840]: E0311 09:21:20.826064 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"403197cd6e2f799a8e5d55b29a90f95a1d072287400393d0ce075725778b4fe2\": container with ID starting with 403197cd6e2f799a8e5d55b29a90f95a1d072287400393d0ce075725778b4fe2 not found: ID does not exist" containerID="403197cd6e2f799a8e5d55b29a90f95a1d072287400393d0ce075725778b4fe2"
Mar 11 09:21:20 crc kubenswrapper[4840]: I0311 09:21:20.826169 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"403197cd6e2f799a8e5d55b29a90f95a1d072287400393d0ce075725778b4fe2"} err="failed to get container status \"403197cd6e2f799a8e5d55b29a90f95a1d072287400393d0ce075725778b4fe2\": rpc error: code = NotFound desc = could not find container \"403197cd6e2f799a8e5d55b29a90f95a1d072287400393d0ce075725778b4fe2\": container with ID starting with 403197cd6e2f799a8e5d55b29a90f95a1d072287400393d0ce075725778b4fe2 not found: ID does not exist"
Mar 11 09:21:20 crc kubenswrapper[4840]: I0311 09:21:20.826240 4840 scope.go:117] "RemoveContainer" containerID="0a86a4d07fba011f02807e2ed6cc060d80c18aeab359b46ed4a060d9bc9539eb"
Mar 11 09:21:20 crc kubenswrapper[4840]: E0311 09:21:20.827273 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a86a4d07fba011f02807e2ed6cc060d80c18aeab359b46ed4a060d9bc9539eb\": container with ID starting with 0a86a4d07fba011f02807e2ed6cc060d80c18aeab359b46ed4a060d9bc9539eb not found: ID does not exist" containerID="0a86a4d07fba011f02807e2ed6cc060d80c18aeab359b46ed4a060d9bc9539eb"
Mar 11 09:21:20 crc kubenswrapper[4840]: I0311 09:21:20.827307 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a86a4d07fba011f02807e2ed6cc060d80c18aeab359b46ed4a060d9bc9539eb"} err="failed to get container status \"0a86a4d07fba011f02807e2ed6cc060d80c18aeab359b46ed4a060d9bc9539eb\": rpc error: code = NotFound desc = could not find container \"0a86a4d07fba011f02807e2ed6cc060d80c18aeab359b46ed4a060d9bc9539eb\": container with ID starting with 0a86a4d07fba011f02807e2ed6cc060d80c18aeab359b46ed4a060d9bc9539eb not found: ID does not exist"
Mar 11 09:21:20 crc kubenswrapper[4840]: I0311 09:21:20.827360 4840 scope.go:117] "RemoveContainer" containerID="bc0ab41733673759705d9fad6da2ab073f93f91b5041cad5c966fdd3db1c9c9b"
Mar 11 09:21:20 crc kubenswrapper[4840]: I0311 09:21:20.879207 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6hb29\" (UniqueName: \"kubernetes.io/projected/d7acca3e-61e4-495b-adbf-36c435b4a7d2-kube-api-access-6hb29\") on node \"crc\" DevicePath \"\""
Mar 11 09:21:20 crc kubenswrapper[4840]: I0311 09:21:20.879518 4840 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d7acca3e-61e4-495b-adbf-36c435b4a7d2-config-data-custom\") on node \"crc\" DevicePath \"\""
Mar 11 09:21:20 crc kubenswrapper[4840]: I0311 09:21:20.879531 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2stz\" (UniqueName: \"kubernetes.io/projected/666b69a9-5d21-4f9c-83ce-49e6e132e8e9-kube-api-access-x2stz\") on node \"crc\" DevicePath \"\""
Mar 11 09:21:20 crc kubenswrapper[4840]: I0311 09:21:20.896882 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/666b69a9-5d21-4f9c-83ce-49e6e132e8e9-vencrypt-tls-certs" (OuterVolumeSpecName: "vencrypt-tls-certs") pod "666b69a9-5d21-4f9c-83ce-49e6e132e8e9" (UID: "666b69a9-5d21-4f9c-83ce-49e6e132e8e9"). InnerVolumeSpecName "vencrypt-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 11 09:21:20 crc kubenswrapper[4840]: I0311 09:21:20.949315 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-db9f7b9c-cdnm7"]
Mar 11 09:21:20 crc kubenswrapper[4840]: I0311 09:21:20.961053 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-worker-db9f7b9c-cdnm7"]
Mar 11 09:21:20 crc kubenswrapper[4840]: I0311 09:21:20.962492 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/666b69a9-5d21-4f9c-83ce-49e6e132e8e9-config-data" (OuterVolumeSpecName: "config-data") pod "666b69a9-5d21-4f9c-83ce-49e6e132e8e9" (UID: "666b69a9-5d21-4f9c-83ce-49e6e132e8e9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 11 09:21:20 crc kubenswrapper[4840]: I0311 09:21:20.981710 4840 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/666b69a9-5d21-4f9c-83ce-49e6e132e8e9-config-data\") on node \"crc\" DevicePath \"\""
Mar 11 09:21:20 crc kubenswrapper[4840]: I0311 09:21:20.981775 4840 reconciler_common.go:293] "Volume detached for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/666b69a9-5d21-4f9c-83ce-49e6e132e8e9-vencrypt-tls-certs\") on node \"crc\" DevicePath \"\""
Mar 11 09:21:20 crc kubenswrapper[4840]: E0311 09:21:20.981859 4840 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found
Mar 11 09:21:20 crc kubenswrapper[4840]: E0311 09:21:20.981916 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f31748d2-64a9-4839-ac55-691d9682ee8e-config-data podName:f31748d2-64a9-4839-ac55-691d9682ee8e nodeName:}" failed. No retries permitted until 2026-03-11 09:21:24.981897599 +0000 UTC m=+1483.647567414 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/f31748d2-64a9-4839-ac55-691d9682ee8e-config-data") pod "rabbitmq-server-0" (UID: "f31748d2-64a9-4839-ac55-691d9682ee8e") : configmap "rabbitmq-config-data" not found
Mar 11 09:21:20 crc kubenswrapper[4840]: I0311 09:21:20.983167 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-298f-account-create-update-hn765"]
Mar 11 09:21:20 crc kubenswrapper[4840]: I0311 09:21:20.994404 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-298f-account-create-update-hn765"]
Mar 11 09:21:21 crc kubenswrapper[4840]: I0311 09:21:21.021983 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Mar 11 09:21:21 crc kubenswrapper[4840]: I0311 09:21:21.034401 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7acca3e-61e4-495b-adbf-36c435b4a7d2-config-data" (OuterVolumeSpecName: "config-data") pod "d7acca3e-61e4-495b-adbf-36c435b4a7d2" (UID: "d7acca3e-61e4-495b-adbf-36c435b4a7d2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 11 09:21:21 crc kubenswrapper[4840]: I0311 09:21:21.056505 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/666b69a9-5d21-4f9c-83ce-49e6e132e8e9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "666b69a9-5d21-4f9c-83ce-49e6e132e8e9" (UID: "666b69a9-5d21-4f9c-83ce-49e6e132e8e9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 11 09:21:21 crc kubenswrapper[4840]: I0311 09:21:21.071985 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7acca3e-61e4-495b-adbf-36c435b4a7d2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d7acca3e-61e4-495b-adbf-36c435b4a7d2" (UID: "d7acca3e-61e4-495b-adbf-36c435b4a7d2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 11 09:21:21 crc kubenswrapper[4840]: I0311 09:21:21.083556 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e93bc954-4d65-4b5f-8070-2b9800ff3db2-config-data-default\") pod \"e93bc954-4d65-4b5f-8070-2b9800ff3db2\" (UID: \"e93bc954-4d65-4b5f-8070-2b9800ff3db2\") "
Mar 11 09:21:21 crc kubenswrapper[4840]: I0311 09:21:21.083700 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/e93bc954-4d65-4b5f-8070-2b9800ff3db2-galera-tls-certs\") pod \"e93bc954-4d65-4b5f-8070-2b9800ff3db2\" (UID: \"e93bc954-4d65-4b5f-8070-2b9800ff3db2\") "
Mar 11 09:21:21 crc kubenswrapper[4840]: I0311 09:21:21.083803 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lf8xv\" (UniqueName: \"kubernetes.io/projected/e93bc954-4d65-4b5f-8070-2b9800ff3db2-kube-api-access-lf8xv\") pod \"e93bc954-4d65-4b5f-8070-2b9800ff3db2\" (UID: \"e93bc954-4d65-4b5f-8070-2b9800ff3db2\") "
Mar 11 09:21:21 crc kubenswrapper[4840]: I0311 09:21:21.083813 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e93bc954-4d65-4b5f-8070-2b9800ff3db2-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "e93bc954-4d65-4b5f-8070-2b9800ff3db2" (UID: "e93bc954-4d65-4b5f-8070-2b9800ff3db2"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 11 09:21:21 crc kubenswrapper[4840]: I0311 09:21:21.084085 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e93bc954-4d65-4b5f-8070-2b9800ff3db2-operator-scripts\") pod \"e93bc954-4d65-4b5f-8070-2b9800ff3db2\" (UID: \"e93bc954-4d65-4b5f-8070-2b9800ff3db2\") "
Mar 11 09:21:21 crc kubenswrapper[4840]: I0311 09:21:21.084180 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"e93bc954-4d65-4b5f-8070-2b9800ff3db2\" (UID: \"e93bc954-4d65-4b5f-8070-2b9800ff3db2\") "
Mar 11 09:21:21 crc kubenswrapper[4840]: I0311 09:21:21.084251 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e93bc954-4d65-4b5f-8070-2b9800ff3db2-config-data-generated\") pod \"e93bc954-4d65-4b5f-8070-2b9800ff3db2\" (UID: \"e93bc954-4d65-4b5f-8070-2b9800ff3db2\") "
Mar 11 09:21:21 crc kubenswrapper[4840]: I0311 09:21:21.084284 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e93bc954-4d65-4b5f-8070-2b9800ff3db2-combined-ca-bundle\") pod \"e93bc954-4d65-4b5f-8070-2b9800ff3db2\" (UID: \"e93bc954-4d65-4b5f-8070-2b9800ff3db2\") "
Mar 11 09:21:21 crc kubenswrapper[4840]: I0311 09:21:21.084315 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e93bc954-4d65-4b5f-8070-2b9800ff3db2-kolla-config\") pod \"e93bc954-4d65-4b5f-8070-2b9800ff3db2\" (UID: \"e93bc954-4d65-4b5f-8070-2b9800ff3db2\") "
Mar 11 09:21:21 crc kubenswrapper[4840]: I0311 09:21:21.084881 4840 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7acca3e-61e4-495b-adbf-36c435b4a7d2-config-data\") on node \"crc\" DevicePath \"\""
Mar 11 09:21:21 crc kubenswrapper[4840]: I0311 09:21:21.084910 4840 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7acca3e-61e4-495b-adbf-36c435b4a7d2-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 11 09:21:21 crc kubenswrapper[4840]: I0311 09:21:21.084929 4840 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/13c22be3-bd7d-45e0-8948-4574e00507c0-operator-scripts\") on node \"crc\" DevicePath \"\""
Mar 11 09:21:21 crc kubenswrapper[4840]: I0311 09:21:21.085598 4840 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/666b69a9-5d21-4f9c-83ce-49e6e132e8e9-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 11 09:21:21 crc kubenswrapper[4840]: I0311 09:21:21.085623 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2657p\" (UniqueName: \"kubernetes.io/projected/13c22be3-bd7d-45e0-8948-4574e00507c0-kube-api-access-2657p\") on node \"crc\" DevicePath \"\""
Mar 11 09:21:21 crc kubenswrapper[4840]: I0311 09:21:21.085661 4840 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e93bc954-4d65-4b5f-8070-2b9800ff3db2-config-data-default\") on node \"crc\" DevicePath \"\""
Mar 11 09:21:21 crc kubenswrapper[4840]: I0311 09:21:21.085061 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e93bc954-4d65-4b5f-8070-2b9800ff3db2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e93bc954-4d65-4b5f-8070-2b9800ff3db2" (UID: "e93bc954-4d65-4b5f-8070-2b9800ff3db2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 11 09:21:21 crc kubenswrapper[4840]: I0311 09:21:21.085651 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e93bc954-4d65-4b5f-8070-2b9800ff3db2-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "e93bc954-4d65-4b5f-8070-2b9800ff3db2" (UID: "e93bc954-4d65-4b5f-8070-2b9800ff3db2"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 11 09:21:21 crc kubenswrapper[4840]: I0311 09:21:21.085982 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e93bc954-4d65-4b5f-8070-2b9800ff3db2-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "e93bc954-4d65-4b5f-8070-2b9800ff3db2" (UID: "e93bc954-4d65-4b5f-8070-2b9800ff3db2"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 11 09:21:21 crc kubenswrapper[4840]: I0311 09:21:21.089835 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e93bc954-4d65-4b5f-8070-2b9800ff3db2-kube-api-access-lf8xv" (OuterVolumeSpecName: "kube-api-access-lf8xv") pod "e93bc954-4d65-4b5f-8070-2b9800ff3db2" (UID: "e93bc954-4d65-4b5f-8070-2b9800ff3db2"). InnerVolumeSpecName "kube-api-access-lf8xv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 11 09:21:21 crc kubenswrapper[4840]: I0311 09:21:21.104315 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "mysql-db") pod "e93bc954-4d65-4b5f-8070-2b9800ff3db2" (UID: "e93bc954-4d65-4b5f-8070-2b9800ff3db2"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Mar 11 09:21:21 crc kubenswrapper[4840]: I0311 09:21:21.117651 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-97696577-2mh8q"
Mar 11 09:21:21 crc kubenswrapper[4840]: I0311 09:21:21.124863 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/666b69a9-5d21-4f9c-83ce-49e6e132e8e9-nova-novncproxy-tls-certs" (OuterVolumeSpecName: "nova-novncproxy-tls-certs") pod "666b69a9-5d21-4f9c-83ce-49e6e132e8e9" (UID: "666b69a9-5d21-4f9c-83ce-49e6e132e8e9"). InnerVolumeSpecName "nova-novncproxy-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 11 09:21:21 crc kubenswrapper[4840]: I0311 09:21:21.140746 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e93bc954-4d65-4b5f-8070-2b9800ff3db2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e93bc954-4d65-4b5f-8070-2b9800ff3db2" (UID: "e93bc954-4d65-4b5f-8070-2b9800ff3db2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 11 09:21:21 crc kubenswrapper[4840]: I0311 09:21:21.174100 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e93bc954-4d65-4b5f-8070-2b9800ff3db2-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "e93bc954-4d65-4b5f-8070-2b9800ff3db2" (UID: "e93bc954-4d65-4b5f-8070-2b9800ff3db2"). InnerVolumeSpecName "galera-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 11 09:21:21 crc kubenswrapper[4840]: I0311 09:21:21.186828 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f10cd3a6-0a55-4957-b861-678d9af3c338-etc-swift\") pod \"f10cd3a6-0a55-4957-b861-678d9af3c338\" (UID: \"f10cd3a6-0a55-4957-b861-678d9af3c338\") "
Mar 11 09:21:21 crc kubenswrapper[4840]: I0311 09:21:21.187298 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f10cd3a6-0a55-4957-b861-678d9af3c338-internal-tls-certs\") pod \"f10cd3a6-0a55-4957-b861-678d9af3c338\" (UID: \"f10cd3a6-0a55-4957-b861-678d9af3c338\") "
Mar 11 09:21:21 crc kubenswrapper[4840]: I0311 09:21:21.187359 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f10cd3a6-0a55-4957-b861-678d9af3c338-combined-ca-bundle\") pod \"f10cd3a6-0a55-4957-b861-678d9af3c338\" (UID: \"f10cd3a6-0a55-4957-b861-678d9af3c338\") "
Mar 11 09:21:21 crc kubenswrapper[4840]: I0311 09:21:21.187403 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f10cd3a6-0a55-4957-b861-678d9af3c338-run-httpd\") pod \"f10cd3a6-0a55-4957-b861-678d9af3c338\" (UID: \"f10cd3a6-0a55-4957-b861-678d9af3c338\") "
Mar 11 09:21:21 crc kubenswrapper[4840]: I0311 09:21:21.187443 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f10cd3a6-0a55-4957-b861-678d9af3c338-log-httpd\") pod \"f10cd3a6-0a55-4957-b861-678d9af3c338\" (UID: \"f10cd3a6-0a55-4957-b861-678d9af3c338\") "
Mar 11 09:21:21 crc kubenswrapper[4840]: I0311 09:21:21.187549 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f10cd3a6-0a55-4957-b861-678d9af3c338-config-data\") pod \"f10cd3a6-0a55-4957-b861-678d9af3c338\" (UID: \"f10cd3a6-0a55-4957-b861-678d9af3c338\") "
Mar 11 09:21:21 crc kubenswrapper[4840]: I0311 09:21:21.187593 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9wf8r\" (UniqueName: \"kubernetes.io/projected/f10cd3a6-0a55-4957-b861-678d9af3c338-kube-api-access-9wf8r\") pod \"f10cd3a6-0a55-4957-b861-678d9af3c338\" (UID: \"f10cd3a6-0a55-4957-b861-678d9af3c338\") "
Mar 11 09:21:21 crc kubenswrapper[4840]: I0311 09:21:21.187675 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f10cd3a6-0a55-4957-b861-678d9af3c338-public-tls-certs\") pod \"f10cd3a6-0a55-4957-b861-678d9af3c338\" (UID: \"f10cd3a6-0a55-4957-b861-678d9af3c338\") "
Mar 11 09:21:21 crc kubenswrapper[4840]: I0311 09:21:21.188114 4840 reconciler_common.go:293] "Volume detached for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/666b69a9-5d21-4f9c-83ce-49e6e132e8e9-nova-novncproxy-tls-certs\") on node \"crc\" DevicePath \"\""
Mar 11 09:21:21 crc kubenswrapper[4840]: I0311 09:21:21.188126 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lf8xv\" (UniqueName: \"kubernetes.io/projected/e93bc954-4d65-4b5f-8070-2b9800ff3db2-kube-api-access-lf8xv\") on node \"crc\" DevicePath \"\""
Mar 11 09:21:21 crc kubenswrapper[4840]: I0311 09:21:21.188136 4840 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e93bc954-4d65-4b5f-8070-2b9800ff3db2-operator-scripts\") on node \"crc\" DevicePath \"\""
Mar 11 09:21:21 crc kubenswrapper[4840]: I0311 09:21:21.188158 4840 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" "
Mar 11 09:21:21 crc kubenswrapper[4840]: I0311 09:21:21.188168 4840 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e93bc954-4d65-4b5f-8070-2b9800ff3db2-config-data-generated\") on node \"crc\" DevicePath \"\""
Mar 11 09:21:21 crc kubenswrapper[4840]: I0311 09:21:21.188177 4840 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e93bc954-4d65-4b5f-8070-2b9800ff3db2-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 11 09:21:21 crc kubenswrapper[4840]: I0311 09:21:21.188188 4840 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e93bc954-4d65-4b5f-8070-2b9800ff3db2-kolla-config\") on node \"crc\" DevicePath \"\""
Mar 11 09:21:21 crc kubenswrapper[4840]: I0311 09:21:21.188197 4840 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/e93bc954-4d65-4b5f-8070-2b9800ff3db2-galera-tls-certs\") on node \"crc\" DevicePath \"\""
Mar 11 09:21:21 crc kubenswrapper[4840]: I0311 09:21:21.188155 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f10cd3a6-0a55-4957-b861-678d9af3c338-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f10cd3a6-0a55-4957-b861-678d9af3c338" (UID: "f10cd3a6-0a55-4957-b861-678d9af3c338"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 11 09:21:21 crc kubenswrapper[4840]: I0311 09:21:21.188350 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f10cd3a6-0a55-4957-b861-678d9af3c338-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f10cd3a6-0a55-4957-b861-678d9af3c338" (UID: "f10cd3a6-0a55-4957-b861-678d9af3c338"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 11 09:21:21 crc kubenswrapper[4840]: I0311 09:21:21.191484 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f10cd3a6-0a55-4957-b861-678d9af3c338-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "f10cd3a6-0a55-4957-b861-678d9af3c338" (UID: "f10cd3a6-0a55-4957-b861-678d9af3c338"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 11 09:21:21 crc kubenswrapper[4840]: I0311 09:21:21.200421 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f10cd3a6-0a55-4957-b861-678d9af3c338-kube-api-access-9wf8r" (OuterVolumeSpecName: "kube-api-access-9wf8r") pod "f10cd3a6-0a55-4957-b861-678d9af3c338" (UID: "f10cd3a6-0a55-4957-b861-678d9af3c338"). InnerVolumeSpecName "kube-api-access-9wf8r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 11 09:21:21 crc kubenswrapper[4840]: I0311 09:21:21.243662 4840 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc"
Mar 11 09:21:21 crc kubenswrapper[4840]: I0311 09:21:21.246144 4840 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="86c17dbf-d890-4de3-bf5d-29e0aea4d968" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.178:8776/healthcheck\": read tcp 10.217.0.2:56184->10.217.0.178:8776: read: connection reset by peer"
Mar 11 09:21:21 crc kubenswrapper[4840]: I0311 09:21:21.254872 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f10cd3a6-0a55-4957-b861-678d9af3c338-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "f10cd3a6-0a55-4957-b861-678d9af3c338" (UID: "f10cd3a6-0a55-4957-b861-678d9af3c338"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:21:21 crc kubenswrapper[4840]: I0311 09:21:21.255139 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f10cd3a6-0a55-4957-b861-678d9af3c338-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f10cd3a6-0a55-4957-b861-678d9af3c338" (UID: "f10cd3a6-0a55-4957-b861-678d9af3c338"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:21:21 crc kubenswrapper[4840]: E0311 09:21:21.299511 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f10cd3a6-0a55-4957-b861-678d9af3c338-config-data podName:f10cd3a6-0a55-4957-b861-678d9af3c338 nodeName:}" failed. No retries permitted until 2026-03-11 09:21:21.799483084 +0000 UTC m=+1480.465152899 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "config-data" (UniqueName: "kubernetes.io/secret/f10cd3a6-0a55-4957-b861-678d9af3c338-config-data") pod "f10cd3a6-0a55-4957-b861-678d9af3c338" (UID: "f10cd3a6-0a55-4957-b861-678d9af3c338") : error deleting /var/lib/kubelet/pods/f10cd3a6-0a55-4957-b861-678d9af3c338/volume-subpaths: remove /var/lib/kubelet/pods/f10cd3a6-0a55-4957-b861-678d9af3c338/volume-subpaths: no such file or directory Mar 11 09:21:21 crc kubenswrapper[4840]: I0311 09:21:21.302724 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f10cd3a6-0a55-4957-b861-678d9af3c338-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "f10cd3a6-0a55-4957-b861-678d9af3c338" (UID: "f10cd3a6-0a55-4957-b861-678d9af3c338"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:21:21 crc kubenswrapper[4840]: I0311 09:21:21.306134 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9wf8r\" (UniqueName: \"kubernetes.io/projected/f10cd3a6-0a55-4957-b861-678d9af3c338-kube-api-access-9wf8r\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:21 crc kubenswrapper[4840]: I0311 09:21:21.310092 4840 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:21 crc kubenswrapper[4840]: I0311 09:21:21.310125 4840 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f10cd3a6-0a55-4957-b861-678d9af3c338-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:21 crc kubenswrapper[4840]: I0311 09:21:21.310136 4840 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f10cd3a6-0a55-4957-b861-678d9af3c338-etc-swift\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:21 crc kubenswrapper[4840]: I0311 09:21:21.310149 4840 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f10cd3a6-0a55-4957-b861-678d9af3c338-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:21 crc kubenswrapper[4840]: I0311 09:21:21.310160 4840 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f10cd3a6-0a55-4957-b861-678d9af3c338-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:21 crc kubenswrapper[4840]: I0311 09:21:21.310169 4840 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f10cd3a6-0a55-4957-b861-678d9af3c338-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:21 crc kubenswrapper[4840]: I0311 09:21:21.412647 4840 reconciler_common.go:293] "Volume 
detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f10cd3a6-0a55-4957-b861-678d9af3c338-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:21 crc kubenswrapper[4840]: E0311 09:21:21.672900 4840 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ab8019a8eaec3a085cd72641d3e46b29afc065b94fc74ebd4f5ad09f0611dd09 is running failed: container process not found" containerID="ab8019a8eaec3a085cd72641d3e46b29afc065b94fc74ebd4f5ad09f0611dd09" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Mar 11 09:21:21 crc kubenswrapper[4840]: E0311 09:21:21.674558 4840 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="dbb5712b558407a1f6f6d13982b8b5ac8e7a4f375b85b99c3ab24d6067d8124d" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Mar 11 09:21:21 crc kubenswrapper[4840]: E0311 09:21:21.674641 4840 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ab8019a8eaec3a085cd72641d3e46b29afc065b94fc74ebd4f5ad09f0611dd09 is running failed: container process not found" containerID="ab8019a8eaec3a085cd72641d3e46b29afc065b94fc74ebd4f5ad09f0611dd09" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Mar 11 09:21:21 crc kubenswrapper[4840]: E0311 09:21:21.675363 4840 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ab8019a8eaec3a085cd72641d3e46b29afc065b94fc74ebd4f5ad09f0611dd09 is running failed: container process not found" containerID="ab8019a8eaec3a085cd72641d3e46b29afc065b94fc74ebd4f5ad09f0611dd09" 
cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Mar 11 09:21:21 crc kubenswrapper[4840]: E0311 09:21:21.675410 4840 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ab8019a8eaec3a085cd72641d3e46b29afc065b94fc74ebd4f5ad09f0611dd09 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-qtcdv" podUID="b58b63e6-0eb4-444e-be2e-dca6bf37030e" containerName="ovsdb-server" Mar 11 09:21:21 crc kubenswrapper[4840]: E0311 09:21:21.685210 4840 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="dbb5712b558407a1f6f6d13982b8b5ac8e7a4f375b85b99c3ab24d6067d8124d" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Mar 11 09:21:21 crc kubenswrapper[4840]: I0311 09:21:21.710603 4840 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-g2p7c" podUID="056df7c0-d577-4908-91a8-b5dfb95e0316" containerName="ovn-controller" probeResult="failure" output=< Mar 11 09:21:21 crc kubenswrapper[4840]: ERROR - Failed to get connection status from ovn-controller, ovn-appctl exit status: 0 Mar 11 09:21:21 crc kubenswrapper[4840]: > Mar 11 09:21:21 crc kubenswrapper[4840]: E0311 09:21:21.721891 4840 configmap.go:193] Couldn't get configMap openstack/openstack-cell1-scripts: configmap "openstack-cell1-scripts" not found Mar 11 09:21:21 crc kubenswrapper[4840]: E0311 09:21:21.722008 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/aa334d0b-a179-4905-a660-05bbc12e5c02-operator-scripts podName:aa334d0b-a179-4905-a660-05bbc12e5c02 nodeName:}" failed. No retries permitted until 2026-03-11 09:21:25.721977179 +0000 UTC m=+1484.387646994 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/aa334d0b-a179-4905-a660-05bbc12e5c02-operator-scripts") pod "root-account-create-update-q29dz" (UID: "aa334d0b-a179-4905-a660-05bbc12e5c02") : configmap "openstack-cell1-scripts" not found Mar 11 09:21:21 crc kubenswrapper[4840]: E0311 09:21:21.734637 4840 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="dbb5712b558407a1f6f6d13982b8b5ac8e7a4f375b85b99c3ab24d6067d8124d" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Mar 11 09:21:21 crc kubenswrapper[4840]: E0311 09:21:21.734720 4840 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-qtcdv" podUID="b58b63e6-0eb4-444e-be2e-dca6bf37030e" containerName="ovs-vswitchd" Mar 11 09:21:21 crc kubenswrapper[4840]: I0311 09:21:21.761394 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-28ac-account-create-update-4b8kv" event={"ID":"822a8132-1b8a-4f19-b42b-b6acd65e7743","Type":"ContainerDied","Data":"2d5037fea20c401e1fd5a095ce410babc2e9dd24b321d3a5cda44a0b3eb92280"} Mar 11 09:21:21 crc kubenswrapper[4840]: I0311 09:21:21.761444 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2d5037fea20c401e1fd5a095ce410babc2e9dd24b321d3a5cda44a0b3eb92280" Mar 11 09:21:21 crc kubenswrapper[4840]: I0311 09:21:21.777005 4840 generic.go:334] "Generic (PLEG): container finished" podID="f415e24c-207c-4dc7-b68c-14180ac09391" containerID="9c0b5e241b0b51a690dd9f346219c8bee2d0c8e7039ad56b281f6ca277463616" exitCode=0 Mar 11 09:21:21 crc kubenswrapper[4840]: I0311 09:21:21.777118 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/placement-5bc9bbdff8-zczrp" event={"ID":"f415e24c-207c-4dc7-b68c-14180ac09391","Type":"ContainerDied","Data":"9c0b5e241b0b51a690dd9f346219c8bee2d0c8e7039ad56b281f6ca277463616"} Mar 11 09:21:21 crc kubenswrapper[4840]: I0311 09:21:21.777155 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5bc9bbdff8-zczrp" event={"ID":"f415e24c-207c-4dc7-b68c-14180ac09391","Type":"ContainerDied","Data":"402a0d6c12cf3081f162b44a7bac1199ca1b57a8d2f74b430f973ab11ba90cd7"} Mar 11 09:21:21 crc kubenswrapper[4840]: I0311 09:21:21.777170 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="402a0d6c12cf3081f162b44a7bac1199ca1b57a8d2f74b430f973ab11ba90cd7" Mar 11 09:21:21 crc kubenswrapper[4840]: I0311 09:21:21.802920 4840 generic.go:334] "Generic (PLEG): container finished" podID="86c17dbf-d890-4de3-bf5d-29e0aea4d968" containerID="97908e7657b276e714bdd7983d1b6b792bc1fff3b99535e851314e2428338b75" exitCode=0 Mar 11 09:21:21 crc kubenswrapper[4840]: I0311 09:21:21.803016 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"86c17dbf-d890-4de3-bf5d-29e0aea4d968","Type":"ContainerDied","Data":"97908e7657b276e714bdd7983d1b6b792bc1fff3b99535e851314e2428338b75"} Mar 11 09:21:21 crc kubenswrapper[4840]: I0311 09:21:21.821000 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-bc41-account-create-update-kjhwj" Mar 11 09:21:21 crc kubenswrapper[4840]: I0311 09:21:21.821593 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"666b69a9-5d21-4f9c-83ce-49e6e132e8e9","Type":"ContainerDied","Data":"c8dd15eda1c58bb68283006caea24257ef51479f9335c7236545f1571d004289"} Mar 11 09:21:21 crc kubenswrapper[4840]: I0311 09:21:21.823542 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-0f23-account-create-update-mbjxd" Mar 11 09:21:21 crc kubenswrapper[4840]: I0311 09:21:21.824248 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f10cd3a6-0a55-4957-b861-678d9af3c338-config-data\") pod \"f10cd3a6-0a55-4957-b861-678d9af3c338\" (UID: \"f10cd3a6-0a55-4957-b861-678d9af3c338\") " Mar 11 09:21:21 crc kubenswrapper[4840]: I0311 09:21:21.829037 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f10cd3a6-0a55-4957-b861-678d9af3c338-config-data" (OuterVolumeSpecName: "config-data") pod "f10cd3a6-0a55-4957-b861-678d9af3c338" (UID: "f10cd3a6-0a55-4957-b861-678d9af3c338"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:21:21 crc kubenswrapper[4840]: I0311 09:21:21.829216 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-q29dz" event={"ID":"aa334d0b-a179-4905-a660-05bbc12e5c02","Type":"ContainerDied","Data":"bf892370d3ab9e2560925e61ce95e7cd5a97d693a5fea6dd88c30dc813e4e125"} Mar 11 09:21:21 crc kubenswrapper[4840]: I0311 09:21:21.829276 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf892370d3ab9e2560925e61ce95e7cd5a97d693a5fea6dd88c30dc813e4e125" Mar 11 09:21:21 crc kubenswrapper[4840]: I0311 09:21:21.830583 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 11 09:21:21 crc kubenswrapper[4840]: I0311 09:21:21.841753 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-28ac-account-create-update-4b8kv" Mar 11 09:21:21 crc kubenswrapper[4840]: E0311 09:21:21.841895 4840 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a8b0caed3a79d6574e838b5ca30ae9f7dc0d2e8987551e440fb3bb935fcc2b90 is running failed: container process not found" containerID="a8b0caed3a79d6574e838b5ca30ae9f7dc0d2e8987551e440fb3bb935fcc2b90" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Mar 11 09:21:21 crc kubenswrapper[4840]: E0311 09:21:21.847704 4840 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a8b0caed3a79d6574e838b5ca30ae9f7dc0d2e8987551e440fb3bb935fcc2b90 is running failed: container process not found" containerID="a8b0caed3a79d6574e838b5ca30ae9f7dc0d2e8987551e440fb3bb935fcc2b90" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Mar 11 09:21:21 crc kubenswrapper[4840]: E0311 09:21:21.851859 4840 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a8b0caed3a79d6574e838b5ca30ae9f7dc0d2e8987551e440fb3bb935fcc2b90 is running failed: container process not found" containerID="a8b0caed3a79d6574e838b5ca30ae9f7dc0d2e8987551e440fb3bb935fcc2b90" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Mar 11 09:21:21 crc kubenswrapper[4840]: E0311 09:21:21.852002 4840 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a8b0caed3a79d6574e838b5ca30ae9f7dc0d2e8987551e440fb3bb935fcc2b90 is running failed: container process not found" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="95c08694-92ed-44cb-8ca3-92a47b5571d4" containerName="nova-cell0-conductor-conductor" Mar 11 09:21:21 crc kubenswrapper[4840]: I0311 09:21:21.870995 4840 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 11 09:21:21 crc kubenswrapper[4840]: I0311 09:21:21.872946 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-84c78b97c8-frfs9" event={"ID":"d7acca3e-61e4-495b-adbf-36c435b4a7d2","Type":"ContainerDied","Data":"0d6a73c7be414144b5b39d108da60d4f4d294a3e15759f0745cc1e91bf77052c"} Mar 11 09:21:21 crc kubenswrapper[4840]: I0311 09:21:21.873008 4840 scope.go:117] "RemoveContainer" containerID="7ee049965601ca58aa215725536bdf7c36fb49e30446ec4221604a56f4d368d1" Mar 11 09:21:21 crc kubenswrapper[4840]: I0311 09:21:21.873138 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-84c78b97c8-frfs9" Mar 11 09:21:21 crc kubenswrapper[4840]: I0311 09:21:21.878737 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-0f23-account-create-update-mbjxd" event={"ID":"b4873de8-8c37-49b7-a8ba-b352a2cf0320","Type":"ContainerDied","Data":"cc3143ee5795b8b5fc103fc2a51c7b5e869ce56a236d9a328707bb7efcec05f2"} Mar 11 09:21:21 crc kubenswrapper[4840]: I0311 09:21:21.878859 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-0f23-account-create-update-mbjxd" Mar 11 09:21:21 crc kubenswrapper[4840]: I0311 09:21:21.879380 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-q29dz" Mar 11 09:21:21 crc kubenswrapper[4840]: I0311 09:21:21.930342 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l5bwt\" (UniqueName: \"kubernetes.io/projected/aa334d0b-a179-4905-a660-05bbc12e5c02-kube-api-access-l5bwt\") pod \"aa334d0b-a179-4905-a660-05bbc12e5c02\" (UID: \"aa334d0b-a179-4905-a660-05bbc12e5c02\") " Mar 11 09:21:21 crc kubenswrapper[4840]: I0311 09:21:21.931712 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2l9l9\" (UniqueName: \"kubernetes.io/projected/b4873de8-8c37-49b7-a8ba-b352a2cf0320-kube-api-access-2l9l9\") pod \"b4873de8-8c37-49b7-a8ba-b352a2cf0320\" (UID: \"b4873de8-8c37-49b7-a8ba-b352a2cf0320\") " Mar 11 09:21:21 crc kubenswrapper[4840]: I0311 09:21:21.931969 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6xbdt\" (UniqueName: \"kubernetes.io/projected/fc871f48-4882-49a3-be1f-80d95c2548a9-kube-api-access-6xbdt\") pod \"fc871f48-4882-49a3-be1f-80d95c2548a9\" (UID: \"fc871f48-4882-49a3-be1f-80d95c2548a9\") " Mar 11 09:21:21 crc kubenswrapper[4840]: I0311 09:21:21.932293 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa334d0b-a179-4905-a660-05bbc12e5c02-operator-scripts\") pod \"aa334d0b-a179-4905-a660-05bbc12e5c02\" (UID: \"aa334d0b-a179-4905-a660-05bbc12e5c02\") " Mar 11 09:21:21 crc kubenswrapper[4840]: I0311 09:21:21.932495 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/822a8132-1b8a-4f19-b42b-b6acd65e7743-operator-scripts\") pod \"822a8132-1b8a-4f19-b42b-b6acd65e7743\" (UID: \"822a8132-1b8a-4f19-b42b-b6acd65e7743\") " Mar 11 09:21:21 crc kubenswrapper[4840]: I0311 09:21:21.932620 4840 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-6rhcz\" (UniqueName: \"kubernetes.io/projected/822a8132-1b8a-4f19-b42b-b6acd65e7743-kube-api-access-6rhcz\") pod \"822a8132-1b8a-4f19-b42b-b6acd65e7743\" (UID: \"822a8132-1b8a-4f19-b42b-b6acd65e7743\") " Mar 11 09:21:21 crc kubenswrapper[4840]: I0311 09:21:21.935401 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4873de8-8c37-49b7-a8ba-b352a2cf0320-operator-scripts\") pod \"b4873de8-8c37-49b7-a8ba-b352a2cf0320\" (UID: \"b4873de8-8c37-49b7-a8ba-b352a2cf0320\") " Mar 11 09:21:21 crc kubenswrapper[4840]: I0311 09:21:21.938530 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc871f48-4882-49a3-be1f-80d95c2548a9-operator-scripts\") pod \"fc871f48-4882-49a3-be1f-80d95c2548a9\" (UID: \"fc871f48-4882-49a3-be1f-80d95c2548a9\") " Mar 11 09:21:21 crc kubenswrapper[4840]: I0311 09:21:21.945235 4840 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f10cd3a6-0a55-4957-b861-678d9af3c338-config-data\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:21 crc kubenswrapper[4840]: I0311 09:21:21.948177 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa334d0b-a179-4905-a660-05bbc12e5c02-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "aa334d0b-a179-4905-a660-05bbc12e5c02" (UID: "aa334d0b-a179-4905-a660-05bbc12e5c02"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:21:21 crc kubenswrapper[4840]: I0311 09:21:21.959052 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc871f48-4882-49a3-be1f-80d95c2548a9-kube-api-access-6xbdt" (OuterVolumeSpecName: "kube-api-access-6xbdt") pod "fc871f48-4882-49a3-be1f-80d95c2548a9" (UID: "fc871f48-4882-49a3-be1f-80d95c2548a9"). InnerVolumeSpecName "kube-api-access-6xbdt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:21:21 crc kubenswrapper[4840]: I0311 09:21:21.960037 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc871f48-4882-49a3-be1f-80d95c2548a9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fc871f48-4882-49a3-be1f-80d95c2548a9" (UID: "fc871f48-4882-49a3-be1f-80d95c2548a9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:21:21 crc kubenswrapper[4840]: I0311 09:21:21.961156 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4873de8-8c37-49b7-a8ba-b352a2cf0320-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b4873de8-8c37-49b7-a8ba-b352a2cf0320" (UID: "b4873de8-8c37-49b7-a8ba-b352a2cf0320"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:21:21 crc kubenswrapper[4840]: I0311 09:21:21.935031 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"e93bc954-4d65-4b5f-8070-2b9800ff3db2","Type":"ContainerDied","Data":"4bd9e179773f0fb3c0b3a85a8322d1e11a73ef520f1e644906ceba1e874a3aa3"} Mar 11 09:21:21 crc kubenswrapper[4840]: I0311 09:21:21.935219 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Mar 11 09:21:21 crc kubenswrapper[4840]: I0311 09:21:21.966067 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/822a8132-1b8a-4f19-b42b-b6acd65e7743-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "822a8132-1b8a-4f19-b42b-b6acd65e7743" (UID: "822a8132-1b8a-4f19-b42b-b6acd65e7743"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:21:21 crc kubenswrapper[4840]: I0311 09:21:21.990710 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa334d0b-a179-4905-a660-05bbc12e5c02-kube-api-access-l5bwt" (OuterVolumeSpecName: "kube-api-access-l5bwt") pod "aa334d0b-a179-4905-a660-05bbc12e5c02" (UID: "aa334d0b-a179-4905-a660-05bbc12e5c02"). InnerVolumeSpecName "kube-api-access-l5bwt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:21:22 crc kubenswrapper[4840]: I0311 09:21:21.996012 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-97696577-2mh8q" event={"ID":"f10cd3a6-0a55-4957-b861-678d9af3c338","Type":"ContainerDied","Data":"b1c44fbc8505b3dbc12f02047d18c60d764862d674aa170ce783dcff52172d01"} Mar 11 09:21:22 crc kubenswrapper[4840]: I0311 09:21:21.996516 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4873de8-8c37-49b7-a8ba-b352a2cf0320-kube-api-access-2l9l9" (OuterVolumeSpecName: "kube-api-access-2l9l9") pod "b4873de8-8c37-49b7-a8ba-b352a2cf0320" (UID: "b4873de8-8c37-49b7-a8ba-b352a2cf0320"). InnerVolumeSpecName "kube-api-access-2l9l9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:21:22 crc kubenswrapper[4840]: I0311 09:21:21.997527 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-97696577-2mh8q" Mar 11 09:21:22 crc kubenswrapper[4840]: I0311 09:21:21.998282 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/822a8132-1b8a-4f19-b42b-b6acd65e7743-kube-api-access-6rhcz" (OuterVolumeSpecName: "kube-api-access-6rhcz") pod "822a8132-1b8a-4f19-b42b-b6acd65e7743" (UID: "822a8132-1b8a-4f19-b42b-b6acd65e7743"). InnerVolumeSpecName "kube-api-access-6rhcz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:21:22 crc kubenswrapper[4840]: I0311 09:21:22.005354 4840 generic.go:334] "Generic (PLEG): container finished" podID="d3ccfc13-7a62-4923-95ab-c68cb93aa03c" containerID="1443dcc20b53c34d6b5983e69576991044f6bc08da4320cceeb036e8ad539edf" exitCode=0 Mar 11 09:21:22 crc kubenswrapper[4840]: I0311 09:21:22.005446 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d3ccfc13-7a62-4923-95ab-c68cb93aa03c","Type":"ContainerDied","Data":"1443dcc20b53c34d6b5983e69576991044f6bc08da4320cceeb036e8ad539edf"} Mar 11 09:21:22 crc kubenswrapper[4840]: I0311 09:21:22.016877 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-bc41-account-create-update-kjhwj" event={"ID":"fc871f48-4882-49a3-be1f-80d95c2548a9","Type":"ContainerDied","Data":"66861f25312767fc6c9602f4dec035324e5c608987cf8541e97f10a0ccd1580e"} Mar 11 09:21:22 crc kubenswrapper[4840]: I0311 09:21:22.017199 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-bc41-account-create-update-kjhwj" Mar 11 09:21:22 crc kubenswrapper[4840]: I0311 09:21:22.058034 4840 generic.go:334] "Generic (PLEG): container finished" podID="95c08694-92ed-44cb-8ca3-92a47b5571d4" containerID="a8b0caed3a79d6574e838b5ca30ae9f7dc0d2e8987551e440fb3bb935fcc2b90" exitCode=0 Mar 11 09:21:22 crc kubenswrapper[4840]: I0311 09:21:22.058095 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"95c08694-92ed-44cb-8ca3-92a47b5571d4","Type":"ContainerDied","Data":"a8b0caed3a79d6574e838b5ca30ae9f7dc0d2e8987551e440fb3bb935fcc2b90"} Mar 11 09:21:22 crc kubenswrapper[4840]: I0311 09:21:22.059268 4840 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4873de8-8c37-49b7-a8ba-b352a2cf0320-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:22 crc kubenswrapper[4840]: I0311 09:21:22.059311 4840 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc871f48-4882-49a3-be1f-80d95c2548a9-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:22 crc kubenswrapper[4840]: I0311 09:21:22.059331 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l5bwt\" (UniqueName: \"kubernetes.io/projected/aa334d0b-a179-4905-a660-05bbc12e5c02-kube-api-access-l5bwt\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:22 crc kubenswrapper[4840]: I0311 09:21:22.059346 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2l9l9\" (UniqueName: \"kubernetes.io/projected/b4873de8-8c37-49b7-a8ba-b352a2cf0320-kube-api-access-2l9l9\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:22 crc kubenswrapper[4840]: I0311 09:21:22.059362 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6xbdt\" (UniqueName: 
\"kubernetes.io/projected/fc871f48-4882-49a3-be1f-80d95c2548a9-kube-api-access-6xbdt\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:22 crc kubenswrapper[4840]: I0311 09:21:22.059374 4840 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa334d0b-a179-4905-a660-05bbc12e5c02-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:22 crc kubenswrapper[4840]: I0311 09:21:22.059389 4840 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/822a8132-1b8a-4f19-b42b-b6acd65e7743-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:22 crc kubenswrapper[4840]: I0311 09:21:22.059401 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6rhcz\" (UniqueName: \"kubernetes.io/projected/822a8132-1b8a-4f19-b42b-b6acd65e7743-kube-api-access-6rhcz\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:22 crc kubenswrapper[4840]: E0311 09:21:22.211401 4840 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="cfec390aae2b33c2baf59838d2300ba819a1ecdc9f653835cc711873fa787a85" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Mar 11 09:21:22 crc kubenswrapper[4840]: E0311 09:21:22.217176 4840 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="cfec390aae2b33c2baf59838d2300ba819a1ecdc9f653835cc711873fa787a85" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Mar 11 09:21:22 crc kubenswrapper[4840]: I0311 09:21:22.219298 4840 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="915955ff-c1d8-4f99-a621-f28d463c512f" containerName="nova-metadata-metadata" probeResult="failure" 
output="Get \"https://10.217.0.211:8775/\": dial tcp 10.217.0.211:8775: connect: connection refused" Mar 11 09:21:22 crc kubenswrapper[4840]: I0311 09:21:22.219455 4840 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="915955ff-c1d8-4f99-a621-f28d463c512f" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.211:8775/\": dial tcp 10.217.0.211:8775: connect: connection refused" Mar 11 09:21:22 crc kubenswrapper[4840]: E0311 09:21:22.222554 4840 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="cfec390aae2b33c2baf59838d2300ba819a1ecdc9f653835cc711873fa787a85" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Mar 11 09:21:22 crc kubenswrapper[4840]: E0311 09:21:22.222673 4840 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell1-conductor-0" podUID="9699b913-55db-46fe-9831-1e1ac94ca609" containerName="nova-cell1-conductor-conductor" Mar 11 09:21:22 crc kubenswrapper[4840]: I0311 09:21:22.292457 4840 scope.go:117] "RemoveContainer" containerID="33e49e14fd2afa823c560d68be5b5e4ad273e3e82868e5aba42b52ba56655a7e" Mar 11 09:21:22 crc kubenswrapper[4840]: I0311 09:21:22.305888 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-5bc9bbdff8-zczrp" Mar 11 09:21:22 crc kubenswrapper[4840]: I0311 09:21:22.328065 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ab14b76-8d70-44f2-b986-b3d600c73b60" path="/var/lib/kubelet/pods/0ab14b76-8d70-44f2-b986-b3d600c73b60/volumes" Mar 11 09:21:22 crc kubenswrapper[4840]: I0311 09:21:22.328650 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13c22be3-bd7d-45e0-8948-4574e00507c0" path="/var/lib/kubelet/pods/13c22be3-bd7d-45e0-8948-4574e00507c0/volumes" Mar 11 09:21:22 crc kubenswrapper[4840]: I0311 09:21:22.328978 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40f2a324-9d85-4e2e-ac26-0b710dc379b2" path="/var/lib/kubelet/pods/40f2a324-9d85-4e2e-ac26-0b710dc379b2/volumes" Mar 11 09:21:22 crc kubenswrapper[4840]: I0311 09:21:22.329350 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="666b69a9-5d21-4f9c-83ce-49e6e132e8e9" path="/var/lib/kubelet/pods/666b69a9-5d21-4f9c-83ce-49e6e132e8e9/volumes" Mar 11 09:21:22 crc kubenswrapper[4840]: I0311 09:21:22.338938 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 11 09:21:22 crc kubenswrapper[4840]: I0311 09:21:22.338990 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 11 09:21:22 crc kubenswrapper[4840]: I0311 09:21:22.339014 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"] Mar 11 09:21:22 crc kubenswrapper[4840]: I0311 09:21:22.339026 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-42de-account-create-update-bjxph"] Mar 11 09:21:22 crc kubenswrapper[4840]: I0311 09:21:22.339045 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-42de-account-create-update-bjxph"] Mar 11 09:21:22 crc kubenswrapper[4840]: I0311 09:21:22.339063 4840 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/keystone-42de-account-create-update-558dw"] Mar 11 09:21:22 crc kubenswrapper[4840]: I0311 09:21:22.355861 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/memcached-0" podUID="c44e0641-be37-4447-9666-14bf00c08827" containerName="memcached" containerID="cri-o://dba9b2b61d92b87502d4ac01bbbc8b61a58c6a11a4e92608207ef818b5d336cd" gracePeriod=30 Mar 11 09:21:22 crc kubenswrapper[4840]: I0311 09:21:22.356585 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f5fc86d0-4547-42ed-a880-f58c9e29d8a4" containerName="ceilometer-central-agent" containerID="cri-o://34979c436634b8f0e018557d27d62f341c687de81ef20f3c5d4a9fa769f1e4fb" gracePeriod=30 Mar 11 09:21:22 crc kubenswrapper[4840]: I0311 09:21:22.356931 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f5fc86d0-4547-42ed-a880-f58c9e29d8a4" containerName="proxy-httpd" containerID="cri-o://f68b76db653f501b6c2cade1e67b940b2a3ee35bd560abe70b7979ccd257edaf" gracePeriod=30 Mar 11 09:21:22 crc kubenswrapper[4840]: I0311 09:21:22.357014 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f5fc86d0-4547-42ed-a880-f58c9e29d8a4" containerName="sg-core" containerID="cri-o://c1366968afef948e7ccf78e045f5f679b4fcb3ebe863e591eb2d9cc24c2f872b" gracePeriod=30 Mar 11 09:21:22 crc kubenswrapper[4840]: E0311 09:21:22.357056 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ab14b76-8d70-44f2-b986-b3d600c73b60" containerName="barbican-worker-log" Mar 11 09:21:22 crc kubenswrapper[4840]: I0311 09:21:22.357066 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f5fc86d0-4547-42ed-a880-f58c9e29d8a4" containerName="ceilometer-notification-agent" containerID="cri-o://786131d91770a763753e005b03e25ce929741941d3ba83d0e07a62fe71b27388" 
gracePeriod=30 Mar 11 09:21:22 crc kubenswrapper[4840]: I0311 09:21:22.357074 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ab14b76-8d70-44f2-b986-b3d600c73b60" containerName="barbican-worker-log" Mar 11 09:21:22 crc kubenswrapper[4840]: E0311 09:21:22.357093 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="666b69a9-5d21-4f9c-83ce-49e6e132e8e9" containerName="nova-cell1-novncproxy-novncproxy" Mar 11 09:21:22 crc kubenswrapper[4840]: I0311 09:21:22.357106 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="666b69a9-5d21-4f9c-83ce-49e6e132e8e9" containerName="nova-cell1-novncproxy-novncproxy" Mar 11 09:21:22 crc kubenswrapper[4840]: E0311 09:21:22.357118 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f10cd3a6-0a55-4957-b861-678d9af3c338" containerName="proxy-httpd" Mar 11 09:21:22 crc kubenswrapper[4840]: I0311 09:21:22.357126 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="f10cd3a6-0a55-4957-b861-678d9af3c338" containerName="proxy-httpd" Mar 11 09:21:22 crc kubenswrapper[4840]: E0311 09:21:22.357138 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f10cd3a6-0a55-4957-b861-678d9af3c338" containerName="proxy-server" Mar 11 09:21:22 crc kubenswrapper[4840]: I0311 09:21:22.357146 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="f10cd3a6-0a55-4957-b861-678d9af3c338" containerName="proxy-server" Mar 11 09:21:22 crc kubenswrapper[4840]: E0311 09:21:22.357162 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7acca3e-61e4-495b-adbf-36c435b4a7d2" containerName="barbican-keystone-listener" Mar 11 09:21:22 crc kubenswrapper[4840]: I0311 09:21:22.357169 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7acca3e-61e4-495b-adbf-36c435b4a7d2" containerName="barbican-keystone-listener" Mar 11 09:21:22 crc kubenswrapper[4840]: E0311 09:21:22.357181 4840 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="0ab14b76-8d70-44f2-b986-b3d600c73b60" containerName="barbican-worker" Mar 11 09:21:22 crc kubenswrapper[4840]: I0311 09:21:22.357189 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ab14b76-8d70-44f2-b986-b3d600c73b60" containerName="barbican-worker" Mar 11 09:21:22 crc kubenswrapper[4840]: E0311 09:21:22.357202 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7acca3e-61e4-495b-adbf-36c435b4a7d2" containerName="barbican-keystone-listener-log" Mar 11 09:21:22 crc kubenswrapper[4840]: I0311 09:21:22.357200 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="cfc57f3e-df7f-40fe-9cc0-7ad00ecd651e" containerName="kube-state-metrics" containerID="cri-o://6da9e415bb6fb1fafd6a20eb3f85c8ad0216612c5870621974d7888d3f87aa59" gracePeriod=30 Mar 11 09:21:22 crc kubenswrapper[4840]: I0311 09:21:22.357211 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7acca3e-61e4-495b-adbf-36c435b4a7d2" containerName="barbican-keystone-listener-log" Mar 11 09:21:22 crc kubenswrapper[4840]: E0311 09:21:22.357292 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e93bc954-4d65-4b5f-8070-2b9800ff3db2" containerName="mysql-bootstrap" Mar 11 09:21:22 crc kubenswrapper[4840]: I0311 09:21:22.357303 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="e93bc954-4d65-4b5f-8070-2b9800ff3db2" containerName="mysql-bootstrap" Mar 11 09:21:22 crc kubenswrapper[4840]: E0311 09:21:22.357313 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f415e24c-207c-4dc7-b68c-14180ac09391" containerName="placement-api" Mar 11 09:21:22 crc kubenswrapper[4840]: I0311 09:21:22.357322 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="f415e24c-207c-4dc7-b68c-14180ac09391" containerName="placement-api" Mar 11 09:21:22 crc kubenswrapper[4840]: E0311 09:21:22.357339 4840 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="e93bc954-4d65-4b5f-8070-2b9800ff3db2" containerName="galera" Mar 11 09:21:22 crc kubenswrapper[4840]: I0311 09:21:22.357347 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="e93bc954-4d65-4b5f-8070-2b9800ff3db2" containerName="galera" Mar 11 09:21:22 crc kubenswrapper[4840]: E0311 09:21:22.357363 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f415e24c-207c-4dc7-b68c-14180ac09391" containerName="placement-log" Mar 11 09:21:22 crc kubenswrapper[4840]: I0311 09:21:22.357376 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="f415e24c-207c-4dc7-b68c-14180ac09391" containerName="placement-log" Mar 11 09:21:22 crc kubenswrapper[4840]: I0311 09:21:22.357660 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="f415e24c-207c-4dc7-b68c-14180ac09391" containerName="placement-api" Mar 11 09:21:22 crc kubenswrapper[4840]: I0311 09:21:22.357677 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="f10cd3a6-0a55-4957-b861-678d9af3c338" containerName="proxy-httpd" Mar 11 09:21:22 crc kubenswrapper[4840]: I0311 09:21:22.357690 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="666b69a9-5d21-4f9c-83ce-49e6e132e8e9" containerName="nova-cell1-novncproxy-novncproxy" Mar 11 09:21:22 crc kubenswrapper[4840]: I0311 09:21:22.357725 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ab14b76-8d70-44f2-b986-b3d600c73b60" containerName="barbican-worker" Mar 11 09:21:22 crc kubenswrapper[4840]: I0311 09:21:22.357747 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7acca3e-61e4-495b-adbf-36c435b4a7d2" containerName="barbican-keystone-listener-log" Mar 11 09:21:22 crc kubenswrapper[4840]: I0311 09:21:22.357759 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7acca3e-61e4-495b-adbf-36c435b4a7d2" containerName="barbican-keystone-listener" Mar 11 09:21:22 crc kubenswrapper[4840]: I0311 09:21:22.357773 4840 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="f10cd3a6-0a55-4957-b861-678d9af3c338" containerName="proxy-server" Mar 11 09:21:22 crc kubenswrapper[4840]: I0311 09:21:22.357787 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="f415e24c-207c-4dc7-b68c-14180ac09391" containerName="placement-log" Mar 11 09:21:22 crc kubenswrapper[4840]: I0311 09:21:22.357801 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="e93bc954-4d65-4b5f-8070-2b9800ff3db2" containerName="galera" Mar 11 09:21:22 crc kubenswrapper[4840]: I0311 09:21:22.357811 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ab14b76-8d70-44f2-b986-b3d600c73b60" containerName="barbican-worker-log" Mar 11 09:21:22 crc kubenswrapper[4840]: I0311 09:21:22.358669 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-42de-account-create-update-558dw"] Mar 11 09:21:22 crc kubenswrapper[4840]: I0311 09:21:22.358697 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-jlpsf"] Mar 11 09:21:22 crc kubenswrapper[4840]: I0311 09:21:22.358796 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-42de-account-create-update-558dw" Mar 11 09:21:22 crc kubenswrapper[4840]: I0311 09:21:22.365224 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7g6fx\" (UniqueName: \"kubernetes.io/projected/f415e24c-207c-4dc7-b68c-14180ac09391-kube-api-access-7g6fx\") pod \"f415e24c-207c-4dc7-b68c-14180ac09391\" (UID: \"f415e24c-207c-4dc7-b68c-14180ac09391\") " Mar 11 09:21:22 crc kubenswrapper[4840]: I0311 09:21:22.365340 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f415e24c-207c-4dc7-b68c-14180ac09391-internal-tls-certs\") pod \"f415e24c-207c-4dc7-b68c-14180ac09391\" (UID: \"f415e24c-207c-4dc7-b68c-14180ac09391\") " Mar 11 09:21:22 crc kubenswrapper[4840]: I0311 09:21:22.365395 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f415e24c-207c-4dc7-b68c-14180ac09391-combined-ca-bundle\") pod \"f415e24c-207c-4dc7-b68c-14180ac09391\" (UID: \"f415e24c-207c-4dc7-b68c-14180ac09391\") " Mar 11 09:21:22 crc kubenswrapper[4840]: I0311 09:21:22.365437 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f415e24c-207c-4dc7-b68c-14180ac09391-config-data\") pod \"f415e24c-207c-4dc7-b68c-14180ac09391\" (UID: \"f415e24c-207c-4dc7-b68c-14180ac09391\") " Mar 11 09:21:22 crc kubenswrapper[4840]: I0311 09:21:22.365482 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f415e24c-207c-4dc7-b68c-14180ac09391-public-tls-certs\") pod \"f415e24c-207c-4dc7-b68c-14180ac09391\" (UID: \"f415e24c-207c-4dc7-b68c-14180ac09391\") " Mar 11 09:21:22 crc kubenswrapper[4840]: I0311 09:21:22.365619 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f415e24c-207c-4dc7-b68c-14180ac09391-logs\") pod \"f415e24c-207c-4dc7-b68c-14180ac09391\" (UID: \"f415e24c-207c-4dc7-b68c-14180ac09391\") " Mar 11 09:21:22 crc kubenswrapper[4840]: I0311 09:21:22.365681 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f415e24c-207c-4dc7-b68c-14180ac09391-scripts\") pod \"f415e24c-207c-4dc7-b68c-14180ac09391\" (UID: \"f415e24c-207c-4dc7-b68c-14180ac09391\") " Mar 11 09:21:22 crc kubenswrapper[4840]: I0311 09:21:22.370623 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-jlpsf"] Mar 11 09:21:22 crc kubenswrapper[4840]: I0311 09:21:22.371821 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Mar 11 09:21:22 crc kubenswrapper[4840]: I0311 09:21:22.372665 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f415e24c-207c-4dc7-b68c-14180ac09391-logs" (OuterVolumeSpecName: "logs") pod "f415e24c-207c-4dc7-b68c-14180ac09391" (UID: "f415e24c-207c-4dc7-b68c-14180ac09391"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:21:22 crc kubenswrapper[4840]: I0311 09:21:22.412056 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f415e24c-207c-4dc7-b68c-14180ac09391-kube-api-access-7g6fx" (OuterVolumeSpecName: "kube-api-access-7g6fx") pod "f415e24c-207c-4dc7-b68c-14180ac09391" (UID: "f415e24c-207c-4dc7-b68c-14180ac09391"). InnerVolumeSpecName "kube-api-access-7g6fx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:21:22 crc kubenswrapper[4840]: I0311 09:21:22.412132 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bcbd67b5c-tnrsf"] Mar 11 09:21:22 crc kubenswrapper[4840]: I0311 09:21:22.412387 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/keystone-bcbd67b5c-tnrsf" podUID="f86e8b7a-656b-423e-8cf0-6d1025486c46" containerName="keystone-api" containerID="cri-o://4eb33fb06f0c44c9373d79d5c966ac7a03ec43ee8c7e1b0b5af3343cd05a504e" gracePeriod=30 Mar 11 09:21:22 crc kubenswrapper[4840]: I0311 09:21:22.422679 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f415e24c-207c-4dc7-b68c-14180ac09391-scripts" (OuterVolumeSpecName: "scripts") pod "f415e24c-207c-4dc7-b68c-14180ac09391" (UID: "f415e24c-207c-4dc7-b68c-14180ac09391"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:21:22 crc kubenswrapper[4840]: I0311 09:21:22.468790 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6092c570-2c07-432f-9bdd-b48bf0600ef9-operator-scripts\") pod \"keystone-42de-account-create-update-558dw\" (UID: \"6092c570-2c07-432f-9bdd-b48bf0600ef9\") " pod="openstack/keystone-42de-account-create-update-558dw" Mar 11 09:21:22 crc kubenswrapper[4840]: I0311 09:21:22.468878 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwdfz\" (UniqueName: \"kubernetes.io/projected/6092c570-2c07-432f-9bdd-b48bf0600ef9-kube-api-access-vwdfz\") pod \"keystone-42de-account-create-update-558dw\" (UID: \"6092c570-2c07-432f-9bdd-b48bf0600ef9\") " pod="openstack/keystone-42de-account-create-update-558dw" Mar 11 09:21:22 crc kubenswrapper[4840]: I0311 09:21:22.469001 4840 reconciler_common.go:293] "Volume detached for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/f415e24c-207c-4dc7-b68c-14180ac09391-logs\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:22 crc kubenswrapper[4840]: I0311 09:21:22.469016 4840 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f415e24c-207c-4dc7-b68c-14180ac09391-scripts\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:22 crc kubenswrapper[4840]: I0311 09:21:22.469025 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7g6fx\" (UniqueName: \"kubernetes.io/projected/f415e24c-207c-4dc7-b68c-14180ac09391-kube-api-access-7g6fx\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:22 crc kubenswrapper[4840]: I0311 09:21:22.499898 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-qb59h"] Mar 11 09:21:22 crc kubenswrapper[4840]: I0311 09:21:22.531386 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-qb59h"] Mar 11 09:21:22 crc kubenswrapper[4840]: I0311 09:21:22.562648 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"] Mar 11 09:21:22 crc kubenswrapper[4840]: I0311 09:21:22.563316 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-vghhr"] Mar 11 09:21:22 crc kubenswrapper[4840]: I0311 09:21:22.571048 4840 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7dcc97bc9b-l7z2g" podUID="15a50bea-c32e-4aed-8fd2-7289e1694f6e" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.167:9311/healthcheck\": read tcp 10.217.0.2:47858->10.217.0.167:9311: read: connection reset by peer" Mar 11 09:21:22 crc kubenswrapper[4840]: I0311 09:21:22.571312 4840 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7dcc97bc9b-l7z2g" podUID="15a50bea-c32e-4aed-8fd2-7289e1694f6e" containerName="barbican-api-log" probeResult="failure" output="Get 
\"https://10.217.0.167:9311/healthcheck\": read tcp 10.217.0.2:47860->10.217.0.167:9311: read: connection reset by peer" Mar 11 09:21:22 crc kubenswrapper[4840]: I0311 09:21:22.572718 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6092c570-2c07-432f-9bdd-b48bf0600ef9-operator-scripts\") pod \"keystone-42de-account-create-update-558dw\" (UID: \"6092c570-2c07-432f-9bdd-b48bf0600ef9\") " pod="openstack/keystone-42de-account-create-update-558dw" Mar 11 09:21:22 crc kubenswrapper[4840]: I0311 09:21:22.572790 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwdfz\" (UniqueName: \"kubernetes.io/projected/6092c570-2c07-432f-9bdd-b48bf0600ef9-kube-api-access-vwdfz\") pod \"keystone-42de-account-create-update-558dw\" (UID: \"6092c570-2c07-432f-9bdd-b48bf0600ef9\") " pod="openstack/keystone-42de-account-create-update-558dw" Mar 11 09:21:22 crc kubenswrapper[4840]: E0311 09:21:22.573193 4840 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Mar 11 09:21:22 crc kubenswrapper[4840]: E0311 09:21:22.573259 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6092c570-2c07-432f-9bdd-b48bf0600ef9-operator-scripts podName:6092c570-2c07-432f-9bdd-b48bf0600ef9 nodeName:}" failed. No retries permitted until 2026-03-11 09:21:23.073240177 +0000 UTC m=+1481.738910002 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/6092c570-2c07-432f-9bdd-b48bf0600ef9-operator-scripts") pod "keystone-42de-account-create-update-558dw" (UID: "6092c570-2c07-432f-9bdd-b48bf0600ef9") : configmap "openstack-scripts" not found Mar 11 09:21:22 crc kubenswrapper[4840]: I0311 09:21:22.573294 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-vghhr"] Mar 11 09:21:22 crc kubenswrapper[4840]: E0311 09:21:22.578661 4840 projected.go:194] Error preparing data for projected volume kube-api-access-vwdfz for pod openstack/keystone-42de-account-create-update-558dw: failed to fetch token: serviceaccounts "galera-openstack" not found Mar 11 09:21:22 crc kubenswrapper[4840]: E0311 09:21:22.578737 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6092c570-2c07-432f-9bdd-b48bf0600ef9-kube-api-access-vwdfz podName:6092c570-2c07-432f-9bdd-b48bf0600ef9 nodeName:}" failed. No retries permitted until 2026-03-11 09:21:23.078716965 +0000 UTC m=+1481.744386780 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-vwdfz" (UniqueName: "kubernetes.io/projected/6092c570-2c07-432f-9bdd-b48bf0600ef9-kube-api-access-vwdfz") pod "keystone-42de-account-create-update-558dw" (UID: "6092c570-2c07-432f-9bdd-b48bf0600ef9") : failed to fetch token: serviceaccounts "galera-openstack" not found Mar 11 09:21:22 crc kubenswrapper[4840]: I0311 09:21:22.601817 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f415e24c-207c-4dc7-b68c-14180ac09391-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f415e24c-207c-4dc7-b68c-14180ac09391" (UID: "f415e24c-207c-4dc7-b68c-14180ac09391"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:21:22 crc kubenswrapper[4840]: I0311 09:21:22.604730 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f415e24c-207c-4dc7-b68c-14180ac09391-config-data" (OuterVolumeSpecName: "config-data") pod "f415e24c-207c-4dc7-b68c-14180ac09391" (UID: "f415e24c-207c-4dc7-b68c-14180ac09391"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:21:22 crc kubenswrapper[4840]: I0311 09:21:22.605008 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-42de-account-create-update-558dw"] Mar 11 09:21:22 crc kubenswrapper[4840]: I0311 09:21:22.630970 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-9sj79"] Mar 11 09:21:22 crc kubenswrapper[4840]: I0311 09:21:22.675847 4840 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f415e24c-207c-4dc7-b68c-14180ac09391-config-data\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:22 crc kubenswrapper[4840]: I0311 09:21:22.675893 4840 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f415e24c-207c-4dc7-b68c-14180ac09391-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:22 crc kubenswrapper[4840]: I0311 09:21:22.687704 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f415e24c-207c-4dc7-b68c-14180ac09391-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "f415e24c-207c-4dc7-b68c-14180ac09391" (UID: "f415e24c-207c-4dc7-b68c-14180ac09391"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:21:22 crc kubenswrapper[4840]: I0311 09:21:22.727070 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f415e24c-207c-4dc7-b68c-14180ac09391-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "f415e24c-207c-4dc7-b68c-14180ac09391" (UID: "f415e24c-207c-4dc7-b68c-14180ac09391"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:21:22 crc kubenswrapper[4840]: I0311 09:21:22.778013 4840 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f415e24c-207c-4dc7-b68c-14180ac09391-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:22 crc kubenswrapper[4840]: I0311 09:21:22.778059 4840 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f415e24c-207c-4dc7-b68c-14180ac09391-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:22 crc kubenswrapper[4840]: I0311 09:21:22.846030 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-galera-0" podUID="c6f4129d-4bc4-449b-be94-82fce07cf1f0" containerName="galera" containerID="cri-o://25ef9fd444056788b6ce2758460268d8703cbbd6d97d9234bfee447123075141" gracePeriod=30 Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.023243 4840 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="f5fc86d0-4547-42ed-a880-f58c9e29d8a4" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.210:3000/\": dial tcp 10.217.0.210:3000: connect: connection refused" Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.098928 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6092c570-2c07-432f-9bdd-b48bf0600ef9-operator-scripts\") pod 
\"keystone-42de-account-create-update-558dw\" (UID: \"6092c570-2c07-432f-9bdd-b48bf0600ef9\") " pod="openstack/keystone-42de-account-create-update-558dw" Mar 11 09:21:23 crc kubenswrapper[4840]: E0311 09:21:23.099282 4840 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Mar 11 09:21:23 crc kubenswrapper[4840]: E0311 09:21:23.106365 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6092c570-2c07-432f-9bdd-b48bf0600ef9-operator-scripts podName:6092c570-2c07-432f-9bdd-b48bf0600ef9 nodeName:}" failed. No retries permitted until 2026-03-11 09:21:24.106319426 +0000 UTC m=+1482.771989241 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/6092c570-2c07-432f-9bdd-b48bf0600ef9-operator-scripts") pod "keystone-42de-account-create-update-558dw" (UID: "6092c570-2c07-432f-9bdd-b48bf0600ef9") : configmap "openstack-scripts" not found Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.106669 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwdfz\" (UniqueName: \"kubernetes.io/projected/6092c570-2c07-432f-9bdd-b48bf0600ef9-kube-api-access-vwdfz\") pod \"keystone-42de-account-create-update-558dw\" (UID: \"6092c570-2c07-432f-9bdd-b48bf0600ef9\") " pod="openstack/keystone-42de-account-create-update-558dw" Mar 11 09:21:23 crc kubenswrapper[4840]: E0311 09:21:23.112541 4840 projected.go:194] Error preparing data for projected volume kube-api-access-vwdfz for pod openstack/keystone-42de-account-create-update-558dw: failed to fetch token: serviceaccounts "galera-openstack" not found Mar 11 09:21:23 crc kubenswrapper[4840]: E0311 09:21:23.112642 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6092c570-2c07-432f-9bdd-b48bf0600ef9-kube-api-access-vwdfz podName:6092c570-2c07-432f-9bdd-b48bf0600ef9 nodeName:}" failed. 
No retries permitted until 2026-03-11 09:21:24.112610904 +0000 UTC m=+1482.778280719 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-vwdfz" (UniqueName: "kubernetes.io/projected/6092c570-2c07-432f-9bdd-b48bf0600ef9-kube-api-access-vwdfz") pod "keystone-42de-account-create-update-558dw" (UID: "6092c570-2c07-432f-9bdd-b48bf0600ef9") : failed to fetch token: serviceaccounts "galera-openstack" not found Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.116044 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-9sj79"] Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.123736 4840 generic.go:334] "Generic (PLEG): container finished" podID="9699b913-55db-46fe-9831-1e1ac94ca609" containerID="cfec390aae2b33c2baf59838d2300ba819a1ecdc9f653835cc711873fa787a85" exitCode=0 Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.125967 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"9699b913-55db-46fe-9831-1e1ac94ca609","Type":"ContainerDied","Data":"cfec390aae2b33c2baf59838d2300ba819a1ecdc9f653835cc711873fa787a85"} Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.159836 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"95c08694-92ed-44cb-8ca3-92a47b5571d4","Type":"ContainerDied","Data":"f88b40934b2119f1b47aca3deb6bf658f2a1f94be2c87943d332e5a1e4a7c666"} Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.160381 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f88b40934b2119f1b47aca3deb6bf658f2a1f94be2c87943d332e5a1e4a7c666" Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.165086 4840 generic.go:334] "Generic (PLEG): container finished" podID="c44e0641-be37-4447-9666-14bf00c08827" containerID="dba9b2b61d92b87502d4ac01bbbc8b61a58c6a11a4e92608207ef818b5d336cd" exitCode=0 Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 
09:21:23.165176 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"c44e0641-be37-4447-9666-14bf00c08827","Type":"ContainerDied","Data":"dba9b2b61d92b87502d4ac01bbbc8b61a58c6a11a4e92608207ef818b5d336cd"} Mar 11 09:21:23 crc kubenswrapper[4840]: E0311 09:21:23.165416 4840 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="344c0fde0e6a0013eefcf0f63c44c10fef7fcbc6d796ca335364063dd3641c28" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 11 09:21:23 crc kubenswrapper[4840]: E0311 09:21:23.167777 4840 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="344c0fde0e6a0013eefcf0f63c44c10fef7fcbc6d796ca335364063dd3641c28" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.172683 4840 generic.go:334] "Generic (PLEG): container finished" podID="f5fc86d0-4547-42ed-a880-f58c9e29d8a4" containerID="f68b76db653f501b6c2cade1e67b940b2a3ee35bd560abe70b7979ccd257edaf" exitCode=0 Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.172722 4840 generic.go:334] "Generic (PLEG): container finished" podID="f5fc86d0-4547-42ed-a880-f58c9e29d8a4" containerID="c1366968afef948e7ccf78e045f5f679b4fcb3ebe863e591eb2d9cc24c2f872b" exitCode=2 Mar 11 09:21:23 crc kubenswrapper[4840]: E0311 09:21:23.172748 4840 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="344c0fde0e6a0013eefcf0f63c44c10fef7fcbc6d796ca335364063dd3641c28" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 11 09:21:23 crc kubenswrapper[4840]: E0311 09:21:23.172831 
4840 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="ada053fd-c71a-4425-8220-b950f0cab229" containerName="nova-scheduler-scheduler" Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.172774 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f5fc86d0-4547-42ed-a880-f58c9e29d8a4","Type":"ContainerDied","Data":"f68b76db653f501b6c2cade1e67b940b2a3ee35bd560abe70b7979ccd257edaf"} Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.172935 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f5fc86d0-4547-42ed-a880-f58c9e29d8a4","Type":"ContainerDied","Data":"c1366968afef948e7ccf78e045f5f679b4fcb3ebe863e591eb2d9cc24c2f872b"} Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.178732 4840 generic.go:334] "Generic (PLEG): container finished" podID="cfc57f3e-df7f-40fe-9cc0-7ad00ecd651e" containerID="6da9e415bb6fb1fafd6a20eb3f85c8ad0216612c5870621974d7888d3f87aa59" exitCode=2 Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.178863 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"cfc57f3e-df7f-40fe-9cc0-7ad00ecd651e","Type":"ContainerDied","Data":"6da9e415bb6fb1fafd6a20eb3f85c8ad0216612c5870621974d7888d3f87aa59"} Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.193690 4840 generic.go:334] "Generic (PLEG): container finished" podID="15a50bea-c32e-4aed-8fd2-7289e1694f6e" containerID="6915adfd19dd7500fc5f1e3d97669f01dd9e52bc0540e0cbfa6ee88e4556faeb" exitCode=0 Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.193779 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7dcc97bc9b-l7z2g" 
event={"ID":"15a50bea-c32e-4aed-8fd2-7289e1694f6e","Type":"ContainerDied","Data":"6915adfd19dd7500fc5f1e3d97669f01dd9e52bc0540e0cbfa6ee88e4556faeb"} Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.212066 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d3ccfc13-7a62-4923-95ab-c68cb93aa03c","Type":"ContainerDied","Data":"cd8de2c1f764692164114f6363292122cbecbd4b42a9e5a5e6ee07c85eb32226"} Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.212124 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd8de2c1f764692164114f6363292122cbecbd4b42a9e5a5e6ee07c85eb32226" Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.224361 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.230821 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"86c17dbf-d890-4de3-bf5d-29e0aea4d968","Type":"ContainerDied","Data":"84bbbbbba13debeb9cdd8757772784ee81cb2bdcff0f703a2bf3ea42f422a04c"} Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.260292 4840 generic.go:334] "Generic (PLEG): container finished" podID="915955ff-c1d8-4f99-a621-f28d463c512f" containerID="904fce62c27e75c0afd1b04ee9c4e1dd4f36346fdb9943a482344076f39797f2" exitCode=0 Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.260423 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"915955ff-c1d8-4f99-a621-f28d463c512f","Type":"ContainerDied","Data":"904fce62c27e75c0afd1b04ee9c4e1dd4f36346fdb9943a482344076f39797f2"} Mar 11 09:21:23 crc kubenswrapper[4840]: E0311 09:21:23.272257 4840 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 11 09:21:23 crc kubenswrapper[4840]: container 
&Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:763d1f1e8a1cf877c151c59609960fd2fa29e7e50001f8818122a2d51878befa,Command:[/bin/sh -c #!/bin/bash Mar 11 09:21:23 crc kubenswrapper[4840]: Mar 11 09:21:23 crc kubenswrapper[4840]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Mar 11 09:21:23 crc kubenswrapper[4840]: Mar 11 09:21:23 crc kubenswrapper[4840]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Mar 11 09:21:23 crc kubenswrapper[4840]: Mar 11 09:21:23 crc kubenswrapper[4840]: MYSQL_CMD="mysql -h -u root -P 3306" Mar 11 09:21:23 crc kubenswrapper[4840]: Mar 11 09:21:23 crc kubenswrapper[4840]: if [ -n "" ]; then Mar 11 09:21:23 crc kubenswrapper[4840]: GRANT_DATABASE="" Mar 11 09:21:23 crc kubenswrapper[4840]: else Mar 11 09:21:23 crc kubenswrapper[4840]: GRANT_DATABASE="*" Mar 11 09:21:23 crc kubenswrapper[4840]: fi Mar 11 09:21:23 crc kubenswrapper[4840]: Mar 11 09:21:23 crc kubenswrapper[4840]: # going for maximum compatibility here: Mar 11 09:21:23 crc kubenswrapper[4840]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Mar 11 09:21:23 crc kubenswrapper[4840]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Mar 11 09:21:23 crc kubenswrapper[4840]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Mar 11 09:21:23 crc kubenswrapper[4840]: # support updates Mar 11 09:21:23 crc kubenswrapper[4840]: Mar 11 09:21:23 crc kubenswrapper[4840]: $MYSQL_CMD < logger="UnhandledError" Mar 11 09:21:23 crc kubenswrapper[4840]: E0311 09:21:23.273445 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"openstack-mariadb-root-db-secret\\\" not found\"" pod="openstack/root-account-create-update-9sj79" podUID="882e0084-6891-4852-aecb-2951f5763800" Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.274436 4840 generic.go:334] "Generic (PLEG): container finished" podID="71aaf352-8b91-4846-8ce4-1d83303ac203" containerID="7d5a479df92438b43deb38719eb65c2cb14faa128400d6d01ccbd757dae47f94" exitCode=0 Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.274528 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"71aaf352-8b91-4846-8ce4-1d83303ac203","Type":"ContainerDied","Data":"7d5a479df92438b43deb38719eb65c2cb14faa128400d6d01ccbd757dae47f94"} Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.278334 4840 generic.go:334] "Generic (PLEG): container finished" podID="629115e9-6bcf-45e8-a0da-d7c06386b7b7" containerID="9895b030ee05b1c30d40171df5aaa90b27098c8597a3d0999e257cf13cec7e67" exitCode=0 Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.278503 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-28ac-account-create-update-4b8kv" Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.287330 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-84c78b97c8-frfs9"] Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.287396 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"629115e9-6bcf-45e8-a0da-d7c06386b7b7","Type":"ContainerDied","Data":"9895b030ee05b1c30d40171df5aaa90b27098c8597a3d0999e257cf13cec7e67"} Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.287524 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-q29dz" Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.287687 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5bc9bbdff8-zczrp" Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.293627 4840 scope.go:117] "RemoveContainer" containerID="b052e1bd7a58acb7f7eff7c22cb37c2b87847e0c994e60dd660321fe2b51b6c8" Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.296802 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-keystone-listener-84c78b97c8-frfs9"] Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.315412 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/86c17dbf-d890-4de3-bf5d-29e0aea4d968-internal-tls-certs\") pod \"86c17dbf-d890-4de3-bf5d-29e0aea4d968\" (UID: \"86c17dbf-d890-4de3-bf5d-29e0aea4d968\") " Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.315504 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/86c17dbf-d890-4de3-bf5d-29e0aea4d968-public-tls-certs\") pod \"86c17dbf-d890-4de3-bf5d-29e0aea4d968\" (UID: 
\"86c17dbf-d890-4de3-bf5d-29e0aea4d968\") " Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.315671 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86c17dbf-d890-4de3-bf5d-29e0aea4d968-scripts\") pod \"86c17dbf-d890-4de3-bf5d-29e0aea4d968\" (UID: \"86c17dbf-d890-4de3-bf5d-29e0aea4d968\") " Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.315829 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/86c17dbf-d890-4de3-bf5d-29e0aea4d968-etc-machine-id\") pod \"86c17dbf-d890-4de3-bf5d-29e0aea4d968\" (UID: \"86c17dbf-d890-4de3-bf5d-29e0aea4d968\") " Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.315879 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l2vr8\" (UniqueName: \"kubernetes.io/projected/86c17dbf-d890-4de3-bf5d-29e0aea4d968-kube-api-access-l2vr8\") pod \"86c17dbf-d890-4de3-bf5d-29e0aea4d968\" (UID: \"86c17dbf-d890-4de3-bf5d-29e0aea4d968\") " Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.315909 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/86c17dbf-d890-4de3-bf5d-29e0aea4d968-config-data-custom\") pod \"86c17dbf-d890-4de3-bf5d-29e0aea4d968\" (UID: \"86c17dbf-d890-4de3-bf5d-29e0aea4d968\") " Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.315947 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86c17dbf-d890-4de3-bf5d-29e0aea4d968-config-data\") pod \"86c17dbf-d890-4de3-bf5d-29e0aea4d968\" (UID: \"86c17dbf-d890-4de3-bf5d-29e0aea4d968\") " Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.315973 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/86c17dbf-d890-4de3-bf5d-29e0aea4d968-combined-ca-bundle\") pod \"86c17dbf-d890-4de3-bf5d-29e0aea4d968\" (UID: \"86c17dbf-d890-4de3-bf5d-29e0aea4d968\") " Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.316036 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/86c17dbf-d890-4de3-bf5d-29e0aea4d968-logs\") pod \"86c17dbf-d890-4de3-bf5d-29e0aea4d968\" (UID: \"86c17dbf-d890-4de3-bf5d-29e0aea4d968\") " Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.317127 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86c17dbf-d890-4de3-bf5d-29e0aea4d968-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "86c17dbf-d890-4de3-bf5d-29e0aea4d968" (UID: "86c17dbf-d890-4de3-bf5d-29e0aea4d968"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.318202 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86c17dbf-d890-4de3-bf5d-29e0aea4d968-logs" (OuterVolumeSpecName: "logs") pod "86c17dbf-d890-4de3-bf5d-29e0aea4d968" (UID: "86c17dbf-d890-4de3-bf5d-29e0aea4d968"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.319008 4840 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/86c17dbf-d890-4de3-bf5d-29e0aea4d968-logs\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.319035 4840 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/86c17dbf-d890-4de3-bf5d-29e0aea4d968-etc-machine-id\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.332362 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86c17dbf-d890-4de3-bf5d-29e0aea4d968-kube-api-access-l2vr8" (OuterVolumeSpecName: "kube-api-access-l2vr8") pod "86c17dbf-d890-4de3-bf5d-29e0aea4d968" (UID: "86c17dbf-d890-4de3-bf5d-29e0aea4d968"). InnerVolumeSpecName "kube-api-access-l2vr8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.334595 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86c17dbf-d890-4de3-bf5d-29e0aea4d968-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "86c17dbf-d890-4de3-bf5d-29e0aea4d968" (UID: "86c17dbf-d890-4de3-bf5d-29e0aea4d968"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.353537 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86c17dbf-d890-4de3-bf5d-29e0aea4d968-scripts" (OuterVolumeSpecName: "scripts") pod "86c17dbf-d890-4de3-bf5d-29e0aea4d968" (UID: "86c17dbf-d890-4de3-bf5d-29e0aea4d968"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.353723 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.364927 4840 scope.go:117] "RemoveContainer" containerID="90bda731ece0d646b8d0e358a1c93f0fa416106233b5751b9b744fa4d0a5ddc0" Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.395966 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86c17dbf-d890-4de3-bf5d-29e0aea4d968-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "86c17dbf-d890-4de3-bf5d-29e0aea4d968" (UID: "86c17dbf-d890-4de3-bf5d-29e0aea4d968"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.399588 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 11 09:21:23 crc kubenswrapper[4840]: E0311 09:21:23.407911 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-vwdfz operator-scripts], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/keystone-42de-account-create-update-558dw" podUID="6092c570-2c07-432f-9bdd-b48bf0600ef9" Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.439586 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l2vr8\" (UniqueName: \"kubernetes.io/projected/86c17dbf-d890-4de3-bf5d-29e0aea4d968-kube-api-access-l2vr8\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.439636 4840 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/86c17dbf-d890-4de3-bf5d-29e0aea4d968-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:23 crc 
kubenswrapper[4840]: I0311 09:21:23.439649 4840 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86c17dbf-d890-4de3-bf5d-29e0aea4d968-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.439661 4840 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86c17dbf-d890-4de3-bf5d-29e0aea4d968-scripts\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.448509 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.451078 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86c17dbf-d890-4de3-bf5d-29e0aea4d968-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "86c17dbf-d890-4de3-bf5d-29e0aea4d968" (UID: "86c17dbf-d890-4de3-bf5d-29e0aea4d968"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.453591 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.461625 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"] Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.501668 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-cell1-galera-0"] Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.501715 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-97696577-2mh8q"] Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.511078 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.514709 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86c17dbf-d890-4de3-bf5d-29e0aea4d968-config-data" (OuterVolumeSpecName: "config-data") pod "86c17dbf-d890-4de3-bf5d-29e0aea4d968" (UID: "86c17dbf-d890-4de3-bf5d-29e0aea4d968"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.515255 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-proxy-97696577-2mh8q"] Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.524050 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86c17dbf-d890-4de3-bf5d-29e0aea4d968-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "86c17dbf-d890-4de3-bf5d-29e0aea4d968" (UID: "86c17dbf-d890-4de3-bf5d-29e0aea4d968"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.529294 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-28ac-account-create-update-4b8kv"] Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.532356 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7dcc97bc9b-l7z2g" Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.535252 4840 scope.go:117] "RemoveContainer" containerID="e59d16a9086fcb2441c89141dd63cbc53bf993e3952c1f25bf481d63fd3390fa" Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.545147 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d3ccfc13-7a62-4923-95ab-c68cb93aa03c-httpd-run\") pod \"d3ccfc13-7a62-4923-95ab-c68cb93aa03c\" (UID: \"d3ccfc13-7a62-4923-95ab-c68cb93aa03c\") " Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.545212 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3ccfc13-7a62-4923-95ab-c68cb93aa03c-config-data\") pod \"d3ccfc13-7a62-4923-95ab-c68cb93aa03c\" (UID: \"d3ccfc13-7a62-4923-95ab-c68cb93aa03c\") " Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.545240 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3ccfc13-7a62-4923-95ab-c68cb93aa03c-combined-ca-bundle\") pod \"d3ccfc13-7a62-4923-95ab-c68cb93aa03c\" (UID: \"d3ccfc13-7a62-4923-95ab-c68cb93aa03c\") " Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.545320 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-52m5l\" (UniqueName: \"kubernetes.io/projected/915955ff-c1d8-4f99-a621-f28d463c512f-kube-api-access-52m5l\") pod \"915955ff-c1d8-4f99-a621-f28d463c512f\" (UID: \"915955ff-c1d8-4f99-a621-f28d463c512f\") " Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.545412 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3ccfc13-7a62-4923-95ab-c68cb93aa03c-internal-tls-certs\") pod \"d3ccfc13-7a62-4923-95ab-c68cb93aa03c\" (UID: 
\"d3ccfc13-7a62-4923-95ab-c68cb93aa03c\") " Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.545481 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gxf22\" (UniqueName: \"kubernetes.io/projected/d3ccfc13-7a62-4923-95ab-c68cb93aa03c-kube-api-access-gxf22\") pod \"d3ccfc13-7a62-4923-95ab-c68cb93aa03c\" (UID: \"d3ccfc13-7a62-4923-95ab-c68cb93aa03c\") " Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.545518 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95c08694-92ed-44cb-8ca3-92a47b5571d4-config-data\") pod \"95c08694-92ed-44cb-8ca3-92a47b5571d4\" (UID: \"95c08694-92ed-44cb-8ca3-92a47b5571d4\") " Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.545543 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/915955ff-c1d8-4f99-a621-f28d463c512f-combined-ca-bundle\") pod \"915955ff-c1d8-4f99-a621-f28d463c512f\" (UID: \"915955ff-c1d8-4f99-a621-f28d463c512f\") " Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.545576 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d3ccfc13-7a62-4923-95ab-c68cb93aa03c-scripts\") pod \"d3ccfc13-7a62-4923-95ab-c68cb93aa03c\" (UID: \"d3ccfc13-7a62-4923-95ab-c68cb93aa03c\") " Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.545597 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"d3ccfc13-7a62-4923-95ab-c68cb93aa03c\" (UID: \"d3ccfc13-7a62-4923-95ab-c68cb93aa03c\") " Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.545623 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/915955ff-c1d8-4f99-a621-f28d463c512f-config-data\") pod \"915955ff-c1d8-4f99-a621-f28d463c512f\" (UID: \"915955ff-c1d8-4f99-a621-f28d463c512f\") " Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.545653 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d3ccfc13-7a62-4923-95ab-c68cb93aa03c-logs\") pod \"d3ccfc13-7a62-4923-95ab-c68cb93aa03c\" (UID: \"d3ccfc13-7a62-4923-95ab-c68cb93aa03c\") " Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.545676 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-chwnz\" (UniqueName: \"kubernetes.io/projected/95c08694-92ed-44cb-8ca3-92a47b5571d4-kube-api-access-chwnz\") pod \"95c08694-92ed-44cb-8ca3-92a47b5571d4\" (UID: \"95c08694-92ed-44cb-8ca3-92a47b5571d4\") " Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.545710 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/915955ff-c1d8-4f99-a621-f28d463c512f-logs\") pod \"915955ff-c1d8-4f99-a621-f28d463c512f\" (UID: \"915955ff-c1d8-4f99-a621-f28d463c512f\") " Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.545736 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95c08694-92ed-44cb-8ca3-92a47b5571d4-combined-ca-bundle\") pod \"95c08694-92ed-44cb-8ca3-92a47b5571d4\" (UID: \"95c08694-92ed-44cb-8ca3-92a47b5571d4\") " Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.545766 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/915955ff-c1d8-4f99-a621-f28d463c512f-nova-metadata-tls-certs\") pod \"915955ff-c1d8-4f99-a621-f28d463c512f\" (UID: \"915955ff-c1d8-4f99-a621-f28d463c512f\") " Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.546162 
4840 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86c17dbf-d890-4de3-bf5d-29e0aea4d968-config-data\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.546174 4840 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/86c17dbf-d890-4de3-bf5d-29e0aea4d968-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.546184 4840 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/86c17dbf-d890-4de3-bf5d-29e0aea4d968-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.559382 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3ccfc13-7a62-4923-95ab-c68cb93aa03c-logs" (OuterVolumeSpecName: "logs") pod "d3ccfc13-7a62-4923-95ab-c68cb93aa03c" (UID: "d3ccfc13-7a62-4923-95ab-c68cb93aa03c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.562090 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3ccfc13-7a62-4923-95ab-c68cb93aa03c-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "d3ccfc13-7a62-4923-95ab-c68cb93aa03c" (UID: "d3ccfc13-7a62-4923-95ab-c68cb93aa03c"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.565159 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/915955ff-c1d8-4f99-a621-f28d463c512f-kube-api-access-52m5l" (OuterVolumeSpecName: "kube-api-access-52m5l") pod "915955ff-c1d8-4f99-a621-f28d463c512f" (UID: "915955ff-c1d8-4f99-a621-f28d463c512f"). InnerVolumeSpecName "kube-api-access-52m5l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.565256 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/915955ff-c1d8-4f99-a621-f28d463c512f-logs" (OuterVolumeSpecName: "logs") pod "915955ff-c1d8-4f99-a621-f28d463c512f" (UID: "915955ff-c1d8-4f99-a621-f28d463c512f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.568415 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3ccfc13-7a62-4923-95ab-c68cb93aa03c-kube-api-access-gxf22" (OuterVolumeSpecName: "kube-api-access-gxf22") pod "d3ccfc13-7a62-4923-95ab-c68cb93aa03c" (UID: "d3ccfc13-7a62-4923-95ab-c68cb93aa03c"). InnerVolumeSpecName "kube-api-access-gxf22". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.568520 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-28ac-account-create-update-4b8kv"] Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.576551 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3ccfc13-7a62-4923-95ab-c68cb93aa03c-scripts" (OuterVolumeSpecName: "scripts") pod "d3ccfc13-7a62-4923-95ab-c68cb93aa03c" (UID: "d3ccfc13-7a62-4923-95ab-c68cb93aa03c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.577200 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "glance") pod "d3ccfc13-7a62-4923-95ab-c68cb93aa03c" (UID: "d3ccfc13-7a62-4923-95ab-c68cb93aa03c"). InnerVolumeSpecName "local-storage05-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.610370 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95c08694-92ed-44cb-8ca3-92a47b5571d4-kube-api-access-chwnz" (OuterVolumeSpecName: "kube-api-access-chwnz") pod "95c08694-92ed-44cb-8ca3-92a47b5571d4" (UID: "95c08694-92ed-44cb-8ca3-92a47b5571d4"). InnerVolumeSpecName "kube-api-access-chwnz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.631609 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3ccfc13-7a62-4923-95ab-c68cb93aa03c-config-data" (OuterVolumeSpecName: "config-data") pod "d3ccfc13-7a62-4923-95ab-c68cb93aa03c" (UID: "d3ccfc13-7a62-4923-95ab-c68cb93aa03c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.646972 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15a50bea-c32e-4aed-8fd2-7289e1694f6e-config-data\") pod \"15a50bea-c32e-4aed-8fd2-7289e1694f6e\" (UID: \"15a50bea-c32e-4aed-8fd2-7289e1694f6e\") "
Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.647238 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rl84b\" (UniqueName: \"kubernetes.io/projected/15a50bea-c32e-4aed-8fd2-7289e1694f6e-kube-api-access-rl84b\") pod \"15a50bea-c32e-4aed-8fd2-7289e1694f6e\" (UID: \"15a50bea-c32e-4aed-8fd2-7289e1694f6e\") "
Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.647390 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/cfc57f3e-df7f-40fe-9cc0-7ad00ecd651e-kube-state-metrics-tls-config\") pod \"cfc57f3e-df7f-40fe-9cc0-7ad00ecd651e\" (UID: \"cfc57f3e-df7f-40fe-9cc0-7ad00ecd651e\") "
Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.647724 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/629115e9-6bcf-45e8-a0da-d7c06386b7b7-public-tls-certs\") pod \"629115e9-6bcf-45e8-a0da-d7c06386b7b7\" (UID: \"629115e9-6bcf-45e8-a0da-d7c06386b7b7\") "
Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.647802 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/15a50bea-c32e-4aed-8fd2-7289e1694f6e-internal-tls-certs\") pod \"15a50bea-c32e-4aed-8fd2-7289e1694f6e\" (UID: \"15a50bea-c32e-4aed-8fd2-7289e1694f6e\") "
Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.647885 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/629115e9-6bcf-45e8-a0da-d7c06386b7b7-internal-tls-certs\") pod \"629115e9-6bcf-45e8-a0da-d7c06386b7b7\" (UID: \"629115e9-6bcf-45e8-a0da-d7c06386b7b7\") "
Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.647990 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/15a50bea-c32e-4aed-8fd2-7289e1694f6e-logs\") pod \"15a50bea-c32e-4aed-8fd2-7289e1694f6e\" (UID: \"15a50bea-c32e-4aed-8fd2-7289e1694f6e\") "
Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.648099 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15a50bea-c32e-4aed-8fd2-7289e1694f6e-combined-ca-bundle\") pod \"15a50bea-c32e-4aed-8fd2-7289e1694f6e\" (UID: \"15a50bea-c32e-4aed-8fd2-7289e1694f6e\") "
Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.648165 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfc57f3e-df7f-40fe-9cc0-7ad00ecd651e-combined-ca-bundle\") pod \"cfc57f3e-df7f-40fe-9cc0-7ad00ecd651e\" (UID: \"cfc57f3e-df7f-40fe-9cc0-7ad00ecd651e\") "
Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.648256 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/15a50bea-c32e-4aed-8fd2-7289e1694f6e-public-tls-certs\") pod \"15a50bea-c32e-4aed-8fd2-7289e1694f6e\" (UID: \"15a50bea-c32e-4aed-8fd2-7289e1694f6e\") "
Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.648329 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/629115e9-6bcf-45e8-a0da-d7c06386b7b7-logs\") pod \"629115e9-6bcf-45e8-a0da-d7c06386b7b7\" (UID: \"629115e9-6bcf-45e8-a0da-d7c06386b7b7\") "
Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.648428 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/629115e9-6bcf-45e8-a0da-d7c06386b7b7-combined-ca-bundle\") pod \"629115e9-6bcf-45e8-a0da-d7c06386b7b7\" (UID: \"629115e9-6bcf-45e8-a0da-d7c06386b7b7\") "
Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.648568 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/15a50bea-c32e-4aed-8fd2-7289e1694f6e-config-data-custom\") pod \"15a50bea-c32e-4aed-8fd2-7289e1694f6e\" (UID: \"15a50bea-c32e-4aed-8fd2-7289e1694f6e\") "
Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.648648 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/629115e9-6bcf-45e8-a0da-d7c06386b7b7-config-data\") pod \"629115e9-6bcf-45e8-a0da-d7c06386b7b7\" (UID: \"629115e9-6bcf-45e8-a0da-d7c06386b7b7\") "
Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.649294 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h79kk\" (UniqueName: \"kubernetes.io/projected/cfc57f3e-df7f-40fe-9cc0-7ad00ecd651e-kube-api-access-h79kk\") pod \"cfc57f3e-df7f-40fe-9cc0-7ad00ecd651e\" (UID: \"cfc57f3e-df7f-40fe-9cc0-7ad00ecd651e\") "
Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.649443 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pm5m6\" (UniqueName: \"kubernetes.io/projected/629115e9-6bcf-45e8-a0da-d7c06386b7b7-kube-api-access-pm5m6\") pod \"629115e9-6bcf-45e8-a0da-d7c06386b7b7\" (UID: \"629115e9-6bcf-45e8-a0da-d7c06386b7b7\") "
Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.649540 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/cfc57f3e-df7f-40fe-9cc0-7ad00ecd651e-kube-state-metrics-tls-certs\") pod \"cfc57f3e-df7f-40fe-9cc0-7ad00ecd651e\" (UID: \"cfc57f3e-df7f-40fe-9cc0-7ad00ecd651e\") "
Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.649986 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/915955ff-c1d8-4f99-a621-f28d463c512f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "915955ff-c1d8-4f99-a621-f28d463c512f" (UID: "915955ff-c1d8-4f99-a621-f28d463c512f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.650271 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-52m5l\" (UniqueName: \"kubernetes.io/projected/915955ff-c1d8-4f99-a621-f28d463c512f-kube-api-access-52m5l\") on node \"crc\" DevicePath \"\""
Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.650340 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gxf22\" (UniqueName: \"kubernetes.io/projected/d3ccfc13-7a62-4923-95ab-c68cb93aa03c-kube-api-access-gxf22\") on node \"crc\" DevicePath \"\""
Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.650405 4840 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/915955ff-c1d8-4f99-a621-f28d463c512f-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.650476 4840 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d3ccfc13-7a62-4923-95ab-c68cb93aa03c-scripts\") on node \"crc\" DevicePath \"\""
Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.650544 4840 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" "
Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.650608 4840 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d3ccfc13-7a62-4923-95ab-c68cb93aa03c-logs\") on node \"crc\" DevicePath \"\""
Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.650663 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-chwnz\" (UniqueName: \"kubernetes.io/projected/95c08694-92ed-44cb-8ca3-92a47b5571d4-kube-api-access-chwnz\") on node \"crc\" DevicePath \"\""
Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.650716 4840 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/915955ff-c1d8-4f99-a621-f28d463c512f-logs\") on node \"crc\" DevicePath \"\""
Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.650771 4840 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d3ccfc13-7a62-4923-95ab-c68cb93aa03c-httpd-run\") on node \"crc\" DevicePath \"\""
Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.650831 4840 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3ccfc13-7a62-4923-95ab-c68cb93aa03c-config-data\") on node \"crc\" DevicePath \"\""
Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.651880 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/915955ff-c1d8-4f99-a621-f28d463c512f-config-data" (OuterVolumeSpecName: "config-data") pod "915955ff-c1d8-4f99-a621-f28d463c512f" (UID: "915955ff-c1d8-4f99-a621-f28d463c512f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.661755 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/629115e9-6bcf-45e8-a0da-d7c06386b7b7-logs" (OuterVolumeSpecName: "logs") pod "629115e9-6bcf-45e8-a0da-d7c06386b7b7" (UID: "629115e9-6bcf-45e8-a0da-d7c06386b7b7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.664593 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-0f23-account-create-update-mbjxd"]
Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.668363 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15a50bea-c32e-4aed-8fd2-7289e1694f6e-logs" (OuterVolumeSpecName: "logs") pod "15a50bea-c32e-4aed-8fd2-7289e1694f6e" (UID: "15a50bea-c32e-4aed-8fd2-7289e1694f6e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.668626 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-0f23-account-create-update-mbjxd"]
Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.669131 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15a50bea-c32e-4aed-8fd2-7289e1694f6e-kube-api-access-rl84b" (OuterVolumeSpecName: "kube-api-access-rl84b") pod "15a50bea-c32e-4aed-8fd2-7289e1694f6e" (UID: "15a50bea-c32e-4aed-8fd2-7289e1694f6e"). InnerVolumeSpecName "kube-api-access-rl84b". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.669171 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/629115e9-6bcf-45e8-a0da-d7c06386b7b7-kube-api-access-pm5m6" (OuterVolumeSpecName: "kube-api-access-pm5m6") pod "629115e9-6bcf-45e8-a0da-d7c06386b7b7" (UID: "629115e9-6bcf-45e8-a0da-d7c06386b7b7"). InnerVolumeSpecName "kube-api-access-pm5m6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.679822 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3ccfc13-7a62-4923-95ab-c68cb93aa03c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d3ccfc13-7a62-4923-95ab-c68cb93aa03c" (UID: "d3ccfc13-7a62-4923-95ab-c68cb93aa03c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.685423 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/915955ff-c1d8-4f99-a621-f28d463c512f-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "915955ff-c1d8-4f99-a621-f28d463c512f" (UID: "915955ff-c1d8-4f99-a621-f28d463c512f"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.705685 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-bc41-account-create-update-kjhwj"]
Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.707174 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cfc57f3e-df7f-40fe-9cc0-7ad00ecd651e-kube-api-access-h79kk" (OuterVolumeSpecName: "kube-api-access-h79kk") pod "cfc57f3e-df7f-40fe-9cc0-7ad00ecd651e" (UID: "cfc57f3e-df7f-40fe-9cc0-7ad00ecd651e"). InnerVolumeSpecName "kube-api-access-h79kk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.713985 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-bc41-account-create-update-kjhwj"]
Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.720655 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-5bc9bbdff8-zczrp"]
Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.724965 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15a50bea-c32e-4aed-8fd2-7289e1694f6e-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "15a50bea-c32e-4aed-8fd2-7289e1694f6e" (UID: "15a50bea-c32e-4aed-8fd2-7289e1694f6e"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.730017 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-5bc9bbdff8-zczrp"]
Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.732540 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95c08694-92ed-44cb-8ca3-92a47b5571d4-config-data" (OuterVolumeSpecName: "config-data") pod "95c08694-92ed-44cb-8ca3-92a47b5571d4" (UID: "95c08694-92ed-44cb-8ca3-92a47b5571d4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.754446 4840 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/629115e9-6bcf-45e8-a0da-d7c06386b7b7-logs\") on node \"crc\" DevicePath \"\""
Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.754579 4840 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/15a50bea-c32e-4aed-8fd2-7289e1694f6e-config-data-custom\") on node \"crc\" DevicePath \"\""
Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.754594 4840 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95c08694-92ed-44cb-8ca3-92a47b5571d4-config-data\") on node \"crc\" DevicePath \"\""
Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.754610 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h79kk\" (UniqueName: \"kubernetes.io/projected/cfc57f3e-df7f-40fe-9cc0-7ad00ecd651e-kube-api-access-h79kk\") on node \"crc\" DevicePath \"\""
Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.754622 4840 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/915955ff-c1d8-4f99-a621-f28d463c512f-config-data\") on node \"crc\" DevicePath \"\""
Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.754634 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pm5m6\" (UniqueName: \"kubernetes.io/projected/629115e9-6bcf-45e8-a0da-d7c06386b7b7-kube-api-access-pm5m6\") on node \"crc\" DevicePath \"\""
Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.754647 4840 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/915955ff-c1d8-4f99-a621-f28d463c512f-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\""
Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.754659 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rl84b\" (UniqueName: \"kubernetes.io/projected/15a50bea-c32e-4aed-8fd2-7289e1694f6e-kube-api-access-rl84b\") on node \"crc\" DevicePath \"\""
Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.754671 4840 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3ccfc13-7a62-4923-95ab-c68cb93aa03c-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.754683 4840 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/15a50bea-c32e-4aed-8fd2-7289e1694f6e-logs\") on node \"crc\" DevicePath \"\""
Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.756799 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95c08694-92ed-44cb-8ca3-92a47b5571d4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "95c08694-92ed-44cb-8ca3-92a47b5571d4" (UID: "95c08694-92ed-44cb-8ca3-92a47b5571d4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.776253 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-q29dz"]
Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.784693 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-q29dz"]
Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.795725 4840 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc"
Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.826903 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3ccfc13-7a62-4923-95ab-c68cb93aa03c-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "d3ccfc13-7a62-4923-95ab-c68cb93aa03c" (UID: "d3ccfc13-7a62-4923-95ab-c68cb93aa03c"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.832325 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfc57f3e-df7f-40fe-9cc0-7ad00ecd651e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cfc57f3e-df7f-40fe-9cc0-7ad00ecd651e" (UID: "cfc57f3e-df7f-40fe-9cc0-7ad00ecd651e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.859653 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/629115e9-6bcf-45e8-a0da-d7c06386b7b7-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "629115e9-6bcf-45e8-a0da-d7c06386b7b7" (UID: "629115e9-6bcf-45e8-a0da-d7c06386b7b7"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.860634 4840 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95c08694-92ed-44cb-8ca3-92a47b5571d4-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.860656 4840 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/629115e9-6bcf-45e8-a0da-d7c06386b7b7-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.860669 4840 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfc57f3e-df7f-40fe-9cc0-7ad00ecd651e-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.860678 4840 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3ccfc13-7a62-4923-95ab-c68cb93aa03c-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.860687 4840 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\""
Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.861162 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15a50bea-c32e-4aed-8fd2-7289e1694f6e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "15a50bea-c32e-4aed-8fd2-7289e1694f6e" (UID: "15a50bea-c32e-4aed-8fd2-7289e1694f6e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.884398 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/629115e9-6bcf-45e8-a0da-d7c06386b7b7-config-data" (OuterVolumeSpecName: "config-data") pod "629115e9-6bcf-45e8-a0da-d7c06386b7b7" (UID: "629115e9-6bcf-45e8-a0da-d7c06386b7b7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.916353 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfc57f3e-df7f-40fe-9cc0-7ad00ecd651e-kube-state-metrics-tls-config" (OuterVolumeSpecName: "kube-state-metrics-tls-config") pod "cfc57f3e-df7f-40fe-9cc0-7ad00ecd651e" (UID: "cfc57f3e-df7f-40fe-9cc0-7ad00ecd651e"). InnerVolumeSpecName "kube-state-metrics-tls-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.933874 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/629115e9-6bcf-45e8-a0da-d7c06386b7b7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "629115e9-6bcf-45e8-a0da-d7c06386b7b7" (UID: "629115e9-6bcf-45e8-a0da-d7c06386b7b7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.933892 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15a50bea-c32e-4aed-8fd2-7289e1694f6e-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "15a50bea-c32e-4aed-8fd2-7289e1694f6e" (UID: "15a50bea-c32e-4aed-8fd2-7289e1694f6e"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.936296 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfc57f3e-df7f-40fe-9cc0-7ad00ecd651e-kube-state-metrics-tls-certs" (OuterVolumeSpecName: "kube-state-metrics-tls-certs") pod "cfc57f3e-df7f-40fe-9cc0-7ad00ecd651e" (UID: "cfc57f3e-df7f-40fe-9cc0-7ad00ecd651e"). InnerVolumeSpecName "kube-state-metrics-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.948120 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15a50bea-c32e-4aed-8fd2-7289e1694f6e-config-data" (OuterVolumeSpecName: "config-data") pod "15a50bea-c32e-4aed-8fd2-7289e1694f6e" (UID: "15a50bea-c32e-4aed-8fd2-7289e1694f6e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.948858 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/629115e9-6bcf-45e8-a0da-d7c06386b7b7-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "629115e9-6bcf-45e8-a0da-d7c06386b7b7" (UID: "629115e9-6bcf-45e8-a0da-d7c06386b7b7"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.964239 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15a50bea-c32e-4aed-8fd2-7289e1694f6e-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "15a50bea-c32e-4aed-8fd2-7289e1694f6e" (UID: "15a50bea-c32e-4aed-8fd2-7289e1694f6e"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.966754 4840 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/629115e9-6bcf-45e8-a0da-d7c06386b7b7-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.966787 4840 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/629115e9-6bcf-45e8-a0da-d7c06386b7b7-config-data\") on node \"crc\" DevicePath \"\""
Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.966802 4840 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/cfc57f3e-df7f-40fe-9cc0-7ad00ecd651e-kube-state-metrics-tls-certs\") on node \"crc\" DevicePath \"\""
Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.966821 4840 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15a50bea-c32e-4aed-8fd2-7289e1694f6e-config-data\") on node \"crc\" DevicePath \"\""
Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.966835 4840 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/cfc57f3e-df7f-40fe-9cc0-7ad00ecd651e-kube-state-metrics-tls-config\") on node \"crc\" DevicePath \"\""
Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.966861 4840 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/629115e9-6bcf-45e8-a0da-d7c06386b7b7-public-tls-certs\") on node \"crc\" DevicePath \"\""
Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.966873 4840 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/15a50bea-c32e-4aed-8fd2-7289e1694f6e-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.966885 4840 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15a50bea-c32e-4aed-8fd2-7289e1694f6e-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.966897 4840 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/15a50bea-c32e-4aed-8fd2-7289e1694f6e-public-tls-certs\") on node \"crc\" DevicePath \"\""
Mar 11 09:21:23 crc kubenswrapper[4840]: I0311 09:21:23.995276 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0"
Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.002769 4840 scope.go:117] "RemoveContainer" containerID="7ec25a62f5da025330f39922a9766d18ca04cc7a4cbe3b4c61c31b0ef51d49f5"
Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.004417 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.006307 4840 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="f31748d2-64a9-4839-ac55-691d9682ee8e" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.104:5671: connect: connection refused"
Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.013954 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.031014 4840 scope.go:117] "RemoveContainer" containerID="97908e7657b276e714bdd7983d1b6b792bc1fff3b99535e851314e2428338b75"
Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.068425 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9699b913-55db-46fe-9831-1e1ac94ca609-combined-ca-bundle\") pod \"9699b913-55db-46fe-9831-1e1ac94ca609\" (UID: \"9699b913-55db-46fe-9831-1e1ac94ca609\") "
Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.068818 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71aaf352-8b91-4846-8ce4-1d83303ac203-combined-ca-bundle\") pod \"71aaf352-8b91-4846-8ce4-1d83303ac203\" (UID: \"71aaf352-8b91-4846-8ce4-1d83303ac203\") "
Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.068840 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c44e0641-be37-4447-9666-14bf00c08827-combined-ca-bundle\") pod \"c44e0641-be37-4447-9666-14bf00c08827\" (UID: \"c44e0641-be37-4447-9666-14bf00c08827\") "
Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.068860 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71aaf352-8b91-4846-8ce4-1d83303ac203-scripts\") pod \"71aaf352-8b91-4846-8ce4-1d83303ac203\" (UID: \"71aaf352-8b91-4846-8ce4-1d83303ac203\") "
Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.068882 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cmxxk\" (UniqueName: \"kubernetes.io/projected/c44e0641-be37-4447-9666-14bf00c08827-kube-api-access-cmxxk\") pod \"c44e0641-be37-4447-9666-14bf00c08827\" (UID: \"c44e0641-be37-4447-9666-14bf00c08827\") "
Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.068944 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71aaf352-8b91-4846-8ce4-1d83303ac203-logs\") pod \"71aaf352-8b91-4846-8ce4-1d83303ac203\" (UID: \"71aaf352-8b91-4846-8ce4-1d83303ac203\") "
Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.068969 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/71aaf352-8b91-4846-8ce4-1d83303ac203-httpd-run\") pod \"71aaf352-8b91-4846-8ce4-1d83303ac203\" (UID: \"71aaf352-8b91-4846-8ce4-1d83303ac203\") "
Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.069031 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c44e0641-be37-4447-9666-14bf00c08827-config-data\") pod \"c44e0641-be37-4447-9666-14bf00c08827\" (UID: \"c44e0641-be37-4447-9666-14bf00c08827\") "
Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.069051 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c44e0641-be37-4447-9666-14bf00c08827-kolla-config\") pod \"c44e0641-be37-4447-9666-14bf00c08827\" (UID: \"c44e0641-be37-4447-9666-14bf00c08827\") "
Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.069069 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/71aaf352-8b91-4846-8ce4-1d83303ac203-public-tls-certs\") pod \"71aaf352-8b91-4846-8ce4-1d83303ac203\" (UID: \"71aaf352-8b91-4846-8ce4-1d83303ac203\") "
Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.069085 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"71aaf352-8b91-4846-8ce4-1d83303ac203\" (UID: \"71aaf352-8b91-4846-8ce4-1d83303ac203\") "
Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.069150 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gjwpl\" (UniqueName: \"kubernetes.io/projected/71aaf352-8b91-4846-8ce4-1d83303ac203-kube-api-access-gjwpl\") pod \"71aaf352-8b91-4846-8ce4-1d83303ac203\" (UID: \"71aaf352-8b91-4846-8ce4-1d83303ac203\") "
Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.069177 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tf8hr\" (UniqueName: \"kubernetes.io/projected/9699b913-55db-46fe-9831-1e1ac94ca609-kube-api-access-tf8hr\") pod \"9699b913-55db-46fe-9831-1e1ac94ca609\" (UID: \"9699b913-55db-46fe-9831-1e1ac94ca609\") "
Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.069201 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9699b913-55db-46fe-9831-1e1ac94ca609-config-data\") pod \"9699b913-55db-46fe-9831-1e1ac94ca609\" (UID: \"9699b913-55db-46fe-9831-1e1ac94ca609\") "
Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.069229 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/c44e0641-be37-4447-9666-14bf00c08827-memcached-tls-certs\") pod \"c44e0641-be37-4447-9666-14bf00c08827\" (UID: \"c44e0641-be37-4447-9666-14bf00c08827\") "
Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.069270 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71aaf352-8b91-4846-8ce4-1d83303ac203-config-data\") pod \"71aaf352-8b91-4846-8ce4-1d83303ac203\" (UID: \"71aaf352-8b91-4846-8ce4-1d83303ac203\") "
Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.071745 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71aaf352-8b91-4846-8ce4-1d83303ac203-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "71aaf352-8b91-4846-8ce4-1d83303ac203" (UID: "71aaf352-8b91-4846-8ce4-1d83303ac203"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.075869 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71aaf352-8b91-4846-8ce4-1d83303ac203-logs" (OuterVolumeSpecName: "logs") pod "71aaf352-8b91-4846-8ce4-1d83303ac203" (UID: "71aaf352-8b91-4846-8ce4-1d83303ac203"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.075988 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c44e0641-be37-4447-9666-14bf00c08827-config-data" (OuterVolumeSpecName: "config-data") pod "c44e0641-be37-4447-9666-14bf00c08827" (UID: "c44e0641-be37-4447-9666-14bf00c08827"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.076894 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c44e0641-be37-4447-9666-14bf00c08827-kube-api-access-cmxxk" (OuterVolumeSpecName: "kube-api-access-cmxxk") pod "c44e0641-be37-4447-9666-14bf00c08827" (UID: "c44e0641-be37-4447-9666-14bf00c08827"). InnerVolumeSpecName "kube-api-access-cmxxk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.077016 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c44e0641-be37-4447-9666-14bf00c08827-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "c44e0641-be37-4447-9666-14bf00c08827" (UID: "c44e0641-be37-4447-9666-14bf00c08827"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.085122 4840 scope.go:117] "RemoveContainer" containerID="de2608ad4638642b2ed36b2182d3eea8ca8dc51168010aa67653e3ba968f01af"
Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.085352 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71aaf352-8b91-4846-8ce4-1d83303ac203-scripts" (OuterVolumeSpecName: "scripts") pod "71aaf352-8b91-4846-8ce4-1d83303ac203" (UID: "71aaf352-8b91-4846-8ce4-1d83303ac203"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.088720 4840 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="3a964c24-3c53-4a29-98fb-ceaca467c372" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.105:5671: connect: connection refused"
Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.094524 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="462d670d-c6b2-4c12-9d5c-183069b04200" path="/var/lib/kubelet/pods/462d670d-c6b2-4c12-9d5c-183069b04200/volumes"
Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.095694 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d2cbae7-4ece-49e8-b85e-30db29d6c172" path="/var/lib/kubelet/pods/4d2cbae7-4ece-49e8-b85e-30db29d6c172/volumes"
Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.096261 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="822a8132-1b8a-4f19-b42b-b6acd65e7743" path="/var/lib/kubelet/pods/822a8132-1b8a-4f19-b42b-b6acd65e7743/volumes"
Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.096636 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa334d0b-a179-4905-a660-05bbc12e5c02" path="/var/lib/kubelet/pods/aa334d0b-a179-4905-a660-05bbc12e5c02/volumes"
Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.097443 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4873de8-8c37-49b7-a8ba-b352a2cf0320" path="/var/lib/kubelet/pods/b4873de8-8c37-49b7-a8ba-b352a2cf0320/volumes"
Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.097859 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7acca3e-61e4-495b-adbf-36c435b4a7d2" path="/var/lib/kubelet/pods/d7acca3e-61e4-495b-adbf-36c435b4a7d2/volumes"
Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.097918 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9699b913-55db-46fe-9831-1e1ac94ca609-kube-api-access-tf8hr" (OuterVolumeSpecName: "kube-api-access-tf8hr") pod "9699b913-55db-46fe-9831-1e1ac94ca609" (UID: "9699b913-55db-46fe-9831-1e1ac94ca609"). InnerVolumeSpecName "kube-api-access-tf8hr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.100493 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "71aaf352-8b91-4846-8ce4-1d83303ac203" (UID: "71aaf352-8b91-4846-8ce4-1d83303ac203"). InnerVolumeSpecName "local-storage08-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.104321 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3dc3dc8-7561-4964-9b52-5bfacca166da" path="/var/lib/kubelet/pods/e3dc3dc8-7561-4964-9b52-5bfacca166da/volumes" Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.107144 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e93bc954-4d65-4b5f-8070-2b9800ff3db2" path="/var/lib/kubelet/pods/e93bc954-4d65-4b5f-8070-2b9800ff3db2/volumes" Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.110802 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71aaf352-8b91-4846-8ce4-1d83303ac203-kube-api-access-gjwpl" (OuterVolumeSpecName: "kube-api-access-gjwpl") pod "71aaf352-8b91-4846-8ce4-1d83303ac203" (UID: "71aaf352-8b91-4846-8ce4-1d83303ac203"). InnerVolumeSpecName "kube-api-access-gjwpl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.112660 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f06548f2-3dd2-48ae-ad54-333466a3e6ac" path="/var/lib/kubelet/pods/f06548f2-3dd2-48ae-ad54-333466a3e6ac/volumes" Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.113673 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f10cd3a6-0a55-4957-b861-678d9af3c338" path="/var/lib/kubelet/pods/f10cd3a6-0a55-4957-b861-678d9af3c338/volumes" Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.114575 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f415e24c-207c-4dc7-b68c-14180ac09391" path="/var/lib/kubelet/pods/f415e24c-207c-4dc7-b68c-14180ac09391/volumes" Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.117663 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc871f48-4882-49a3-be1f-80d95c2548a9" 
path="/var/lib/kubelet/pods/fc871f48-4882-49a3-be1f-80d95c2548a9/volumes" Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.160607 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_1b0d4c8e-fe4e-4ca7-8cfe-1d4fb97c3ac6/ovn-northd/0.log" Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.160700 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.171145 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6092c570-2c07-432f-9bdd-b48bf0600ef9-operator-scripts\") pod \"keystone-42de-account-create-update-558dw\" (UID: \"6092c570-2c07-432f-9bdd-b48bf0600ef9\") " pod="openstack/keystone-42de-account-create-update-558dw" Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.171261 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwdfz\" (UniqueName: \"kubernetes.io/projected/6092c570-2c07-432f-9bdd-b48bf0600ef9-kube-api-access-vwdfz\") pod \"keystone-42de-account-create-update-558dw\" (UID: \"6092c570-2c07-432f-9bdd-b48bf0600ef9\") " pod="openstack/keystone-42de-account-create-update-558dw" Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.171336 4840 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71aaf352-8b91-4846-8ce4-1d83303ac203-scripts\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.171348 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cmxxk\" (UniqueName: \"kubernetes.io/projected/c44e0641-be37-4447-9666-14bf00c08827-kube-api-access-cmxxk\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.171360 4840 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/71aaf352-8b91-4846-8ce4-1d83303ac203-logs\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.171368 4840 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/71aaf352-8b91-4846-8ce4-1d83303ac203-httpd-run\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.171378 4840 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c44e0641-be37-4447-9666-14bf00c08827-config-data\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.171387 4840 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c44e0641-be37-4447-9666-14bf00c08827-kolla-config\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.171408 4840 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.171417 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gjwpl\" (UniqueName: \"kubernetes.io/projected/71aaf352-8b91-4846-8ce4-1d83303ac203-kube-api-access-gjwpl\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.171429 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tf8hr\" (UniqueName: \"kubernetes.io/projected/9699b913-55db-46fe-9831-1e1ac94ca609-kube-api-access-tf8hr\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:24 crc kubenswrapper[4840]: E0311 09:21:24.172691 4840 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Mar 11 09:21:24 crc kubenswrapper[4840]: E0311 09:21:24.172816 4840 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/6092c570-2c07-432f-9bdd-b48bf0600ef9-operator-scripts podName:6092c570-2c07-432f-9bdd-b48bf0600ef9 nodeName:}" failed. No retries permitted until 2026-03-11 09:21:26.172785122 +0000 UTC m=+1484.838455037 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/6092c570-2c07-432f-9bdd-b48bf0600ef9-operator-scripts") pod "keystone-42de-account-create-update-558dw" (UID: "6092c570-2c07-432f-9bdd-b48bf0600ef9") : configmap "openstack-scripts" not found Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.174003 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9699b913-55db-46fe-9831-1e1ac94ca609-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9699b913-55db-46fe-9831-1e1ac94ca609" (UID: "9699b913-55db-46fe-9831-1e1ac94ca609"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:21:24 crc kubenswrapper[4840]: E0311 09:21:24.179040 4840 projected.go:194] Error preparing data for projected volume kube-api-access-vwdfz for pod openstack/keystone-42de-account-create-update-558dw: failed to fetch token: serviceaccounts "galera-openstack" not found Mar 11 09:21:24 crc kubenswrapper[4840]: E0311 09:21:24.179121 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6092c570-2c07-432f-9bdd-b48bf0600ef9-kube-api-access-vwdfz podName:6092c570-2c07-432f-9bdd-b48bf0600ef9 nodeName:}" failed. No retries permitted until 2026-03-11 09:21:26.179101191 +0000 UTC m=+1484.844771006 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-vwdfz" (UniqueName: "kubernetes.io/projected/6092c570-2c07-432f-9bdd-b48bf0600ef9-kube-api-access-vwdfz") pod "keystone-42de-account-create-update-558dw" (UID: "6092c570-2c07-432f-9bdd-b48bf0600ef9") : failed to fetch token: serviceaccounts "galera-openstack" not found Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.200825 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71aaf352-8b91-4846-8ce4-1d83303ac203-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "71aaf352-8b91-4846-8ce4-1d83303ac203" (UID: "71aaf352-8b91-4846-8ce4-1d83303ac203"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.201292 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9699b913-55db-46fe-9831-1e1ac94ca609-config-data" (OuterVolumeSpecName: "config-data") pod "9699b913-55db-46fe-9831-1e1ac94ca609" (UID: "9699b913-55db-46fe-9831-1e1ac94ca609"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.202602 4840 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.226428 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.252536 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71aaf352-8b91-4846-8ce4-1d83303ac203-config-data" (OuterVolumeSpecName: "config-data") pod "71aaf352-8b91-4846-8ce4-1d83303ac203" (UID: "71aaf352-8b91-4846-8ce4-1d83303ac203"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.259423 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71aaf352-8b91-4846-8ce4-1d83303ac203-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "71aaf352-8b91-4846-8ce4-1d83303ac203" (UID: "71aaf352-8b91-4846-8ce4-1d83303ac203"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.273222 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1b0d4c8e-fe4e-4ca7-8cfe-1d4fb97c3ac6-ovn-rundir\") pod \"1b0d4c8e-fe4e-4ca7-8cfe-1d4fb97c3ac6\" (UID: \"1b0d4c8e-fe4e-4ca7-8cfe-1d4fb97c3ac6\") " Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.273643 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ada053fd-c71a-4425-8220-b950f0cab229-combined-ca-bundle\") pod \"ada053fd-c71a-4425-8220-b950f0cab229\" (UID: \"ada053fd-c71a-4425-8220-b950f0cab229\") " Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.273810 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1b0d4c8e-fe4e-4ca7-8cfe-1d4fb97c3ac6-scripts\") pod \"1b0d4c8e-fe4e-4ca7-8cfe-1d4fb97c3ac6\" (UID: \"1b0d4c8e-fe4e-4ca7-8cfe-1d4fb97c3ac6\") " Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.273906 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b0d4c8e-fe4e-4ca7-8cfe-1d4fb97c3ac6-config\") pod \"1b0d4c8e-fe4e-4ca7-8cfe-1d4fb97c3ac6\" (UID: \"1b0d4c8e-fe4e-4ca7-8cfe-1d4fb97c3ac6\") " Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.274100 4840 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-cc597\" (UniqueName: \"kubernetes.io/projected/1b0d4c8e-fe4e-4ca7-8cfe-1d4fb97c3ac6-kube-api-access-cc597\") pod \"1b0d4c8e-fe4e-4ca7-8cfe-1d4fb97c3ac6\" (UID: \"1b0d4c8e-fe4e-4ca7-8cfe-1d4fb97c3ac6\") " Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.274193 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f4m6s\" (UniqueName: \"kubernetes.io/projected/ada053fd-c71a-4425-8220-b950f0cab229-kube-api-access-f4m6s\") pod \"ada053fd-c71a-4425-8220-b950f0cab229\" (UID: \"ada053fd-c71a-4425-8220-b950f0cab229\") " Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.274295 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b0d4c8e-fe4e-4ca7-8cfe-1d4fb97c3ac6-metrics-certs-tls-certs\") pod \"1b0d4c8e-fe4e-4ca7-8cfe-1d4fb97c3ac6\" (UID: \"1b0d4c8e-fe4e-4ca7-8cfe-1d4fb97c3ac6\") " Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.273971 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b0d4c8e-fe4e-4ca7-8cfe-1d4fb97c3ac6-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "1b0d4c8e-fe4e-4ca7-8cfe-1d4fb97c3ac6" (UID: "1b0d4c8e-fe4e-4ca7-8cfe-1d4fb97c3ac6"). InnerVolumeSpecName "ovn-rundir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.274422 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b0d4c8e-fe4e-4ca7-8cfe-1d4fb97c3ac6-ovn-northd-tls-certs\") pod \"1b0d4c8e-fe4e-4ca7-8cfe-1d4fb97c3ac6\" (UID: \"1b0d4c8e-fe4e-4ca7-8cfe-1d4fb97c3ac6\") " Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.274601 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c44e0641-be37-4447-9666-14bf00c08827-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c44e0641-be37-4447-9666-14bf00c08827" (UID: "c44e0641-be37-4447-9666-14bf00c08827"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.274770 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b0d4c8e-fe4e-4ca7-8cfe-1d4fb97c3ac6-combined-ca-bundle\") pod \"1b0d4c8e-fe4e-4ca7-8cfe-1d4fb97c3ac6\" (UID: \"1b0d4c8e-fe4e-4ca7-8cfe-1d4fb97c3ac6\") " Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.274881 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ada053fd-c71a-4425-8220-b950f0cab229-config-data\") pod \"ada053fd-c71a-4425-8220-b950f0cab229\" (UID: \"ada053fd-c71a-4425-8220-b950f0cab229\") " Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.275766 4840 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9699b913-55db-46fe-9831-1e1ac94ca609-config-data\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.276293 4840 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/71aaf352-8b91-4846-8ce4-1d83303ac203-config-data\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.281006 4840 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9699b913-55db-46fe-9831-1e1ac94ca609-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.284621 4840 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71aaf352-8b91-4846-8ce4-1d83303ac203-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.284779 4840 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c44e0641-be37-4447-9666-14bf00c08827-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.284860 4840 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1b0d4c8e-fe4e-4ca7-8cfe-1d4fb97c3ac6-ovn-rundir\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.284935 4840 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/71aaf352-8b91-4846-8ce4-1d83303ac203-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.285011 4840 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.274922 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b0d4c8e-fe4e-4ca7-8cfe-1d4fb97c3ac6-scripts" (OuterVolumeSpecName: "scripts") pod "1b0d4c8e-fe4e-4ca7-8cfe-1d4fb97c3ac6" (UID: 
"1b0d4c8e-fe4e-4ca7-8cfe-1d4fb97c3ac6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.276941 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b0d4c8e-fe4e-4ca7-8cfe-1d4fb97c3ac6-config" (OuterVolumeSpecName: "config") pod "1b0d4c8e-fe4e-4ca7-8cfe-1d4fb97c3ac6" (UID: "1b0d4c8e-fe4e-4ca7-8cfe-1d4fb97c3ac6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.287068 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ada053fd-c71a-4425-8220-b950f0cab229-kube-api-access-f4m6s" (OuterVolumeSpecName: "kube-api-access-f4m6s") pod "ada053fd-c71a-4425-8220-b950f0cab229" (UID: "ada053fd-c71a-4425-8220-b950f0cab229"). InnerVolumeSpecName "kube-api-access-f4m6s". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.291954 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c44e0641-be37-4447-9666-14bf00c08827-memcached-tls-certs" (OuterVolumeSpecName: "memcached-tls-certs") pod "c44e0641-be37-4447-9666-14bf00c08827" (UID: "c44e0641-be37-4447-9666-14bf00c08827"). InnerVolumeSpecName "memcached-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.297768 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7dcc97bc9b-l7z2g" event={"ID":"15a50bea-c32e-4aed-8fd2-7289e1694f6e","Type":"ContainerDied","Data":"0e08d48e7eb1b3838bce9bb753d5e0af02f3c8a1f8c6831a0528536d3fa724ad"} Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.297839 4840 scope.go:117] "RemoveContainer" containerID="6915adfd19dd7500fc5f1e3d97669f01dd9e52bc0540e0cbfa6ee88e4556faeb" Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.298003 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7dcc97bc9b-l7z2g" Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.305898 4840 generic.go:334] "Generic (PLEG): container finished" podID="ada053fd-c71a-4425-8220-b950f0cab229" containerID="344c0fde0e6a0013eefcf0f63c44c10fef7fcbc6d796ca335364063dd3641c28" exitCode=0 Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.305998 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.306006 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ada053fd-c71a-4425-8220-b950f0cab229","Type":"ContainerDied","Data":"344c0fde0e6a0013eefcf0f63c44c10fef7fcbc6d796ca335364063dd3641c28"} Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.306420 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ada053fd-c71a-4425-8220-b950f0cab229","Type":"ContainerDied","Data":"da52197a8f497123829fe04146eb74a2412a70d31de945d59a3faaeee05b8f39"} Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.314515 4840 generic.go:334] "Generic (PLEG): container finished" podID="f5fc86d0-4547-42ed-a880-f58c9e29d8a4" containerID="34979c436634b8f0e018557d27d62f341c687de81ef20f3c5d4a9fa769f1e4fb" exitCode=0 Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.314593 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f5fc86d0-4547-42ed-a880-f58c9e29d8a4","Type":"ContainerDied","Data":"34979c436634b8f0e018557d27d62f341c687de81ef20f3c5d4a9fa769f1e4fb"} Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.317296 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.319730 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"71aaf352-8b91-4846-8ce4-1d83303ac203","Type":"ContainerDied","Data":"a1993715d8a5812f3755cf35ed8576120afdedb601a5203ee479d8d13ab997ad"} Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.319846 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.325678 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"c44e0641-be37-4447-9666-14bf00c08827","Type":"ContainerDied","Data":"a7b84fe23abccdb2ad5bbb8949deb90f7c70551a93447acfcb47c9d0acdc6e49"} Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.325774 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.329425 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b0d4c8e-fe4e-4ca7-8cfe-1d4fb97c3ac6-kube-api-access-cc597" (OuterVolumeSpecName: "kube-api-access-cc597") pod "1b0d4c8e-fe4e-4ca7-8cfe-1d4fb97c3ac6" (UID: "1b0d4c8e-fe4e-4ca7-8cfe-1d4fb97c3ac6"). InnerVolumeSpecName "kube-api-access-cc597". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.338685 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"cfc57f3e-df7f-40fe-9cc0-7ad00ecd651e","Type":"ContainerDied","Data":"f6e9e04a8117c3de3c5a7248e19d2b7294656b7c0aba9f170b488aa9fb69a671"} Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.338791 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.340767 4840 scope.go:117] "RemoveContainer" containerID="70a8af150a750503de4de737257241b8eae85fdd8f6880b4e54352c0730d7435" Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.350745 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_1b0d4c8e-fe4e-4ca7-8cfe-1d4fb97c3ac6/ovn-northd/0.log" Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.350795 4840 generic.go:334] "Generic (PLEG): container finished" podID="1b0d4c8e-fe4e-4ca7-8cfe-1d4fb97c3ac6" containerID="7adedc06633a9d0c8638d09c8c1e960d90f82138b050dce1fb05836daea4ad71" exitCode=139 Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.350913 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.351247 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"1b0d4c8e-fe4e-4ca7-8cfe-1d4fb97c3ac6","Type":"ContainerDied","Data":"7adedc06633a9d0c8638d09c8c1e960d90f82138b050dce1fb05836daea4ad71"} Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.351274 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"1b0d4c8e-fe4e-4ca7-8cfe-1d4fb97c3ac6","Type":"ContainerDied","Data":"5cfb05a3cd6aac51ef732971ab79ab60f75029cd71dda360f49ad1b9a2d0edc2"} Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.357096 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.367269 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"629115e9-6bcf-45e8-a0da-d7c06386b7b7","Type":"ContainerDied","Data":"450f2669f5a2991854ed4736969f267b82b59ce6ac9a94181b523317942841f7"} Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.367390 4840 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.373011 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.374248 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b0d4c8e-fe4e-4ca7-8cfe-1d4fb97c3ac6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1b0d4c8e-fe4e-4ca7-8cfe-1d4fb97c3ac6" (UID: "1b0d4c8e-fe4e-4ca7-8cfe-1d4fb97c3ac6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.376263 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.376322 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"9699b913-55db-46fe-9831-1e1ac94ca609","Type":"ContainerDied","Data":"41b38ee46e15d1fb3fb180f35855c895589415561d01f113d63fb75a37737b9d"} Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.388612 4840 reconciler_common.go:293] "Volume detached for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/c44e0641-be37-4447-9666-14bf00c08827-memcached-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.388669 4840 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1b0d4c8e-fe4e-4ca7-8cfe-1d4fb97c3ac6-scripts\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.388680 4840 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b0d4c8e-fe4e-4ca7-8cfe-1d4fb97c3ac6-config\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:24 crc kubenswrapper[4840]: 
I0311 09:21:24.388689 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cc597\" (UniqueName: \"kubernetes.io/projected/1b0d4c8e-fe4e-4ca7-8cfe-1d4fb97c3ac6-kube-api-access-cc597\") on node \"crc\" DevicePath \"\""
Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.388701 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f4m6s\" (UniqueName: \"kubernetes.io/projected/ada053fd-c71a-4425-8220-b950f0cab229-kube-api-access-f4m6s\") on node \"crc\" DevicePath \"\""
Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.388711 4840 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b0d4c8e-fe4e-4ca7-8cfe-1d4fb97c3ac6-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.397929 4840 scope.go:117] "RemoveContainer" containerID="344c0fde0e6a0013eefcf0f63c44c10fef7fcbc6d796ca335364063dd3641c28"
Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.402687 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ada053fd-c71a-4425-8220-b950f0cab229-config-data" (OuterVolumeSpecName: "config-data") pod "ada053fd-c71a-4425-8220-b950f0cab229" (UID: "ada053fd-c71a-4425-8220-b950f0cab229"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.403067 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ada053fd-c71a-4425-8220-b950f0cab229-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ada053fd-c71a-4425-8220-b950f0cab229" (UID: "ada053fd-c71a-4425-8220-b950f0cab229"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.407219 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7dcc97bc9b-l7z2g"]
Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.417299 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-7dcc97bc9b-l7z2g"]
Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.421344 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"915955ff-c1d8-4f99-a621-f28d463c512f","Type":"ContainerDied","Data":"219e8ccbb2a92e75b89fe3b31edf67b7d0cebff0f30f5179b7f0e500621611de"}
Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.421486 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.425551 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"]
Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.434608 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"]
Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.444112 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.447019 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"]
Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.447609 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b0d4c8e-fe4e-4ca7-8cfe-1d4fb97c3ac6-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "1b0d4c8e-fe4e-4ca7-8cfe-1d4fb97c3ac6" (UID: "1b0d4c8e-fe4e-4ca7-8cfe-1d4fb97c3ac6"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.449050 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-9sj79" event={"ID":"882e0084-6891-4852-aecb-2951f5763800","Type":"ContainerStarted","Data":"fedf125f00b50c05ecde1baafaa4325da1661a7c5761b62143314a4b108e4bcf"}
Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.449142 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-42de-account-create-update-558dw"
Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.449181 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.449404 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.470888 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"]
Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.474181 4840 scope.go:117] "RemoveContainer" containerID="344c0fde0e6a0013eefcf0f63c44c10fef7fcbc6d796ca335364063dd3641c28"
Mar 11 09:21:24 crc kubenswrapper[4840]: E0311 09:21:24.476038 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"344c0fde0e6a0013eefcf0f63c44c10fef7fcbc6d796ca335364063dd3641c28\": container with ID starting with 344c0fde0e6a0013eefcf0f63c44c10fef7fcbc6d796ca335364063dd3641c28 not found: ID does not exist" containerID="344c0fde0e6a0013eefcf0f63c44c10fef7fcbc6d796ca335364063dd3641c28"
Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.476077 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"344c0fde0e6a0013eefcf0f63c44c10fef7fcbc6d796ca335364063dd3641c28"} err="failed to get container status \"344c0fde0e6a0013eefcf0f63c44c10fef7fcbc6d796ca335364063dd3641c28\": rpc error: code = NotFound desc = could not find container \"344c0fde0e6a0013eefcf0f63c44c10fef7fcbc6d796ca335364063dd3641c28\": container with ID starting with 344c0fde0e6a0013eefcf0f63c44c10fef7fcbc6d796ca335364063dd3641c28 not found: ID does not exist"
Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.476122 4840 scope.go:117] "RemoveContainer" containerID="7d5a479df92438b43deb38719eb65c2cb14faa128400d6d01ccbd757dae47f94"
Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.477605 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/memcached-0"]
Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.484378 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b0d4c8e-fe4e-4ca7-8cfe-1d4fb97c3ac6-ovn-northd-tls-certs" (OuterVolumeSpecName: "ovn-northd-tls-certs") pod "1b0d4c8e-fe4e-4ca7-8cfe-1d4fb97c3ac6" (UID: "1b0d4c8e-fe4e-4ca7-8cfe-1d4fb97c3ac6"). InnerVolumeSpecName "ovn-northd-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.491228 4840 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b0d4c8e-fe4e-4ca7-8cfe-1d4fb97c3ac6-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\""
Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.491254 4840 reconciler_common.go:293] "Volume detached for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b0d4c8e-fe4e-4ca7-8cfe-1d4fb97c3ac6-ovn-northd-tls-certs\") on node \"crc\" DevicePath \"\""
Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.491264 4840 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ada053fd-c71a-4425-8220-b950f0cab229-config-data\") on node \"crc\" DevicePath \"\""
Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.491275 4840 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ada053fd-c71a-4425-8220-b950f0cab229-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.491809 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-42de-account-create-update-558dw"
Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.500156 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.508219 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.528475 4840 scope.go:117] "RemoveContainer" containerID="8c20b7cb7d072af1e8fc8505f85ae53009e95d00f58e75523c006bc2e4ffcfc3"
Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.577073 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.620619 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.631576 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.664127 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.692555 4840 scope.go:117] "RemoveContainer" containerID="dba9b2b61d92b87502d4ac01bbbc8b61a58c6a11a4e92608207ef818b5d336cd"
Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.703909 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.715060 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"]
Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.721515 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.727340 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.740708 4840 scope.go:117] "RemoveContainer" containerID="6da9e415bb6fb1fafd6a20eb3f85c8ad0216612c5870621974d7888d3f87aa59"
Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.743683 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.749080 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"]
Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.754644 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"]
Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.760126 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-northd-0"]
Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.787690 4840 scope.go:117] "RemoveContainer" containerID="a22560a02062c00ff0e4a3b9e873bd27ab71b2325c47df15e235ac0992d9117c"
Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.816227 4840 scope.go:117] "RemoveContainer" containerID="7adedc06633a9d0c8638d09c8c1e960d90f82138b050dce1fb05836daea4ad71"
Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.853591 4840 scope.go:117] "RemoveContainer" containerID="a22560a02062c00ff0e4a3b9e873bd27ab71b2325c47df15e235ac0992d9117c"
Mar 11 09:21:24 crc kubenswrapper[4840]: E0311 09:21:24.861284 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a22560a02062c00ff0e4a3b9e873bd27ab71b2325c47df15e235ac0992d9117c\": container with ID starting with a22560a02062c00ff0e4a3b9e873bd27ab71b2325c47df15e235ac0992d9117c not found: ID does not exist" containerID="a22560a02062c00ff0e4a3b9e873bd27ab71b2325c47df15e235ac0992d9117c"
Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.861324 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a22560a02062c00ff0e4a3b9e873bd27ab71b2325c47df15e235ac0992d9117c"} err="failed to get container status \"a22560a02062c00ff0e4a3b9e873bd27ab71b2325c47df15e235ac0992d9117c\": rpc error: code = NotFound desc = could not find container \"a22560a02062c00ff0e4a3b9e873bd27ab71b2325c47df15e235ac0992d9117c\": container with ID starting with a22560a02062c00ff0e4a3b9e873bd27ab71b2325c47df15e235ac0992d9117c not found: ID does not exist"
Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.861352 4840 scope.go:117] "RemoveContainer" containerID="7adedc06633a9d0c8638d09c8c1e960d90f82138b050dce1fb05836daea4ad71"
Mar 11 09:21:24 crc kubenswrapper[4840]: E0311 09:21:24.865839 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7adedc06633a9d0c8638d09c8c1e960d90f82138b050dce1fb05836daea4ad71\": container with ID starting with 7adedc06633a9d0c8638d09c8c1e960d90f82138b050dce1fb05836daea4ad71 not found: ID does not exist" containerID="7adedc06633a9d0c8638d09c8c1e960d90f82138b050dce1fb05836daea4ad71"
Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.865887 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7adedc06633a9d0c8638d09c8c1e960d90f82138b050dce1fb05836daea4ad71"} err="failed to get container status \"7adedc06633a9d0c8638d09c8c1e960d90f82138b050dce1fb05836daea4ad71\": rpc error: code = NotFound desc = could not find container \"7adedc06633a9d0c8638d09c8c1e960d90f82138b050dce1fb05836daea4ad71\": container with ID starting with 7adedc06633a9d0c8638d09c8c1e960d90f82138b050dce1fb05836daea4ad71 not found: ID does not exist"
Mar 11 09:21:24 crc kubenswrapper[4840]: I0311 09:21:24.865916 4840 scope.go:117] "RemoveContainer" containerID="9895b030ee05b1c30d40171df5aaa90b27098c8597a3d0999e257cf13cec7e67"
Mar 11 09:21:25 crc kubenswrapper[4840]: E0311 09:21:25.008254 4840 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found
Mar 11 09:21:25 crc kubenswrapper[4840]: E0311 09:21:25.008944 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f31748d2-64a9-4839-ac55-691d9682ee8e-config-data podName:f31748d2-64a9-4839-ac55-691d9682ee8e nodeName:}" failed. No retries permitted until 2026-03-11 09:21:33.0089271 +0000 UTC m=+1491.674596915 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/f31748d2-64a9-4839-ac55-691d9682ee8e-config-data") pod "rabbitmq-server-0" (UID: "f31748d2-64a9-4839-ac55-691d9682ee8e") : configmap "rabbitmq-config-data" not found
Mar 11 09:21:25 crc kubenswrapper[4840]: I0311 09:21:25.029602 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-9sj79"
Mar 11 09:21:25 crc kubenswrapper[4840]: I0311 09:21:25.089037 4840 scope.go:117] "RemoveContainer" containerID="964ebeaba64a80f1be2bba1d82ca1a7e7dffe0224141e942622675fc8b28aeb6"
Mar 11 09:21:25 crc kubenswrapper[4840]: E0311 09:21:25.089104 4840 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="25ef9fd444056788b6ce2758460268d8703cbbd6d97d9234bfee447123075141" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"]
Mar 11 09:21:25 crc kubenswrapper[4840]: E0311 09:21:25.091525 4840 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="25ef9fd444056788b6ce2758460268d8703cbbd6d97d9234bfee447123075141" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"]
Mar 11 09:21:25 crc kubenswrapper[4840]: E0311 09:21:25.094559 4840 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="25ef9fd444056788b6ce2758460268d8703cbbd6d97d9234bfee447123075141" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"]
Mar 11 09:21:25 crc kubenswrapper[4840]: E0311 09:21:25.094619 4840 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="c6f4129d-4bc4-449b-be94-82fce07cf1f0" containerName="galera"
Mar 11 09:21:25 crc kubenswrapper[4840]: I0311 09:21:25.108900 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xrffb\" (UniqueName: \"kubernetes.io/projected/882e0084-6891-4852-aecb-2951f5763800-kube-api-access-xrffb\") pod \"882e0084-6891-4852-aecb-2951f5763800\" (UID: \"882e0084-6891-4852-aecb-2951f5763800\") "
Mar 11 09:21:25 crc kubenswrapper[4840]: I0311 09:21:25.108958 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/882e0084-6891-4852-aecb-2951f5763800-operator-scripts\") pod \"882e0084-6891-4852-aecb-2951f5763800\" (UID: \"882e0084-6891-4852-aecb-2951f5763800\") "
Mar 11 09:21:25 crc kubenswrapper[4840]: I0311 09:21:25.111363 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/882e0084-6891-4852-aecb-2951f5763800-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "882e0084-6891-4852-aecb-2951f5763800" (UID: "882e0084-6891-4852-aecb-2951f5763800"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 11 09:21:25 crc kubenswrapper[4840]: I0311 09:21:25.131118 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/882e0084-6891-4852-aecb-2951f5763800-kube-api-access-xrffb" (OuterVolumeSpecName: "kube-api-access-xrffb") pod "882e0084-6891-4852-aecb-2951f5763800" (UID: "882e0084-6891-4852-aecb-2951f5763800"). InnerVolumeSpecName "kube-api-access-xrffb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 11 09:21:25 crc kubenswrapper[4840]: I0311 09:21:25.131862 4840 scope.go:117] "RemoveContainer" containerID="cfec390aae2b33c2baf59838d2300ba819a1ecdc9f653835cc711873fa787a85"
Mar 11 09:21:25 crc kubenswrapper[4840]: I0311 09:21:25.173339 4840 scope.go:117] "RemoveContainer" containerID="904fce62c27e75c0afd1b04ee9c4e1dd4f36346fdb9943a482344076f39797f2"
Mar 11 09:21:25 crc kubenswrapper[4840]: I0311 09:21:25.199329 4840 scope.go:117] "RemoveContainer" containerID="a0aea92352d16543560402efd876df16008f6e3e1715ebf0bc7c85603335be96"
Mar 11 09:21:25 crc kubenswrapper[4840]: I0311 09:21:25.211353 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xrffb\" (UniqueName: \"kubernetes.io/projected/882e0084-6891-4852-aecb-2951f5763800-kube-api-access-xrffb\") on node \"crc\" DevicePath \"\""
Mar 11 09:21:25 crc kubenswrapper[4840]: I0311 09:21:25.211383 4840 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/882e0084-6891-4852-aecb-2951f5763800-operator-scripts\") on node \"crc\" DevicePath \"\""
Mar 11 09:21:25 crc kubenswrapper[4840]: I0311 09:21:25.388590 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Mar 11 09:21:25 crc kubenswrapper[4840]: I0311 09:21:25.512827 4840 generic.go:334] "Generic (PLEG): container finished" podID="3a964c24-3c53-4a29-98fb-ceaca467c372" containerID="a744999686b40927e99b68155086c1bd24cdc20d1ac6a31619c39f4421dcb680" exitCode=0
Mar 11 09:21:25 crc kubenswrapper[4840]: I0311 09:21:25.513029 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"3a964c24-3c53-4a29-98fb-ceaca467c372","Type":"ContainerDied","Data":"a744999686b40927e99b68155086c1bd24cdc20d1ac6a31619c39f4421dcb680"}
Mar 11 09:21:25 crc kubenswrapper[4840]: I0311 09:21:25.513082 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Mar 11 09:21:25 crc kubenswrapper[4840]: I0311 09:21:25.513097 4840 scope.go:117] "RemoveContainer" containerID="a744999686b40927e99b68155086c1bd24cdc20d1ac6a31619c39f4421dcb680"
Mar 11 09:21:25 crc kubenswrapper[4840]: I0311 09:21:25.513078 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"3a964c24-3c53-4a29-98fb-ceaca467c372","Type":"ContainerDied","Data":"49e11ae21c1d92f20c46632e1af3f03b92def2a83af164cd1422ae7fc9f596dd"}
Mar 11 09:21:25 crc kubenswrapper[4840]: I0311 09:21:25.521792 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3a964c24-3c53-4a29-98fb-ceaca467c372-config-data\") pod \"3a964c24-3c53-4a29-98fb-ceaca467c372\" (UID: \"3a964c24-3c53-4a29-98fb-ceaca467c372\") "
Mar 11 09:21:25 crc kubenswrapper[4840]: I0311 09:21:25.522378 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r6q45\" (UniqueName: \"kubernetes.io/projected/3a964c24-3c53-4a29-98fb-ceaca467c372-kube-api-access-r6q45\") pod \"3a964c24-3c53-4a29-98fb-ceaca467c372\" (UID: \"3a964c24-3c53-4a29-98fb-ceaca467c372\") "
Mar 11 09:21:25 crc kubenswrapper[4840]: I0311 09:21:25.522624 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3a964c24-3c53-4a29-98fb-ceaca467c372-server-conf\") pod \"3a964c24-3c53-4a29-98fb-ceaca467c372\" (UID: \"3a964c24-3c53-4a29-98fb-ceaca467c372\") "
Mar 11 09:21:25 crc kubenswrapper[4840]: I0311 09:21:25.522845 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3a964c24-3c53-4a29-98fb-ceaca467c372-rabbitmq-confd\") pod \"3a964c24-3c53-4a29-98fb-ceaca467c372\" (UID: \"3a964c24-3c53-4a29-98fb-ceaca467c372\") "
Mar 11 09:21:25 crc kubenswrapper[4840]: I0311 09:21:25.522990 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3a964c24-3c53-4a29-98fb-ceaca467c372-erlang-cookie-secret\") pod \"3a964c24-3c53-4a29-98fb-ceaca467c372\" (UID: \"3a964c24-3c53-4a29-98fb-ceaca467c372\") "
Mar 11 09:21:25 crc kubenswrapper[4840]: I0311 09:21:25.523108 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3a964c24-3c53-4a29-98fb-ceaca467c372-rabbitmq-plugins\") pod \"3a964c24-3c53-4a29-98fb-ceaca467c372\" (UID: \"3a964c24-3c53-4a29-98fb-ceaca467c372\") "
Mar 11 09:21:25 crc kubenswrapper[4840]: I0311 09:21:25.523239 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3a964c24-3c53-4a29-98fb-ceaca467c372-plugins-conf\") pod \"3a964c24-3c53-4a29-98fb-ceaca467c372\" (UID: \"3a964c24-3c53-4a29-98fb-ceaca467c372\") "
Mar 11 09:21:25 crc kubenswrapper[4840]: I0311 09:21:25.523694 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3a964c24-3c53-4a29-98fb-ceaca467c372-pod-info\") pod \"3a964c24-3c53-4a29-98fb-ceaca467c372\" (UID: \"3a964c24-3c53-4a29-98fb-ceaca467c372\") "
Mar 11 09:21:25 crc kubenswrapper[4840]: I0311 09:21:25.523813 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3a964c24-3c53-4a29-98fb-ceaca467c372-rabbitmq-tls\") pod \"3a964c24-3c53-4a29-98fb-ceaca467c372\" (UID: \"3a964c24-3c53-4a29-98fb-ceaca467c372\") "
Mar 11 09:21:25 crc kubenswrapper[4840]: I0311 09:21:25.523951 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3a964c24-3c53-4a29-98fb-ceaca467c372-rabbitmq-erlang-cookie\") pod \"3a964c24-3c53-4a29-98fb-ceaca467c372\" (UID: \"3a964c24-3c53-4a29-98fb-ceaca467c372\") "
Mar 11 09:21:25 crc kubenswrapper[4840]: I0311 09:21:25.524060 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"3a964c24-3c53-4a29-98fb-ceaca467c372\" (UID: \"3a964c24-3c53-4a29-98fb-ceaca467c372\") "
Mar 11 09:21:25 crc kubenswrapper[4840]: I0311 09:21:25.526949 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a964c24-3c53-4a29-98fb-ceaca467c372-kube-api-access-r6q45" (OuterVolumeSpecName: "kube-api-access-r6q45") pod "3a964c24-3c53-4a29-98fb-ceaca467c372" (UID: "3a964c24-3c53-4a29-98fb-ceaca467c372"). InnerVolumeSpecName "kube-api-access-r6q45". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 11 09:21:25 crc kubenswrapper[4840]: I0311 09:21:25.527662 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a964c24-3c53-4a29-98fb-ceaca467c372-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "3a964c24-3c53-4a29-98fb-ceaca467c372" (UID: "3a964c24-3c53-4a29-98fb-ceaca467c372"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 11 09:21:25 crc kubenswrapper[4840]: I0311 09:21:25.528044 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a964c24-3c53-4a29-98fb-ceaca467c372-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "3a964c24-3c53-4a29-98fb-ceaca467c372" (UID: "3a964c24-3c53-4a29-98fb-ceaca467c372"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 11 09:21:25 crc kubenswrapper[4840]: I0311 09:21:25.523507 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-42de-account-create-update-558dw"
Mar 11 09:21:25 crc kubenswrapper[4840]: I0311 09:21:25.529674 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/3a964c24-3c53-4a29-98fb-ceaca467c372-pod-info" (OuterVolumeSpecName: "pod-info") pod "3a964c24-3c53-4a29-98fb-ceaca467c372" (UID: "3a964c24-3c53-4a29-98fb-ceaca467c372"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Mar 11 09:21:25 crc kubenswrapper[4840]: I0311 09:21:25.523588 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-9sj79"
Mar 11 09:21:25 crc kubenswrapper[4840]: I0311 09:21:25.523604 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-9sj79" event={"ID":"882e0084-6891-4852-aecb-2951f5763800","Type":"ContainerDied","Data":"fedf125f00b50c05ecde1baafaa4325da1661a7c5761b62143314a4b108e4bcf"}
Mar 11 09:21:25 crc kubenswrapper[4840]: I0311 09:21:25.533906 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a964c24-3c53-4a29-98fb-ceaca467c372-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "3a964c24-3c53-4a29-98fb-ceaca467c372" (UID: "3a964c24-3c53-4a29-98fb-ceaca467c372"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 11 09:21:25 crc kubenswrapper[4840]: I0311 09:21:25.550547 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a964c24-3c53-4a29-98fb-ceaca467c372-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "3a964c24-3c53-4a29-98fb-ceaca467c372" (UID: "3a964c24-3c53-4a29-98fb-ceaca467c372"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 11 09:21:25 crc kubenswrapper[4840]: I0311 09:21:25.551650 4840 scope.go:117] "RemoveContainer" containerID="43da3ef6c28eef2051a13159a394be8ff41df563fd5e85c35c8ba4cdcc59aec7"
Mar 11 09:21:25 crc kubenswrapper[4840]: I0311 09:21:25.555955 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a964c24-3c53-4a29-98fb-ceaca467c372-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "3a964c24-3c53-4a29-98fb-ceaca467c372" (UID: "3a964c24-3c53-4a29-98fb-ceaca467c372"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 11 09:21:25 crc kubenswrapper[4840]: I0311 09:21:25.598847 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "persistence") pod "3a964c24-3c53-4a29-98fb-ceaca467c372" (UID: "3a964c24-3c53-4a29-98fb-ceaca467c372"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Mar 11 09:21:25 crc kubenswrapper[4840]: I0311 09:21:25.601020 4840 scope.go:117] "RemoveContainer" containerID="a744999686b40927e99b68155086c1bd24cdc20d1ac6a31619c39f4421dcb680"
Mar 11 09:21:25 crc kubenswrapper[4840]: I0311 09:21:25.618323 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a964c24-3c53-4a29-98fb-ceaca467c372-server-conf" (OuterVolumeSpecName: "server-conf") pod "3a964c24-3c53-4a29-98fb-ceaca467c372" (UID: "3a964c24-3c53-4a29-98fb-ceaca467c372"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 11 09:21:25 crc kubenswrapper[4840]: E0311 09:21:25.618482 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a744999686b40927e99b68155086c1bd24cdc20d1ac6a31619c39f4421dcb680\": container with ID starting with a744999686b40927e99b68155086c1bd24cdc20d1ac6a31619c39f4421dcb680 not found: ID does not exist" containerID="a744999686b40927e99b68155086c1bd24cdc20d1ac6a31619c39f4421dcb680"
Mar 11 09:21:25 crc kubenswrapper[4840]: I0311 09:21:25.618521 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a744999686b40927e99b68155086c1bd24cdc20d1ac6a31619c39f4421dcb680"} err="failed to get container status \"a744999686b40927e99b68155086c1bd24cdc20d1ac6a31619c39f4421dcb680\": rpc error: code = NotFound desc = could not find container \"a744999686b40927e99b68155086c1bd24cdc20d1ac6a31619c39f4421dcb680\": container with ID starting with a744999686b40927e99b68155086c1bd24cdc20d1ac6a31619c39f4421dcb680 not found: ID does not exist"
Mar 11 09:21:25 crc kubenswrapper[4840]: I0311 09:21:25.618548 4840 scope.go:117] "RemoveContainer" containerID="43da3ef6c28eef2051a13159a394be8ff41df563fd5e85c35c8ba4cdcc59aec7"
Mar 11 09:21:25 crc kubenswrapper[4840]: E0311 09:21:25.619212 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43da3ef6c28eef2051a13159a394be8ff41df563fd5e85c35c8ba4cdcc59aec7\": container with ID starting with 43da3ef6c28eef2051a13159a394be8ff41df563fd5e85c35c8ba4cdcc59aec7 not found: ID does not exist" containerID="43da3ef6c28eef2051a13159a394be8ff41df563fd5e85c35c8ba4cdcc59aec7"
Mar 11 09:21:25 crc kubenswrapper[4840]: I0311 09:21:25.619238 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43da3ef6c28eef2051a13159a394be8ff41df563fd5e85c35c8ba4cdcc59aec7"} err="failed to get container status \"43da3ef6c28eef2051a13159a394be8ff41df563fd5e85c35c8ba4cdcc59aec7\": rpc error: code = NotFound desc = could not find container \"43da3ef6c28eef2051a13159a394be8ff41df563fd5e85c35c8ba4cdcc59aec7\": container with ID starting with 43da3ef6c28eef2051a13159a394be8ff41df563fd5e85c35c8ba4cdcc59aec7 not found: ID does not exist"
Mar 11 09:21:25 crc kubenswrapper[4840]: I0311 09:21:25.626770 4840 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3a964c24-3c53-4a29-98fb-ceaca467c372-erlang-cookie-secret\") on node \"crc\" DevicePath \"\""
Mar 11 09:21:25 crc kubenswrapper[4840]: I0311 09:21:25.626815 4840 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3a964c24-3c53-4a29-98fb-ceaca467c372-rabbitmq-plugins\") on node \"crc\" DevicePath \"\""
Mar 11 09:21:25 crc kubenswrapper[4840]: I0311 09:21:25.626828 4840 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3a964c24-3c53-4a29-98fb-ceaca467c372-plugins-conf\") on node \"crc\" DevicePath \"\""
Mar 11 09:21:25 crc kubenswrapper[4840]: I0311 09:21:25.626838 4840 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3a964c24-3c53-4a29-98fb-ceaca467c372-pod-info\") on node \"crc\" DevicePath \"\""
Mar 11 09:21:25 crc kubenswrapper[4840]: I0311 09:21:25.626849 4840 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3a964c24-3c53-4a29-98fb-ceaca467c372-rabbitmq-tls\") on node \"crc\" DevicePath \"\""
Mar 11 09:21:25 crc kubenswrapper[4840]: I0311 09:21:25.626862 4840 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3a964c24-3c53-4a29-98fb-ceaca467c372-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\""
Mar 11 09:21:25 crc kubenswrapper[4840]: I0311 09:21:25.626891 4840 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" "
Mar 11 09:21:25 crc kubenswrapper[4840]: I0311 09:21:25.626904 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r6q45\" (UniqueName: \"kubernetes.io/projected/3a964c24-3c53-4a29-98fb-ceaca467c372-kube-api-access-r6q45\") on node \"crc\" DevicePath \"\""
Mar 11 09:21:25 crc kubenswrapper[4840]: I0311 09:21:25.626915 4840 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3a964c24-3c53-4a29-98fb-ceaca467c372-server-conf\") on node \"crc\" DevicePath \"\""
Mar 11 09:21:25 crc kubenswrapper[4840]: I0311 09:21:25.635590 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a964c24-3c53-4a29-98fb-ceaca467c372-config-data" (OuterVolumeSpecName: "config-data") pod "3a964c24-3c53-4a29-98fb-ceaca467c372" (UID: "3a964c24-3c53-4a29-98fb-ceaca467c372"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 11 09:21:25 crc kubenswrapper[4840]: I0311 09:21:25.642558 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-42de-account-create-update-558dw"]
Mar 11 09:21:25 crc kubenswrapper[4840]: I0311 09:21:25.650054 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-42de-account-create-update-558dw"]
Mar 11 09:21:25 crc kubenswrapper[4840]: I0311 09:21:25.662700 4840 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc"
Mar 11 09:21:25 crc kubenswrapper[4840]: I0311 09:21:25.728559 4840 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6092c570-2c07-432f-9bdd-b48bf0600ef9-operator-scripts\") on node \"crc\" DevicePath \"\""
Mar 11 09:21:25 crc kubenswrapper[4840]: I0311 09:21:25.728595 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vwdfz\" (UniqueName: \"kubernetes.io/projected/6092c570-2c07-432f-9bdd-b48bf0600ef9-kube-api-access-vwdfz\") on node \"crc\" DevicePath \"\""
Mar 11 09:21:25 crc kubenswrapper[4840]: I0311 09:21:25.728608 4840 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\""
Mar 11 09:21:25 crc kubenswrapper[4840]: I0311 09:21:25.728620 4840 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3a964c24-3c53-4a29-98fb-ceaca467c372-config-data\") on node \"crc\" DevicePath \"\""
Mar 11 09:21:25 crc kubenswrapper[4840]: I0311 09:21:25.732278 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a964c24-3c53-4a29-98fb-ceaca467c372-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "3a964c24-3c53-4a29-98fb-ceaca467c372" (UID: "3a964c24-3c53-4a29-98fb-ceaca467c372"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 11 09:21:25 crc kubenswrapper[4840]: I0311 09:21:25.830659 4840 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3a964c24-3c53-4a29-98fb-ceaca467c372-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Mar 11 09:21:25 crc kubenswrapper[4840]: I0311 09:21:25.885074 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-9sj79"]
Mar 11 09:21:25 crc kubenswrapper[4840]: I0311 09:21:25.893405 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-9sj79"]
Mar 11 09:21:25 crc kubenswrapper[4840]: I0311 09:21:25.899501 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Mar 11 09:21:25 crc kubenswrapper[4840]: I0311 09:21:25.904444 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Mar 11 09:21:25 crc kubenswrapper[4840]: E0311 09:21:25.959383 4840 handlers.go:78] "Exec lifecycle hook for Container in Pod failed" err=<
Mar 11 09:21:25 crc kubenswrapper[4840]: 	command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: 2026-03-11T09:21:18Z|00001|fatal_signal|WARN|terminating with signal 14 (Alarm clock)
Mar 11 09:21:25 crc kubenswrapper[4840]: 	/etc/init.d/functions: line 589: 407 Alarm clock "$@"
Mar 11 09:21:25 crc kubenswrapper[4840]: > execCommand=["/usr/share/ovn/scripts/ovn-ctl","stop_controller"] containerName="ovn-controller" pod="openstack/ovn-controller-g2p7c" message=<
Mar 11 09:21:25 crc kubenswrapper[4840]: 	Exiting ovn-controller (1) [FAILED]
Mar 11 09:21:25 crc kubenswrapper[4840]: 	Killing ovn-controller (1) [ OK ]
Mar 11 09:21:25 crc kubenswrapper[4840]: 	Killing ovn-controller (1) with SIGKILL [ OK ]
Mar 11 09:21:25 crc kubenswrapper[4840]: 	2026-03-11T09:21:18Z|00001|fatal_signal|WARN|terminating with signal 14 (Alarm clock)
Mar 11 09:21:25 crc kubenswrapper[4840]: 	/etc/init.d/functions: line 589: 407 Alarm clock "$@"
Mar 11 09:21:25 crc kubenswrapper[4840]: >
Mar 11 09:21:25 crc kubenswrapper[4840]: E0311 09:21:25.959429 4840 kuberuntime_container.go:691] "PreStop hook failed" err=<
Mar 11 09:21:25 crc kubenswrapper[4840]: 	command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: 2026-03-11T09:21:18Z|00001|fatal_signal|WARN|terminating with signal 14 (Alarm clock)
Mar 11 09:21:25 crc kubenswrapper[4840]: 	/etc/init.d/functions: line 589: 407 Alarm clock "$@"
Mar 11 09:21:25 crc kubenswrapper[4840]: > pod="openstack/ovn-controller-g2p7c" podUID="056df7c0-d577-4908-91a8-b5dfb95e0316" containerName="ovn-controller" containerID="cri-o://416b963acfae5e9237930d8b9e78836de7c9db73300426cdf2fdaf4f51eeb7fe"
Mar 11 09:21:25 crc kubenswrapper[4840]: I0311 09:21:25.959499 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-g2p7c" podUID="056df7c0-d577-4908-91a8-b5dfb95e0316" containerName="ovn-controller" containerID="cri-o://416b963acfae5e9237930d8b9e78836de7c9db73300426cdf2fdaf4f51eeb7fe" gracePeriod=22
Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.073330 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15a50bea-c32e-4aed-8fd2-7289e1694f6e" path="/var/lib/kubelet/pods/15a50bea-c32e-4aed-8fd2-7289e1694f6e/volumes"
Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.074245 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b0d4c8e-fe4e-4ca7-8cfe-1d4fb97c3ac6" path="/var/lib/kubelet/pods/1b0d4c8e-fe4e-4ca7-8cfe-1d4fb97c3ac6/volumes"
Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.076991 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a964c24-3c53-4a29-98fb-ceaca467c372"
path="/var/lib/kubelet/pods/3a964c24-3c53-4a29-98fb-ceaca467c372/volumes" Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.078274 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6092c570-2c07-432f-9bdd-b48bf0600ef9" path="/var/lib/kubelet/pods/6092c570-2c07-432f-9bdd-b48bf0600ef9/volumes" Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.078648 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="629115e9-6bcf-45e8-a0da-d7c06386b7b7" path="/var/lib/kubelet/pods/629115e9-6bcf-45e8-a0da-d7c06386b7b7/volumes" Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.079960 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71aaf352-8b91-4846-8ce4-1d83303ac203" path="/var/lib/kubelet/pods/71aaf352-8b91-4846-8ce4-1d83303ac203/volumes" Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.081108 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86c17dbf-d890-4de3-bf5d-29e0aea4d968" path="/var/lib/kubelet/pods/86c17dbf-d890-4de3-bf5d-29e0aea4d968/volumes" Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.081936 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="882e0084-6891-4852-aecb-2951f5763800" path="/var/lib/kubelet/pods/882e0084-6891-4852-aecb-2951f5763800/volumes" Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.083226 4840 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-97696577-2mh8q" podUID="f10cd3a6-0a55-4957-b861-678d9af3c338" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.176:8080/healthcheck\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.083259 4840 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-97696577-2mh8q" podUID="f10cd3a6-0a55-4957-b861-678d9af3c338" containerName="proxy-server" 
probeResult="failure" output="Get \"https://10.217.0.176:8080/healthcheck\": dial tcp 10.217.0.176:8080: i/o timeout" Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.083691 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="915955ff-c1d8-4f99-a621-f28d463c512f" path="/var/lib/kubelet/pods/915955ff-c1d8-4f99-a621-f28d463c512f/volumes" Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.084369 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95c08694-92ed-44cb-8ca3-92a47b5571d4" path="/var/lib/kubelet/pods/95c08694-92ed-44cb-8ca3-92a47b5571d4/volumes" Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.084878 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9699b913-55db-46fe-9831-1e1ac94ca609" path="/var/lib/kubelet/pods/9699b913-55db-46fe-9831-1e1ac94ca609/volumes" Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.086674 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ada053fd-c71a-4425-8220-b950f0cab229" path="/var/lib/kubelet/pods/ada053fd-c71a-4425-8220-b950f0cab229/volumes" Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.087682 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c44e0641-be37-4447-9666-14bf00c08827" path="/var/lib/kubelet/pods/c44e0641-be37-4447-9666-14bf00c08827/volumes" Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.088293 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cfc57f3e-df7f-40fe-9cc0-7ad00ecd651e" path="/var/lib/kubelet/pods/cfc57f3e-df7f-40fe-9cc0-7ad00ecd651e/volumes" Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.093095 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3ccfc13-7a62-4923-95ab-c68cb93aa03c" path="/var/lib/kubelet/pods/d3ccfc13-7a62-4923-95ab-c68cb93aa03c/volumes" Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.095067 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.244969 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f31748d2-64a9-4839-ac55-691d9682ee8e-rabbitmq-plugins\") pod \"f31748d2-64a9-4839-ac55-691d9682ee8e\" (UID: \"f31748d2-64a9-4839-ac55-691d9682ee8e\") " Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.245068 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f31748d2-64a9-4839-ac55-691d9682ee8e-rabbitmq-tls\") pod \"f31748d2-64a9-4839-ac55-691d9682ee8e\" (UID: \"f31748d2-64a9-4839-ac55-691d9682ee8e\") " Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.245097 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f31748d2-64a9-4839-ac55-691d9682ee8e-rabbitmq-confd\") pod \"f31748d2-64a9-4839-ac55-691d9682ee8e\" (UID: \"f31748d2-64a9-4839-ac55-691d9682ee8e\") " Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.245117 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f31748d2-64a9-4839-ac55-691d9682ee8e-rabbitmq-erlang-cookie\") pod \"f31748d2-64a9-4839-ac55-691d9682ee8e\" (UID: \"f31748d2-64a9-4839-ac55-691d9682ee8e\") " Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.245168 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"f31748d2-64a9-4839-ac55-691d9682ee8e\" (UID: \"f31748d2-64a9-4839-ac55-691d9682ee8e\") " Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.245214 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/f31748d2-64a9-4839-ac55-691d9682ee8e-config-data\") pod \"f31748d2-64a9-4839-ac55-691d9682ee8e\" (UID: \"f31748d2-64a9-4839-ac55-691d9682ee8e\") " Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.245256 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f31748d2-64a9-4839-ac55-691d9682ee8e-pod-info\") pod \"f31748d2-64a9-4839-ac55-691d9682ee8e\" (UID: \"f31748d2-64a9-4839-ac55-691d9682ee8e\") " Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.245285 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7hs2p\" (UniqueName: \"kubernetes.io/projected/f31748d2-64a9-4839-ac55-691d9682ee8e-kube-api-access-7hs2p\") pod \"f31748d2-64a9-4839-ac55-691d9682ee8e\" (UID: \"f31748d2-64a9-4839-ac55-691d9682ee8e\") " Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.245328 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f31748d2-64a9-4839-ac55-691d9682ee8e-plugins-conf\") pod \"f31748d2-64a9-4839-ac55-691d9682ee8e\" (UID: \"f31748d2-64a9-4839-ac55-691d9682ee8e\") " Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.245379 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f31748d2-64a9-4839-ac55-691d9682ee8e-server-conf\") pod \"f31748d2-64a9-4839-ac55-691d9682ee8e\" (UID: \"f31748d2-64a9-4839-ac55-691d9682ee8e\") " Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.245429 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f31748d2-64a9-4839-ac55-691d9682ee8e-erlang-cookie-secret\") pod \"f31748d2-64a9-4839-ac55-691d9682ee8e\" (UID: \"f31748d2-64a9-4839-ac55-691d9682ee8e\") " Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 
09:21:26.250534 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f31748d2-64a9-4839-ac55-691d9682ee8e-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "f31748d2-64a9-4839-ac55-691d9682ee8e" (UID: "f31748d2-64a9-4839-ac55-691d9682ee8e"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.250593 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f31748d2-64a9-4839-ac55-691d9682ee8e-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "f31748d2-64a9-4839-ac55-691d9682ee8e" (UID: "f31748d2-64a9-4839-ac55-691d9682ee8e"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.251310 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f31748d2-64a9-4839-ac55-691d9682ee8e-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "f31748d2-64a9-4839-ac55-691d9682ee8e" (UID: "f31748d2-64a9-4839-ac55-691d9682ee8e"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.253427 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/f31748d2-64a9-4839-ac55-691d9682ee8e-pod-info" (OuterVolumeSpecName: "pod-info") pod "f31748d2-64a9-4839-ac55-691d9682ee8e" (UID: "f31748d2-64a9-4839-ac55-691d9682ee8e"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.253847 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f31748d2-64a9-4839-ac55-691d9682ee8e-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "f31748d2-64a9-4839-ac55-691d9682ee8e" (UID: "f31748d2-64a9-4839-ac55-691d9682ee8e"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.253989 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f31748d2-64a9-4839-ac55-691d9682ee8e-kube-api-access-7hs2p" (OuterVolumeSpecName: "kube-api-access-7hs2p") pod "f31748d2-64a9-4839-ac55-691d9682ee8e" (UID: "f31748d2-64a9-4839-ac55-691d9682ee8e"). InnerVolumeSpecName "kube-api-access-7hs2p". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.255669 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f31748d2-64a9-4839-ac55-691d9682ee8e-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "f31748d2-64a9-4839-ac55-691d9682ee8e" (UID: "f31748d2-64a9-4839-ac55-691d9682ee8e"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.259868 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "persistence") pod "f31748d2-64a9-4839-ac55-691d9682ee8e" (UID: "f31748d2-64a9-4839-ac55-691d9682ee8e"). InnerVolumeSpecName "local-storage03-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.285836 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f31748d2-64a9-4839-ac55-691d9682ee8e-config-data" (OuterVolumeSpecName: "config-data") pod "f31748d2-64a9-4839-ac55-691d9682ee8e" (UID: "f31748d2-64a9-4839-ac55-691d9682ee8e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.303609 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f31748d2-64a9-4839-ac55-691d9682ee8e-server-conf" (OuterVolumeSpecName: "server-conf") pod "f31748d2-64a9-4839-ac55-691d9682ee8e" (UID: "f31748d2-64a9-4839-ac55-691d9682ee8e"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.348008 4840 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f31748d2-64a9-4839-ac55-691d9682ee8e-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.348044 4840 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f31748d2-64a9-4839-ac55-691d9682ee8e-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.348073 4840 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.348085 4840 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f31748d2-64a9-4839-ac55-691d9682ee8e-config-data\") on node \"crc\" DevicePath \"\"" Mar 11 
09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.348096 4840 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f31748d2-64a9-4839-ac55-691d9682ee8e-pod-info\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.348105 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7hs2p\" (UniqueName: \"kubernetes.io/projected/f31748d2-64a9-4839-ac55-691d9682ee8e-kube-api-access-7hs2p\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.348112 4840 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f31748d2-64a9-4839-ac55-691d9682ee8e-plugins-conf\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.348120 4840 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f31748d2-64a9-4839-ac55-691d9682ee8e-server-conf\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.348128 4840 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f31748d2-64a9-4839-ac55-691d9682ee8e-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.348136 4840 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f31748d2-64a9-4839-ac55-691d9682ee8e-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.362768 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f31748d2-64a9-4839-ac55-691d9682ee8e-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "f31748d2-64a9-4839-ac55-691d9682ee8e" (UID: "f31748d2-64a9-4839-ac55-691d9682ee8e"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.365000 4840 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.378860 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bcbd67b5c-tnrsf" Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.448900 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5glxg\" (UniqueName: \"kubernetes.io/projected/f86e8b7a-656b-423e-8cf0-6d1025486c46-kube-api-access-5glxg\") pod \"f86e8b7a-656b-423e-8cf0-6d1025486c46\" (UID: \"f86e8b7a-656b-423e-8cf0-6d1025486c46\") " Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.448974 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f86e8b7a-656b-423e-8cf0-6d1025486c46-internal-tls-certs\") pod \"f86e8b7a-656b-423e-8cf0-6d1025486c46\" (UID: \"f86e8b7a-656b-423e-8cf0-6d1025486c46\") " Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.449055 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f86e8b7a-656b-423e-8cf0-6d1025486c46-credential-keys\") pod \"f86e8b7a-656b-423e-8cf0-6d1025486c46\" (UID: \"f86e8b7a-656b-423e-8cf0-6d1025486c46\") " Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.449084 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f86e8b7a-656b-423e-8cf0-6d1025486c46-scripts\") pod \"f86e8b7a-656b-423e-8cf0-6d1025486c46\" (UID: \"f86e8b7a-656b-423e-8cf0-6d1025486c46\") " Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.449110 4840 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f86e8b7a-656b-423e-8cf0-6d1025486c46-fernet-keys\") pod \"f86e8b7a-656b-423e-8cf0-6d1025486c46\" (UID: \"f86e8b7a-656b-423e-8cf0-6d1025486c46\") " Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.449164 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f86e8b7a-656b-423e-8cf0-6d1025486c46-public-tls-certs\") pod \"f86e8b7a-656b-423e-8cf0-6d1025486c46\" (UID: \"f86e8b7a-656b-423e-8cf0-6d1025486c46\") " Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.449194 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f86e8b7a-656b-423e-8cf0-6d1025486c46-combined-ca-bundle\") pod \"f86e8b7a-656b-423e-8cf0-6d1025486c46\" (UID: \"f86e8b7a-656b-423e-8cf0-6d1025486c46\") " Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.449265 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f86e8b7a-656b-423e-8cf0-6d1025486c46-config-data\") pod \"f86e8b7a-656b-423e-8cf0-6d1025486c46\" (UID: \"f86e8b7a-656b-423e-8cf0-6d1025486c46\") " Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.449675 4840 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f31748d2-64a9-4839-ac55-691d9682ee8e-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.449702 4840 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.478769 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/f86e8b7a-656b-423e-8cf0-6d1025486c46-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "f86e8b7a-656b-423e-8cf0-6d1025486c46" (UID: "f86e8b7a-656b-423e-8cf0-6d1025486c46"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.478775 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f86e8b7a-656b-423e-8cf0-6d1025486c46-kube-api-access-5glxg" (OuterVolumeSpecName: "kube-api-access-5glxg") pod "f86e8b7a-656b-423e-8cf0-6d1025486c46" (UID: "f86e8b7a-656b-423e-8cf0-6d1025486c46"). InnerVolumeSpecName "kube-api-access-5glxg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.479722 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f86e8b7a-656b-423e-8cf0-6d1025486c46-scripts" (OuterVolumeSpecName: "scripts") pod "f86e8b7a-656b-423e-8cf0-6d1025486c46" (UID: "f86e8b7a-656b-423e-8cf0-6d1025486c46"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.479795 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f86e8b7a-656b-423e-8cf0-6d1025486c46-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "f86e8b7a-656b-423e-8cf0-6d1025486c46" (UID: "f86e8b7a-656b-423e-8cf0-6d1025486c46"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.485124 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f86e8b7a-656b-423e-8cf0-6d1025486c46-config-data" (OuterVolumeSpecName: "config-data") pod "f86e8b7a-656b-423e-8cf0-6d1025486c46" (UID: "f86e8b7a-656b-423e-8cf0-6d1025486c46"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.513159 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f86e8b7a-656b-423e-8cf0-6d1025486c46-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f86e8b7a-656b-423e-8cf0-6d1025486c46" (UID: "f86e8b7a-656b-423e-8cf0-6d1025486c46"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.515817 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f86e8b7a-656b-423e-8cf0-6d1025486c46-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "f86e8b7a-656b-423e-8cf0-6d1025486c46" (UID: "f86e8b7a-656b-423e-8cf0-6d1025486c46"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.535090 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f86e8b7a-656b-423e-8cf0-6d1025486c46-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "f86e8b7a-656b-423e-8cf0-6d1025486c46" (UID: "f86e8b7a-656b-423e-8cf0-6d1025486c46"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.535777 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-g2p7c_056df7c0-d577-4908-91a8-b5dfb95e0316/ovn-controller/0.log" Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.535875 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-g2p7c" Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.540642 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.541775 4840 generic.go:334] "Generic (PLEG): container finished" podID="f86e8b7a-656b-423e-8cf0-6d1025486c46" containerID="4eb33fb06f0c44c9373d79d5c966ac7a03ec43ee8c7e1b0b5af3343cd05a504e" exitCode=0 Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.541861 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bcbd67b5c-tnrsf" event={"ID":"f86e8b7a-656b-423e-8cf0-6d1025486c46","Type":"ContainerDied","Data":"4eb33fb06f0c44c9373d79d5c966ac7a03ec43ee8c7e1b0b5af3343cd05a504e"} Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.541900 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bcbd67b5c-tnrsf" event={"ID":"f86e8b7a-656b-423e-8cf0-6d1025486c46","Type":"ContainerDied","Data":"f3bebdc55940246604327029b281fd265e4c4fc1e22413c991f75f60a392c2c0"} Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.541927 4840 scope.go:117] "RemoveContainer" containerID="4eb33fb06f0c44c9373d79d5c966ac7a03ec43ee8c7e1b0b5af3343cd05a504e" Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.542053 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bcbd67b5c-tnrsf" Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.549301 4840 generic.go:334] "Generic (PLEG): container finished" podID="f31748d2-64a9-4839-ac55-691d9682ee8e" containerID="0e215c93773db23bc670a8e1e1523ba89c86e46f9708b9592117120dd8a29424" exitCode=0 Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.549390 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.549404 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f31748d2-64a9-4839-ac55-691d9682ee8e","Type":"ContainerDied","Data":"0e215c93773db23bc670a8e1e1523ba89c86e46f9708b9592117120dd8a29424"} Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.549448 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f31748d2-64a9-4839-ac55-691d9682ee8e","Type":"ContainerDied","Data":"162383aaff69e599f2ef0b4c20753017d692dc71c1fda750efcad8d28187360c"} Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.552848 4840 generic.go:334] "Generic (PLEG): container finished" podID="c6f4129d-4bc4-449b-be94-82fce07cf1f0" containerID="25ef9fd444056788b6ce2758460268d8703cbbd6d97d9234bfee447123075141" exitCode=0 Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.552970 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"c6f4129d-4bc4-449b-be94-82fce07cf1f0","Type":"ContainerDied","Data":"25ef9fd444056788b6ce2758460268d8703cbbd6d97d9234bfee447123075141"} Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.553014 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"c6f4129d-4bc4-449b-be94-82fce07cf1f0","Type":"ContainerDied","Data":"329e9bbbb5bfce62409b1d20efebb63436a5ad3a317150adfc3b37b992b688c7"} Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.553124 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.555707 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-g2p7c_056df7c0-d577-4908-91a8-b5dfb95e0316/ovn-controller/0.log" Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.555757 4840 generic.go:334] "Generic (PLEG): container finished" podID="056df7c0-d577-4908-91a8-b5dfb95e0316" containerID="416b963acfae5e9237930d8b9e78836de7c9db73300426cdf2fdaf4f51eeb7fe" exitCode=137 Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.555824 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-g2p7c" Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.555888 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-g2p7c" event={"ID":"056df7c0-d577-4908-91a8-b5dfb95e0316","Type":"ContainerDied","Data":"416b963acfae5e9237930d8b9e78836de7c9db73300426cdf2fdaf4f51eeb7fe"} Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.555906 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-g2p7c" event={"ID":"056df7c0-d577-4908-91a8-b5dfb95e0316","Type":"ContainerDied","Data":"8aa14a3a147610e283868c4198f18e1a26ac30e3a62f89d6c61fecb20acc6f29"} Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.556376 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5glxg\" (UniqueName: \"kubernetes.io/projected/f86e8b7a-656b-423e-8cf0-6d1025486c46-kube-api-access-5glxg\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.556433 4840 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f86e8b7a-656b-423e-8cf0-6d1025486c46-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.556450 4840 reconciler_common.go:293] "Volume detached for 
volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f86e8b7a-656b-423e-8cf0-6d1025486c46-credential-keys\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.556480 4840 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f86e8b7a-656b-423e-8cf0-6d1025486c46-scripts\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.556494 4840 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f86e8b7a-656b-423e-8cf0-6d1025486c46-fernet-keys\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.556507 4840 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f86e8b7a-656b-423e-8cf0-6d1025486c46-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.556520 4840 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f86e8b7a-656b-423e-8cf0-6d1025486c46-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.556534 4840 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f86e8b7a-656b-423e-8cf0-6d1025486c46-config-data\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.580370 4840 scope.go:117] "RemoveContainer" containerID="4eb33fb06f0c44c9373d79d5c966ac7a03ec43ee8c7e1b0b5af3343cd05a504e" Mar 11 09:21:26 crc kubenswrapper[4840]: E0311 09:21:26.581029 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4eb33fb06f0c44c9373d79d5c966ac7a03ec43ee8c7e1b0b5af3343cd05a504e\": container with ID starting with 4eb33fb06f0c44c9373d79d5c966ac7a03ec43ee8c7e1b0b5af3343cd05a504e 
not found: ID does not exist" containerID="4eb33fb06f0c44c9373d79d5c966ac7a03ec43ee8c7e1b0b5af3343cd05a504e" Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.581063 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4eb33fb06f0c44c9373d79d5c966ac7a03ec43ee8c7e1b0b5af3343cd05a504e"} err="failed to get container status \"4eb33fb06f0c44c9373d79d5c966ac7a03ec43ee8c7e1b0b5af3343cd05a504e\": rpc error: code = NotFound desc = could not find container \"4eb33fb06f0c44c9373d79d5c966ac7a03ec43ee8c7e1b0b5af3343cd05a504e\": container with ID starting with 4eb33fb06f0c44c9373d79d5c966ac7a03ec43ee8c7e1b0b5af3343cd05a504e not found: ID does not exist" Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.581090 4840 scope.go:117] "RemoveContainer" containerID="0e215c93773db23bc670a8e1e1523ba89c86e46f9708b9592117120dd8a29424" Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.657562 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/056df7c0-d577-4908-91a8-b5dfb95e0316-var-run\") pod \"056df7c0-d577-4908-91a8-b5dfb95e0316\" (UID: \"056df7c0-d577-4908-91a8-b5dfb95e0316\") " Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.657748 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c6f4129d-4bc4-449b-be94-82fce07cf1f0-kolla-config\") pod \"c6f4129d-4bc4-449b-be94-82fce07cf1f0\" (UID: \"c6f4129d-4bc4-449b-be94-82fce07cf1f0\") " Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.657859 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/056df7c0-d577-4908-91a8-b5dfb95e0316-ovn-controller-tls-certs\") pod \"056df7c0-d577-4908-91a8-b5dfb95e0316\" (UID: \"056df7c0-d577-4908-91a8-b5dfb95e0316\") " Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 
09:21:26.657903 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/056df7c0-d577-4908-91a8-b5dfb95e0316-combined-ca-bundle\") pod \"056df7c0-d577-4908-91a8-b5dfb95e0316\" (UID: \"056df7c0-d577-4908-91a8-b5dfb95e0316\") " Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.657955 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c6f4129d-4bc4-449b-be94-82fce07cf1f0-config-data-default\") pod \"c6f4129d-4bc4-449b-be94-82fce07cf1f0\" (UID: \"c6f4129d-4bc4-449b-be94-82fce07cf1f0\") " Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.657991 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/056df7c0-d577-4908-91a8-b5dfb95e0316-scripts\") pod \"056df7c0-d577-4908-91a8-b5dfb95e0316\" (UID: \"056df7c0-d577-4908-91a8-b5dfb95e0316\") " Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.658100 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6f4129d-4bc4-449b-be94-82fce07cf1f0-combined-ca-bundle\") pod \"c6f4129d-4bc4-449b-be94-82fce07cf1f0\" (UID: \"c6f4129d-4bc4-449b-be94-82fce07cf1f0\") " Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.658185 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6f4129d-4bc4-449b-be94-82fce07cf1f0-galera-tls-certs\") pod \"c6f4129d-4bc4-449b-be94-82fce07cf1f0\" (UID: \"c6f4129d-4bc4-449b-be94-82fce07cf1f0\") " Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.658219 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/056df7c0-d577-4908-91a8-b5dfb95e0316-var-log-ovn\") pod 
\"056df7c0-d577-4908-91a8-b5dfb95e0316\" (UID: \"056df7c0-d577-4908-91a8-b5dfb95e0316\") " Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.658284 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6f4129d-4bc4-449b-be94-82fce07cf1f0-operator-scripts\") pod \"c6f4129d-4bc4-449b-be94-82fce07cf1f0\" (UID: \"c6f4129d-4bc4-449b-be94-82fce07cf1f0\") " Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.658328 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/056df7c0-d577-4908-91a8-b5dfb95e0316-var-run-ovn\") pod \"056df7c0-d577-4908-91a8-b5dfb95e0316\" (UID: \"056df7c0-d577-4908-91a8-b5dfb95e0316\") " Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.658357 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c6f4129d-4bc4-449b-be94-82fce07cf1f0-config-data-generated\") pod \"c6f4129d-4bc4-449b-be94-82fce07cf1f0\" (UID: \"c6f4129d-4bc4-449b-be94-82fce07cf1f0\") " Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.658406 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"c6f4129d-4bc4-449b-be94-82fce07cf1f0\" (UID: \"c6f4129d-4bc4-449b-be94-82fce07cf1f0\") " Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.658503 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6csbr\" (UniqueName: \"kubernetes.io/projected/c6f4129d-4bc4-449b-be94-82fce07cf1f0-kube-api-access-6csbr\") pod \"c6f4129d-4bc4-449b-be94-82fce07cf1f0\" (UID: \"c6f4129d-4bc4-449b-be94-82fce07cf1f0\") " Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.658534 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-crjs9\" (UniqueName: \"kubernetes.io/projected/056df7c0-d577-4908-91a8-b5dfb95e0316-kube-api-access-crjs9\") pod \"056df7c0-d577-4908-91a8-b5dfb95e0316\" (UID: \"056df7c0-d577-4908-91a8-b5dfb95e0316\") " Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.657700 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/056df7c0-d577-4908-91a8-b5dfb95e0316-var-run" (OuterVolumeSpecName: "var-run") pod "056df7c0-d577-4908-91a8-b5dfb95e0316" (UID: "056df7c0-d577-4908-91a8-b5dfb95e0316"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.658677 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/056df7c0-d577-4908-91a8-b5dfb95e0316-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "056df7c0-d577-4908-91a8-b5dfb95e0316" (UID: "056df7c0-d577-4908-91a8-b5dfb95e0316"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.659780 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/056df7c0-d577-4908-91a8-b5dfb95e0316-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "056df7c0-d577-4908-91a8-b5dfb95e0316" (UID: "056df7c0-d577-4908-91a8-b5dfb95e0316"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.661001 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6f4129d-4bc4-449b-be94-82fce07cf1f0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c6f4129d-4bc4-449b-be94-82fce07cf1f0" (UID: "c6f4129d-4bc4-449b-be94-82fce07cf1f0"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.661055 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6f4129d-4bc4-449b-be94-82fce07cf1f0-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "c6f4129d-4bc4-449b-be94-82fce07cf1f0" (UID: "c6f4129d-4bc4-449b-be94-82fce07cf1f0"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.661916 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/056df7c0-d577-4908-91a8-b5dfb95e0316-scripts" (OuterVolumeSpecName: "scripts") pod "056df7c0-d577-4908-91a8-b5dfb95e0316" (UID: "056df7c0-d577-4908-91a8-b5dfb95e0316"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.662225 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6f4129d-4bc4-449b-be94-82fce07cf1f0-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "c6f4129d-4bc4-449b-be94-82fce07cf1f0" (UID: "c6f4129d-4bc4-449b-be94-82fce07cf1f0"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.662330 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6f4129d-4bc4-449b-be94-82fce07cf1f0-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "c6f4129d-4bc4-449b-be94-82fce07cf1f0" (UID: "c6f4129d-4bc4-449b-be94-82fce07cf1f0"). InnerVolumeSpecName "kolla-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.666912 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6f4129d-4bc4-449b-be94-82fce07cf1f0-kube-api-access-6csbr" (OuterVolumeSpecName: "kube-api-access-6csbr") pod "c6f4129d-4bc4-449b-be94-82fce07cf1f0" (UID: "c6f4129d-4bc4-449b-be94-82fce07cf1f0"). InnerVolumeSpecName "kube-api-access-6csbr". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.668584 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/056df7c0-d577-4908-91a8-b5dfb95e0316-kube-api-access-crjs9" (OuterVolumeSpecName: "kube-api-access-crjs9") pod "056df7c0-d577-4908-91a8-b5dfb95e0316" (UID: "056df7c0-d577-4908-91a8-b5dfb95e0316"). InnerVolumeSpecName "kube-api-access-crjs9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:21:26 crc kubenswrapper[4840]: E0311 09:21:26.675666 4840 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="dbb5712b558407a1f6f6d13982b8b5ac8e7a4f375b85b99c3ab24d6067d8124d" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.675729 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "mysql-db") pod "c6f4129d-4bc4-449b-be94-82fce07cf1f0" (UID: "c6f4129d-4bc4-449b-be94-82fce07cf1f0"). InnerVolumeSpecName "local-storage11-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 11 09:21:26 crc kubenswrapper[4840]: E0311 09:21:26.675777 4840 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ab8019a8eaec3a085cd72641d3e46b29afc065b94fc74ebd4f5ad09f0611dd09 is running failed: container process not found" containerID="ab8019a8eaec3a085cd72641d3e46b29afc065b94fc74ebd4f5ad09f0611dd09" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Mar 11 09:21:26 crc kubenswrapper[4840]: E0311 09:21:26.678716 4840 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="dbb5712b558407a1f6f6d13982b8b5ac8e7a4f375b85b99c3ab24d6067d8124d" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Mar 11 09:21:26 crc kubenswrapper[4840]: E0311 09:21:26.678761 4840 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ab8019a8eaec3a085cd72641d3e46b29afc065b94fc74ebd4f5ad09f0611dd09 is running failed: container process not found" containerID="ab8019a8eaec3a085cd72641d3e46b29afc065b94fc74ebd4f5ad09f0611dd09" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Mar 11 09:21:26 crc kubenswrapper[4840]: E0311 09:21:26.679120 4840 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ab8019a8eaec3a085cd72641d3e46b29afc065b94fc74ebd4f5ad09f0611dd09 is running failed: container process not found" containerID="ab8019a8eaec3a085cd72641d3e46b29afc065b94fc74ebd4f5ad09f0611dd09" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Mar 11 09:21:26 crc kubenswrapper[4840]: E0311 09:21:26.679193 4840 prober.go:104] "Probe errored" err="rpc 
error: code = NotFound desc = container is not created or running: checking if PID of ab8019a8eaec3a085cd72641d3e46b29afc065b94fc74ebd4f5ad09f0611dd09 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-qtcdv" podUID="b58b63e6-0eb4-444e-be2e-dca6bf37030e" containerName="ovsdb-server" Mar 11 09:21:26 crc kubenswrapper[4840]: E0311 09:21:26.683411 4840 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="dbb5712b558407a1f6f6d13982b8b5ac8e7a4f375b85b99c3ab24d6067d8124d" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Mar 11 09:21:26 crc kubenswrapper[4840]: E0311 09:21:26.683529 4840 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-qtcdv" podUID="b58b63e6-0eb4-444e-be2e-dca6bf37030e" containerName="ovs-vswitchd" Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.685437 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/056df7c0-d577-4908-91a8-b5dfb95e0316-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "056df7c0-d577-4908-91a8-b5dfb95e0316" (UID: "056df7c0-d577-4908-91a8-b5dfb95e0316"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.686178 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6f4129d-4bc4-449b-be94-82fce07cf1f0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c6f4129d-4bc4-449b-be94-82fce07cf1f0" (UID: "c6f4129d-4bc4-449b-be94-82fce07cf1f0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.709042 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6f4129d-4bc4-449b-be94-82fce07cf1f0-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "c6f4129d-4bc4-449b-be94-82fce07cf1f0" (UID: "c6f4129d-4bc4-449b-be94-82fce07cf1f0"). InnerVolumeSpecName "galera-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.734616 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/056df7c0-d577-4908-91a8-b5dfb95e0316-ovn-controller-tls-certs" (OuterVolumeSpecName: "ovn-controller-tls-certs") pod "056df7c0-d577-4908-91a8-b5dfb95e0316" (UID: "056df7c0-d577-4908-91a8-b5dfb95e0316"). InnerVolumeSpecName "ovn-controller-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.761587 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6csbr\" (UniqueName: \"kubernetes.io/projected/c6f4129d-4bc4-449b-be94-82fce07cf1f0-kube-api-access-6csbr\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.761639 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-crjs9\" (UniqueName: \"kubernetes.io/projected/056df7c0-d577-4908-91a8-b5dfb95e0316-kube-api-access-crjs9\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.761654 4840 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/056df7c0-d577-4908-91a8-b5dfb95e0316-var-run\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.761669 4840 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/c6f4129d-4bc4-449b-be94-82fce07cf1f0-kolla-config\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.761681 4840 reconciler_common.go:293] "Volume detached for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/056df7c0-d577-4908-91a8-b5dfb95e0316-ovn-controller-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.761692 4840 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/056df7c0-d577-4908-91a8-b5dfb95e0316-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.761705 4840 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c6f4129d-4bc4-449b-be94-82fce07cf1f0-config-data-default\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.761717 4840 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/056df7c0-d577-4908-91a8-b5dfb95e0316-scripts\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.761729 4840 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6f4129d-4bc4-449b-be94-82fce07cf1f0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.761741 4840 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6f4129d-4bc4-449b-be94-82fce07cf1f0-galera-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.761752 4840 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/056df7c0-d577-4908-91a8-b5dfb95e0316-var-log-ovn\") on node \"crc\" DevicePath 
\"\"" Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.761763 4840 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6f4129d-4bc4-449b-be94-82fce07cf1f0-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.761774 4840 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/056df7c0-d577-4908-91a8-b5dfb95e0316-var-run-ovn\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.761786 4840 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c6f4129d-4bc4-449b-be94-82fce07cf1f0-config-data-generated\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.761840 4840 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.780543 4840 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.800607 4840 scope.go:117] "RemoveContainer" containerID="ee9ff21cca22ce33a9fecf80bd4c008df2c67c2ba0d6988e89b3285d5e560733" Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.801930 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bcbd67b5c-tnrsf"] Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.808747 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bcbd67b5c-tnrsf"] Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.841684 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 11 09:21:26 crc kubenswrapper[4840]: 
I0311 09:21:26.847281 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.863894 4840 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.864126 4840 scope.go:117] "RemoveContainer" containerID="0e215c93773db23bc670a8e1e1523ba89c86e46f9708b9592117120dd8a29424" Mar 11 09:21:26 crc kubenswrapper[4840]: E0311 09:21:26.864723 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e215c93773db23bc670a8e1e1523ba89c86e46f9708b9592117120dd8a29424\": container with ID starting with 0e215c93773db23bc670a8e1e1523ba89c86e46f9708b9592117120dd8a29424 not found: ID does not exist" containerID="0e215c93773db23bc670a8e1e1523ba89c86e46f9708b9592117120dd8a29424" Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.864782 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e215c93773db23bc670a8e1e1523ba89c86e46f9708b9592117120dd8a29424"} err="failed to get container status \"0e215c93773db23bc670a8e1e1523ba89c86e46f9708b9592117120dd8a29424\": rpc error: code = NotFound desc = could not find container \"0e215c93773db23bc670a8e1e1523ba89c86e46f9708b9592117120dd8a29424\": container with ID starting with 0e215c93773db23bc670a8e1e1523ba89c86e46f9708b9592117120dd8a29424 not found: ID does not exist" Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.864831 4840 scope.go:117] "RemoveContainer" containerID="ee9ff21cca22ce33a9fecf80bd4c008df2c67c2ba0d6988e89b3285d5e560733" Mar 11 09:21:26 crc kubenswrapper[4840]: E0311 09:21:26.865237 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"ee9ff21cca22ce33a9fecf80bd4c008df2c67c2ba0d6988e89b3285d5e560733\": container with ID starting with ee9ff21cca22ce33a9fecf80bd4c008df2c67c2ba0d6988e89b3285d5e560733 not found: ID does not exist" containerID="ee9ff21cca22ce33a9fecf80bd4c008df2c67c2ba0d6988e89b3285d5e560733" Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.865271 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee9ff21cca22ce33a9fecf80bd4c008df2c67c2ba0d6988e89b3285d5e560733"} err="failed to get container status \"ee9ff21cca22ce33a9fecf80bd4c008df2c67c2ba0d6988e89b3285d5e560733\": rpc error: code = NotFound desc = could not find container \"ee9ff21cca22ce33a9fecf80bd4c008df2c67c2ba0d6988e89b3285d5e560733\": container with ID starting with ee9ff21cca22ce33a9fecf80bd4c008df2c67c2ba0d6988e89b3285d5e560733 not found: ID does not exist" Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.865303 4840 scope.go:117] "RemoveContainer" containerID="25ef9fd444056788b6ce2758460268d8703cbbd6d97d9234bfee447123075141" Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.944680 4840 scope.go:117] "RemoveContainer" containerID="864e18a4e761096f65b64998d9b212c7afd08411bbd197099d71b94005a7016a" Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.947286 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-g2p7c"] Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.971524 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-g2p7c"] Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.982393 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"] Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.993240 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-galera-0"] Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.994277 4840 scope.go:117] "RemoveContainer" 
containerID="25ef9fd444056788b6ce2758460268d8703cbbd6d97d9234bfee447123075141" Mar 11 09:21:26 crc kubenswrapper[4840]: E0311 09:21:26.994909 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"25ef9fd444056788b6ce2758460268d8703cbbd6d97d9234bfee447123075141\": container with ID starting with 25ef9fd444056788b6ce2758460268d8703cbbd6d97d9234bfee447123075141 not found: ID does not exist" containerID="25ef9fd444056788b6ce2758460268d8703cbbd6d97d9234bfee447123075141" Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.995048 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25ef9fd444056788b6ce2758460268d8703cbbd6d97d9234bfee447123075141"} err="failed to get container status \"25ef9fd444056788b6ce2758460268d8703cbbd6d97d9234bfee447123075141\": rpc error: code = NotFound desc = could not find container \"25ef9fd444056788b6ce2758460268d8703cbbd6d97d9234bfee447123075141\": container with ID starting with 25ef9fd444056788b6ce2758460268d8703cbbd6d97d9234bfee447123075141 not found: ID does not exist" Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.995168 4840 scope.go:117] "RemoveContainer" containerID="864e18a4e761096f65b64998d9b212c7afd08411bbd197099d71b94005a7016a" Mar 11 09:21:26 crc kubenswrapper[4840]: E0311 09:21:26.995712 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"864e18a4e761096f65b64998d9b212c7afd08411bbd197099d71b94005a7016a\": container with ID starting with 864e18a4e761096f65b64998d9b212c7afd08411bbd197099d71b94005a7016a not found: ID does not exist" containerID="864e18a4e761096f65b64998d9b212c7afd08411bbd197099d71b94005a7016a" Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.995742 4840 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"864e18a4e761096f65b64998d9b212c7afd08411bbd197099d71b94005a7016a"} err="failed to get container status \"864e18a4e761096f65b64998d9b212c7afd08411bbd197099d71b94005a7016a\": rpc error: code = NotFound desc = could not find container \"864e18a4e761096f65b64998d9b212c7afd08411bbd197099d71b94005a7016a\": container with ID starting with 864e18a4e761096f65b64998d9b212c7afd08411bbd197099d71b94005a7016a not found: ID does not exist" Mar 11 09:21:26 crc kubenswrapper[4840]: I0311 09:21:26.995766 4840 scope.go:117] "RemoveContainer" containerID="416b963acfae5e9237930d8b9e78836de7c9db73300426cdf2fdaf4f51eeb7fe" Mar 11 09:21:27 crc kubenswrapper[4840]: I0311 09:21:27.018011 4840 scope.go:117] "RemoveContainer" containerID="416b963acfae5e9237930d8b9e78836de7c9db73300426cdf2fdaf4f51eeb7fe" Mar 11 09:21:27 crc kubenswrapper[4840]: E0311 09:21:27.018728 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"416b963acfae5e9237930d8b9e78836de7c9db73300426cdf2fdaf4f51eeb7fe\": container with ID starting with 416b963acfae5e9237930d8b9e78836de7c9db73300426cdf2fdaf4f51eeb7fe not found: ID does not exist" containerID="416b963acfae5e9237930d8b9e78836de7c9db73300426cdf2fdaf4f51eeb7fe" Mar 11 09:21:27 crc kubenswrapper[4840]: I0311 09:21:27.018802 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"416b963acfae5e9237930d8b9e78836de7c9db73300426cdf2fdaf4f51eeb7fe"} err="failed to get container status \"416b963acfae5e9237930d8b9e78836de7c9db73300426cdf2fdaf4f51eeb7fe\": rpc error: code = NotFound desc = could not find container \"416b963acfae5e9237930d8b9e78836de7c9db73300426cdf2fdaf4f51eeb7fe\": container with ID starting with 416b963acfae5e9237930d8b9e78836de7c9db73300426cdf2fdaf4f51eeb7fe not found: ID does not exist" Mar 11 09:21:27 crc kubenswrapper[4840]: I0311 09:21:27.346279 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 11 09:21:27 crc kubenswrapper[4840]: I0311 09:21:27.483215 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f5fc86d0-4547-42ed-a880-f58c9e29d8a4-sg-core-conf-yaml\") pod \"f5fc86d0-4547-42ed-a880-f58c9e29d8a4\" (UID: \"f5fc86d0-4547-42ed-a880-f58c9e29d8a4\") " Mar 11 09:21:27 crc kubenswrapper[4840]: I0311 09:21:27.483354 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xx6wx\" (UniqueName: \"kubernetes.io/projected/f5fc86d0-4547-42ed-a880-f58c9e29d8a4-kube-api-access-xx6wx\") pod \"f5fc86d0-4547-42ed-a880-f58c9e29d8a4\" (UID: \"f5fc86d0-4547-42ed-a880-f58c9e29d8a4\") " Mar 11 09:21:27 crc kubenswrapper[4840]: I0311 09:21:27.483455 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f5fc86d0-4547-42ed-a880-f58c9e29d8a4-run-httpd\") pod \"f5fc86d0-4547-42ed-a880-f58c9e29d8a4\" (UID: \"f5fc86d0-4547-42ed-a880-f58c9e29d8a4\") " Mar 11 09:21:27 crc kubenswrapper[4840]: I0311 09:21:27.483507 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f5fc86d0-4547-42ed-a880-f58c9e29d8a4-log-httpd\") pod \"f5fc86d0-4547-42ed-a880-f58c9e29d8a4\" (UID: \"f5fc86d0-4547-42ed-a880-f58c9e29d8a4\") " Mar 11 09:21:27 crc kubenswrapper[4840]: I0311 09:21:27.483529 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5fc86d0-4547-42ed-a880-f58c9e29d8a4-ceilometer-tls-certs\") pod \"f5fc86d0-4547-42ed-a880-f58c9e29d8a4\" (UID: \"f5fc86d0-4547-42ed-a880-f58c9e29d8a4\") " Mar 11 09:21:27 crc kubenswrapper[4840]: I0311 09:21:27.483610 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/f5fc86d0-4547-42ed-a880-f58c9e29d8a4-scripts\") pod \"f5fc86d0-4547-42ed-a880-f58c9e29d8a4\" (UID: \"f5fc86d0-4547-42ed-a880-f58c9e29d8a4\") " Mar 11 09:21:27 crc kubenswrapper[4840]: I0311 09:21:27.483665 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5fc86d0-4547-42ed-a880-f58c9e29d8a4-config-data\") pod \"f5fc86d0-4547-42ed-a880-f58c9e29d8a4\" (UID: \"f5fc86d0-4547-42ed-a880-f58c9e29d8a4\") " Mar 11 09:21:27 crc kubenswrapper[4840]: I0311 09:21:27.483715 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5fc86d0-4547-42ed-a880-f58c9e29d8a4-combined-ca-bundle\") pod \"f5fc86d0-4547-42ed-a880-f58c9e29d8a4\" (UID: \"f5fc86d0-4547-42ed-a880-f58c9e29d8a4\") " Mar 11 09:21:27 crc kubenswrapper[4840]: I0311 09:21:27.484167 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5fc86d0-4547-42ed-a880-f58c9e29d8a4-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f5fc86d0-4547-42ed-a880-f58c9e29d8a4" (UID: "f5fc86d0-4547-42ed-a880-f58c9e29d8a4"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:21:27 crc kubenswrapper[4840]: I0311 09:21:27.484434 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5fc86d0-4547-42ed-a880-f58c9e29d8a4-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f5fc86d0-4547-42ed-a880-f58c9e29d8a4" (UID: "f5fc86d0-4547-42ed-a880-f58c9e29d8a4"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:21:27 crc kubenswrapper[4840]: I0311 09:21:27.485214 4840 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f5fc86d0-4547-42ed-a880-f58c9e29d8a4-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:27 crc kubenswrapper[4840]: I0311 09:21:27.485244 4840 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f5fc86d0-4547-42ed-a880-f58c9e29d8a4-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:27 crc kubenswrapper[4840]: I0311 09:21:27.489275 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5fc86d0-4547-42ed-a880-f58c9e29d8a4-kube-api-access-xx6wx" (OuterVolumeSpecName: "kube-api-access-xx6wx") pod "f5fc86d0-4547-42ed-a880-f58c9e29d8a4" (UID: "f5fc86d0-4547-42ed-a880-f58c9e29d8a4"). InnerVolumeSpecName "kube-api-access-xx6wx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:21:27 crc kubenswrapper[4840]: I0311 09:21:27.489834 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5fc86d0-4547-42ed-a880-f58c9e29d8a4-scripts" (OuterVolumeSpecName: "scripts") pod "f5fc86d0-4547-42ed-a880-f58c9e29d8a4" (UID: "f5fc86d0-4547-42ed-a880-f58c9e29d8a4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:21:27 crc kubenswrapper[4840]: I0311 09:21:27.521542 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5fc86d0-4547-42ed-a880-f58c9e29d8a4-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f5fc86d0-4547-42ed-a880-f58c9e29d8a4" (UID: "f5fc86d0-4547-42ed-a880-f58c9e29d8a4"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:21:27 crc kubenswrapper[4840]: I0311 09:21:27.524626 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5fc86d0-4547-42ed-a880-f58c9e29d8a4-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "f5fc86d0-4547-42ed-a880-f58c9e29d8a4" (UID: "f5fc86d0-4547-42ed-a880-f58c9e29d8a4"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:21:27 crc kubenswrapper[4840]: I0311 09:21:27.542216 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5fc86d0-4547-42ed-a880-f58c9e29d8a4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f5fc86d0-4547-42ed-a880-f58c9e29d8a4" (UID: "f5fc86d0-4547-42ed-a880-f58c9e29d8a4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:21:27 crc kubenswrapper[4840]: I0311 09:21:27.567603 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5fc86d0-4547-42ed-a880-f58c9e29d8a4-config-data" (OuterVolumeSpecName: "config-data") pod "f5fc86d0-4547-42ed-a880-f58c9e29d8a4" (UID: "f5fc86d0-4547-42ed-a880-f58c9e29d8a4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:21:27 crc kubenswrapper[4840]: I0311 09:21:27.573297 4840 generic.go:334] "Generic (PLEG): container finished" podID="f5fc86d0-4547-42ed-a880-f58c9e29d8a4" containerID="786131d91770a763753e005b03e25ce929741941d3ba83d0e07a62fe71b27388" exitCode=0 Mar 11 09:21:27 crc kubenswrapper[4840]: I0311 09:21:27.573372 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f5fc86d0-4547-42ed-a880-f58c9e29d8a4","Type":"ContainerDied","Data":"786131d91770a763753e005b03e25ce929741941d3ba83d0e07a62fe71b27388"} Mar 11 09:21:27 crc kubenswrapper[4840]: I0311 09:21:27.573411 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f5fc86d0-4547-42ed-a880-f58c9e29d8a4","Type":"ContainerDied","Data":"05764fbcdcbd0463ee21c6466f7617c4ecaee82251dc219e10e55e67e393bc9a"} Mar 11 09:21:27 crc kubenswrapper[4840]: I0311 09:21:27.573436 4840 scope.go:117] "RemoveContainer" containerID="f68b76db653f501b6c2cade1e67b940b2a3ee35bd560abe70b7979ccd257edaf" Mar 11 09:21:27 crc kubenswrapper[4840]: I0311 09:21:27.573590 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 11 09:21:27 crc kubenswrapper[4840]: I0311 09:21:27.587163 4840 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5fc86d0-4547-42ed-a880-f58c9e29d8a4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:27 crc kubenswrapper[4840]: I0311 09:21:27.587201 4840 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f5fc86d0-4547-42ed-a880-f58c9e29d8a4-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:27 crc kubenswrapper[4840]: I0311 09:21:27.587211 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xx6wx\" (UniqueName: \"kubernetes.io/projected/f5fc86d0-4547-42ed-a880-f58c9e29d8a4-kube-api-access-xx6wx\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:27 crc kubenswrapper[4840]: I0311 09:21:27.587221 4840 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5fc86d0-4547-42ed-a880-f58c9e29d8a4-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:27 crc kubenswrapper[4840]: I0311 09:21:27.587230 4840 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f5fc86d0-4547-42ed-a880-f58c9e29d8a4-scripts\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:27 crc kubenswrapper[4840]: I0311 09:21:27.587238 4840 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5fc86d0-4547-42ed-a880-f58c9e29d8a4-config-data\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:27 crc kubenswrapper[4840]: I0311 09:21:27.631851 4840 scope.go:117] "RemoveContainer" containerID="c1366968afef948e7ccf78e045f5f679b4fcb3ebe863e591eb2d9cc24c2f872b" Mar 11 09:21:27 crc kubenswrapper[4840]: I0311 09:21:27.656267 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 
11 09:21:27 crc kubenswrapper[4840]: I0311 09:21:27.661265 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 11 09:21:27 crc kubenswrapper[4840]: I0311 09:21:27.669062 4840 scope.go:117] "RemoveContainer" containerID="786131d91770a763753e005b03e25ce929741941d3ba83d0e07a62fe71b27388" Mar 11 09:21:27 crc kubenswrapper[4840]: I0311 09:21:27.709306 4840 scope.go:117] "RemoveContainer" containerID="34979c436634b8f0e018557d27d62f341c687de81ef20f3c5d4a9fa769f1e4fb" Mar 11 09:21:27 crc kubenswrapper[4840]: I0311 09:21:27.745737 4840 scope.go:117] "RemoveContainer" containerID="f68b76db653f501b6c2cade1e67b940b2a3ee35bd560abe70b7979ccd257edaf" Mar 11 09:21:27 crc kubenswrapper[4840]: E0311 09:21:27.754197 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f68b76db653f501b6c2cade1e67b940b2a3ee35bd560abe70b7979ccd257edaf\": container with ID starting with f68b76db653f501b6c2cade1e67b940b2a3ee35bd560abe70b7979ccd257edaf not found: ID does not exist" containerID="f68b76db653f501b6c2cade1e67b940b2a3ee35bd560abe70b7979ccd257edaf" Mar 11 09:21:27 crc kubenswrapper[4840]: I0311 09:21:27.754298 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f68b76db653f501b6c2cade1e67b940b2a3ee35bd560abe70b7979ccd257edaf"} err="failed to get container status \"f68b76db653f501b6c2cade1e67b940b2a3ee35bd560abe70b7979ccd257edaf\": rpc error: code = NotFound desc = could not find container \"f68b76db653f501b6c2cade1e67b940b2a3ee35bd560abe70b7979ccd257edaf\": container with ID starting with f68b76db653f501b6c2cade1e67b940b2a3ee35bd560abe70b7979ccd257edaf not found: ID does not exist" Mar 11 09:21:27 crc kubenswrapper[4840]: I0311 09:21:27.754333 4840 scope.go:117] "RemoveContainer" containerID="c1366968afef948e7ccf78e045f5f679b4fcb3ebe863e591eb2d9cc24c2f872b" Mar 11 09:21:27 crc kubenswrapper[4840]: E0311 09:21:27.755311 4840 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1366968afef948e7ccf78e045f5f679b4fcb3ebe863e591eb2d9cc24c2f872b\": container with ID starting with c1366968afef948e7ccf78e045f5f679b4fcb3ebe863e591eb2d9cc24c2f872b not found: ID does not exist" containerID="c1366968afef948e7ccf78e045f5f679b4fcb3ebe863e591eb2d9cc24c2f872b" Mar 11 09:21:27 crc kubenswrapper[4840]: I0311 09:21:27.755393 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1366968afef948e7ccf78e045f5f679b4fcb3ebe863e591eb2d9cc24c2f872b"} err="failed to get container status \"c1366968afef948e7ccf78e045f5f679b4fcb3ebe863e591eb2d9cc24c2f872b\": rpc error: code = NotFound desc = could not find container \"c1366968afef948e7ccf78e045f5f679b4fcb3ebe863e591eb2d9cc24c2f872b\": container with ID starting with c1366968afef948e7ccf78e045f5f679b4fcb3ebe863e591eb2d9cc24c2f872b not found: ID does not exist" Mar 11 09:21:27 crc kubenswrapper[4840]: I0311 09:21:27.755451 4840 scope.go:117] "RemoveContainer" containerID="786131d91770a763753e005b03e25ce929741941d3ba83d0e07a62fe71b27388" Mar 11 09:21:27 crc kubenswrapper[4840]: E0311 09:21:27.755961 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"786131d91770a763753e005b03e25ce929741941d3ba83d0e07a62fe71b27388\": container with ID starting with 786131d91770a763753e005b03e25ce929741941d3ba83d0e07a62fe71b27388 not found: ID does not exist" containerID="786131d91770a763753e005b03e25ce929741941d3ba83d0e07a62fe71b27388" Mar 11 09:21:27 crc kubenswrapper[4840]: I0311 09:21:27.756018 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"786131d91770a763753e005b03e25ce929741941d3ba83d0e07a62fe71b27388"} err="failed to get container status \"786131d91770a763753e005b03e25ce929741941d3ba83d0e07a62fe71b27388\": rpc error: code = NotFound desc = could 
not find container \"786131d91770a763753e005b03e25ce929741941d3ba83d0e07a62fe71b27388\": container with ID starting with 786131d91770a763753e005b03e25ce929741941d3ba83d0e07a62fe71b27388 not found: ID does not exist" Mar 11 09:21:27 crc kubenswrapper[4840]: I0311 09:21:27.756042 4840 scope.go:117] "RemoveContainer" containerID="34979c436634b8f0e018557d27d62f341c687de81ef20f3c5d4a9fa769f1e4fb" Mar 11 09:21:27 crc kubenswrapper[4840]: E0311 09:21:27.756369 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34979c436634b8f0e018557d27d62f341c687de81ef20f3c5d4a9fa769f1e4fb\": container with ID starting with 34979c436634b8f0e018557d27d62f341c687de81ef20f3c5d4a9fa769f1e4fb not found: ID does not exist" containerID="34979c436634b8f0e018557d27d62f341c687de81ef20f3c5d4a9fa769f1e4fb" Mar 11 09:21:27 crc kubenswrapper[4840]: I0311 09:21:27.756421 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34979c436634b8f0e018557d27d62f341c687de81ef20f3c5d4a9fa769f1e4fb"} err="failed to get container status \"34979c436634b8f0e018557d27d62f341c687de81ef20f3c5d4a9fa769f1e4fb\": rpc error: code = NotFound desc = could not find container \"34979c436634b8f0e018557d27d62f341c687de81ef20f3c5d4a9fa769f1e4fb\": container with ID starting with 34979c436634b8f0e018557d27d62f341c687de81ef20f3c5d4a9fa769f1e4fb not found: ID does not exist" Mar 11 09:21:28 crc kubenswrapper[4840]: I0311 09:21:28.068938 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="056df7c0-d577-4908-91a8-b5dfb95e0316" path="/var/lib/kubelet/pods/056df7c0-d577-4908-91a8-b5dfb95e0316/volumes" Mar 11 09:21:28 crc kubenswrapper[4840]: I0311 09:21:28.069823 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6f4129d-4bc4-449b-be94-82fce07cf1f0" path="/var/lib/kubelet/pods/c6f4129d-4bc4-449b-be94-82fce07cf1f0/volumes" Mar 11 09:21:28 crc kubenswrapper[4840]: I0311 
09:21:28.070631 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f31748d2-64a9-4839-ac55-691d9682ee8e" path="/var/lib/kubelet/pods/f31748d2-64a9-4839-ac55-691d9682ee8e/volumes" Mar 11 09:21:28 crc kubenswrapper[4840]: I0311 09:21:28.071233 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5fc86d0-4547-42ed-a880-f58c9e29d8a4" path="/var/lib/kubelet/pods/f5fc86d0-4547-42ed-a880-f58c9e29d8a4/volumes" Mar 11 09:21:28 crc kubenswrapper[4840]: I0311 09:21:28.071901 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f86e8b7a-656b-423e-8cf0-6d1025486c46" path="/var/lib/kubelet/pods/f86e8b7a-656b-423e-8cf0-6d1025486c46/volumes" Mar 11 09:21:28 crc kubenswrapper[4840]: I0311 09:21:28.103522 4840 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="86c17dbf-d890-4de3-bf5d-29e0aea4d968" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.178:8776/healthcheck\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 11 09:21:31 crc kubenswrapper[4840]: E0311 09:21:31.672177 4840 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ab8019a8eaec3a085cd72641d3e46b29afc065b94fc74ebd4f5ad09f0611dd09 is running failed: container process not found" containerID="ab8019a8eaec3a085cd72641d3e46b29afc065b94fc74ebd4f5ad09f0611dd09" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Mar 11 09:21:31 crc kubenswrapper[4840]: E0311 09:21:31.673003 4840 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ab8019a8eaec3a085cd72641d3e46b29afc065b94fc74ebd4f5ad09f0611dd09 is running failed: container process not found" containerID="ab8019a8eaec3a085cd72641d3e46b29afc065b94fc74ebd4f5ad09f0611dd09" 
cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Mar 11 09:21:31 crc kubenswrapper[4840]: E0311 09:21:31.673282 4840 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ab8019a8eaec3a085cd72641d3e46b29afc065b94fc74ebd4f5ad09f0611dd09 is running failed: container process not found" containerID="ab8019a8eaec3a085cd72641d3e46b29afc065b94fc74ebd4f5ad09f0611dd09" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Mar 11 09:21:31 crc kubenswrapper[4840]: E0311 09:21:31.673338 4840 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ab8019a8eaec3a085cd72641d3e46b29afc065b94fc74ebd4f5ad09f0611dd09 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-qtcdv" podUID="b58b63e6-0eb4-444e-be2e-dca6bf37030e" containerName="ovsdb-server" Mar 11 09:21:31 crc kubenswrapper[4840]: E0311 09:21:31.674382 4840 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="dbb5712b558407a1f6f6d13982b8b5ac8e7a4f375b85b99c3ab24d6067d8124d" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Mar 11 09:21:31 crc kubenswrapper[4840]: E0311 09:21:31.675702 4840 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="dbb5712b558407a1f6f6d13982b8b5ac8e7a4f375b85b99c3ab24d6067d8124d" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Mar 11 09:21:31 crc kubenswrapper[4840]: E0311 09:21:31.677054 4840 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: 
container is stopping, stdout: , stderr: , exit code -1" containerID="dbb5712b558407a1f6f6d13982b8b5ac8e7a4f375b85b99c3ab24d6067d8124d" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Mar 11 09:21:31 crc kubenswrapper[4840]: E0311 09:21:31.677081 4840 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-qtcdv" podUID="b58b63e6-0eb4-444e-be2e-dca6bf37030e" containerName="ovs-vswitchd" Mar 11 09:21:34 crc kubenswrapper[4840]: I0311 09:21:34.200630 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-58769b4545-5q2fv" Mar 11 09:21:34 crc kubenswrapper[4840]: I0311 09:21:34.311573 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/550bab70-eacb-4c56-98fd-460c20f22dcc-public-tls-certs\") pod \"550bab70-eacb-4c56-98fd-460c20f22dcc\" (UID: \"550bab70-eacb-4c56-98fd-460c20f22dcc\") " Mar 11 09:21:34 crc kubenswrapper[4840]: I0311 09:21:34.311622 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/550bab70-eacb-4c56-98fd-460c20f22dcc-combined-ca-bundle\") pod \"550bab70-eacb-4c56-98fd-460c20f22dcc\" (UID: \"550bab70-eacb-4c56-98fd-460c20f22dcc\") " Mar 11 09:21:34 crc kubenswrapper[4840]: I0311 09:21:34.311664 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/550bab70-eacb-4c56-98fd-460c20f22dcc-ovndb-tls-certs\") pod \"550bab70-eacb-4c56-98fd-460c20f22dcc\" (UID: \"550bab70-eacb-4c56-98fd-460c20f22dcc\") " Mar 11 09:21:34 crc kubenswrapper[4840]: I0311 09:21:34.311684 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" 
(UniqueName: \"kubernetes.io/secret/550bab70-eacb-4c56-98fd-460c20f22dcc-httpd-config\") pod \"550bab70-eacb-4c56-98fd-460c20f22dcc\" (UID: \"550bab70-eacb-4c56-98fd-460c20f22dcc\") " Mar 11 09:21:34 crc kubenswrapper[4840]: I0311 09:21:34.311714 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/550bab70-eacb-4c56-98fd-460c20f22dcc-config\") pod \"550bab70-eacb-4c56-98fd-460c20f22dcc\" (UID: \"550bab70-eacb-4c56-98fd-460c20f22dcc\") " Mar 11 09:21:34 crc kubenswrapper[4840]: I0311 09:21:34.311761 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/550bab70-eacb-4c56-98fd-460c20f22dcc-internal-tls-certs\") pod \"550bab70-eacb-4c56-98fd-460c20f22dcc\" (UID: \"550bab70-eacb-4c56-98fd-460c20f22dcc\") " Mar 11 09:21:34 crc kubenswrapper[4840]: I0311 09:21:34.311802 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m25fw\" (UniqueName: \"kubernetes.io/projected/550bab70-eacb-4c56-98fd-460c20f22dcc-kube-api-access-m25fw\") pod \"550bab70-eacb-4c56-98fd-460c20f22dcc\" (UID: \"550bab70-eacb-4c56-98fd-460c20f22dcc\") " Mar 11 09:21:34 crc kubenswrapper[4840]: I0311 09:21:34.319980 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/550bab70-eacb-4c56-98fd-460c20f22dcc-kube-api-access-m25fw" (OuterVolumeSpecName: "kube-api-access-m25fw") pod "550bab70-eacb-4c56-98fd-460c20f22dcc" (UID: "550bab70-eacb-4c56-98fd-460c20f22dcc"). InnerVolumeSpecName "kube-api-access-m25fw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:21:34 crc kubenswrapper[4840]: I0311 09:21:34.320217 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/550bab70-eacb-4c56-98fd-460c20f22dcc-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "550bab70-eacb-4c56-98fd-460c20f22dcc" (UID: "550bab70-eacb-4c56-98fd-460c20f22dcc"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:21:34 crc kubenswrapper[4840]: I0311 09:21:34.357973 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/550bab70-eacb-4c56-98fd-460c20f22dcc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "550bab70-eacb-4c56-98fd-460c20f22dcc" (UID: "550bab70-eacb-4c56-98fd-460c20f22dcc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:21:34 crc kubenswrapper[4840]: I0311 09:21:34.358404 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/550bab70-eacb-4c56-98fd-460c20f22dcc-config" (OuterVolumeSpecName: "config") pod "550bab70-eacb-4c56-98fd-460c20f22dcc" (UID: "550bab70-eacb-4c56-98fd-460c20f22dcc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:21:34 crc kubenswrapper[4840]: I0311 09:21:34.360517 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/550bab70-eacb-4c56-98fd-460c20f22dcc-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "550bab70-eacb-4c56-98fd-460c20f22dcc" (UID: "550bab70-eacb-4c56-98fd-460c20f22dcc"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:21:34 crc kubenswrapper[4840]: I0311 09:21:34.361612 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/550bab70-eacb-4c56-98fd-460c20f22dcc-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "550bab70-eacb-4c56-98fd-460c20f22dcc" (UID: "550bab70-eacb-4c56-98fd-460c20f22dcc"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:21:34 crc kubenswrapper[4840]: I0311 09:21:34.382109 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/550bab70-eacb-4c56-98fd-460c20f22dcc-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "550bab70-eacb-4c56-98fd-460c20f22dcc" (UID: "550bab70-eacb-4c56-98fd-460c20f22dcc"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:21:34 crc kubenswrapper[4840]: I0311 09:21:34.413114 4840 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/550bab70-eacb-4c56-98fd-460c20f22dcc-config\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:34 crc kubenswrapper[4840]: I0311 09:21:34.413154 4840 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/550bab70-eacb-4c56-98fd-460c20f22dcc-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:34 crc kubenswrapper[4840]: I0311 09:21:34.413169 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m25fw\" (UniqueName: \"kubernetes.io/projected/550bab70-eacb-4c56-98fd-460c20f22dcc-kube-api-access-m25fw\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:34 crc kubenswrapper[4840]: I0311 09:21:34.413179 4840 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/550bab70-eacb-4c56-98fd-460c20f22dcc-public-tls-certs\") on node \"crc\" 
DevicePath \"\"" Mar 11 09:21:34 crc kubenswrapper[4840]: I0311 09:21:34.413187 4840 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/550bab70-eacb-4c56-98fd-460c20f22dcc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:34 crc kubenswrapper[4840]: I0311 09:21:34.413196 4840 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/550bab70-eacb-4c56-98fd-460c20f22dcc-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:34 crc kubenswrapper[4840]: I0311 09:21:34.413203 4840 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/550bab70-eacb-4c56-98fd-460c20f22dcc-httpd-config\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:34 crc kubenswrapper[4840]: I0311 09:21:34.648120 4840 generic.go:334] "Generic (PLEG): container finished" podID="550bab70-eacb-4c56-98fd-460c20f22dcc" containerID="990342c797ffb42187c174e55be3c0f82e87b7deb7f1882a35e7f82eeaf9dc84" exitCode=0 Mar 11 09:21:34 crc kubenswrapper[4840]: I0311 09:21:34.648190 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-58769b4545-5q2fv" event={"ID":"550bab70-eacb-4c56-98fd-460c20f22dcc","Type":"ContainerDied","Data":"990342c797ffb42187c174e55be3c0f82e87b7deb7f1882a35e7f82eeaf9dc84"} Mar 11 09:21:34 crc kubenswrapper[4840]: I0311 09:21:34.648204 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-58769b4545-5q2fv" Mar 11 09:21:34 crc kubenswrapper[4840]: I0311 09:21:34.648231 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-58769b4545-5q2fv" event={"ID":"550bab70-eacb-4c56-98fd-460c20f22dcc","Type":"ContainerDied","Data":"df9306f884966228ea14fe38bf8378c7b4461529db12ece11c70d9a1214b3365"} Mar 11 09:21:34 crc kubenswrapper[4840]: I0311 09:21:34.648250 4840 scope.go:117] "RemoveContainer" containerID="de14b2ab95a92c30b8e51cfa0b41eeb1219f8858689935e5b0e0d5e56fbb4fc9" Mar 11 09:21:34 crc kubenswrapper[4840]: I0311 09:21:34.671007 4840 scope.go:117] "RemoveContainer" containerID="990342c797ffb42187c174e55be3c0f82e87b7deb7f1882a35e7f82eeaf9dc84" Mar 11 09:21:34 crc kubenswrapper[4840]: I0311 09:21:34.693658 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-58769b4545-5q2fv"] Mar 11 09:21:34 crc kubenswrapper[4840]: I0311 09:21:34.699767 4840 scope.go:117] "RemoveContainer" containerID="de14b2ab95a92c30b8e51cfa0b41eeb1219f8858689935e5b0e0d5e56fbb4fc9" Mar 11 09:21:34 crc kubenswrapper[4840]: E0311 09:21:34.700295 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de14b2ab95a92c30b8e51cfa0b41eeb1219f8858689935e5b0e0d5e56fbb4fc9\": container with ID starting with de14b2ab95a92c30b8e51cfa0b41eeb1219f8858689935e5b0e0d5e56fbb4fc9 not found: ID does not exist" containerID="de14b2ab95a92c30b8e51cfa0b41eeb1219f8858689935e5b0e0d5e56fbb4fc9" Mar 11 09:21:34 crc kubenswrapper[4840]: I0311 09:21:34.700328 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de14b2ab95a92c30b8e51cfa0b41eeb1219f8858689935e5b0e0d5e56fbb4fc9"} err="failed to get container status \"de14b2ab95a92c30b8e51cfa0b41eeb1219f8858689935e5b0e0d5e56fbb4fc9\": rpc error: code = NotFound desc = could not find container 
\"de14b2ab95a92c30b8e51cfa0b41eeb1219f8858689935e5b0e0d5e56fbb4fc9\": container with ID starting with de14b2ab95a92c30b8e51cfa0b41eeb1219f8858689935e5b0e0d5e56fbb4fc9 not found: ID does not exist" Mar 11 09:21:34 crc kubenswrapper[4840]: I0311 09:21:34.700347 4840 scope.go:117] "RemoveContainer" containerID="990342c797ffb42187c174e55be3c0f82e87b7deb7f1882a35e7f82eeaf9dc84" Mar 11 09:21:34 crc kubenswrapper[4840]: E0311 09:21:34.700738 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"990342c797ffb42187c174e55be3c0f82e87b7deb7f1882a35e7f82eeaf9dc84\": container with ID starting with 990342c797ffb42187c174e55be3c0f82e87b7deb7f1882a35e7f82eeaf9dc84 not found: ID does not exist" containerID="990342c797ffb42187c174e55be3c0f82e87b7deb7f1882a35e7f82eeaf9dc84" Mar 11 09:21:34 crc kubenswrapper[4840]: I0311 09:21:34.700762 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"990342c797ffb42187c174e55be3c0f82e87b7deb7f1882a35e7f82eeaf9dc84"} err="failed to get container status \"990342c797ffb42187c174e55be3c0f82e87b7deb7f1882a35e7f82eeaf9dc84\": rpc error: code = NotFound desc = could not find container \"990342c797ffb42187c174e55be3c0f82e87b7deb7f1882a35e7f82eeaf9dc84\": container with ID starting with 990342c797ffb42187c174e55be3c0f82e87b7deb7f1882a35e7f82eeaf9dc84 not found: ID does not exist" Mar 11 09:21:34 crc kubenswrapper[4840]: I0311 09:21:34.705274 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-58769b4545-5q2fv"] Mar 11 09:21:36 crc kubenswrapper[4840]: I0311 09:21:36.069162 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="550bab70-eacb-4c56-98fd-460c20f22dcc" path="/var/lib/kubelet/pods/550bab70-eacb-4c56-98fd-460c20f22dcc/volumes" Mar 11 09:21:36 crc kubenswrapper[4840]: E0311 09:21:36.671431 4840 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound 
desc = container is not created or running: checking if PID of ab8019a8eaec3a085cd72641d3e46b29afc065b94fc74ebd4f5ad09f0611dd09 is running failed: container process not found" containerID="ab8019a8eaec3a085cd72641d3e46b29afc065b94fc74ebd4f5ad09f0611dd09" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Mar 11 09:21:36 crc kubenswrapper[4840]: E0311 09:21:36.672130 4840 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ab8019a8eaec3a085cd72641d3e46b29afc065b94fc74ebd4f5ad09f0611dd09 is running failed: container process not found" containerID="ab8019a8eaec3a085cd72641d3e46b29afc065b94fc74ebd4f5ad09f0611dd09" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Mar 11 09:21:36 crc kubenswrapper[4840]: E0311 09:21:36.672406 4840 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ab8019a8eaec3a085cd72641d3e46b29afc065b94fc74ebd4f5ad09f0611dd09 is running failed: container process not found" containerID="ab8019a8eaec3a085cd72641d3e46b29afc065b94fc74ebd4f5ad09f0611dd09" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Mar 11 09:21:36 crc kubenswrapper[4840]: E0311 09:21:36.672442 4840 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ab8019a8eaec3a085cd72641d3e46b29afc065b94fc74ebd4f5ad09f0611dd09 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-qtcdv" podUID="b58b63e6-0eb4-444e-be2e-dca6bf37030e" containerName="ovsdb-server" Mar 11 09:21:36 crc kubenswrapper[4840]: E0311 09:21:36.672658 4840 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="dbb5712b558407a1f6f6d13982b8b5ac8e7a4f375b85b99c3ab24d6067d8124d" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Mar 11 09:21:36 crc kubenswrapper[4840]: E0311 09:21:36.675067 4840 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="dbb5712b558407a1f6f6d13982b8b5ac8e7a4f375b85b99c3ab24d6067d8124d" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Mar 11 09:21:36 crc kubenswrapper[4840]: E0311 09:21:36.676198 4840 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="dbb5712b558407a1f6f6d13982b8b5ac8e7a4f375b85b99c3ab24d6067d8124d" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Mar 11 09:21:36 crc kubenswrapper[4840]: E0311 09:21:36.676237 4840 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-qtcdv" podUID="b58b63e6-0eb4-444e-be2e-dca6bf37030e" containerName="ovs-vswitchd" Mar 11 09:21:41 crc kubenswrapper[4840]: E0311 09:21:41.671683 4840 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ab8019a8eaec3a085cd72641d3e46b29afc065b94fc74ebd4f5ad09f0611dd09 is running failed: container process not found" containerID="ab8019a8eaec3a085cd72641d3e46b29afc065b94fc74ebd4f5ad09f0611dd09" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Mar 11 09:21:41 crc kubenswrapper[4840]: E0311 09:21:41.673071 4840 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: 
checking if PID of ab8019a8eaec3a085cd72641d3e46b29afc065b94fc74ebd4f5ad09f0611dd09 is running failed: container process not found" containerID="ab8019a8eaec3a085cd72641d3e46b29afc065b94fc74ebd4f5ad09f0611dd09" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Mar 11 09:21:41 crc kubenswrapper[4840]: E0311 09:21:41.673400 4840 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ab8019a8eaec3a085cd72641d3e46b29afc065b94fc74ebd4f5ad09f0611dd09 is running failed: container process not found" containerID="ab8019a8eaec3a085cd72641d3e46b29afc065b94fc74ebd4f5ad09f0611dd09" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Mar 11 09:21:41 crc kubenswrapper[4840]: E0311 09:21:41.673437 4840 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ab8019a8eaec3a085cd72641d3e46b29afc065b94fc74ebd4f5ad09f0611dd09 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-qtcdv" podUID="b58b63e6-0eb4-444e-be2e-dca6bf37030e" containerName="ovsdb-server" Mar 11 09:21:41 crc kubenswrapper[4840]: E0311 09:21:41.674390 4840 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="dbb5712b558407a1f6f6d13982b8b5ac8e7a4f375b85b99c3ab24d6067d8124d" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Mar 11 09:21:41 crc kubenswrapper[4840]: E0311 09:21:41.675980 4840 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="dbb5712b558407a1f6f6d13982b8b5ac8e7a4f375b85b99c3ab24d6067d8124d" 
cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Mar 11 09:21:41 crc kubenswrapper[4840]: E0311 09:21:41.679637 4840 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="dbb5712b558407a1f6f6d13982b8b5ac8e7a4f375b85b99c3ab24d6067d8124d" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Mar 11 09:21:41 crc kubenswrapper[4840]: E0311 09:21:41.679682 4840 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-qtcdv" podUID="b58b63e6-0eb4-444e-be2e-dca6bf37030e" containerName="ovs-vswitchd" Mar 11 09:21:46 crc kubenswrapper[4840]: E0311 09:21:46.672398 4840 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ab8019a8eaec3a085cd72641d3e46b29afc065b94fc74ebd4f5ad09f0611dd09 is running failed: container process not found" containerID="ab8019a8eaec3a085cd72641d3e46b29afc065b94fc74ebd4f5ad09f0611dd09" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Mar 11 09:21:46 crc kubenswrapper[4840]: E0311 09:21:46.673118 4840 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ab8019a8eaec3a085cd72641d3e46b29afc065b94fc74ebd4f5ad09f0611dd09 is running failed: container process not found" containerID="ab8019a8eaec3a085cd72641d3e46b29afc065b94fc74ebd4f5ad09f0611dd09" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Mar 11 09:21:46 crc kubenswrapper[4840]: E0311 09:21:46.673620 4840 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: 
checking if PID of ab8019a8eaec3a085cd72641d3e46b29afc065b94fc74ebd4f5ad09f0611dd09 is running failed: container process not found" containerID="ab8019a8eaec3a085cd72641d3e46b29afc065b94fc74ebd4f5ad09f0611dd09" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Mar 11 09:21:46 crc kubenswrapper[4840]: E0311 09:21:46.674087 4840 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ab8019a8eaec3a085cd72641d3e46b29afc065b94fc74ebd4f5ad09f0611dd09 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-qtcdv" podUID="b58b63e6-0eb4-444e-be2e-dca6bf37030e" containerName="ovsdb-server" Mar 11 09:21:46 crc kubenswrapper[4840]: E0311 09:21:46.674157 4840 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="dbb5712b558407a1f6f6d13982b8b5ac8e7a4f375b85b99c3ab24d6067d8124d" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Mar 11 09:21:46 crc kubenswrapper[4840]: E0311 09:21:46.675626 4840 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="dbb5712b558407a1f6f6d13982b8b5ac8e7a4f375b85b99c3ab24d6067d8124d" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Mar 11 09:21:46 crc kubenswrapper[4840]: E0311 09:21:46.677182 4840 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="dbb5712b558407a1f6f6d13982b8b5ac8e7a4f375b85b99c3ab24d6067d8124d" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Mar 11 09:21:46 crc kubenswrapper[4840]: E0311 09:21:46.677327 4840 
prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-qtcdv" podUID="b58b63e6-0eb4-444e-be2e-dca6bf37030e" containerName="ovs-vswitchd" Mar 11 09:21:48 crc kubenswrapper[4840]: I0311 09:21:48.297341 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 11 09:21:48 crc kubenswrapper[4840]: I0311 09:21:48.332262 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fd8b15e6-0bb6-4d79-99aa-765ded51af1d-etc-machine-id\") pod \"fd8b15e6-0bb6-4d79-99aa-765ded51af1d\" (UID: \"fd8b15e6-0bb6-4d79-99aa-765ded51af1d\") " Mar 11 09:21:48 crc kubenswrapper[4840]: I0311 09:21:48.332348 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd8b15e6-0bb6-4d79-99aa-765ded51af1d-scripts\") pod \"fd8b15e6-0bb6-4d79-99aa-765ded51af1d\" (UID: \"fd8b15e6-0bb6-4d79-99aa-765ded51af1d\") " Mar 11 09:21:48 crc kubenswrapper[4840]: I0311 09:21:48.332394 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fd8b15e6-0bb6-4d79-99aa-765ded51af1d-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "fd8b15e6-0bb6-4d79-99aa-765ded51af1d" (UID: "fd8b15e6-0bb6-4d79-99aa-765ded51af1d"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 11 09:21:48 crc kubenswrapper[4840]: I0311 09:21:48.332420 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd8b15e6-0bb6-4d79-99aa-765ded51af1d-config-data\") pod \"fd8b15e6-0bb6-4d79-99aa-765ded51af1d\" (UID: \"fd8b15e6-0bb6-4d79-99aa-765ded51af1d\") " Mar 11 09:21:48 crc kubenswrapper[4840]: I0311 09:21:48.332442 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fd8b15e6-0bb6-4d79-99aa-765ded51af1d-config-data-custom\") pod \"fd8b15e6-0bb6-4d79-99aa-765ded51af1d\" (UID: \"fd8b15e6-0bb6-4d79-99aa-765ded51af1d\") " Mar 11 09:21:48 crc kubenswrapper[4840]: I0311 09:21:48.332524 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7f26f\" (UniqueName: \"kubernetes.io/projected/fd8b15e6-0bb6-4d79-99aa-765ded51af1d-kube-api-access-7f26f\") pod \"fd8b15e6-0bb6-4d79-99aa-765ded51af1d\" (UID: \"fd8b15e6-0bb6-4d79-99aa-765ded51af1d\") " Mar 11 09:21:48 crc kubenswrapper[4840]: I0311 09:21:48.332617 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd8b15e6-0bb6-4d79-99aa-765ded51af1d-combined-ca-bundle\") pod \"fd8b15e6-0bb6-4d79-99aa-765ded51af1d\" (UID: \"fd8b15e6-0bb6-4d79-99aa-765ded51af1d\") " Mar 11 09:21:48 crc kubenswrapper[4840]: I0311 09:21:48.332915 4840 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fd8b15e6-0bb6-4d79-99aa-765ded51af1d-etc-machine-id\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:48 crc kubenswrapper[4840]: I0311 09:21:48.362951 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd8b15e6-0bb6-4d79-99aa-765ded51af1d-config-data-custom" (OuterVolumeSpecName: 
"config-data-custom") pod "fd8b15e6-0bb6-4d79-99aa-765ded51af1d" (UID: "fd8b15e6-0bb6-4d79-99aa-765ded51af1d"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:21:48 crc kubenswrapper[4840]: I0311 09:21:48.366102 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd8b15e6-0bb6-4d79-99aa-765ded51af1d-kube-api-access-7f26f" (OuterVolumeSpecName: "kube-api-access-7f26f") pod "fd8b15e6-0bb6-4d79-99aa-765ded51af1d" (UID: "fd8b15e6-0bb6-4d79-99aa-765ded51af1d"). InnerVolumeSpecName "kube-api-access-7f26f". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:21:48 crc kubenswrapper[4840]: I0311 09:21:48.371442 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd8b15e6-0bb6-4d79-99aa-765ded51af1d-scripts" (OuterVolumeSpecName: "scripts") pod "fd8b15e6-0bb6-4d79-99aa-765ded51af1d" (UID: "fd8b15e6-0bb6-4d79-99aa-765ded51af1d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:21:48 crc kubenswrapper[4840]: I0311 09:21:48.381295 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd8b15e6-0bb6-4d79-99aa-765ded51af1d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fd8b15e6-0bb6-4d79-99aa-765ded51af1d" (UID: "fd8b15e6-0bb6-4d79-99aa-765ded51af1d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:21:48 crc kubenswrapper[4840]: I0311 09:21:48.416741 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd8b15e6-0bb6-4d79-99aa-765ded51af1d-config-data" (OuterVolumeSpecName: "config-data") pod "fd8b15e6-0bb6-4d79-99aa-765ded51af1d" (UID: "fd8b15e6-0bb6-4d79-99aa-765ded51af1d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:21:48 crc kubenswrapper[4840]: I0311 09:21:48.433357 4840 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd8b15e6-0bb6-4d79-99aa-765ded51af1d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:48 crc kubenswrapper[4840]: I0311 09:21:48.433636 4840 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd8b15e6-0bb6-4d79-99aa-765ded51af1d-scripts\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:48 crc kubenswrapper[4840]: I0311 09:21:48.433650 4840 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd8b15e6-0bb6-4d79-99aa-765ded51af1d-config-data\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:48 crc kubenswrapper[4840]: I0311 09:21:48.433658 4840 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fd8b15e6-0bb6-4d79-99aa-765ded51af1d-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:48 crc kubenswrapper[4840]: I0311 09:21:48.433670 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7f26f\" (UniqueName: \"kubernetes.io/projected/fd8b15e6-0bb6-4d79-99aa-765ded51af1d-kube-api-access-7f26f\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:48 crc kubenswrapper[4840]: I0311 09:21:48.471536 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Mar 11 09:21:48 crc kubenswrapper[4840]: I0311 09:21:48.635604 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swift\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"f1c7d7f4-dc60-4703-b6c3-6cd626db11af\" (UID: \"f1c7d7f4-dc60-4703-b6c3-6cd626db11af\") " Mar 11 09:21:48 crc kubenswrapper[4840]: I0311 09:21:48.635985 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f1c7d7f4-dc60-4703-b6c3-6cd626db11af-etc-swift\") pod \"f1c7d7f4-dc60-4703-b6c3-6cd626db11af\" (UID: \"f1c7d7f4-dc60-4703-b6c3-6cd626db11af\") " Mar 11 09:21:48 crc kubenswrapper[4840]: I0311 09:21:48.636036 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/f1c7d7f4-dc60-4703-b6c3-6cd626db11af-lock\") pod \"f1c7d7f4-dc60-4703-b6c3-6cd626db11af\" (UID: \"f1c7d7f4-dc60-4703-b6c3-6cd626db11af\") " Mar 11 09:21:48 crc kubenswrapper[4840]: I0311 09:21:48.636115 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sxhkf\" (UniqueName: \"kubernetes.io/projected/f1c7d7f4-dc60-4703-b6c3-6cd626db11af-kube-api-access-sxhkf\") pod \"f1c7d7f4-dc60-4703-b6c3-6cd626db11af\" (UID: \"f1c7d7f4-dc60-4703-b6c3-6cd626db11af\") " Mar 11 09:21:48 crc kubenswrapper[4840]: I0311 09:21:48.636157 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/f1c7d7f4-dc60-4703-b6c3-6cd626db11af-cache\") pod \"f1c7d7f4-dc60-4703-b6c3-6cd626db11af\" (UID: \"f1c7d7f4-dc60-4703-b6c3-6cd626db11af\") " Mar 11 09:21:48 crc kubenswrapper[4840]: I0311 09:21:48.636202 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f1c7d7f4-dc60-4703-b6c3-6cd626db11af-combined-ca-bundle\") pod \"f1c7d7f4-dc60-4703-b6c3-6cd626db11af\" (UID: \"f1c7d7f4-dc60-4703-b6c3-6cd626db11af\") " Mar 11 09:21:48 crc kubenswrapper[4840]: I0311 09:21:48.637319 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f1c7d7f4-dc60-4703-b6c3-6cd626db11af-lock" (OuterVolumeSpecName: "lock") pod "f1c7d7f4-dc60-4703-b6c3-6cd626db11af" (UID: "f1c7d7f4-dc60-4703-b6c3-6cd626db11af"). InnerVolumeSpecName "lock". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:21:48 crc kubenswrapper[4840]: I0311 09:21:48.637579 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f1c7d7f4-dc60-4703-b6c3-6cd626db11af-cache" (OuterVolumeSpecName: "cache") pod "f1c7d7f4-dc60-4703-b6c3-6cd626db11af" (UID: "f1c7d7f4-dc60-4703-b6c3-6cd626db11af"). InnerVolumeSpecName "cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:21:48 crc kubenswrapper[4840]: I0311 09:21:48.640761 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1c7d7f4-dc60-4703-b6c3-6cd626db11af-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "f1c7d7f4-dc60-4703-b6c3-6cd626db11af" (UID: "f1c7d7f4-dc60-4703-b6c3-6cd626db11af"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:21:48 crc kubenswrapper[4840]: I0311 09:21:48.640815 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1c7d7f4-dc60-4703-b6c3-6cd626db11af-kube-api-access-sxhkf" (OuterVolumeSpecName: "kube-api-access-sxhkf") pod "f1c7d7f4-dc60-4703-b6c3-6cd626db11af" (UID: "f1c7d7f4-dc60-4703-b6c3-6cd626db11af"). InnerVolumeSpecName "kube-api-access-sxhkf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:21:48 crc kubenswrapper[4840]: I0311 09:21:48.640850 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "swift") pod "f1c7d7f4-dc60-4703-b6c3-6cd626db11af" (UID: "f1c7d7f4-dc60-4703-b6c3-6cd626db11af"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 11 09:21:48 crc kubenswrapper[4840]: I0311 09:21:48.738424 4840 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Mar 11 09:21:48 crc kubenswrapper[4840]: I0311 09:21:48.738461 4840 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f1c7d7f4-dc60-4703-b6c3-6cd626db11af-etc-swift\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:48 crc kubenswrapper[4840]: I0311 09:21:48.738490 4840 reconciler_common.go:293] "Volume detached for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/f1c7d7f4-dc60-4703-b6c3-6cd626db11af-lock\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:48 crc kubenswrapper[4840]: I0311 09:21:48.738511 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sxhkf\" (UniqueName: \"kubernetes.io/projected/f1c7d7f4-dc60-4703-b6c3-6cd626db11af-kube-api-access-sxhkf\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:48 crc kubenswrapper[4840]: I0311 09:21:48.738528 4840 reconciler_common.go:293] "Volume detached for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/f1c7d7f4-dc60-4703-b6c3-6cd626db11af-cache\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:48 crc kubenswrapper[4840]: I0311 09:21:48.756949 4840 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Mar 11 
09:21:48 crc kubenswrapper[4840]: I0311 09:21:48.770057 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-qtcdv_b58b63e6-0eb4-444e-be2e-dca6bf37030e/ovs-vswitchd/0.log" Mar 11 09:21:48 crc kubenswrapper[4840]: I0311 09:21:48.772254 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-qtcdv" Mar 11 09:21:48 crc kubenswrapper[4840]: I0311 09:21:48.801121 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-qtcdv_b58b63e6-0eb4-444e-be2e-dca6bf37030e/ovs-vswitchd/0.log" Mar 11 09:21:48 crc kubenswrapper[4840]: I0311 09:21:48.811237 4840 generic.go:334] "Generic (PLEG): container finished" podID="b58b63e6-0eb4-444e-be2e-dca6bf37030e" containerID="dbb5712b558407a1f6f6d13982b8b5ac8e7a4f375b85b99c3ab24d6067d8124d" exitCode=137 Mar 11 09:21:48 crc kubenswrapper[4840]: I0311 09:21:48.811840 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-qtcdv" event={"ID":"b58b63e6-0eb4-444e-be2e-dca6bf37030e","Type":"ContainerDied","Data":"dbb5712b558407a1f6f6d13982b8b5ac8e7a4f375b85b99c3ab24d6067d8124d"} Mar 11 09:21:48 crc kubenswrapper[4840]: I0311 09:21:48.811952 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-qtcdv" event={"ID":"b58b63e6-0eb4-444e-be2e-dca6bf37030e","Type":"ContainerDied","Data":"c393d658b7a32c1283a2917efb5d0ed19bb7ecbc6adcecce5dc7b2e7a77eec1c"} Mar 11 09:21:48 crc kubenswrapper[4840]: I0311 09:21:48.812048 4840 scope.go:117] "RemoveContainer" containerID="dbb5712b558407a1f6f6d13982b8b5ac8e7a4f375b85b99c3ab24d6067d8124d" Mar 11 09:21:48 crc kubenswrapper[4840]: I0311 09:21:48.812156 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-qtcdv" Mar 11 09:21:48 crc kubenswrapper[4840]: I0311 09:21:48.835170 4840 generic.go:334] "Generic (PLEG): container finished" podID="f1c7d7f4-dc60-4703-b6c3-6cd626db11af" containerID="d1ae76c881c421b02784a6adef0131cd437bfae62ec031f6665865845df2bf74" exitCode=137 Mar 11 09:21:48 crc kubenswrapper[4840]: I0311 09:21:48.835605 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f1c7d7f4-dc60-4703-b6c3-6cd626db11af","Type":"ContainerDied","Data":"d1ae76c881c421b02784a6adef0131cd437bfae62ec031f6665865845df2bf74"} Mar 11 09:21:48 crc kubenswrapper[4840]: I0311 09:21:48.835709 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f1c7d7f4-dc60-4703-b6c3-6cd626db11af","Type":"ContainerDied","Data":"38a9c76d272ff241fea464bbc6ac3e0f3186e63c3ddfd314d564eeb1ec00ae9d"} Mar 11 09:21:48 crc kubenswrapper[4840]: I0311 09:21:48.835930 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Mar 11 09:21:48 crc kubenswrapper[4840]: I0311 09:21:48.840226 4840 generic.go:334] "Generic (PLEG): container finished" podID="fd8b15e6-0bb6-4d79-99aa-765ded51af1d" containerID="7bb1063b1c325078ccba724db26480966f82fe2b0184c38551e103081dc02f90" exitCode=137 Mar 11 09:21:48 crc kubenswrapper[4840]: I0311 09:21:48.840310 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"fd8b15e6-0bb6-4d79-99aa-765ded51af1d","Type":"ContainerDied","Data":"7bb1063b1c325078ccba724db26480966f82fe2b0184c38551e103081dc02f90"} Mar 11 09:21:48 crc kubenswrapper[4840]: I0311 09:21:48.840486 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"fd8b15e6-0bb6-4d79-99aa-765ded51af1d","Type":"ContainerDied","Data":"9e1ca780ae5076ba8ed6b9609e1dbd287e07c1fce69d80f6d061f0edda2c5b2d"} Mar 11 09:21:48 crc kubenswrapper[4840]: I0311 09:21:48.840410 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 11 09:21:48 crc kubenswrapper[4840]: I0311 09:21:48.841116 4840 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:48 crc kubenswrapper[4840]: I0311 09:21:48.846714 4840 scope.go:117] "RemoveContainer" containerID="ab8019a8eaec3a085cd72641d3e46b29afc065b94fc74ebd4f5ad09f0611dd09" Mar 11 09:21:48 crc kubenswrapper[4840]: I0311 09:21:48.870022 4840 scope.go:117] "RemoveContainer" containerID="640a0d5f04bb44c120c2003ee9a5bf08967fe8a0fe3f28005a3c81fb1b406954" Mar 11 09:21:48 crc kubenswrapper[4840]: I0311 09:21:48.891123 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 11 09:21:48 crc kubenswrapper[4840]: I0311 09:21:48.896576 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 11 09:21:48 crc kubenswrapper[4840]: I0311 09:21:48.914185 4840 scope.go:117] "RemoveContainer" containerID="dbb5712b558407a1f6f6d13982b8b5ac8e7a4f375b85b99c3ab24d6067d8124d" Mar 11 09:21:48 crc kubenswrapper[4840]: E0311 09:21:48.914797 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dbb5712b558407a1f6f6d13982b8b5ac8e7a4f375b85b99c3ab24d6067d8124d\": container with ID starting with dbb5712b558407a1f6f6d13982b8b5ac8e7a4f375b85b99c3ab24d6067d8124d not found: ID does not exist" containerID="dbb5712b558407a1f6f6d13982b8b5ac8e7a4f375b85b99c3ab24d6067d8124d" Mar 11 09:21:48 crc kubenswrapper[4840]: I0311 09:21:48.914854 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dbb5712b558407a1f6f6d13982b8b5ac8e7a4f375b85b99c3ab24d6067d8124d"} err="failed to get container status \"dbb5712b558407a1f6f6d13982b8b5ac8e7a4f375b85b99c3ab24d6067d8124d\": rpc error: code = NotFound desc = could not find 
container \"dbb5712b558407a1f6f6d13982b8b5ac8e7a4f375b85b99c3ab24d6067d8124d\": container with ID starting with dbb5712b558407a1f6f6d13982b8b5ac8e7a4f375b85b99c3ab24d6067d8124d not found: ID does not exist" Mar 11 09:21:48 crc kubenswrapper[4840]: I0311 09:21:48.914876 4840 scope.go:117] "RemoveContainer" containerID="ab8019a8eaec3a085cd72641d3e46b29afc065b94fc74ebd4f5ad09f0611dd09" Mar 11 09:21:48 crc kubenswrapper[4840]: E0311 09:21:48.915387 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab8019a8eaec3a085cd72641d3e46b29afc065b94fc74ebd4f5ad09f0611dd09\": container with ID starting with ab8019a8eaec3a085cd72641d3e46b29afc065b94fc74ebd4f5ad09f0611dd09 not found: ID does not exist" containerID="ab8019a8eaec3a085cd72641d3e46b29afc065b94fc74ebd4f5ad09f0611dd09" Mar 11 09:21:48 crc kubenswrapper[4840]: I0311 09:21:48.915444 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab8019a8eaec3a085cd72641d3e46b29afc065b94fc74ebd4f5ad09f0611dd09"} err="failed to get container status \"ab8019a8eaec3a085cd72641d3e46b29afc065b94fc74ebd4f5ad09f0611dd09\": rpc error: code = NotFound desc = could not find container \"ab8019a8eaec3a085cd72641d3e46b29afc065b94fc74ebd4f5ad09f0611dd09\": container with ID starting with ab8019a8eaec3a085cd72641d3e46b29afc065b94fc74ebd4f5ad09f0611dd09 not found: ID does not exist" Mar 11 09:21:48 crc kubenswrapper[4840]: I0311 09:21:48.915495 4840 scope.go:117] "RemoveContainer" containerID="640a0d5f04bb44c120c2003ee9a5bf08967fe8a0fe3f28005a3c81fb1b406954" Mar 11 09:21:48 crc kubenswrapper[4840]: E0311 09:21:48.915879 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"640a0d5f04bb44c120c2003ee9a5bf08967fe8a0fe3f28005a3c81fb1b406954\": container with ID starting with 640a0d5f04bb44c120c2003ee9a5bf08967fe8a0fe3f28005a3c81fb1b406954 not found: ID does 
not exist" containerID="640a0d5f04bb44c120c2003ee9a5bf08967fe8a0fe3f28005a3c81fb1b406954" Mar 11 09:21:48 crc kubenswrapper[4840]: I0311 09:21:48.915908 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"640a0d5f04bb44c120c2003ee9a5bf08967fe8a0fe3f28005a3c81fb1b406954"} err="failed to get container status \"640a0d5f04bb44c120c2003ee9a5bf08967fe8a0fe3f28005a3c81fb1b406954\": rpc error: code = NotFound desc = could not find container \"640a0d5f04bb44c120c2003ee9a5bf08967fe8a0fe3f28005a3c81fb1b406954\": container with ID starting with 640a0d5f04bb44c120c2003ee9a5bf08967fe8a0fe3f28005a3c81fb1b406954 not found: ID does not exist" Mar 11 09:21:48 crc kubenswrapper[4840]: I0311 09:21:48.915925 4840 scope.go:117] "RemoveContainer" containerID="d1ae76c881c421b02784a6adef0131cd437bfae62ec031f6665865845df2bf74" Mar 11 09:21:48 crc kubenswrapper[4840]: I0311 09:21:48.935885 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1c7d7f4-dc60-4703-b6c3-6cd626db11af-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f1c7d7f4-dc60-4703-b6c3-6cd626db11af" (UID: "f1c7d7f4-dc60-4703-b6c3-6cd626db11af"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:21:48 crc kubenswrapper[4840]: I0311 09:21:48.942004 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/b58b63e6-0eb4-444e-be2e-dca6bf37030e-var-lib\") pod \"b58b63e6-0eb4-444e-be2e-dca6bf37030e\" (UID: \"b58b63e6-0eb4-444e-be2e-dca6bf37030e\") " Mar 11 09:21:48 crc kubenswrapper[4840]: I0311 09:21:48.942075 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/b58b63e6-0eb4-444e-be2e-dca6bf37030e-etc-ovs\") pod \"b58b63e6-0eb4-444e-be2e-dca6bf37030e\" (UID: \"b58b63e6-0eb4-444e-be2e-dca6bf37030e\") " Mar 11 09:21:48 crc kubenswrapper[4840]: I0311 09:21:48.942099 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b58b63e6-0eb4-444e-be2e-dca6bf37030e-var-run\") pod \"b58b63e6-0eb4-444e-be2e-dca6bf37030e\" (UID: \"b58b63e6-0eb4-444e-be2e-dca6bf37030e\") " Mar 11 09:21:48 crc kubenswrapper[4840]: I0311 09:21:48.942162 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b58b63e6-0eb4-444e-be2e-dca6bf37030e-var-lib" (OuterVolumeSpecName: "var-lib") pod "b58b63e6-0eb4-444e-be2e-dca6bf37030e" (UID: "b58b63e6-0eb4-444e-be2e-dca6bf37030e"). InnerVolumeSpecName "var-lib". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 11 09:21:48 crc kubenswrapper[4840]: I0311 09:21:48.942227 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9g5q4\" (UniqueName: \"kubernetes.io/projected/b58b63e6-0eb4-444e-be2e-dca6bf37030e-kube-api-access-9g5q4\") pod \"b58b63e6-0eb4-444e-be2e-dca6bf37030e\" (UID: \"b58b63e6-0eb4-444e-be2e-dca6bf37030e\") " Mar 11 09:21:48 crc kubenswrapper[4840]: I0311 09:21:48.942250 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/b58b63e6-0eb4-444e-be2e-dca6bf37030e-var-log\") pod \"b58b63e6-0eb4-444e-be2e-dca6bf37030e\" (UID: \"b58b63e6-0eb4-444e-be2e-dca6bf37030e\") " Mar 11 09:21:48 crc kubenswrapper[4840]: I0311 09:21:48.942281 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b58b63e6-0eb4-444e-be2e-dca6bf37030e-scripts\") pod \"b58b63e6-0eb4-444e-be2e-dca6bf37030e\" (UID: \"b58b63e6-0eb4-444e-be2e-dca6bf37030e\") " Mar 11 09:21:48 crc kubenswrapper[4840]: I0311 09:21:48.942176 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b58b63e6-0eb4-444e-be2e-dca6bf37030e-etc-ovs" (OuterVolumeSpecName: "etc-ovs") pod "b58b63e6-0eb4-444e-be2e-dca6bf37030e" (UID: "b58b63e6-0eb4-444e-be2e-dca6bf37030e"). InnerVolumeSpecName "etc-ovs". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 11 09:21:48 crc kubenswrapper[4840]: I0311 09:21:48.942274 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b58b63e6-0eb4-444e-be2e-dca6bf37030e-var-run" (OuterVolumeSpecName: "var-run") pod "b58b63e6-0eb4-444e-be2e-dca6bf37030e" (UID: "b58b63e6-0eb4-444e-be2e-dca6bf37030e"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 11 09:21:48 crc kubenswrapper[4840]: I0311 09:21:48.942299 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b58b63e6-0eb4-444e-be2e-dca6bf37030e-var-log" (OuterVolumeSpecName: "var-log") pod "b58b63e6-0eb4-444e-be2e-dca6bf37030e" (UID: "b58b63e6-0eb4-444e-be2e-dca6bf37030e"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 11 09:21:48 crc kubenswrapper[4840]: I0311 09:21:48.942571 4840 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1c7d7f4-dc60-4703-b6c3-6cd626db11af-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:48 crc kubenswrapper[4840]: I0311 09:21:48.942584 4840 reconciler_common.go:293] "Volume detached for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/b58b63e6-0eb4-444e-be2e-dca6bf37030e-var-lib\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:48 crc kubenswrapper[4840]: I0311 09:21:48.942596 4840 reconciler_common.go:293] "Volume detached for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/b58b63e6-0eb4-444e-be2e-dca6bf37030e-etc-ovs\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:48 crc kubenswrapper[4840]: I0311 09:21:48.942608 4840 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b58b63e6-0eb4-444e-be2e-dca6bf37030e-var-run\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:48 crc kubenswrapper[4840]: I0311 09:21:48.942616 4840 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/b58b63e6-0eb4-444e-be2e-dca6bf37030e-var-log\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:48 crc kubenswrapper[4840]: I0311 09:21:48.944049 4840 scope.go:117] "RemoveContainer" containerID="905763d5f4df2194269413da5305c2cade78aefa23c40527af321dc5de2fb39b" Mar 11 09:21:48 crc kubenswrapper[4840]: I0311 
09:21:48.944257 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b58b63e6-0eb4-444e-be2e-dca6bf37030e-scripts" (OuterVolumeSpecName: "scripts") pod "b58b63e6-0eb4-444e-be2e-dca6bf37030e" (UID: "b58b63e6-0eb4-444e-be2e-dca6bf37030e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:21:48 crc kubenswrapper[4840]: I0311 09:21:48.945443 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b58b63e6-0eb4-444e-be2e-dca6bf37030e-kube-api-access-9g5q4" (OuterVolumeSpecName: "kube-api-access-9g5q4") pod "b58b63e6-0eb4-444e-be2e-dca6bf37030e" (UID: "b58b63e6-0eb4-444e-be2e-dca6bf37030e"). InnerVolumeSpecName "kube-api-access-9g5q4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:21:48 crc kubenswrapper[4840]: I0311 09:21:48.969929 4840 scope.go:117] "RemoveContainer" containerID="9feba492b523c8dcc3c35b13e0ce09e4fe030b4986a39633f2601ebb6e23baa2" Mar 11 09:21:48 crc kubenswrapper[4840]: I0311 09:21:48.995984 4840 scope.go:117] "RemoveContainer" containerID="dd4d20030dddae38ba6950210cd3d087c5fd1596090ecd0351b94eeaa971448c" Mar 11 09:21:49 crc kubenswrapper[4840]: I0311 09:21:49.044526 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9g5q4\" (UniqueName: \"kubernetes.io/projected/b58b63e6-0eb4-444e-be2e-dca6bf37030e-kube-api-access-9g5q4\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:49 crc kubenswrapper[4840]: I0311 09:21:49.044561 4840 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b58b63e6-0eb4-444e-be2e-dca6bf37030e-scripts\") on node \"crc\" DevicePath \"\"" Mar 11 09:21:49 crc kubenswrapper[4840]: I0311 09:21:49.058426 4840 scope.go:117] "RemoveContainer" containerID="e1983bac8c82d7bd2cb300b8c29f24106251022a1b8393dbcfa2f98ffc26c55b" Mar 11 09:21:49 crc kubenswrapper[4840]: I0311 09:21:49.088440 4840 scope.go:117] 
"RemoveContainer" containerID="51312dd1b22898c3fcfef3be786a86dd0bc450bace061ed340d18f1903ce9f72" Mar 11 09:21:49 crc kubenswrapper[4840]: I0311 09:21:49.112544 4840 scope.go:117] "RemoveContainer" containerID="322dd29ccdd62a25f4af6457497d29a55dbd6bb69515090d89cfac3caafffe97" Mar 11 09:21:49 crc kubenswrapper[4840]: I0311 09:21:49.135790 4840 scope.go:117] "RemoveContainer" containerID="22f7a187c7f01516652e793a521148fe2ac2ab6befee1d8ed2d686c2bc8b8adc" Mar 11 09:21:49 crc kubenswrapper[4840]: I0311 09:21:49.153665 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-qtcdv"] Mar 11 09:21:49 crc kubenswrapper[4840]: I0311 09:21:49.164623 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-ovs-qtcdv"] Mar 11 09:21:49 crc kubenswrapper[4840]: I0311 09:21:49.181621 4840 scope.go:117] "RemoveContainer" containerID="25eb7124241e743ab0f23cf5b6fd16ff25372ee1ef90290032d988d844df4427" Mar 11 09:21:49 crc kubenswrapper[4840]: I0311 09:21:49.184577 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-storage-0"] Mar 11 09:21:49 crc kubenswrapper[4840]: I0311 09:21:49.191612 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-storage-0"] Mar 11 09:21:49 crc kubenswrapper[4840]: I0311 09:21:49.198965 4840 scope.go:117] "RemoveContainer" containerID="9b3746cc36c2137ab5787473b5942125bb1e7796da2df7d082d9e9313eaffba4" Mar 11 09:21:49 crc kubenswrapper[4840]: I0311 09:21:49.218557 4840 scope.go:117] "RemoveContainer" containerID="7ce263946b4448b3a0388e295bd5ae37d6a5148f4b13708343302829cff90b62" Mar 11 09:21:49 crc kubenswrapper[4840]: I0311 09:21:49.238244 4840 scope.go:117] "RemoveContainer" containerID="d57ba4418c21d87822207480d1742dfe663f600ae09365578d4cc8cf912a7fa3" Mar 11 09:21:49 crc kubenswrapper[4840]: I0311 09:21:49.258534 4840 scope.go:117] "RemoveContainer" containerID="804f1935b7ae1994c956dc341f3afb51d2d0d59c2f35239a216e61bc6f1d79d2" Mar 11 09:21:49 crc 
kubenswrapper[4840]: I0311 09:21:49.280502 4840 scope.go:117] "RemoveContainer" containerID="7c83bfc113dd757cc74c4935f44d862430d391d2b34e5c7877464152ad5f1bdd" Mar 11 09:21:49 crc kubenswrapper[4840]: I0311 09:21:49.320235 4840 scope.go:117] "RemoveContainer" containerID="1d4b8fe0d33d601094bd4c0c808ede9155728eb0e8cb7eb8ed2a56feaa32a3ad" Mar 11 09:21:49 crc kubenswrapper[4840]: I0311 09:21:49.343789 4840 scope.go:117] "RemoveContainer" containerID="d1ae76c881c421b02784a6adef0131cd437bfae62ec031f6665865845df2bf74" Mar 11 09:21:49 crc kubenswrapper[4840]: E0311 09:21:49.345018 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d1ae76c881c421b02784a6adef0131cd437bfae62ec031f6665865845df2bf74\": container with ID starting with d1ae76c881c421b02784a6adef0131cd437bfae62ec031f6665865845df2bf74 not found: ID does not exist" containerID="d1ae76c881c421b02784a6adef0131cd437bfae62ec031f6665865845df2bf74" Mar 11 09:21:49 crc kubenswrapper[4840]: I0311 09:21:49.345058 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1ae76c881c421b02784a6adef0131cd437bfae62ec031f6665865845df2bf74"} err="failed to get container status \"d1ae76c881c421b02784a6adef0131cd437bfae62ec031f6665865845df2bf74\": rpc error: code = NotFound desc = could not find container \"d1ae76c881c421b02784a6adef0131cd437bfae62ec031f6665865845df2bf74\": container with ID starting with d1ae76c881c421b02784a6adef0131cd437bfae62ec031f6665865845df2bf74 not found: ID does not exist" Mar 11 09:21:49 crc kubenswrapper[4840]: I0311 09:21:49.345090 4840 scope.go:117] "RemoveContainer" containerID="905763d5f4df2194269413da5305c2cade78aefa23c40527af321dc5de2fb39b" Mar 11 09:21:49 crc kubenswrapper[4840]: E0311 09:21:49.345316 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"905763d5f4df2194269413da5305c2cade78aefa23c40527af321dc5de2fb39b\": container with ID starting with 905763d5f4df2194269413da5305c2cade78aefa23c40527af321dc5de2fb39b not found: ID does not exist" containerID="905763d5f4df2194269413da5305c2cade78aefa23c40527af321dc5de2fb39b" Mar 11 09:21:49 crc kubenswrapper[4840]: I0311 09:21:49.345339 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"905763d5f4df2194269413da5305c2cade78aefa23c40527af321dc5de2fb39b"} err="failed to get container status \"905763d5f4df2194269413da5305c2cade78aefa23c40527af321dc5de2fb39b\": rpc error: code = NotFound desc = could not find container \"905763d5f4df2194269413da5305c2cade78aefa23c40527af321dc5de2fb39b\": container with ID starting with 905763d5f4df2194269413da5305c2cade78aefa23c40527af321dc5de2fb39b not found: ID does not exist" Mar 11 09:21:49 crc kubenswrapper[4840]: I0311 09:21:49.345354 4840 scope.go:117] "RemoveContainer" containerID="9feba492b523c8dcc3c35b13e0ce09e4fe030b4986a39633f2601ebb6e23baa2" Mar 11 09:21:49 crc kubenswrapper[4840]: E0311 09:21:49.345989 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9feba492b523c8dcc3c35b13e0ce09e4fe030b4986a39633f2601ebb6e23baa2\": container with ID starting with 9feba492b523c8dcc3c35b13e0ce09e4fe030b4986a39633f2601ebb6e23baa2 not found: ID does not exist" containerID="9feba492b523c8dcc3c35b13e0ce09e4fe030b4986a39633f2601ebb6e23baa2" Mar 11 09:21:49 crc kubenswrapper[4840]: I0311 09:21:49.346016 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9feba492b523c8dcc3c35b13e0ce09e4fe030b4986a39633f2601ebb6e23baa2"} err="failed to get container status \"9feba492b523c8dcc3c35b13e0ce09e4fe030b4986a39633f2601ebb6e23baa2\": rpc error: code = NotFound desc = could not find container \"9feba492b523c8dcc3c35b13e0ce09e4fe030b4986a39633f2601ebb6e23baa2\": container with ID 
starting with 9feba492b523c8dcc3c35b13e0ce09e4fe030b4986a39633f2601ebb6e23baa2 not found: ID does not exist" Mar 11 09:21:49 crc kubenswrapper[4840]: I0311 09:21:49.346034 4840 scope.go:117] "RemoveContainer" containerID="dd4d20030dddae38ba6950210cd3d087c5fd1596090ecd0351b94eeaa971448c" Mar 11 09:21:49 crc kubenswrapper[4840]: E0311 09:21:49.347763 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd4d20030dddae38ba6950210cd3d087c5fd1596090ecd0351b94eeaa971448c\": container with ID starting with dd4d20030dddae38ba6950210cd3d087c5fd1596090ecd0351b94eeaa971448c not found: ID does not exist" containerID="dd4d20030dddae38ba6950210cd3d087c5fd1596090ecd0351b94eeaa971448c" Mar 11 09:21:49 crc kubenswrapper[4840]: I0311 09:21:49.347832 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd4d20030dddae38ba6950210cd3d087c5fd1596090ecd0351b94eeaa971448c"} err="failed to get container status \"dd4d20030dddae38ba6950210cd3d087c5fd1596090ecd0351b94eeaa971448c\": rpc error: code = NotFound desc = could not find container \"dd4d20030dddae38ba6950210cd3d087c5fd1596090ecd0351b94eeaa971448c\": container with ID starting with dd4d20030dddae38ba6950210cd3d087c5fd1596090ecd0351b94eeaa971448c not found: ID does not exist" Mar 11 09:21:49 crc kubenswrapper[4840]: I0311 09:21:49.347872 4840 scope.go:117] "RemoveContainer" containerID="e1983bac8c82d7bd2cb300b8c29f24106251022a1b8393dbcfa2f98ffc26c55b" Mar 11 09:21:49 crc kubenswrapper[4840]: E0311 09:21:49.348367 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1983bac8c82d7bd2cb300b8c29f24106251022a1b8393dbcfa2f98ffc26c55b\": container with ID starting with e1983bac8c82d7bd2cb300b8c29f24106251022a1b8393dbcfa2f98ffc26c55b not found: ID does not exist" containerID="e1983bac8c82d7bd2cb300b8c29f24106251022a1b8393dbcfa2f98ffc26c55b" Mar 11 
09:21:49 crc kubenswrapper[4840]: I0311 09:21:49.348391 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1983bac8c82d7bd2cb300b8c29f24106251022a1b8393dbcfa2f98ffc26c55b"} err="failed to get container status \"e1983bac8c82d7bd2cb300b8c29f24106251022a1b8393dbcfa2f98ffc26c55b\": rpc error: code = NotFound desc = could not find container \"e1983bac8c82d7bd2cb300b8c29f24106251022a1b8393dbcfa2f98ffc26c55b\": container with ID starting with e1983bac8c82d7bd2cb300b8c29f24106251022a1b8393dbcfa2f98ffc26c55b not found: ID does not exist" Mar 11 09:21:49 crc kubenswrapper[4840]: I0311 09:21:49.348408 4840 scope.go:117] "RemoveContainer" containerID="51312dd1b22898c3fcfef3be786a86dd0bc450bace061ed340d18f1903ce9f72" Mar 11 09:21:49 crc kubenswrapper[4840]: E0311 09:21:49.348891 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51312dd1b22898c3fcfef3be786a86dd0bc450bace061ed340d18f1903ce9f72\": container with ID starting with 51312dd1b22898c3fcfef3be786a86dd0bc450bace061ed340d18f1903ce9f72 not found: ID does not exist" containerID="51312dd1b22898c3fcfef3be786a86dd0bc450bace061ed340d18f1903ce9f72" Mar 11 09:21:49 crc kubenswrapper[4840]: I0311 09:21:49.348936 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51312dd1b22898c3fcfef3be786a86dd0bc450bace061ed340d18f1903ce9f72"} err="failed to get container status \"51312dd1b22898c3fcfef3be786a86dd0bc450bace061ed340d18f1903ce9f72\": rpc error: code = NotFound desc = could not find container \"51312dd1b22898c3fcfef3be786a86dd0bc450bace061ed340d18f1903ce9f72\": container with ID starting with 51312dd1b22898c3fcfef3be786a86dd0bc450bace061ed340d18f1903ce9f72 not found: ID does not exist" Mar 11 09:21:49 crc kubenswrapper[4840]: I0311 09:21:49.348954 4840 scope.go:117] "RemoveContainer" 
containerID="322dd29ccdd62a25f4af6457497d29a55dbd6bb69515090d89cfac3caafffe97" Mar 11 09:21:49 crc kubenswrapper[4840]: E0311 09:21:49.349315 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"322dd29ccdd62a25f4af6457497d29a55dbd6bb69515090d89cfac3caafffe97\": container with ID starting with 322dd29ccdd62a25f4af6457497d29a55dbd6bb69515090d89cfac3caafffe97 not found: ID does not exist" containerID="322dd29ccdd62a25f4af6457497d29a55dbd6bb69515090d89cfac3caafffe97" Mar 11 09:21:49 crc kubenswrapper[4840]: I0311 09:21:49.349335 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"322dd29ccdd62a25f4af6457497d29a55dbd6bb69515090d89cfac3caafffe97"} err="failed to get container status \"322dd29ccdd62a25f4af6457497d29a55dbd6bb69515090d89cfac3caafffe97\": rpc error: code = NotFound desc = could not find container \"322dd29ccdd62a25f4af6457497d29a55dbd6bb69515090d89cfac3caafffe97\": container with ID starting with 322dd29ccdd62a25f4af6457497d29a55dbd6bb69515090d89cfac3caafffe97 not found: ID does not exist" Mar 11 09:21:49 crc kubenswrapper[4840]: I0311 09:21:49.349351 4840 scope.go:117] "RemoveContainer" containerID="22f7a187c7f01516652e793a521148fe2ac2ab6befee1d8ed2d686c2bc8b8adc" Mar 11 09:21:49 crc kubenswrapper[4840]: E0311 09:21:49.349734 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22f7a187c7f01516652e793a521148fe2ac2ab6befee1d8ed2d686c2bc8b8adc\": container with ID starting with 22f7a187c7f01516652e793a521148fe2ac2ab6befee1d8ed2d686c2bc8b8adc not found: ID does not exist" containerID="22f7a187c7f01516652e793a521148fe2ac2ab6befee1d8ed2d686c2bc8b8adc" Mar 11 09:21:49 crc kubenswrapper[4840]: I0311 09:21:49.349756 4840 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"22f7a187c7f01516652e793a521148fe2ac2ab6befee1d8ed2d686c2bc8b8adc"} err="failed to get container status \"22f7a187c7f01516652e793a521148fe2ac2ab6befee1d8ed2d686c2bc8b8adc\": rpc error: code = NotFound desc = could not find container \"22f7a187c7f01516652e793a521148fe2ac2ab6befee1d8ed2d686c2bc8b8adc\": container with ID starting with 22f7a187c7f01516652e793a521148fe2ac2ab6befee1d8ed2d686c2bc8b8adc not found: ID does not exist" Mar 11 09:21:49 crc kubenswrapper[4840]: I0311 09:21:49.349770 4840 scope.go:117] "RemoveContainer" containerID="25eb7124241e743ab0f23cf5b6fd16ff25372ee1ef90290032d988d844df4427" Mar 11 09:21:49 crc kubenswrapper[4840]: E0311 09:21:49.349993 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"25eb7124241e743ab0f23cf5b6fd16ff25372ee1ef90290032d988d844df4427\": container with ID starting with 25eb7124241e743ab0f23cf5b6fd16ff25372ee1ef90290032d988d844df4427 not found: ID does not exist" containerID="25eb7124241e743ab0f23cf5b6fd16ff25372ee1ef90290032d988d844df4427" Mar 11 09:21:49 crc kubenswrapper[4840]: I0311 09:21:49.350012 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25eb7124241e743ab0f23cf5b6fd16ff25372ee1ef90290032d988d844df4427"} err="failed to get container status \"25eb7124241e743ab0f23cf5b6fd16ff25372ee1ef90290032d988d844df4427\": rpc error: code = NotFound desc = could not find container \"25eb7124241e743ab0f23cf5b6fd16ff25372ee1ef90290032d988d844df4427\": container with ID starting with 25eb7124241e743ab0f23cf5b6fd16ff25372ee1ef90290032d988d844df4427 not found: ID does not exist" Mar 11 09:21:49 crc kubenswrapper[4840]: I0311 09:21:49.350024 4840 scope.go:117] "RemoveContainer" containerID="9b3746cc36c2137ab5787473b5942125bb1e7796da2df7d082d9e9313eaffba4" Mar 11 09:21:49 crc kubenswrapper[4840]: E0311 09:21:49.350440 4840 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"9b3746cc36c2137ab5787473b5942125bb1e7796da2df7d082d9e9313eaffba4\": container with ID starting with 9b3746cc36c2137ab5787473b5942125bb1e7796da2df7d082d9e9313eaffba4 not found: ID does not exist" containerID="9b3746cc36c2137ab5787473b5942125bb1e7796da2df7d082d9e9313eaffba4" Mar 11 09:21:49 crc kubenswrapper[4840]: I0311 09:21:49.350460 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b3746cc36c2137ab5787473b5942125bb1e7796da2df7d082d9e9313eaffba4"} err="failed to get container status \"9b3746cc36c2137ab5787473b5942125bb1e7796da2df7d082d9e9313eaffba4\": rpc error: code = NotFound desc = could not find container \"9b3746cc36c2137ab5787473b5942125bb1e7796da2df7d082d9e9313eaffba4\": container with ID starting with 9b3746cc36c2137ab5787473b5942125bb1e7796da2df7d082d9e9313eaffba4 not found: ID does not exist" Mar 11 09:21:49 crc kubenswrapper[4840]: I0311 09:21:49.350488 4840 scope.go:117] "RemoveContainer" containerID="7ce263946b4448b3a0388e295bd5ae37d6a5148f4b13708343302829cff90b62" Mar 11 09:21:49 crc kubenswrapper[4840]: E0311 09:21:49.350764 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ce263946b4448b3a0388e295bd5ae37d6a5148f4b13708343302829cff90b62\": container with ID starting with 7ce263946b4448b3a0388e295bd5ae37d6a5148f4b13708343302829cff90b62 not found: ID does not exist" containerID="7ce263946b4448b3a0388e295bd5ae37d6a5148f4b13708343302829cff90b62" Mar 11 09:21:49 crc kubenswrapper[4840]: I0311 09:21:49.350793 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ce263946b4448b3a0388e295bd5ae37d6a5148f4b13708343302829cff90b62"} err="failed to get container status \"7ce263946b4448b3a0388e295bd5ae37d6a5148f4b13708343302829cff90b62\": rpc error: code = NotFound desc = could not find container 
\"7ce263946b4448b3a0388e295bd5ae37d6a5148f4b13708343302829cff90b62\": container with ID starting with 7ce263946b4448b3a0388e295bd5ae37d6a5148f4b13708343302829cff90b62 not found: ID does not exist" Mar 11 09:21:49 crc kubenswrapper[4840]: I0311 09:21:49.350808 4840 scope.go:117] "RemoveContainer" containerID="d57ba4418c21d87822207480d1742dfe663f600ae09365578d4cc8cf912a7fa3" Mar 11 09:21:49 crc kubenswrapper[4840]: E0311 09:21:49.351271 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d57ba4418c21d87822207480d1742dfe663f600ae09365578d4cc8cf912a7fa3\": container with ID starting with d57ba4418c21d87822207480d1742dfe663f600ae09365578d4cc8cf912a7fa3 not found: ID does not exist" containerID="d57ba4418c21d87822207480d1742dfe663f600ae09365578d4cc8cf912a7fa3" Mar 11 09:21:49 crc kubenswrapper[4840]: I0311 09:21:49.351291 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d57ba4418c21d87822207480d1742dfe663f600ae09365578d4cc8cf912a7fa3"} err="failed to get container status \"d57ba4418c21d87822207480d1742dfe663f600ae09365578d4cc8cf912a7fa3\": rpc error: code = NotFound desc = could not find container \"d57ba4418c21d87822207480d1742dfe663f600ae09365578d4cc8cf912a7fa3\": container with ID starting with d57ba4418c21d87822207480d1742dfe663f600ae09365578d4cc8cf912a7fa3 not found: ID does not exist" Mar 11 09:21:49 crc kubenswrapper[4840]: I0311 09:21:49.351305 4840 scope.go:117] "RemoveContainer" containerID="804f1935b7ae1994c956dc341f3afb51d2d0d59c2f35239a216e61bc6f1d79d2" Mar 11 09:21:49 crc kubenswrapper[4840]: E0311 09:21:49.351658 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"804f1935b7ae1994c956dc341f3afb51d2d0d59c2f35239a216e61bc6f1d79d2\": container with ID starting with 804f1935b7ae1994c956dc341f3afb51d2d0d59c2f35239a216e61bc6f1d79d2 not found: ID does not exist" 
containerID="804f1935b7ae1994c956dc341f3afb51d2d0d59c2f35239a216e61bc6f1d79d2" Mar 11 09:21:49 crc kubenswrapper[4840]: I0311 09:21:49.351694 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"804f1935b7ae1994c956dc341f3afb51d2d0d59c2f35239a216e61bc6f1d79d2"} err="failed to get container status \"804f1935b7ae1994c956dc341f3afb51d2d0d59c2f35239a216e61bc6f1d79d2\": rpc error: code = NotFound desc = could not find container \"804f1935b7ae1994c956dc341f3afb51d2d0d59c2f35239a216e61bc6f1d79d2\": container with ID starting with 804f1935b7ae1994c956dc341f3afb51d2d0d59c2f35239a216e61bc6f1d79d2 not found: ID does not exist" Mar 11 09:21:49 crc kubenswrapper[4840]: I0311 09:21:49.351747 4840 scope.go:117] "RemoveContainer" containerID="7c83bfc113dd757cc74c4935f44d862430d391d2b34e5c7877464152ad5f1bdd" Mar 11 09:21:49 crc kubenswrapper[4840]: E0311 09:21:49.352044 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c83bfc113dd757cc74c4935f44d862430d391d2b34e5c7877464152ad5f1bdd\": container with ID starting with 7c83bfc113dd757cc74c4935f44d862430d391d2b34e5c7877464152ad5f1bdd not found: ID does not exist" containerID="7c83bfc113dd757cc74c4935f44d862430d391d2b34e5c7877464152ad5f1bdd" Mar 11 09:21:49 crc kubenswrapper[4840]: I0311 09:21:49.352064 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c83bfc113dd757cc74c4935f44d862430d391d2b34e5c7877464152ad5f1bdd"} err="failed to get container status \"7c83bfc113dd757cc74c4935f44d862430d391d2b34e5c7877464152ad5f1bdd\": rpc error: code = NotFound desc = could not find container \"7c83bfc113dd757cc74c4935f44d862430d391d2b34e5c7877464152ad5f1bdd\": container with ID starting with 7c83bfc113dd757cc74c4935f44d862430d391d2b34e5c7877464152ad5f1bdd not found: ID does not exist" Mar 11 09:21:49 crc kubenswrapper[4840]: I0311 09:21:49.352076 4840 scope.go:117] 
"RemoveContainer" containerID="1d4b8fe0d33d601094bd4c0c808ede9155728eb0e8cb7eb8ed2a56feaa32a3ad" Mar 11 09:21:49 crc kubenswrapper[4840]: E0311 09:21:49.352324 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d4b8fe0d33d601094bd4c0c808ede9155728eb0e8cb7eb8ed2a56feaa32a3ad\": container with ID starting with 1d4b8fe0d33d601094bd4c0c808ede9155728eb0e8cb7eb8ed2a56feaa32a3ad not found: ID does not exist" containerID="1d4b8fe0d33d601094bd4c0c808ede9155728eb0e8cb7eb8ed2a56feaa32a3ad" Mar 11 09:21:49 crc kubenswrapper[4840]: I0311 09:21:49.352344 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d4b8fe0d33d601094bd4c0c808ede9155728eb0e8cb7eb8ed2a56feaa32a3ad"} err="failed to get container status \"1d4b8fe0d33d601094bd4c0c808ede9155728eb0e8cb7eb8ed2a56feaa32a3ad\": rpc error: code = NotFound desc = could not find container \"1d4b8fe0d33d601094bd4c0c808ede9155728eb0e8cb7eb8ed2a56feaa32a3ad\": container with ID starting with 1d4b8fe0d33d601094bd4c0c808ede9155728eb0e8cb7eb8ed2a56feaa32a3ad not found: ID does not exist" Mar 11 09:21:49 crc kubenswrapper[4840]: I0311 09:21:49.352357 4840 scope.go:117] "RemoveContainer" containerID="8d27b219acdbf1ed0f6cb152b86c9b44e8ed6f976378d28441b06575714a48ed" Mar 11 09:21:49 crc kubenswrapper[4840]: I0311 09:21:49.371448 4840 scope.go:117] "RemoveContainer" containerID="7bb1063b1c325078ccba724db26480966f82fe2b0184c38551e103081dc02f90" Mar 11 09:21:49 crc kubenswrapper[4840]: I0311 09:21:49.390407 4840 scope.go:117] "RemoveContainer" containerID="8d27b219acdbf1ed0f6cb152b86c9b44e8ed6f976378d28441b06575714a48ed" Mar 11 09:21:49 crc kubenswrapper[4840]: E0311 09:21:49.391170 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d27b219acdbf1ed0f6cb152b86c9b44e8ed6f976378d28441b06575714a48ed\": container with ID starting with 
8d27b219acdbf1ed0f6cb152b86c9b44e8ed6f976378d28441b06575714a48ed not found: ID does not exist" containerID="8d27b219acdbf1ed0f6cb152b86c9b44e8ed6f976378d28441b06575714a48ed" Mar 11 09:21:49 crc kubenswrapper[4840]: I0311 09:21:49.391233 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d27b219acdbf1ed0f6cb152b86c9b44e8ed6f976378d28441b06575714a48ed"} err="failed to get container status \"8d27b219acdbf1ed0f6cb152b86c9b44e8ed6f976378d28441b06575714a48ed\": rpc error: code = NotFound desc = could not find container \"8d27b219acdbf1ed0f6cb152b86c9b44e8ed6f976378d28441b06575714a48ed\": container with ID starting with 8d27b219acdbf1ed0f6cb152b86c9b44e8ed6f976378d28441b06575714a48ed not found: ID does not exist" Mar 11 09:21:49 crc kubenswrapper[4840]: I0311 09:21:49.391285 4840 scope.go:117] "RemoveContainer" containerID="7bb1063b1c325078ccba724db26480966f82fe2b0184c38551e103081dc02f90" Mar 11 09:21:49 crc kubenswrapper[4840]: E0311 09:21:49.391840 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7bb1063b1c325078ccba724db26480966f82fe2b0184c38551e103081dc02f90\": container with ID starting with 7bb1063b1c325078ccba724db26480966f82fe2b0184c38551e103081dc02f90 not found: ID does not exist" containerID="7bb1063b1c325078ccba724db26480966f82fe2b0184c38551e103081dc02f90" Mar 11 09:21:49 crc kubenswrapper[4840]: I0311 09:21:49.391875 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7bb1063b1c325078ccba724db26480966f82fe2b0184c38551e103081dc02f90"} err="failed to get container status \"7bb1063b1c325078ccba724db26480966f82fe2b0184c38551e103081dc02f90\": rpc error: code = NotFound desc = could not find container \"7bb1063b1c325078ccba724db26480966f82fe2b0184c38551e103081dc02f90\": container with ID starting with 7bb1063b1c325078ccba724db26480966f82fe2b0184c38551e103081dc02f90 not found: ID does not 
exist" Mar 11 09:21:50 crc kubenswrapper[4840]: I0311 09:21:50.070032 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b58b63e6-0eb4-444e-be2e-dca6bf37030e" path="/var/lib/kubelet/pods/b58b63e6-0eb4-444e-be2e-dca6bf37030e/volumes" Mar 11 09:21:50 crc kubenswrapper[4840]: I0311 09:21:50.070832 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1c7d7f4-dc60-4703-b6c3-6cd626db11af" path="/var/lib/kubelet/pods/f1c7d7f4-dc60-4703-b6c3-6cd626db11af/volumes" Mar 11 09:21:50 crc kubenswrapper[4840]: I0311 09:21:50.073061 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd8b15e6-0bb6-4d79-99aa-765ded51af1d" path="/var/lib/kubelet/pods/fd8b15e6-0bb6-4d79-99aa-765ded51af1d/volumes" Mar 11 09:21:57 crc kubenswrapper[4840]: I0311 09:21:57.446563 4840 patch_prober.go:28] interesting pod/machine-config-daemon-brtht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 11 09:21:57 crc kubenswrapper[4840]: I0311 09:21:57.447372 4840 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 11 09:22:00 crc kubenswrapper[4840]: I0311 09:22:00.152611 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29553682-fct6w"] Mar 11 09:22:00 crc kubenswrapper[4840]: E0311 09:22:00.153437 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6f4129d-4bc4-449b-be94-82fce07cf1f0" containerName="mysql-bootstrap" Mar 11 09:22:00 crc kubenswrapper[4840]: I0311 09:22:00.153454 4840 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="c6f4129d-4bc4-449b-be94-82fce07cf1f0" containerName="mysql-bootstrap" Mar 11 09:22:00 crc kubenswrapper[4840]: E0311 09:22:00.153634 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b58b63e6-0eb4-444e-be2e-dca6bf37030e" containerName="ovs-vswitchd" Mar 11 09:22:00 crc kubenswrapper[4840]: I0311 09:22:00.153650 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="b58b63e6-0eb4-444e-be2e-dca6bf37030e" containerName="ovs-vswitchd" Mar 11 09:22:00 crc kubenswrapper[4840]: E0311 09:22:00.153661 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd8b15e6-0bb6-4d79-99aa-765ded51af1d" containerName="cinder-scheduler" Mar 11 09:22:00 crc kubenswrapper[4840]: I0311 09:22:00.153670 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd8b15e6-0bb6-4d79-99aa-765ded51af1d" containerName="cinder-scheduler" Mar 11 09:22:00 crc kubenswrapper[4840]: E0311 09:22:00.153683 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b58b63e6-0eb4-444e-be2e-dca6bf37030e" containerName="ovsdb-server" Mar 11 09:22:00 crc kubenswrapper[4840]: I0311 09:22:00.153690 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="b58b63e6-0eb4-444e-be2e-dca6bf37030e" containerName="ovsdb-server" Mar 11 09:22:00 crc kubenswrapper[4840]: E0311 09:22:00.153702 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1c7d7f4-dc60-4703-b6c3-6cd626db11af" containerName="container-updater" Mar 11 09:22:00 crc kubenswrapper[4840]: I0311 09:22:00.153713 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1c7d7f4-dc60-4703-b6c3-6cd626db11af" containerName="container-updater" Mar 11 09:22:00 crc kubenswrapper[4840]: E0311 09:22:00.153728 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="056df7c0-d577-4908-91a8-b5dfb95e0316" containerName="ovn-controller" Mar 11 09:22:00 crc kubenswrapper[4840]: I0311 09:22:00.153735 4840 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="056df7c0-d577-4908-91a8-b5dfb95e0316" containerName="ovn-controller" Mar 11 09:22:00 crc kubenswrapper[4840]: E0311 09:22:00.153751 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1c7d7f4-dc60-4703-b6c3-6cd626db11af" containerName="object-server" Mar 11 09:22:00 crc kubenswrapper[4840]: I0311 09:22:00.153758 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1c7d7f4-dc60-4703-b6c3-6cd626db11af" containerName="object-server" Mar 11 09:22:00 crc kubenswrapper[4840]: E0311 09:22:00.153771 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86c17dbf-d890-4de3-bf5d-29e0aea4d968" containerName="cinder-api" Mar 11 09:22:00 crc kubenswrapper[4840]: I0311 09:22:00.153779 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="86c17dbf-d890-4de3-bf5d-29e0aea4d968" containerName="cinder-api" Mar 11 09:22:00 crc kubenswrapper[4840]: E0311 09:22:00.153788 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfc57f3e-df7f-40fe-9cc0-7ad00ecd651e" containerName="kube-state-metrics" Mar 11 09:22:00 crc kubenswrapper[4840]: I0311 09:22:00.153796 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfc57f3e-df7f-40fe-9cc0-7ad00ecd651e" containerName="kube-state-metrics" Mar 11 09:22:00 crc kubenswrapper[4840]: E0311 09:22:00.153804 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6f4129d-4bc4-449b-be94-82fce07cf1f0" containerName="galera" Mar 11 09:22:00 crc kubenswrapper[4840]: I0311 09:22:00.153811 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6f4129d-4bc4-449b-be94-82fce07cf1f0" containerName="galera" Mar 11 09:22:00 crc kubenswrapper[4840]: E0311 09:22:00.153821 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1c7d7f4-dc60-4703-b6c3-6cd626db11af" containerName="object-replicator" Mar 11 09:22:00 crc kubenswrapper[4840]: I0311 09:22:00.153828 4840 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f1c7d7f4-dc60-4703-b6c3-6cd626db11af" containerName="object-replicator" Mar 11 09:22:00 crc kubenswrapper[4840]: E0311 09:22:00.153836 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15a50bea-c32e-4aed-8fd2-7289e1694f6e" containerName="barbican-api" Mar 11 09:22:00 crc kubenswrapper[4840]: I0311 09:22:00.153844 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="15a50bea-c32e-4aed-8fd2-7289e1694f6e" containerName="barbican-api" Mar 11 09:22:00 crc kubenswrapper[4840]: E0311 09:22:00.153852 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1c7d7f4-dc60-4703-b6c3-6cd626db11af" containerName="object-updater" Mar 11 09:22:00 crc kubenswrapper[4840]: I0311 09:22:00.153859 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1c7d7f4-dc60-4703-b6c3-6cd626db11af" containerName="object-updater" Mar 11 09:22:00 crc kubenswrapper[4840]: E0311 09:22:00.153868 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1c7d7f4-dc60-4703-b6c3-6cd626db11af" containerName="swift-recon-cron" Mar 11 09:22:00 crc kubenswrapper[4840]: I0311 09:22:00.153875 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1c7d7f4-dc60-4703-b6c3-6cd626db11af" containerName="swift-recon-cron" Mar 11 09:22:00 crc kubenswrapper[4840]: E0311 09:22:00.153888 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86c17dbf-d890-4de3-bf5d-29e0aea4d968" containerName="cinder-api-log" Mar 11 09:22:00 crc kubenswrapper[4840]: I0311 09:22:00.153895 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="86c17dbf-d890-4de3-bf5d-29e0aea4d968" containerName="cinder-api-log" Mar 11 09:22:00 crc kubenswrapper[4840]: E0311 09:22:00.153905 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71aaf352-8b91-4846-8ce4-1d83303ac203" containerName="glance-log" Mar 11 09:22:00 crc kubenswrapper[4840]: I0311 09:22:00.153912 4840 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="71aaf352-8b91-4846-8ce4-1d83303ac203" containerName="glance-log" Mar 11 09:22:00 crc kubenswrapper[4840]: E0311 09:22:00.153924 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b58b63e6-0eb4-444e-be2e-dca6bf37030e" containerName="ovsdb-server-init" Mar 11 09:22:00 crc kubenswrapper[4840]: I0311 09:22:00.153933 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="b58b63e6-0eb4-444e-be2e-dca6bf37030e" containerName="ovsdb-server-init" Mar 11 09:22:00 crc kubenswrapper[4840]: E0311 09:22:00.153947 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ada053fd-c71a-4425-8220-b950f0cab229" containerName="nova-scheduler-scheduler" Mar 11 09:22:00 crc kubenswrapper[4840]: I0311 09:22:00.153954 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="ada053fd-c71a-4425-8220-b950f0cab229" containerName="nova-scheduler-scheduler" Mar 11 09:22:00 crc kubenswrapper[4840]: E0311 09:22:00.153967 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f31748d2-64a9-4839-ac55-691d9682ee8e" containerName="rabbitmq" Mar 11 09:22:00 crc kubenswrapper[4840]: I0311 09:22:00.153974 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="f31748d2-64a9-4839-ac55-691d9682ee8e" containerName="rabbitmq" Mar 11 09:22:00 crc kubenswrapper[4840]: E0311 09:22:00.153984 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f86e8b7a-656b-423e-8cf0-6d1025486c46" containerName="keystone-api" Mar 11 09:22:00 crc kubenswrapper[4840]: I0311 09:22:00.153991 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="f86e8b7a-656b-423e-8cf0-6d1025486c46" containerName="keystone-api" Mar 11 09:22:00 crc kubenswrapper[4840]: E0311 09:22:00.154003 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f31748d2-64a9-4839-ac55-691d9682ee8e" containerName="setup-container" Mar 11 09:22:00 crc kubenswrapper[4840]: I0311 09:22:00.154010 4840 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f31748d2-64a9-4839-ac55-691d9682ee8e" containerName="setup-container" Mar 11 09:22:00 crc kubenswrapper[4840]: E0311 09:22:00.154022 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="629115e9-6bcf-45e8-a0da-d7c06386b7b7" containerName="nova-api-api" Mar 11 09:22:00 crc kubenswrapper[4840]: I0311 09:22:00.154028 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="629115e9-6bcf-45e8-a0da-d7c06386b7b7" containerName="nova-api-api" Mar 11 09:22:00 crc kubenswrapper[4840]: E0311 09:22:00.154043 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b0d4c8e-fe4e-4ca7-8cfe-1d4fb97c3ac6" containerName="ovn-northd" Mar 11 09:22:00 crc kubenswrapper[4840]: I0311 09:22:00.154051 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b0d4c8e-fe4e-4ca7-8cfe-1d4fb97c3ac6" containerName="ovn-northd" Mar 11 09:22:00 crc kubenswrapper[4840]: E0311 09:22:00.154063 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1c7d7f4-dc60-4703-b6c3-6cd626db11af" containerName="account-reaper" Mar 11 09:22:00 crc kubenswrapper[4840]: I0311 09:22:00.154070 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1c7d7f4-dc60-4703-b6c3-6cd626db11af" containerName="account-reaper" Mar 11 09:22:00 crc kubenswrapper[4840]: E0311 09:22:00.154078 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3ccfc13-7a62-4923-95ab-c68cb93aa03c" containerName="glance-httpd" Mar 11 09:22:00 crc kubenswrapper[4840]: I0311 09:22:00.154085 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3ccfc13-7a62-4923-95ab-c68cb93aa03c" containerName="glance-httpd" Mar 11 09:22:00 crc kubenswrapper[4840]: E0311 09:22:00.154095 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1c7d7f4-dc60-4703-b6c3-6cd626db11af" containerName="account-replicator" Mar 11 09:22:00 crc kubenswrapper[4840]: I0311 09:22:00.154103 4840 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f1c7d7f4-dc60-4703-b6c3-6cd626db11af" containerName="account-replicator" Mar 11 09:22:00 crc kubenswrapper[4840]: E0311 09:22:00.154112 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a964c24-3c53-4a29-98fb-ceaca467c372" containerName="rabbitmq" Mar 11 09:22:00 crc kubenswrapper[4840]: I0311 09:22:00.154119 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a964c24-3c53-4a29-98fb-ceaca467c372" containerName="rabbitmq" Mar 11 09:22:00 crc kubenswrapper[4840]: E0311 09:22:00.154127 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="629115e9-6bcf-45e8-a0da-d7c06386b7b7" containerName="nova-api-log" Mar 11 09:22:00 crc kubenswrapper[4840]: I0311 09:22:00.154134 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="629115e9-6bcf-45e8-a0da-d7c06386b7b7" containerName="nova-api-log" Mar 11 09:22:00 crc kubenswrapper[4840]: E0311 09:22:00.154145 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5fc86d0-4547-42ed-a880-f58c9e29d8a4" containerName="ceilometer-central-agent" Mar 11 09:22:00 crc kubenswrapper[4840]: I0311 09:22:00.154153 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5fc86d0-4547-42ed-a880-f58c9e29d8a4" containerName="ceilometer-central-agent" Mar 11 09:22:00 crc kubenswrapper[4840]: E0311 09:22:00.154165 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9699b913-55db-46fe-9831-1e1ac94ca609" containerName="nova-cell1-conductor-conductor" Mar 11 09:22:00 crc kubenswrapper[4840]: I0311 09:22:00.154172 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="9699b913-55db-46fe-9831-1e1ac94ca609" containerName="nova-cell1-conductor-conductor" Mar 11 09:22:00 crc kubenswrapper[4840]: E0311 09:22:00.154183 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1c7d7f4-dc60-4703-b6c3-6cd626db11af" containerName="object-expirer" Mar 11 09:22:00 crc kubenswrapper[4840]: I0311 09:22:00.154189 4840 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="f1c7d7f4-dc60-4703-b6c3-6cd626db11af" containerName="object-expirer" Mar 11 09:22:00 crc kubenswrapper[4840]: E0311 09:22:00.154204 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="915955ff-c1d8-4f99-a621-f28d463c512f" containerName="nova-metadata-log" Mar 11 09:22:00 crc kubenswrapper[4840]: I0311 09:22:00.154212 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="915955ff-c1d8-4f99-a621-f28d463c512f" containerName="nova-metadata-log" Mar 11 09:22:00 crc kubenswrapper[4840]: E0311 09:22:00.154220 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1c7d7f4-dc60-4703-b6c3-6cd626db11af" containerName="container-replicator" Mar 11 09:22:00 crc kubenswrapper[4840]: I0311 09:22:00.154227 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1c7d7f4-dc60-4703-b6c3-6cd626db11af" containerName="container-replicator" Mar 11 09:22:00 crc kubenswrapper[4840]: E0311 09:22:00.154241 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1c7d7f4-dc60-4703-b6c3-6cd626db11af" containerName="container-server" Mar 11 09:22:00 crc kubenswrapper[4840]: I0311 09:22:00.154248 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1c7d7f4-dc60-4703-b6c3-6cd626db11af" containerName="container-server" Mar 11 09:22:00 crc kubenswrapper[4840]: E0311 09:22:00.154258 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b0d4c8e-fe4e-4ca7-8cfe-1d4fb97c3ac6" containerName="openstack-network-exporter" Mar 11 09:22:00 crc kubenswrapper[4840]: I0311 09:22:00.154264 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b0d4c8e-fe4e-4ca7-8cfe-1d4fb97c3ac6" containerName="openstack-network-exporter" Mar 11 09:22:00 crc kubenswrapper[4840]: E0311 09:22:00.154275 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1c7d7f4-dc60-4703-b6c3-6cd626db11af" containerName="account-auditor" Mar 11 09:22:00 crc kubenswrapper[4840]: I0311 09:22:00.154281 4840 
state_mem.go:107] "Deleted CPUSet assignment" podUID="f1c7d7f4-dc60-4703-b6c3-6cd626db11af" containerName="account-auditor" Mar 11 09:22:00 crc kubenswrapper[4840]: E0311 09:22:00.154292 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3ccfc13-7a62-4923-95ab-c68cb93aa03c" containerName="glance-log" Mar 11 09:22:00 crc kubenswrapper[4840]: I0311 09:22:00.154301 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3ccfc13-7a62-4923-95ab-c68cb93aa03c" containerName="glance-log" Mar 11 09:22:00 crc kubenswrapper[4840]: E0311 09:22:00.154314 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5fc86d0-4547-42ed-a880-f58c9e29d8a4" containerName="ceilometer-notification-agent" Mar 11 09:22:00 crc kubenswrapper[4840]: I0311 09:22:00.154321 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5fc86d0-4547-42ed-a880-f58c9e29d8a4" containerName="ceilometer-notification-agent" Mar 11 09:22:00 crc kubenswrapper[4840]: E0311 09:22:00.154331 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15a50bea-c32e-4aed-8fd2-7289e1694f6e" containerName="barbican-api-log" Mar 11 09:22:00 crc kubenswrapper[4840]: I0311 09:22:00.154339 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="15a50bea-c32e-4aed-8fd2-7289e1694f6e" containerName="barbican-api-log" Mar 11 09:22:00 crc kubenswrapper[4840]: E0311 09:22:00.154349 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1c7d7f4-dc60-4703-b6c3-6cd626db11af" containerName="rsync" Mar 11 09:22:00 crc kubenswrapper[4840]: I0311 09:22:00.154355 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1c7d7f4-dc60-4703-b6c3-6cd626db11af" containerName="rsync" Mar 11 09:22:00 crc kubenswrapper[4840]: E0311 09:22:00.154367 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1c7d7f4-dc60-4703-b6c3-6cd626db11af" containerName="account-server" Mar 11 09:22:00 crc kubenswrapper[4840]: I0311 09:22:00.154374 4840 
state_mem.go:107] "Deleted CPUSet assignment" podUID="f1c7d7f4-dc60-4703-b6c3-6cd626db11af" containerName="account-server" Mar 11 09:22:00 crc kubenswrapper[4840]: E0311 09:22:00.154381 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1c7d7f4-dc60-4703-b6c3-6cd626db11af" containerName="container-auditor" Mar 11 09:22:00 crc kubenswrapper[4840]: I0311 09:22:00.154391 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1c7d7f4-dc60-4703-b6c3-6cd626db11af" containerName="container-auditor" Mar 11 09:22:00 crc kubenswrapper[4840]: E0311 09:22:00.154406 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="550bab70-eacb-4c56-98fd-460c20f22dcc" containerName="neutron-httpd" Mar 11 09:22:00 crc kubenswrapper[4840]: I0311 09:22:00.154414 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="550bab70-eacb-4c56-98fd-460c20f22dcc" containerName="neutron-httpd" Mar 11 09:22:00 crc kubenswrapper[4840]: E0311 09:22:00.154425 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a964c24-3c53-4a29-98fb-ceaca467c372" containerName="setup-container" Mar 11 09:22:00 crc kubenswrapper[4840]: I0311 09:22:00.154435 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a964c24-3c53-4a29-98fb-ceaca467c372" containerName="setup-container" Mar 11 09:22:00 crc kubenswrapper[4840]: E0311 09:22:00.154448 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c44e0641-be37-4447-9666-14bf00c08827" containerName="memcached" Mar 11 09:22:00 crc kubenswrapper[4840]: I0311 09:22:00.154455 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="c44e0641-be37-4447-9666-14bf00c08827" containerName="memcached" Mar 11 09:22:00 crc kubenswrapper[4840]: E0311 09:22:00.154487 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="550bab70-eacb-4c56-98fd-460c20f22dcc" containerName="neutron-api" Mar 11 09:22:00 crc kubenswrapper[4840]: I0311 09:22:00.154496 4840 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="550bab70-eacb-4c56-98fd-460c20f22dcc" containerName="neutron-api" Mar 11 09:22:00 crc kubenswrapper[4840]: E0311 09:22:00.154653 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71aaf352-8b91-4846-8ce4-1d83303ac203" containerName="glance-httpd" Mar 11 09:22:00 crc kubenswrapper[4840]: I0311 09:22:00.154662 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="71aaf352-8b91-4846-8ce4-1d83303ac203" containerName="glance-httpd" Mar 11 09:22:00 crc kubenswrapper[4840]: E0311 09:22:00.154674 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5fc86d0-4547-42ed-a880-f58c9e29d8a4" containerName="sg-core" Mar 11 09:22:00 crc kubenswrapper[4840]: I0311 09:22:00.154681 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5fc86d0-4547-42ed-a880-f58c9e29d8a4" containerName="sg-core" Mar 11 09:22:00 crc kubenswrapper[4840]: E0311 09:22:00.154694 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="915955ff-c1d8-4f99-a621-f28d463c512f" containerName="nova-metadata-metadata" Mar 11 09:22:00 crc kubenswrapper[4840]: I0311 09:22:00.154701 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="915955ff-c1d8-4f99-a621-f28d463c512f" containerName="nova-metadata-metadata" Mar 11 09:22:00 crc kubenswrapper[4840]: E0311 09:22:00.154714 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95c08694-92ed-44cb-8ca3-92a47b5571d4" containerName="nova-cell0-conductor-conductor" Mar 11 09:22:00 crc kubenswrapper[4840]: I0311 09:22:00.154722 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="95c08694-92ed-44cb-8ca3-92a47b5571d4" containerName="nova-cell0-conductor-conductor" Mar 11 09:22:00 crc kubenswrapper[4840]: E0311 09:22:00.154732 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd8b15e6-0bb6-4d79-99aa-765ded51af1d" containerName="probe" Mar 11 09:22:00 crc kubenswrapper[4840]: I0311 09:22:00.154739 4840 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="fd8b15e6-0bb6-4d79-99aa-765ded51af1d" containerName="probe" Mar 11 09:22:00 crc kubenswrapper[4840]: E0311 09:22:00.154748 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1c7d7f4-dc60-4703-b6c3-6cd626db11af" containerName="object-auditor" Mar 11 09:22:00 crc kubenswrapper[4840]: I0311 09:22:00.154755 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1c7d7f4-dc60-4703-b6c3-6cd626db11af" containerName="object-auditor" Mar 11 09:22:00 crc kubenswrapper[4840]: E0311 09:22:00.154766 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5fc86d0-4547-42ed-a880-f58c9e29d8a4" containerName="proxy-httpd" Mar 11 09:22:00 crc kubenswrapper[4840]: I0311 09:22:00.154775 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5fc86d0-4547-42ed-a880-f58c9e29d8a4" containerName="proxy-httpd" Mar 11 09:22:00 crc kubenswrapper[4840]: I0311 09:22:00.155031 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="b58b63e6-0eb4-444e-be2e-dca6bf37030e" containerName="ovsdb-server" Mar 11 09:22:00 crc kubenswrapper[4840]: I0311 09:22:00.155050 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="f31748d2-64a9-4839-ac55-691d9682ee8e" containerName="rabbitmq" Mar 11 09:22:00 crc kubenswrapper[4840]: I0311 09:22:00.155064 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1c7d7f4-dc60-4703-b6c3-6cd626db11af" containerName="account-reaper" Mar 11 09:22:00 crc kubenswrapper[4840]: I0311 09:22:00.155072 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1c7d7f4-dc60-4703-b6c3-6cd626db11af" containerName="object-replicator" Mar 11 09:22:00 crc kubenswrapper[4840]: I0311 09:22:00.155084 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a964c24-3c53-4a29-98fb-ceaca467c372" containerName="rabbitmq" Mar 11 09:22:00 crc kubenswrapper[4840]: I0311 09:22:00.155093 4840 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="f1c7d7f4-dc60-4703-b6c3-6cd626db11af" containerName="account-server" Mar 11 09:22:00 crc kubenswrapper[4840]: I0311 09:22:00.155102 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3ccfc13-7a62-4923-95ab-c68cb93aa03c" containerName="glance-log" Mar 11 09:22:00 crc kubenswrapper[4840]: I0311 09:22:00.155110 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="629115e9-6bcf-45e8-a0da-d7c06386b7b7" containerName="nova-api-log" Mar 11 09:22:00 crc kubenswrapper[4840]: I0311 09:22:00.155118 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1c7d7f4-dc60-4703-b6c3-6cd626db11af" containerName="container-auditor" Mar 11 09:22:00 crc kubenswrapper[4840]: I0311 09:22:00.155130 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="b58b63e6-0eb4-444e-be2e-dca6bf37030e" containerName="ovs-vswitchd" Mar 11 09:22:00 crc kubenswrapper[4840]: I0311 09:22:00.155141 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd8b15e6-0bb6-4d79-99aa-765ded51af1d" containerName="probe" Mar 11 09:22:00 crc kubenswrapper[4840]: I0311 09:22:00.155150 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="915955ff-c1d8-4f99-a621-f28d463c512f" containerName="nova-metadata-metadata" Mar 11 09:22:00 crc kubenswrapper[4840]: I0311 09:22:00.155158 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="15a50bea-c32e-4aed-8fd2-7289e1694f6e" containerName="barbican-api" Mar 11 09:22:00 crc kubenswrapper[4840]: I0311 09:22:00.155169 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="550bab70-eacb-4c56-98fd-460c20f22dcc" containerName="neutron-httpd" Mar 11 09:22:00 crc kubenswrapper[4840]: I0311 09:22:00.155178 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1c7d7f4-dc60-4703-b6c3-6cd626db11af" containerName="object-server" Mar 11 09:22:00 crc kubenswrapper[4840]: I0311 09:22:00.155186 4840 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="86c17dbf-d890-4de3-bf5d-29e0aea4d968" containerName="cinder-api" Mar 11 09:22:00 crc kubenswrapper[4840]: I0311 09:22:00.155194 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b0d4c8e-fe4e-4ca7-8cfe-1d4fb97c3ac6" containerName="openstack-network-exporter" Mar 11 09:22:00 crc kubenswrapper[4840]: I0311 09:22:00.155202 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1c7d7f4-dc60-4703-b6c3-6cd626db11af" containerName="object-updater" Mar 11 09:22:00 crc kubenswrapper[4840]: I0311 09:22:00.155212 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1c7d7f4-dc60-4703-b6c3-6cd626db11af" containerName="container-server" Mar 11 09:22:00 crc kubenswrapper[4840]: I0311 09:22:00.155222 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="056df7c0-d577-4908-91a8-b5dfb95e0316" containerName="ovn-controller" Mar 11 09:22:00 crc kubenswrapper[4840]: I0311 09:22:00.155231 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1c7d7f4-dc60-4703-b6c3-6cd626db11af" containerName="rsync" Mar 11 09:22:00 crc kubenswrapper[4840]: I0311 09:22:00.155238 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5fc86d0-4547-42ed-a880-f58c9e29d8a4" containerName="ceilometer-central-agent" Mar 11 09:22:00 crc kubenswrapper[4840]: I0311 09:22:00.155250 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1c7d7f4-dc60-4703-b6c3-6cd626db11af" containerName="object-expirer" Mar 11 09:22:00 crc kubenswrapper[4840]: I0311 09:22:00.155261 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="71aaf352-8b91-4846-8ce4-1d83303ac203" containerName="glance-httpd" Mar 11 09:22:00 crc kubenswrapper[4840]: I0311 09:22:00.155269 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b0d4c8e-fe4e-4ca7-8cfe-1d4fb97c3ac6" containerName="ovn-northd" Mar 11 09:22:00 crc kubenswrapper[4840]: I0311 09:22:00.155279 4840 
memory_manager.go:354] "RemoveStaleState removing state" podUID="629115e9-6bcf-45e8-a0da-d7c06386b7b7" containerName="nova-api-api" Mar 11 09:22:00 crc kubenswrapper[4840]: I0311 09:22:00.155290 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6f4129d-4bc4-449b-be94-82fce07cf1f0" containerName="galera" Mar 11 09:22:00 crc kubenswrapper[4840]: I0311 09:22:00.155300 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="86c17dbf-d890-4de3-bf5d-29e0aea4d968" containerName="cinder-api-log" Mar 11 09:22:00 crc kubenswrapper[4840]: I0311 09:22:00.155310 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd8b15e6-0bb6-4d79-99aa-765ded51af1d" containerName="cinder-scheduler" Mar 11 09:22:00 crc kubenswrapper[4840]: I0311 09:22:00.155321 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1c7d7f4-dc60-4703-b6c3-6cd626db11af" containerName="account-auditor" Mar 11 09:22:00 crc kubenswrapper[4840]: I0311 09:22:00.155331 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="15a50bea-c32e-4aed-8fd2-7289e1694f6e" containerName="barbican-api-log" Mar 11 09:22:00 crc kubenswrapper[4840]: I0311 09:22:00.155339 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1c7d7f4-dc60-4703-b6c3-6cd626db11af" containerName="object-auditor" Mar 11 09:22:00 crc kubenswrapper[4840]: I0311 09:22:00.155350 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="550bab70-eacb-4c56-98fd-460c20f22dcc" containerName="neutron-api" Mar 11 09:22:00 crc kubenswrapper[4840]: I0311 09:22:00.155357 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="915955ff-c1d8-4f99-a621-f28d463c512f" containerName="nova-metadata-log" Mar 11 09:22:00 crc kubenswrapper[4840]: I0311 09:22:00.155364 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1c7d7f4-dc60-4703-b6c3-6cd626db11af" containerName="container-updater" Mar 11 09:22:00 crc kubenswrapper[4840]: I0311 
09:22:00.155376 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5fc86d0-4547-42ed-a880-f58c9e29d8a4" containerName="proxy-httpd" Mar 11 09:22:00 crc kubenswrapper[4840]: I0311 09:22:00.155386 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="c44e0641-be37-4447-9666-14bf00c08827" containerName="memcached" Mar 11 09:22:00 crc kubenswrapper[4840]: I0311 09:22:00.155395 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1c7d7f4-dc60-4703-b6c3-6cd626db11af" containerName="account-replicator" Mar 11 09:22:00 crc kubenswrapper[4840]: I0311 09:22:00.155403 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="71aaf352-8b91-4846-8ce4-1d83303ac203" containerName="glance-log" Mar 11 09:22:00 crc kubenswrapper[4840]: I0311 09:22:00.155414 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="9699b913-55db-46fe-9831-1e1ac94ca609" containerName="nova-cell1-conductor-conductor" Mar 11 09:22:00 crc kubenswrapper[4840]: I0311 09:22:00.155422 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="ada053fd-c71a-4425-8220-b950f0cab229" containerName="nova-scheduler-scheduler" Mar 11 09:22:00 crc kubenswrapper[4840]: I0311 09:22:00.155430 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="95c08694-92ed-44cb-8ca3-92a47b5571d4" containerName="nova-cell0-conductor-conductor" Mar 11 09:22:00 crc kubenswrapper[4840]: I0311 09:22:00.155441 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfc57f3e-df7f-40fe-9cc0-7ad00ecd651e" containerName="kube-state-metrics" Mar 11 09:22:00 crc kubenswrapper[4840]: I0311 09:22:00.155455 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1c7d7f4-dc60-4703-b6c3-6cd626db11af" containerName="swift-recon-cron" Mar 11 09:22:00 crc kubenswrapper[4840]: I0311 09:22:00.155480 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3ccfc13-7a62-4923-95ab-c68cb93aa03c" 
containerName="glance-httpd" Mar 11 09:22:00 crc kubenswrapper[4840]: I0311 09:22:00.155491 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5fc86d0-4547-42ed-a880-f58c9e29d8a4" containerName="ceilometer-notification-agent" Mar 11 09:22:00 crc kubenswrapper[4840]: I0311 09:22:00.155504 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="f86e8b7a-656b-423e-8cf0-6d1025486c46" containerName="keystone-api" Mar 11 09:22:00 crc kubenswrapper[4840]: I0311 09:22:00.155515 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1c7d7f4-dc60-4703-b6c3-6cd626db11af" containerName="container-replicator" Mar 11 09:22:00 crc kubenswrapper[4840]: I0311 09:22:00.155525 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5fc86d0-4547-42ed-a880-f58c9e29d8a4" containerName="sg-core" Mar 11 09:22:00 crc kubenswrapper[4840]: I0311 09:22:00.156246 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553682-fct6w" Mar 11 09:22:00 crc kubenswrapper[4840]: I0311 09:22:00.159064 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553682-fct6w"] Mar 11 09:22:00 crc kubenswrapper[4840]: I0311 09:22:00.160059 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 11 09:22:00 crc kubenswrapper[4840]: I0311 09:22:00.160578 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 11 09:22:00 crc kubenswrapper[4840]: I0311 09:22:00.163433 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-q6lwc" Mar 11 09:22:00 crc kubenswrapper[4840]: I0311 09:22:00.214353 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dgjh\" (UniqueName: 
\"kubernetes.io/projected/56b21cc0-0f51-4347-a0db-845f66ec531e-kube-api-access-6dgjh\") pod \"auto-csr-approver-29553682-fct6w\" (UID: \"56b21cc0-0f51-4347-a0db-845f66ec531e\") " pod="openshift-infra/auto-csr-approver-29553682-fct6w" Mar 11 09:22:00 crc kubenswrapper[4840]: I0311 09:22:00.316715 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6dgjh\" (UniqueName: \"kubernetes.io/projected/56b21cc0-0f51-4347-a0db-845f66ec531e-kube-api-access-6dgjh\") pod \"auto-csr-approver-29553682-fct6w\" (UID: \"56b21cc0-0f51-4347-a0db-845f66ec531e\") " pod="openshift-infra/auto-csr-approver-29553682-fct6w" Mar 11 09:22:00 crc kubenswrapper[4840]: I0311 09:22:00.341224 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dgjh\" (UniqueName: \"kubernetes.io/projected/56b21cc0-0f51-4347-a0db-845f66ec531e-kube-api-access-6dgjh\") pod \"auto-csr-approver-29553682-fct6w\" (UID: \"56b21cc0-0f51-4347-a0db-845f66ec531e\") " pod="openshift-infra/auto-csr-approver-29553682-fct6w" Mar 11 09:22:00 crc kubenswrapper[4840]: I0311 09:22:00.480453 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553682-fct6w" Mar 11 09:22:00 crc kubenswrapper[4840]: I0311 09:22:00.930114 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553682-fct6w"] Mar 11 09:22:00 crc kubenswrapper[4840]: W0311 09:22:00.936529 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod56b21cc0_0f51_4347_a0db_845f66ec531e.slice/crio-27ad8a6d4943331b5afce1a82f3dc3aacfd1bed232cfcc4f604b0d7170c80fb6 WatchSource:0}: Error finding container 27ad8a6d4943331b5afce1a82f3dc3aacfd1bed232cfcc4f604b0d7170c80fb6: Status 404 returned error can't find the container with id 27ad8a6d4943331b5afce1a82f3dc3aacfd1bed232cfcc4f604b0d7170c80fb6 Mar 11 09:22:00 crc kubenswrapper[4840]: I0311 09:22:00.939290 4840 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 11 09:22:00 crc kubenswrapper[4840]: I0311 09:22:00.966804 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553682-fct6w" event={"ID":"56b21cc0-0f51-4347-a0db-845f66ec531e","Type":"ContainerStarted","Data":"27ad8a6d4943331b5afce1a82f3dc3aacfd1bed232cfcc4f604b0d7170c80fb6"} Mar 11 09:22:02 crc kubenswrapper[4840]: I0311 09:22:02.984635 4840 generic.go:334] "Generic (PLEG): container finished" podID="56b21cc0-0f51-4347-a0db-845f66ec531e" containerID="6cd7f23e0f6c16605b0e7d514cc4c9935f7c468c7c874b187448685d002cf147" exitCode=0 Mar 11 09:22:02 crc kubenswrapper[4840]: I0311 09:22:02.984687 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553682-fct6w" event={"ID":"56b21cc0-0f51-4347-a0db-845f66ec531e","Type":"ContainerDied","Data":"6cd7f23e0f6c16605b0e7d514cc4c9935f7c468c7c874b187448685d002cf147"} Mar 11 09:22:04 crc kubenswrapper[4840]: I0311 09:22:04.252772 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553682-fct6w" Mar 11 09:22:04 crc kubenswrapper[4840]: I0311 09:22:04.377460 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dgjh\" (UniqueName: \"kubernetes.io/projected/56b21cc0-0f51-4347-a0db-845f66ec531e-kube-api-access-6dgjh\") pod \"56b21cc0-0f51-4347-a0db-845f66ec531e\" (UID: \"56b21cc0-0f51-4347-a0db-845f66ec531e\") " Mar 11 09:22:04 crc kubenswrapper[4840]: I0311 09:22:04.383183 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56b21cc0-0f51-4347-a0db-845f66ec531e-kube-api-access-6dgjh" (OuterVolumeSpecName: "kube-api-access-6dgjh") pod "56b21cc0-0f51-4347-a0db-845f66ec531e" (UID: "56b21cc0-0f51-4347-a0db-845f66ec531e"). InnerVolumeSpecName "kube-api-access-6dgjh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:22:04 crc kubenswrapper[4840]: I0311 09:22:04.480069 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6dgjh\" (UniqueName: \"kubernetes.io/projected/56b21cc0-0f51-4347-a0db-845f66ec531e-kube-api-access-6dgjh\") on node \"crc\" DevicePath \"\"" Mar 11 09:22:05 crc kubenswrapper[4840]: I0311 09:22:05.005247 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553682-fct6w" event={"ID":"56b21cc0-0f51-4347-a0db-845f66ec531e","Type":"ContainerDied","Data":"27ad8a6d4943331b5afce1a82f3dc3aacfd1bed232cfcc4f604b0d7170c80fb6"} Mar 11 09:22:05 crc kubenswrapper[4840]: I0311 09:22:05.005529 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="27ad8a6d4943331b5afce1a82f3dc3aacfd1bed232cfcc4f604b0d7170c80fb6" Mar 11 09:22:05 crc kubenswrapper[4840]: I0311 09:22:05.005367 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553682-fct6w" Mar 11 09:22:05 crc kubenswrapper[4840]: I0311 09:22:05.332392 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29553676-kjd2f"] Mar 11 09:22:05 crc kubenswrapper[4840]: I0311 09:22:05.339156 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29553676-kjd2f"] Mar 11 09:22:06 crc kubenswrapper[4840]: I0311 09:22:06.070154 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a085068d-adb7-44fc-9d8a-7b413ceeee17" path="/var/lib/kubelet/pods/a085068d-adb7-44fc-9d8a-7b413ceeee17/volumes" Mar 11 09:22:15 crc kubenswrapper[4840]: I0311 09:22:15.474309 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-lct2t"] Mar 11 09:22:15 crc kubenswrapper[4840]: E0311 09:22:15.475263 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56b21cc0-0f51-4347-a0db-845f66ec531e" containerName="oc" Mar 11 09:22:15 crc kubenswrapper[4840]: I0311 09:22:15.475283 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="56b21cc0-0f51-4347-a0db-845f66ec531e" containerName="oc" Mar 11 09:22:15 crc kubenswrapper[4840]: I0311 09:22:15.475515 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="56b21cc0-0f51-4347-a0db-845f66ec531e" containerName="oc" Mar 11 09:22:15 crc kubenswrapper[4840]: I0311 09:22:15.476751 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lct2t" Mar 11 09:22:15 crc kubenswrapper[4840]: I0311 09:22:15.490103 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lct2t"] Mar 11 09:22:15 crc kubenswrapper[4840]: I0311 09:22:15.646575 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wm6v\" (UniqueName: \"kubernetes.io/projected/ff902957-2dbc-4653-960a-99a9f5d58202-kube-api-access-7wm6v\") pod \"community-operators-lct2t\" (UID: \"ff902957-2dbc-4653-960a-99a9f5d58202\") " pod="openshift-marketplace/community-operators-lct2t" Mar 11 09:22:15 crc kubenswrapper[4840]: I0311 09:22:15.646703 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff902957-2dbc-4653-960a-99a9f5d58202-catalog-content\") pod \"community-operators-lct2t\" (UID: \"ff902957-2dbc-4653-960a-99a9f5d58202\") " pod="openshift-marketplace/community-operators-lct2t" Mar 11 09:22:15 crc kubenswrapper[4840]: I0311 09:22:15.646761 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff902957-2dbc-4653-960a-99a9f5d58202-utilities\") pod \"community-operators-lct2t\" (UID: \"ff902957-2dbc-4653-960a-99a9f5d58202\") " pod="openshift-marketplace/community-operators-lct2t" Mar 11 09:22:15 crc kubenswrapper[4840]: I0311 09:22:15.749198 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7wm6v\" (UniqueName: \"kubernetes.io/projected/ff902957-2dbc-4653-960a-99a9f5d58202-kube-api-access-7wm6v\") pod \"community-operators-lct2t\" (UID: \"ff902957-2dbc-4653-960a-99a9f5d58202\") " pod="openshift-marketplace/community-operators-lct2t" Mar 11 09:22:15 crc kubenswrapper[4840]: I0311 09:22:15.749285 4840 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff902957-2dbc-4653-960a-99a9f5d58202-catalog-content\") pod \"community-operators-lct2t\" (UID: \"ff902957-2dbc-4653-960a-99a9f5d58202\") " pod="openshift-marketplace/community-operators-lct2t" Mar 11 09:22:15 crc kubenswrapper[4840]: I0311 09:22:15.749346 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff902957-2dbc-4653-960a-99a9f5d58202-utilities\") pod \"community-operators-lct2t\" (UID: \"ff902957-2dbc-4653-960a-99a9f5d58202\") " pod="openshift-marketplace/community-operators-lct2t" Mar 11 09:22:15 crc kubenswrapper[4840]: I0311 09:22:15.749993 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff902957-2dbc-4653-960a-99a9f5d58202-catalog-content\") pod \"community-operators-lct2t\" (UID: \"ff902957-2dbc-4653-960a-99a9f5d58202\") " pod="openshift-marketplace/community-operators-lct2t" Mar 11 09:22:15 crc kubenswrapper[4840]: I0311 09:22:15.750021 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff902957-2dbc-4653-960a-99a9f5d58202-utilities\") pod \"community-operators-lct2t\" (UID: \"ff902957-2dbc-4653-960a-99a9f5d58202\") " pod="openshift-marketplace/community-operators-lct2t" Mar 11 09:22:15 crc kubenswrapper[4840]: I0311 09:22:15.770195 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wm6v\" (UniqueName: \"kubernetes.io/projected/ff902957-2dbc-4653-960a-99a9f5d58202-kube-api-access-7wm6v\") pod \"community-operators-lct2t\" (UID: \"ff902957-2dbc-4653-960a-99a9f5d58202\") " pod="openshift-marketplace/community-operators-lct2t" Mar 11 09:22:15 crc kubenswrapper[4840]: I0311 09:22:15.798359 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lct2t" Mar 11 09:22:16 crc kubenswrapper[4840]: I0311 09:22:16.129644 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lct2t"] Mar 11 09:22:17 crc kubenswrapper[4840]: I0311 09:22:17.113630 4840 generic.go:334] "Generic (PLEG): container finished" podID="ff902957-2dbc-4653-960a-99a9f5d58202" containerID="9499781e3bec2b963d29675e7cc922c442e2ff525bcb00ac274c0aa10dce0902" exitCode=0 Mar 11 09:22:17 crc kubenswrapper[4840]: I0311 09:22:17.113802 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lct2t" event={"ID":"ff902957-2dbc-4653-960a-99a9f5d58202","Type":"ContainerDied","Data":"9499781e3bec2b963d29675e7cc922c442e2ff525bcb00ac274c0aa10dce0902"} Mar 11 09:22:17 crc kubenswrapper[4840]: I0311 09:22:17.113995 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lct2t" event={"ID":"ff902957-2dbc-4653-960a-99a9f5d58202","Type":"ContainerStarted","Data":"cd8e0b71c65f36dbe7817869d239e3baa4e453f98bd297f0edbfdd32e7d4d8ba"} Mar 11 09:22:18 crc kubenswrapper[4840]: I0311 09:22:18.122954 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lct2t" event={"ID":"ff902957-2dbc-4653-960a-99a9f5d58202","Type":"ContainerStarted","Data":"1f1278031e634a35322537ef49f525fa04c08811492093b16ccb6b91b5e3a0a0"} Mar 11 09:22:19 crc kubenswrapper[4840]: I0311 09:22:19.133995 4840 generic.go:334] "Generic (PLEG): container finished" podID="ff902957-2dbc-4653-960a-99a9f5d58202" containerID="1f1278031e634a35322537ef49f525fa04c08811492093b16ccb6b91b5e3a0a0" exitCode=0 Mar 11 09:22:19 crc kubenswrapper[4840]: I0311 09:22:19.134058 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lct2t" 
event={"ID":"ff902957-2dbc-4653-960a-99a9f5d58202","Type":"ContainerDied","Data":"1f1278031e634a35322537ef49f525fa04c08811492093b16ccb6b91b5e3a0a0"} Mar 11 09:22:20 crc kubenswrapper[4840]: I0311 09:22:20.143736 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lct2t" event={"ID":"ff902957-2dbc-4653-960a-99a9f5d58202","Type":"ContainerStarted","Data":"ead0caa5468744b396d760e06d06a4d64ea41e1057728844fdd489b8ea9d2f04"} Mar 11 09:22:20 crc kubenswrapper[4840]: I0311 09:22:20.164505 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-lct2t" podStartSLOduration=2.70951305 podStartE2EDuration="5.164483598s" podCreationTimestamp="2026-03-11 09:22:15 +0000 UTC" firstStartedPulling="2026-03-11 09:22:17.11558525 +0000 UTC m=+1535.781255065" lastFinishedPulling="2026-03-11 09:22:19.570555798 +0000 UTC m=+1538.236225613" observedRunningTime="2026-03-11 09:22:20.161241957 +0000 UTC m=+1538.826911772" watchObservedRunningTime="2026-03-11 09:22:20.164483598 +0000 UTC m=+1538.830153413" Mar 11 09:22:25 crc kubenswrapper[4840]: I0311 09:22:25.798861 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-lct2t" Mar 11 09:22:25 crc kubenswrapper[4840]: I0311 09:22:25.800195 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-lct2t" Mar 11 09:22:25 crc kubenswrapper[4840]: I0311 09:22:25.839951 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-lct2t" Mar 11 09:22:26 crc kubenswrapper[4840]: I0311 09:22:26.234595 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-lct2t" Mar 11 09:22:26 crc kubenswrapper[4840]: I0311 09:22:26.276988 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-lct2t"] Mar 11 09:22:27 crc kubenswrapper[4840]: I0311 09:22:27.446962 4840 patch_prober.go:28] interesting pod/machine-config-daemon-brtht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 11 09:22:27 crc kubenswrapper[4840]: I0311 09:22:27.447736 4840 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 11 09:22:28 crc kubenswrapper[4840]: I0311 09:22:28.210597 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-lct2t" podUID="ff902957-2dbc-4653-960a-99a9f5d58202" containerName="registry-server" containerID="cri-o://ead0caa5468744b396d760e06d06a4d64ea41e1057728844fdd489b8ea9d2f04" gracePeriod=2 Mar 11 09:22:28 crc kubenswrapper[4840]: I0311 09:22:28.631951 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lct2t" Mar 11 09:22:28 crc kubenswrapper[4840]: I0311 09:22:28.744750 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff902957-2dbc-4653-960a-99a9f5d58202-utilities\") pod \"ff902957-2dbc-4653-960a-99a9f5d58202\" (UID: \"ff902957-2dbc-4653-960a-99a9f5d58202\") " Mar 11 09:22:28 crc kubenswrapper[4840]: I0311 09:22:28.744806 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff902957-2dbc-4653-960a-99a9f5d58202-catalog-content\") pod \"ff902957-2dbc-4653-960a-99a9f5d58202\" (UID: \"ff902957-2dbc-4653-960a-99a9f5d58202\") " Mar 11 09:22:28 crc kubenswrapper[4840]: I0311 09:22:28.744860 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7wm6v\" (UniqueName: \"kubernetes.io/projected/ff902957-2dbc-4653-960a-99a9f5d58202-kube-api-access-7wm6v\") pod \"ff902957-2dbc-4653-960a-99a9f5d58202\" (UID: \"ff902957-2dbc-4653-960a-99a9f5d58202\") " Mar 11 09:22:28 crc kubenswrapper[4840]: I0311 09:22:28.745898 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff902957-2dbc-4653-960a-99a9f5d58202-utilities" (OuterVolumeSpecName: "utilities") pod "ff902957-2dbc-4653-960a-99a9f5d58202" (UID: "ff902957-2dbc-4653-960a-99a9f5d58202"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:22:28 crc kubenswrapper[4840]: I0311 09:22:28.751435 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff902957-2dbc-4653-960a-99a9f5d58202-kube-api-access-7wm6v" (OuterVolumeSpecName: "kube-api-access-7wm6v") pod "ff902957-2dbc-4653-960a-99a9f5d58202" (UID: "ff902957-2dbc-4653-960a-99a9f5d58202"). InnerVolumeSpecName "kube-api-access-7wm6v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:22:28 crc kubenswrapper[4840]: I0311 09:22:28.809281 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff902957-2dbc-4653-960a-99a9f5d58202-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ff902957-2dbc-4653-960a-99a9f5d58202" (UID: "ff902957-2dbc-4653-960a-99a9f5d58202"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:22:28 crc kubenswrapper[4840]: I0311 09:22:28.846691 4840 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff902957-2dbc-4653-960a-99a9f5d58202-utilities\") on node \"crc\" DevicePath \"\"" Mar 11 09:22:28 crc kubenswrapper[4840]: I0311 09:22:28.846739 4840 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff902957-2dbc-4653-960a-99a9f5d58202-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 11 09:22:28 crc kubenswrapper[4840]: I0311 09:22:28.846752 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7wm6v\" (UniqueName: \"kubernetes.io/projected/ff902957-2dbc-4653-960a-99a9f5d58202-kube-api-access-7wm6v\") on node \"crc\" DevicePath \"\"" Mar 11 09:22:29 crc kubenswrapper[4840]: I0311 09:22:29.220887 4840 generic.go:334] "Generic (PLEG): container finished" podID="ff902957-2dbc-4653-960a-99a9f5d58202" containerID="ead0caa5468744b396d760e06d06a4d64ea41e1057728844fdd489b8ea9d2f04" exitCode=0 Mar 11 09:22:29 crc kubenswrapper[4840]: I0311 09:22:29.220959 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lct2t" event={"ID":"ff902957-2dbc-4653-960a-99a9f5d58202","Type":"ContainerDied","Data":"ead0caa5468744b396d760e06d06a4d64ea41e1057728844fdd489b8ea9d2f04"} Mar 11 09:22:29 crc kubenswrapper[4840]: I0311 09:22:29.220993 4840 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/community-operators-lct2t" Mar 11 09:22:29 crc kubenswrapper[4840]: I0311 09:22:29.221021 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lct2t" event={"ID":"ff902957-2dbc-4653-960a-99a9f5d58202","Type":"ContainerDied","Data":"cd8e0b71c65f36dbe7817869d239e3baa4e453f98bd297f0edbfdd32e7d4d8ba"} Mar 11 09:22:29 crc kubenswrapper[4840]: I0311 09:22:29.221040 4840 scope.go:117] "RemoveContainer" containerID="ead0caa5468744b396d760e06d06a4d64ea41e1057728844fdd489b8ea9d2f04" Mar 11 09:22:29 crc kubenswrapper[4840]: I0311 09:22:29.256778 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lct2t"] Mar 11 09:22:29 crc kubenswrapper[4840]: I0311 09:22:29.258307 4840 scope.go:117] "RemoveContainer" containerID="1f1278031e634a35322537ef49f525fa04c08811492093b16ccb6b91b5e3a0a0" Mar 11 09:22:29 crc kubenswrapper[4840]: I0311 09:22:29.263679 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-lct2t"] Mar 11 09:22:29 crc kubenswrapper[4840]: I0311 09:22:29.281522 4840 scope.go:117] "RemoveContainer" containerID="9499781e3bec2b963d29675e7cc922c442e2ff525bcb00ac274c0aa10dce0902" Mar 11 09:22:29 crc kubenswrapper[4840]: I0311 09:22:29.305271 4840 scope.go:117] "RemoveContainer" containerID="ead0caa5468744b396d760e06d06a4d64ea41e1057728844fdd489b8ea9d2f04" Mar 11 09:22:29 crc kubenswrapper[4840]: E0311 09:22:29.305897 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ead0caa5468744b396d760e06d06a4d64ea41e1057728844fdd489b8ea9d2f04\": container with ID starting with ead0caa5468744b396d760e06d06a4d64ea41e1057728844fdd489b8ea9d2f04 not found: ID does not exist" containerID="ead0caa5468744b396d760e06d06a4d64ea41e1057728844fdd489b8ea9d2f04" Mar 11 09:22:29 crc kubenswrapper[4840]: I0311 09:22:29.305958 
4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ead0caa5468744b396d760e06d06a4d64ea41e1057728844fdd489b8ea9d2f04"} err="failed to get container status \"ead0caa5468744b396d760e06d06a4d64ea41e1057728844fdd489b8ea9d2f04\": rpc error: code = NotFound desc = could not find container \"ead0caa5468744b396d760e06d06a4d64ea41e1057728844fdd489b8ea9d2f04\": container with ID starting with ead0caa5468744b396d760e06d06a4d64ea41e1057728844fdd489b8ea9d2f04 not found: ID does not exist" Mar 11 09:22:29 crc kubenswrapper[4840]: I0311 09:22:29.305990 4840 scope.go:117] "RemoveContainer" containerID="1f1278031e634a35322537ef49f525fa04c08811492093b16ccb6b91b5e3a0a0" Mar 11 09:22:29 crc kubenswrapper[4840]: E0311 09:22:29.306532 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f1278031e634a35322537ef49f525fa04c08811492093b16ccb6b91b5e3a0a0\": container with ID starting with 1f1278031e634a35322537ef49f525fa04c08811492093b16ccb6b91b5e3a0a0 not found: ID does not exist" containerID="1f1278031e634a35322537ef49f525fa04c08811492093b16ccb6b91b5e3a0a0" Mar 11 09:22:29 crc kubenswrapper[4840]: I0311 09:22:29.306567 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f1278031e634a35322537ef49f525fa04c08811492093b16ccb6b91b5e3a0a0"} err="failed to get container status \"1f1278031e634a35322537ef49f525fa04c08811492093b16ccb6b91b5e3a0a0\": rpc error: code = NotFound desc = could not find container \"1f1278031e634a35322537ef49f525fa04c08811492093b16ccb6b91b5e3a0a0\": container with ID starting with 1f1278031e634a35322537ef49f525fa04c08811492093b16ccb6b91b5e3a0a0 not found: ID does not exist" Mar 11 09:22:29 crc kubenswrapper[4840]: I0311 09:22:29.306629 4840 scope.go:117] "RemoveContainer" containerID="9499781e3bec2b963d29675e7cc922c442e2ff525bcb00ac274c0aa10dce0902" Mar 11 09:22:29 crc kubenswrapper[4840]: E0311 
09:22:29.306867 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9499781e3bec2b963d29675e7cc922c442e2ff525bcb00ac274c0aa10dce0902\": container with ID starting with 9499781e3bec2b963d29675e7cc922c442e2ff525bcb00ac274c0aa10dce0902 not found: ID does not exist" containerID="9499781e3bec2b963d29675e7cc922c442e2ff525bcb00ac274c0aa10dce0902" Mar 11 09:22:29 crc kubenswrapper[4840]: I0311 09:22:29.306890 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9499781e3bec2b963d29675e7cc922c442e2ff525bcb00ac274c0aa10dce0902"} err="failed to get container status \"9499781e3bec2b963d29675e7cc922c442e2ff525bcb00ac274c0aa10dce0902\": rpc error: code = NotFound desc = could not find container \"9499781e3bec2b963d29675e7cc922c442e2ff525bcb00ac274c0aa10dce0902\": container with ID starting with 9499781e3bec2b963d29675e7cc922c442e2ff525bcb00ac274c0aa10dce0902 not found: ID does not exist" Mar 11 09:22:30 crc kubenswrapper[4840]: I0311 09:22:30.069108 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff902957-2dbc-4653-960a-99a9f5d58202" path="/var/lib/kubelet/pods/ff902957-2dbc-4653-960a-99a9f5d58202/volumes" Mar 11 09:22:48 crc kubenswrapper[4840]: I0311 09:22:48.524663 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-7m8nd"] Mar 11 09:22:48 crc kubenswrapper[4840]: E0311 09:22:48.525547 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff902957-2dbc-4653-960a-99a9f5d58202" containerName="extract-utilities" Mar 11 09:22:48 crc kubenswrapper[4840]: I0311 09:22:48.525566 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff902957-2dbc-4653-960a-99a9f5d58202" containerName="extract-utilities" Mar 11 09:22:48 crc kubenswrapper[4840]: E0311 09:22:48.525577 4840 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="ff902957-2dbc-4653-960a-99a9f5d58202" containerName="registry-server" Mar 11 09:22:48 crc kubenswrapper[4840]: I0311 09:22:48.525585 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff902957-2dbc-4653-960a-99a9f5d58202" containerName="registry-server" Mar 11 09:22:48 crc kubenswrapper[4840]: E0311 09:22:48.525598 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff902957-2dbc-4653-960a-99a9f5d58202" containerName="extract-content" Mar 11 09:22:48 crc kubenswrapper[4840]: I0311 09:22:48.525606 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff902957-2dbc-4653-960a-99a9f5d58202" containerName="extract-content" Mar 11 09:22:48 crc kubenswrapper[4840]: I0311 09:22:48.525779 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff902957-2dbc-4653-960a-99a9f5d58202" containerName="registry-server" Mar 11 09:22:48 crc kubenswrapper[4840]: I0311 09:22:48.527249 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7m8nd" Mar 11 09:22:48 crc kubenswrapper[4840]: I0311 09:22:48.546876 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7m8nd"] Mar 11 09:22:48 crc kubenswrapper[4840]: I0311 09:22:48.645967 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7764d72-431e-4f92-ba82-b17b8d16b0e2-catalog-content\") pod \"redhat-marketplace-7m8nd\" (UID: \"d7764d72-431e-4f92-ba82-b17b8d16b0e2\") " pod="openshift-marketplace/redhat-marketplace-7m8nd" Mar 11 09:22:48 crc kubenswrapper[4840]: I0311 09:22:48.646087 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmw49\" (UniqueName: \"kubernetes.io/projected/d7764d72-431e-4f92-ba82-b17b8d16b0e2-kube-api-access-mmw49\") pod \"redhat-marketplace-7m8nd\" (UID: 
\"d7764d72-431e-4f92-ba82-b17b8d16b0e2\") " pod="openshift-marketplace/redhat-marketplace-7m8nd" Mar 11 09:22:48 crc kubenswrapper[4840]: I0311 09:22:48.646214 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7764d72-431e-4f92-ba82-b17b8d16b0e2-utilities\") pod \"redhat-marketplace-7m8nd\" (UID: \"d7764d72-431e-4f92-ba82-b17b8d16b0e2\") " pod="openshift-marketplace/redhat-marketplace-7m8nd" Mar 11 09:22:48 crc kubenswrapper[4840]: I0311 09:22:48.747515 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7764d72-431e-4f92-ba82-b17b8d16b0e2-catalog-content\") pod \"redhat-marketplace-7m8nd\" (UID: \"d7764d72-431e-4f92-ba82-b17b8d16b0e2\") " pod="openshift-marketplace/redhat-marketplace-7m8nd" Mar 11 09:22:48 crc kubenswrapper[4840]: I0311 09:22:48.747578 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmw49\" (UniqueName: \"kubernetes.io/projected/d7764d72-431e-4f92-ba82-b17b8d16b0e2-kube-api-access-mmw49\") pod \"redhat-marketplace-7m8nd\" (UID: \"d7764d72-431e-4f92-ba82-b17b8d16b0e2\") " pod="openshift-marketplace/redhat-marketplace-7m8nd" Mar 11 09:22:48 crc kubenswrapper[4840]: I0311 09:22:48.747633 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7764d72-431e-4f92-ba82-b17b8d16b0e2-utilities\") pod \"redhat-marketplace-7m8nd\" (UID: \"d7764d72-431e-4f92-ba82-b17b8d16b0e2\") " pod="openshift-marketplace/redhat-marketplace-7m8nd" Mar 11 09:22:48 crc kubenswrapper[4840]: I0311 09:22:48.748276 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7764d72-431e-4f92-ba82-b17b8d16b0e2-utilities\") pod \"redhat-marketplace-7m8nd\" (UID: 
\"d7764d72-431e-4f92-ba82-b17b8d16b0e2\") " pod="openshift-marketplace/redhat-marketplace-7m8nd" Mar 11 09:22:48 crc kubenswrapper[4840]: I0311 09:22:48.748606 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7764d72-431e-4f92-ba82-b17b8d16b0e2-catalog-content\") pod \"redhat-marketplace-7m8nd\" (UID: \"d7764d72-431e-4f92-ba82-b17b8d16b0e2\") " pod="openshift-marketplace/redhat-marketplace-7m8nd" Mar 11 09:22:48 crc kubenswrapper[4840]: I0311 09:22:48.768422 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmw49\" (UniqueName: \"kubernetes.io/projected/d7764d72-431e-4f92-ba82-b17b8d16b0e2-kube-api-access-mmw49\") pod \"redhat-marketplace-7m8nd\" (UID: \"d7764d72-431e-4f92-ba82-b17b8d16b0e2\") " pod="openshift-marketplace/redhat-marketplace-7m8nd" Mar 11 09:22:48 crc kubenswrapper[4840]: I0311 09:22:48.851351 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7m8nd" Mar 11 09:22:49 crc kubenswrapper[4840]: I0311 09:22:49.299705 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7m8nd"] Mar 11 09:22:49 crc kubenswrapper[4840]: I0311 09:22:49.393558 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7m8nd" event={"ID":"d7764d72-431e-4f92-ba82-b17b8d16b0e2","Type":"ContainerStarted","Data":"e0a60a4bc9ee5a4115030adc3d381f5019e7a43a118e6d84827a0ba3ceac900a"} Mar 11 09:22:50 crc kubenswrapper[4840]: I0311 09:22:50.406115 4840 generic.go:334] "Generic (PLEG): container finished" podID="d7764d72-431e-4f92-ba82-b17b8d16b0e2" containerID="922414a46e9a7de1fe01a9938f700a72ecabbbb3124bcd492b8da0ffaabb5aa3" exitCode=0 Mar 11 09:22:50 crc kubenswrapper[4840]: I0311 09:22:50.406353 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7m8nd" 
event={"ID":"d7764d72-431e-4f92-ba82-b17b8d16b0e2","Type":"ContainerDied","Data":"922414a46e9a7de1fe01a9938f700a72ecabbbb3124bcd492b8da0ffaabb5aa3"} Mar 11 09:22:51 crc kubenswrapper[4840]: I0311 09:22:51.420570 4840 generic.go:334] "Generic (PLEG): container finished" podID="d7764d72-431e-4f92-ba82-b17b8d16b0e2" containerID="716f0a2048b43fc5c742fed07093d0031c6af68a9e1589c983cc0931ac3d4936" exitCode=0 Mar 11 09:22:51 crc kubenswrapper[4840]: I0311 09:22:51.420677 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7m8nd" event={"ID":"d7764d72-431e-4f92-ba82-b17b8d16b0e2","Type":"ContainerDied","Data":"716f0a2048b43fc5c742fed07093d0031c6af68a9e1589c983cc0931ac3d4936"} Mar 11 09:22:52 crc kubenswrapper[4840]: I0311 09:22:52.435103 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7m8nd" event={"ID":"d7764d72-431e-4f92-ba82-b17b8d16b0e2","Type":"ContainerStarted","Data":"cc38716e07dc799cb97a1832b368dbc65839a4a3cf54aef59889e65f6599f009"} Mar 11 09:22:52 crc kubenswrapper[4840]: I0311 09:22:52.462383 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-7m8nd" podStartSLOduration=2.828275798 podStartE2EDuration="4.462358492s" podCreationTimestamp="2026-03-11 09:22:48 +0000 UTC" firstStartedPulling="2026-03-11 09:22:50.409414424 +0000 UTC m=+1569.075084239" lastFinishedPulling="2026-03-11 09:22:52.043497108 +0000 UTC m=+1570.709166933" observedRunningTime="2026-03-11 09:22:52.45793327 +0000 UTC m=+1571.123603085" watchObservedRunningTime="2026-03-11 09:22:52.462358492 +0000 UTC m=+1571.128028307" Mar 11 09:22:57 crc kubenswrapper[4840]: I0311 09:22:57.446414 4840 patch_prober.go:28] interesting pod/machine-config-daemon-brtht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Mar 11 09:22:57 crc kubenswrapper[4840]: I0311 09:22:57.446790 4840 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 11 09:22:57 crc kubenswrapper[4840]: I0311 09:22:57.446832 4840 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-brtht" Mar 11 09:22:57 crc kubenswrapper[4840]: I0311 09:22:57.447431 4840 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"169d1d8ff4c62b0ae7bd3b2cf09a5479966d8e2e448439c79c97cc33b9792a4f"} pod="openshift-machine-config-operator/machine-config-daemon-brtht" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 11 09:22:57 crc kubenswrapper[4840]: I0311 09:22:57.447524 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" containerName="machine-config-daemon" containerID="cri-o://169d1d8ff4c62b0ae7bd3b2cf09a5479966d8e2e448439c79c97cc33b9792a4f" gracePeriod=600 Mar 11 09:22:57 crc kubenswrapper[4840]: E0311 09:22:57.565871 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 09:22:58 crc kubenswrapper[4840]: 
I0311 09:22:58.482695 4840 generic.go:334] "Generic (PLEG): container finished" podID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" containerID="169d1d8ff4c62b0ae7bd3b2cf09a5479966d8e2e448439c79c97cc33b9792a4f" exitCode=0 Mar 11 09:22:58 crc kubenswrapper[4840]: I0311 09:22:58.482770 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-brtht" event={"ID":"8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d","Type":"ContainerDied","Data":"169d1d8ff4c62b0ae7bd3b2cf09a5479966d8e2e448439c79c97cc33b9792a4f"} Mar 11 09:22:58 crc kubenswrapper[4840]: I0311 09:22:58.483074 4840 scope.go:117] "RemoveContainer" containerID="6d23981dcac257731751dcd40b5609471911fb1c4a3f0ea9223434b40578e948" Mar 11 09:22:58 crc kubenswrapper[4840]: I0311 09:22:58.483889 4840 scope.go:117] "RemoveContainer" containerID="169d1d8ff4c62b0ae7bd3b2cf09a5479966d8e2e448439c79c97cc33b9792a4f" Mar 11 09:22:58 crc kubenswrapper[4840]: E0311 09:22:58.484341 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 09:22:58 crc kubenswrapper[4840]: I0311 09:22:58.851653 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-7m8nd" Mar 11 09:22:58 crc kubenswrapper[4840]: I0311 09:22:58.851732 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-7m8nd" Mar 11 09:22:58 crc kubenswrapper[4840]: I0311 09:22:58.902409 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-7m8nd" Mar 11 09:22:59 crc 
kubenswrapper[4840]: I0311 09:22:59.536264 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-7m8nd" Mar 11 09:22:59 crc kubenswrapper[4840]: I0311 09:22:59.588328 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7m8nd"] Mar 11 09:23:01 crc kubenswrapper[4840]: I0311 09:23:01.517074 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-7m8nd" podUID="d7764d72-431e-4f92-ba82-b17b8d16b0e2" containerName="registry-server" containerID="cri-o://cc38716e07dc799cb97a1832b368dbc65839a4a3cf54aef59889e65f6599f009" gracePeriod=2 Mar 11 09:23:01 crc kubenswrapper[4840]: I0311 09:23:01.942537 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7m8nd" Mar 11 09:23:02 crc kubenswrapper[4840]: I0311 09:23:02.031312 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7764d72-431e-4f92-ba82-b17b8d16b0e2-catalog-content\") pod \"d7764d72-431e-4f92-ba82-b17b8d16b0e2\" (UID: \"d7764d72-431e-4f92-ba82-b17b8d16b0e2\") " Mar 11 09:23:02 crc kubenswrapper[4840]: I0311 09:23:02.031384 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mmw49\" (UniqueName: \"kubernetes.io/projected/d7764d72-431e-4f92-ba82-b17b8d16b0e2-kube-api-access-mmw49\") pod \"d7764d72-431e-4f92-ba82-b17b8d16b0e2\" (UID: \"d7764d72-431e-4f92-ba82-b17b8d16b0e2\") " Mar 11 09:23:02 crc kubenswrapper[4840]: I0311 09:23:02.031421 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7764d72-431e-4f92-ba82-b17b8d16b0e2-utilities\") pod \"d7764d72-431e-4f92-ba82-b17b8d16b0e2\" (UID: \"d7764d72-431e-4f92-ba82-b17b8d16b0e2\") " Mar 11 09:23:02 crc 
kubenswrapper[4840]: I0311 09:23:02.033002 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7764d72-431e-4f92-ba82-b17b8d16b0e2-utilities" (OuterVolumeSpecName: "utilities") pod "d7764d72-431e-4f92-ba82-b17b8d16b0e2" (UID: "d7764d72-431e-4f92-ba82-b17b8d16b0e2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:23:02 crc kubenswrapper[4840]: I0311 09:23:02.037970 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7764d72-431e-4f92-ba82-b17b8d16b0e2-kube-api-access-mmw49" (OuterVolumeSpecName: "kube-api-access-mmw49") pod "d7764d72-431e-4f92-ba82-b17b8d16b0e2" (UID: "d7764d72-431e-4f92-ba82-b17b8d16b0e2"). InnerVolumeSpecName "kube-api-access-mmw49". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:23:02 crc kubenswrapper[4840]: I0311 09:23:02.066226 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7764d72-431e-4f92-ba82-b17b8d16b0e2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d7764d72-431e-4f92-ba82-b17b8d16b0e2" (UID: "d7764d72-431e-4f92-ba82-b17b8d16b0e2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:23:02 crc kubenswrapper[4840]: I0311 09:23:02.132922 4840 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7764d72-431e-4f92-ba82-b17b8d16b0e2-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 11 09:23:02 crc kubenswrapper[4840]: I0311 09:23:02.132967 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mmw49\" (UniqueName: \"kubernetes.io/projected/d7764d72-431e-4f92-ba82-b17b8d16b0e2-kube-api-access-mmw49\") on node \"crc\" DevicePath \"\"" Mar 11 09:23:02 crc kubenswrapper[4840]: I0311 09:23:02.132984 4840 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7764d72-431e-4f92-ba82-b17b8d16b0e2-utilities\") on node \"crc\" DevicePath \"\"" Mar 11 09:23:02 crc kubenswrapper[4840]: I0311 09:23:02.528131 4840 generic.go:334] "Generic (PLEG): container finished" podID="d7764d72-431e-4f92-ba82-b17b8d16b0e2" containerID="cc38716e07dc799cb97a1832b368dbc65839a4a3cf54aef59889e65f6599f009" exitCode=0 Mar 11 09:23:02 crc kubenswrapper[4840]: I0311 09:23:02.528185 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7m8nd" event={"ID":"d7764d72-431e-4f92-ba82-b17b8d16b0e2","Type":"ContainerDied","Data":"cc38716e07dc799cb97a1832b368dbc65839a4a3cf54aef59889e65f6599f009"} Mar 11 09:23:02 crc kubenswrapper[4840]: I0311 09:23:02.528196 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7m8nd" Mar 11 09:23:02 crc kubenswrapper[4840]: I0311 09:23:02.528245 4840 scope.go:117] "RemoveContainer" containerID="cc38716e07dc799cb97a1832b368dbc65839a4a3cf54aef59889e65f6599f009" Mar 11 09:23:02 crc kubenswrapper[4840]: I0311 09:23:02.528229 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7m8nd" event={"ID":"d7764d72-431e-4f92-ba82-b17b8d16b0e2","Type":"ContainerDied","Data":"e0a60a4bc9ee5a4115030adc3d381f5019e7a43a118e6d84827a0ba3ceac900a"} Mar 11 09:23:02 crc kubenswrapper[4840]: I0311 09:23:02.548088 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7m8nd"] Mar 11 09:23:02 crc kubenswrapper[4840]: I0311 09:23:02.555041 4840 scope.go:117] "RemoveContainer" containerID="716f0a2048b43fc5c742fed07093d0031c6af68a9e1589c983cc0931ac3d4936" Mar 11 09:23:02 crc kubenswrapper[4840]: I0311 09:23:02.555572 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-7m8nd"] Mar 11 09:23:02 crc kubenswrapper[4840]: I0311 09:23:02.574870 4840 scope.go:117] "RemoveContainer" containerID="922414a46e9a7de1fe01a9938f700a72ecabbbb3124bcd492b8da0ffaabb5aa3" Mar 11 09:23:02 crc kubenswrapper[4840]: I0311 09:23:02.596116 4840 scope.go:117] "RemoveContainer" containerID="cc38716e07dc799cb97a1832b368dbc65839a4a3cf54aef59889e65f6599f009" Mar 11 09:23:02 crc kubenswrapper[4840]: E0311 09:23:02.596853 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc38716e07dc799cb97a1832b368dbc65839a4a3cf54aef59889e65f6599f009\": container with ID starting with cc38716e07dc799cb97a1832b368dbc65839a4a3cf54aef59889e65f6599f009 not found: ID does not exist" containerID="cc38716e07dc799cb97a1832b368dbc65839a4a3cf54aef59889e65f6599f009" Mar 11 09:23:02 crc kubenswrapper[4840]: I0311 09:23:02.596899 4840 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc38716e07dc799cb97a1832b368dbc65839a4a3cf54aef59889e65f6599f009"} err="failed to get container status \"cc38716e07dc799cb97a1832b368dbc65839a4a3cf54aef59889e65f6599f009\": rpc error: code = NotFound desc = could not find container \"cc38716e07dc799cb97a1832b368dbc65839a4a3cf54aef59889e65f6599f009\": container with ID starting with cc38716e07dc799cb97a1832b368dbc65839a4a3cf54aef59889e65f6599f009 not found: ID does not exist" Mar 11 09:23:02 crc kubenswrapper[4840]: I0311 09:23:02.596926 4840 scope.go:117] "RemoveContainer" containerID="716f0a2048b43fc5c742fed07093d0031c6af68a9e1589c983cc0931ac3d4936" Mar 11 09:23:02 crc kubenswrapper[4840]: E0311 09:23:02.597245 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"716f0a2048b43fc5c742fed07093d0031c6af68a9e1589c983cc0931ac3d4936\": container with ID starting with 716f0a2048b43fc5c742fed07093d0031c6af68a9e1589c983cc0931ac3d4936 not found: ID does not exist" containerID="716f0a2048b43fc5c742fed07093d0031c6af68a9e1589c983cc0931ac3d4936" Mar 11 09:23:02 crc kubenswrapper[4840]: I0311 09:23:02.597274 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"716f0a2048b43fc5c742fed07093d0031c6af68a9e1589c983cc0931ac3d4936"} err="failed to get container status \"716f0a2048b43fc5c742fed07093d0031c6af68a9e1589c983cc0931ac3d4936\": rpc error: code = NotFound desc = could not find container \"716f0a2048b43fc5c742fed07093d0031c6af68a9e1589c983cc0931ac3d4936\": container with ID starting with 716f0a2048b43fc5c742fed07093d0031c6af68a9e1589c983cc0931ac3d4936 not found: ID does not exist" Mar 11 09:23:02 crc kubenswrapper[4840]: I0311 09:23:02.597291 4840 scope.go:117] "RemoveContainer" containerID="922414a46e9a7de1fe01a9938f700a72ecabbbb3124bcd492b8da0ffaabb5aa3" Mar 11 09:23:02 crc kubenswrapper[4840]: E0311 
09:23:02.598165 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"922414a46e9a7de1fe01a9938f700a72ecabbbb3124bcd492b8da0ffaabb5aa3\": container with ID starting with 922414a46e9a7de1fe01a9938f700a72ecabbbb3124bcd492b8da0ffaabb5aa3 not found: ID does not exist" containerID="922414a46e9a7de1fe01a9938f700a72ecabbbb3124bcd492b8da0ffaabb5aa3" Mar 11 09:23:02 crc kubenswrapper[4840]: I0311 09:23:02.598198 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"922414a46e9a7de1fe01a9938f700a72ecabbbb3124bcd492b8da0ffaabb5aa3"} err="failed to get container status \"922414a46e9a7de1fe01a9938f700a72ecabbbb3124bcd492b8da0ffaabb5aa3\": rpc error: code = NotFound desc = could not find container \"922414a46e9a7de1fe01a9938f700a72ecabbbb3124bcd492b8da0ffaabb5aa3\": container with ID starting with 922414a46e9a7de1fe01a9938f700a72ecabbbb3124bcd492b8da0ffaabb5aa3 not found: ID does not exist" Mar 11 09:23:04 crc kubenswrapper[4840]: I0311 09:23:04.068403 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7764d72-431e-4f92-ba82-b17b8d16b0e2" path="/var/lib/kubelet/pods/d7764d72-431e-4f92-ba82-b17b8d16b0e2/volumes" Mar 11 09:23:06 crc kubenswrapper[4840]: I0311 09:23:06.950249 4840 scope.go:117] "RemoveContainer" containerID="10d38f83e8c1cf32b2d2f74c18bdafacb8d637d49dd6f29cc3410a5f98c1889a" Mar 11 09:23:06 crc kubenswrapper[4840]: I0311 09:23:06.992014 4840 scope.go:117] "RemoveContainer" containerID="3bad46acf302210e1c1e5e33a7fd79daab79e6bbe2d7d0b0cff1789d3f58adb7" Mar 11 09:23:07 crc kubenswrapper[4840]: I0311 09:23:07.013518 4840 scope.go:117] "RemoveContainer" containerID="5acef0cbaf6b8eacac1e6d33812371d5045212d01c390a7c87175c13608077b8" Mar 11 09:23:07 crc kubenswrapper[4840]: I0311 09:23:07.043507 4840 scope.go:117] "RemoveContainer" containerID="8551ecd9337f942d6c61636f9ae5d3b6cd5d9a84c9b912aa6cbf186033a4a63b" Mar 11 09:23:07 crc 
kubenswrapper[4840]: I0311 09:23:07.066957 4840 scope.go:117] "RemoveContainer" containerID="1a1f1b693e6c0b1951096fc3d520bfefde55c6b825a9679fa0d7422d3542e748" Mar 11 09:23:07 crc kubenswrapper[4840]: I0311 09:23:07.091939 4840 scope.go:117] "RemoveContainer" containerID="4012476edfb45bc66ade0425e6e15cb295f6d01f29410cdf38ce8e24f4f925b0" Mar 11 09:23:07 crc kubenswrapper[4840]: I0311 09:23:07.115250 4840 scope.go:117] "RemoveContainer" containerID="706a39957f553a13cf274703cc198be45700e0b2ee1f59e77e71aaa0883c6b97" Mar 11 09:23:07 crc kubenswrapper[4840]: I0311 09:23:07.140887 4840 scope.go:117] "RemoveContainer" containerID="da45ef6825f99f5918718a576904526c5a0a326e16f3e54f2f9f1f1632566812" Mar 11 09:23:07 crc kubenswrapper[4840]: I0311 09:23:07.161802 4840 scope.go:117] "RemoveContainer" containerID="c0f39a62c6f22a1acbea4f945d5fd15a25e7d9220f312f83f9f23b88277b126a" Mar 11 09:23:07 crc kubenswrapper[4840]: I0311 09:23:07.181278 4840 scope.go:117] "RemoveContainer" containerID="2f7ff4f6ca7eaa7a90b03a36ff78a6df0e5070796f410f01246f899e6c6e9f45" Mar 11 09:23:07 crc kubenswrapper[4840]: I0311 09:23:07.216264 4840 scope.go:117] "RemoveContainer" containerID="11fb75cf0ca0a87baab391a652f351b5b6942820369225966a95d8485b62934e" Mar 11 09:23:07 crc kubenswrapper[4840]: I0311 09:23:07.234481 4840 scope.go:117] "RemoveContainer" containerID="b24882edefa76ca986a45d7c188c0776d468318b10c2284d9028365926598890" Mar 11 09:23:07 crc kubenswrapper[4840]: I0311 09:23:07.259906 4840 scope.go:117] "RemoveContainer" containerID="0cc7935f4bd52c3f2596d14db90ae0d5b79d3c959d9c7ad317c5181908f66f7d" Mar 11 09:23:07 crc kubenswrapper[4840]: I0311 09:23:07.283235 4840 scope.go:117] "RemoveContainer" containerID="2996d2155c329176eff9a6a52124bee55a98d549119d59487efbbb39299564e2" Mar 11 09:23:07 crc kubenswrapper[4840]: I0311 09:23:07.300616 4840 scope.go:117] "RemoveContainer" containerID="5a178fa21bb4e04275cb65def582e45646bde0ab8e88575805ce80df87b91ebb" Mar 11 09:23:07 crc kubenswrapper[4840]: I0311 
09:23:07.317611 4840 scope.go:117] "RemoveContainer" containerID="d1b5ffec586cd47bb674a9326320739259785cd510dbc8ce17b400241ea7be05" Mar 11 09:23:07 crc kubenswrapper[4840]: I0311 09:23:07.337406 4840 scope.go:117] "RemoveContainer" containerID="b38323b6b7e8b7295410c590c4ccbf9d6e88a12f4db5e89ab486bcac669132fd" Mar 11 09:23:07 crc kubenswrapper[4840]: I0311 09:23:07.355422 4840 scope.go:117] "RemoveContainer" containerID="bf4e4f39e745fee6f9adb212657d2a4ae726f6d09bf565361a186edbb38fcdec" Mar 11 09:23:07 crc kubenswrapper[4840]: I0311 09:23:07.402327 4840 scope.go:117] "RemoveContainer" containerID="199a0328b700e036e67f97e8adc8b8b9207dfa1a9a9eb09434590923473cd2bb" Mar 11 09:23:11 crc kubenswrapper[4840]: I0311 09:23:11.060404 4840 scope.go:117] "RemoveContainer" containerID="169d1d8ff4c62b0ae7bd3b2cf09a5479966d8e2e448439c79c97cc33b9792a4f" Mar 11 09:23:11 crc kubenswrapper[4840]: E0311 09:23:11.060840 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 09:23:25 crc kubenswrapper[4840]: I0311 09:23:25.060613 4840 scope.go:117] "RemoveContainer" containerID="169d1d8ff4c62b0ae7bd3b2cf09a5479966d8e2e448439c79c97cc33b9792a4f" Mar 11 09:23:25 crc kubenswrapper[4840]: E0311 09:23:25.061386 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" 
podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 09:23:38 crc kubenswrapper[4840]: I0311 09:23:38.061742 4840 scope.go:117] "RemoveContainer" containerID="169d1d8ff4c62b0ae7bd3b2cf09a5479966d8e2e448439c79c97cc33b9792a4f" Mar 11 09:23:38 crc kubenswrapper[4840]: E0311 09:23:38.062616 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 09:23:51 crc kubenswrapper[4840]: I0311 09:23:51.061105 4840 scope.go:117] "RemoveContainer" containerID="169d1d8ff4c62b0ae7bd3b2cf09a5479966d8e2e448439c79c97cc33b9792a4f" Mar 11 09:23:51 crc kubenswrapper[4840]: E0311 09:23:51.061789 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 09:24:00 crc kubenswrapper[4840]: I0311 09:24:00.138843 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29553684-h7krd"] Mar 11 09:24:00 crc kubenswrapper[4840]: E0311 09:24:00.139852 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7764d72-431e-4f92-ba82-b17b8d16b0e2" containerName="registry-server" Mar 11 09:24:00 crc kubenswrapper[4840]: I0311 09:24:00.139866 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7764d72-431e-4f92-ba82-b17b8d16b0e2" containerName="registry-server" Mar 11 09:24:00 crc kubenswrapper[4840]: 
E0311 09:24:00.139884 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7764d72-431e-4f92-ba82-b17b8d16b0e2" containerName="extract-utilities" Mar 11 09:24:00 crc kubenswrapper[4840]: I0311 09:24:00.139891 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7764d72-431e-4f92-ba82-b17b8d16b0e2" containerName="extract-utilities" Mar 11 09:24:00 crc kubenswrapper[4840]: E0311 09:24:00.139905 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7764d72-431e-4f92-ba82-b17b8d16b0e2" containerName="extract-content" Mar 11 09:24:00 crc kubenswrapper[4840]: I0311 09:24:00.139911 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7764d72-431e-4f92-ba82-b17b8d16b0e2" containerName="extract-content" Mar 11 09:24:00 crc kubenswrapper[4840]: I0311 09:24:00.140052 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7764d72-431e-4f92-ba82-b17b8d16b0e2" containerName="registry-server" Mar 11 09:24:00 crc kubenswrapper[4840]: I0311 09:24:00.140597 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553684-h7krd" Mar 11 09:24:00 crc kubenswrapper[4840]: I0311 09:24:00.143132 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 11 09:24:00 crc kubenswrapper[4840]: I0311 09:24:00.143316 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 11 09:24:00 crc kubenswrapper[4840]: I0311 09:24:00.147649 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-q6lwc" Mar 11 09:24:00 crc kubenswrapper[4840]: I0311 09:24:00.155967 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553684-h7krd"] Mar 11 09:24:00 crc kubenswrapper[4840]: I0311 09:24:00.291682 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wrpd\" (UniqueName: \"kubernetes.io/projected/d32f0678-e814-47b7-b976-62548a3fa240-kube-api-access-6wrpd\") pod \"auto-csr-approver-29553684-h7krd\" (UID: \"d32f0678-e814-47b7-b976-62548a3fa240\") " pod="openshift-infra/auto-csr-approver-29553684-h7krd" Mar 11 09:24:00 crc kubenswrapper[4840]: I0311 09:24:00.393454 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6wrpd\" (UniqueName: \"kubernetes.io/projected/d32f0678-e814-47b7-b976-62548a3fa240-kube-api-access-6wrpd\") pod \"auto-csr-approver-29553684-h7krd\" (UID: \"d32f0678-e814-47b7-b976-62548a3fa240\") " pod="openshift-infra/auto-csr-approver-29553684-h7krd" Mar 11 09:24:00 crc kubenswrapper[4840]: I0311 09:24:00.416168 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wrpd\" (UniqueName: \"kubernetes.io/projected/d32f0678-e814-47b7-b976-62548a3fa240-kube-api-access-6wrpd\") pod \"auto-csr-approver-29553684-h7krd\" (UID: \"d32f0678-e814-47b7-b976-62548a3fa240\") " 
pod="openshift-infra/auto-csr-approver-29553684-h7krd" Mar 11 09:24:00 crc kubenswrapper[4840]: I0311 09:24:00.458888 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553684-h7krd" Mar 11 09:24:00 crc kubenswrapper[4840]: I0311 09:24:00.916380 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553684-h7krd"] Mar 11 09:24:01 crc kubenswrapper[4840]: I0311 09:24:01.005410 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553684-h7krd" event={"ID":"d32f0678-e814-47b7-b976-62548a3fa240","Type":"ContainerStarted","Data":"ac53f69166a64303709a024f4884d96f716bf02d9d25d61eb985a2442305a83b"} Mar 11 09:24:03 crc kubenswrapper[4840]: I0311 09:24:03.026817 4840 generic.go:334] "Generic (PLEG): container finished" podID="d32f0678-e814-47b7-b976-62548a3fa240" containerID="1c62c186fe302f42e736484571b96c27d7c45e50ba11521b771084700eb3ff0d" exitCode=0 Mar 11 09:24:03 crc kubenswrapper[4840]: I0311 09:24:03.026890 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553684-h7krd" event={"ID":"d32f0678-e814-47b7-b976-62548a3fa240","Type":"ContainerDied","Data":"1c62c186fe302f42e736484571b96c27d7c45e50ba11521b771084700eb3ff0d"} Mar 11 09:24:03 crc kubenswrapper[4840]: I0311 09:24:03.060922 4840 scope.go:117] "RemoveContainer" containerID="169d1d8ff4c62b0ae7bd3b2cf09a5479966d8e2e448439c79c97cc33b9792a4f" Mar 11 09:24:03 crc kubenswrapper[4840]: E0311 09:24:03.061315 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" 
Mar 11 09:24:04 crc kubenswrapper[4840]: I0311 09:24:04.396104 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553684-h7krd" Mar 11 09:24:04 crc kubenswrapper[4840]: I0311 09:24:04.557649 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6wrpd\" (UniqueName: \"kubernetes.io/projected/d32f0678-e814-47b7-b976-62548a3fa240-kube-api-access-6wrpd\") pod \"d32f0678-e814-47b7-b976-62548a3fa240\" (UID: \"d32f0678-e814-47b7-b976-62548a3fa240\") " Mar 11 09:24:04 crc kubenswrapper[4840]: I0311 09:24:04.563323 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d32f0678-e814-47b7-b976-62548a3fa240-kube-api-access-6wrpd" (OuterVolumeSpecName: "kube-api-access-6wrpd") pod "d32f0678-e814-47b7-b976-62548a3fa240" (UID: "d32f0678-e814-47b7-b976-62548a3fa240"). InnerVolumeSpecName "kube-api-access-6wrpd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:24:04 crc kubenswrapper[4840]: I0311 09:24:04.660366 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6wrpd\" (UniqueName: \"kubernetes.io/projected/d32f0678-e814-47b7-b976-62548a3fa240-kube-api-access-6wrpd\") on node \"crc\" DevicePath \"\"" Mar 11 09:24:05 crc kubenswrapper[4840]: I0311 09:24:05.045066 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553684-h7krd" event={"ID":"d32f0678-e814-47b7-b976-62548a3fa240","Type":"ContainerDied","Data":"ac53f69166a64303709a024f4884d96f716bf02d9d25d61eb985a2442305a83b"} Mar 11 09:24:05 crc kubenswrapper[4840]: I0311 09:24:05.045521 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ac53f69166a64303709a024f4884d96f716bf02d9d25d61eb985a2442305a83b" Mar 11 09:24:05 crc kubenswrapper[4840]: I0311 09:24:05.045122 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553684-h7krd" Mar 11 09:24:05 crc kubenswrapper[4840]: I0311 09:24:05.460639 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29553678-xn9kq"] Mar 11 09:24:05 crc kubenswrapper[4840]: I0311 09:24:05.465480 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29553678-xn9kq"] Mar 11 09:24:06 crc kubenswrapper[4840]: I0311 09:24:06.069172 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f05f155b-14bd-45d4-9d0d-dd92fd7a3eb0" path="/var/lib/kubelet/pods/f05f155b-14bd-45d4-9d0d-dd92fd7a3eb0/volumes" Mar 11 09:24:07 crc kubenswrapper[4840]: I0311 09:24:07.718033 4840 scope.go:117] "RemoveContainer" containerID="b4a7c49eb1ab96ca872e6f2ffbdf1839b577c44a108a40de7c18a2cb34f4e9f7" Mar 11 09:24:07 crc kubenswrapper[4840]: I0311 09:24:07.779307 4840 scope.go:117] "RemoveContainer" containerID="db1b4e68139a5c264cc99c225a5d17869429e390074a77b4d5a954cfda2abd92" Mar 11 09:24:07 crc kubenswrapper[4840]: I0311 09:24:07.804927 4840 scope.go:117] "RemoveContainer" containerID="c928b472f2542eafc13cd171a09a9a135f9ed04cc8a7a9eb07a835da8146c648" Mar 11 09:24:07 crc kubenswrapper[4840]: I0311 09:24:07.842581 4840 scope.go:117] "RemoveContainer" containerID="3e6fbcf31ada0f36ed6f518ad3411ae695bfb221a54e277c0e2faccd63610af3" Mar 11 09:24:07 crc kubenswrapper[4840]: I0311 09:24:07.878007 4840 scope.go:117] "RemoveContainer" containerID="058efdd9153e199b2781bb56a2799093fa38fbf8d86f0fa7912fbc7f671e203f" Mar 11 09:24:07 crc kubenswrapper[4840]: I0311 09:24:07.896859 4840 scope.go:117] "RemoveContainer" containerID="820aa259a8b7e35284681d77e3f29d049e162d8ecbf5d0f603d802b8267103a7" Mar 11 09:24:07 crc kubenswrapper[4840]: I0311 09:24:07.915637 4840 scope.go:117] "RemoveContainer" containerID="d497f54d8fae9f2a73679effc95823f4140980c021583551b3be7bba6bfbf0a9" Mar 11 09:24:07 crc kubenswrapper[4840]: I0311 09:24:07.934049 4840 
scope.go:117] "RemoveContainer" containerID="9c0b5e241b0b51a690dd9f346219c8bee2d0c8e7039ad56b281f6ca277463616" Mar 11 09:24:07 crc kubenswrapper[4840]: I0311 09:24:07.966747 4840 scope.go:117] "RemoveContainer" containerID="43c4acb8183ccb5e90836bfc3e4bcd3459d9f4424c2deb8ad2b687db74f8be98" Mar 11 09:24:07 crc kubenswrapper[4840]: I0311 09:24:07.993215 4840 scope.go:117] "RemoveContainer" containerID="d21c0f90e51840f3f77422c2bb05bbcae4d2b8d27155ff011bccb70548bf78f9" Mar 11 09:24:08 crc kubenswrapper[4840]: I0311 09:24:08.035377 4840 scope.go:117] "RemoveContainer" containerID="3df4f12dfe2d857d0c6efd1fb792b32e835e818cf6e3f66e38349596457c709f" Mar 11 09:24:08 crc kubenswrapper[4840]: I0311 09:24:08.083525 4840 scope.go:117] "RemoveContainer" containerID="b9fdfd8bf27758ee9513c698619fd2a58d3ef811e44b476641b6d47eafc6a701" Mar 11 09:24:08 crc kubenswrapper[4840]: I0311 09:24:08.104666 4840 scope.go:117] "RemoveContainer" containerID="272158c7dca509954e755fb757e669d86d1ba331be7966e095475284418f966b" Mar 11 09:24:17 crc kubenswrapper[4840]: I0311 09:24:17.060122 4840 scope.go:117] "RemoveContainer" containerID="169d1d8ff4c62b0ae7bd3b2cf09a5479966d8e2e448439c79c97cc33b9792a4f" Mar 11 09:24:17 crc kubenswrapper[4840]: E0311 09:24:17.061039 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 09:24:31 crc kubenswrapper[4840]: I0311 09:24:31.060151 4840 scope.go:117] "RemoveContainer" containerID="169d1d8ff4c62b0ae7bd3b2cf09a5479966d8e2e448439c79c97cc33b9792a4f" Mar 11 09:24:31 crc kubenswrapper[4840]: E0311 09:24:31.060890 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 09:24:43 crc kubenswrapper[4840]: I0311 09:24:43.060821 4840 scope.go:117] "RemoveContainer" containerID="169d1d8ff4c62b0ae7bd3b2cf09a5479966d8e2e448439c79c97cc33b9792a4f" Mar 11 09:24:43 crc kubenswrapper[4840]: E0311 09:24:43.061613 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 09:24:57 crc kubenswrapper[4840]: I0311 09:24:57.059788 4840 scope.go:117] "RemoveContainer" containerID="169d1d8ff4c62b0ae7bd3b2cf09a5479966d8e2e448439c79c97cc33b9792a4f" Mar 11 09:24:57 crc kubenswrapper[4840]: E0311 09:24:57.060549 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 09:25:08 crc kubenswrapper[4840]: I0311 09:25:08.060780 4840 scope.go:117] "RemoveContainer" containerID="169d1d8ff4c62b0ae7bd3b2cf09a5479966d8e2e448439c79c97cc33b9792a4f" Mar 11 09:25:08 crc kubenswrapper[4840]: E0311 09:25:08.062758 4840 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 09:25:08 crc kubenswrapper[4840]: I0311 09:25:08.315741 4840 scope.go:117] "RemoveContainer" containerID="0eda35b74c7024d65999ffc9b92358f1a7d45fd985b08bbc809d93bd30d201d7" Mar 11 09:25:08 crc kubenswrapper[4840]: I0311 09:25:08.337777 4840 scope.go:117] "RemoveContainer" containerID="672eca719856eb310b27bc3eb87b0f51f097ef830a3ce432a15efdb1c9cb003b" Mar 11 09:25:08 crc kubenswrapper[4840]: I0311 09:25:08.382198 4840 scope.go:117] "RemoveContainer" containerID="d90b0fbf2cbe6f6a8079326c29aaed29f17ab5f835d86bb6f9720cfe6a3f2e16" Mar 11 09:25:08 crc kubenswrapper[4840]: I0311 09:25:08.402685 4840 scope.go:117] "RemoveContainer" containerID="b57eefef57ea1e67d05ef7a344edc34490c14c383f807109b3c6f25730ff91f8" Mar 11 09:25:08 crc kubenswrapper[4840]: I0311 09:25:08.428974 4840 scope.go:117] "RemoveContainer" containerID="7090d3ac747a15763e7046444045a8df17215da7e48e042854b41c49b658c877" Mar 11 09:25:08 crc kubenswrapper[4840]: I0311 09:25:08.454428 4840 scope.go:117] "RemoveContainer" containerID="2ad9683900e7600f53df802b0bbd447af349254e001c1e0d18204012a848f901" Mar 11 09:25:08 crc kubenswrapper[4840]: I0311 09:25:08.477359 4840 scope.go:117] "RemoveContainer" containerID="c6f36f6775379a090ca9ce1ac0bd9477183ab08a224e5ad7f14b54317170e04f" Mar 11 09:25:08 crc kubenswrapper[4840]: I0311 09:25:08.498475 4840 scope.go:117] "RemoveContainer" containerID="fa51fa6846a6391fd4d41433bac4cdd3a55817184d7eb6184af7110475d61e48" Mar 11 09:25:08 crc kubenswrapper[4840]: I0311 09:25:08.516699 4840 scope.go:117] "RemoveContainer" containerID="1443dcc20b53c34d6b5983e69576991044f6bc08da4320cceeb036e8ad539edf" Mar 
11 09:25:21 crc kubenswrapper[4840]: I0311 09:25:21.060458 4840 scope.go:117] "RemoveContainer" containerID="169d1d8ff4c62b0ae7bd3b2cf09a5479966d8e2e448439c79c97cc33b9792a4f" Mar 11 09:25:21 crc kubenswrapper[4840]: E0311 09:25:21.061394 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 09:25:35 crc kubenswrapper[4840]: I0311 09:25:35.060063 4840 scope.go:117] "RemoveContainer" containerID="169d1d8ff4c62b0ae7bd3b2cf09a5479966d8e2e448439c79c97cc33b9792a4f" Mar 11 09:25:35 crc kubenswrapper[4840]: E0311 09:25:35.060983 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 09:25:35 crc kubenswrapper[4840]: I0311 09:25:35.116337 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-b8d9p"] Mar 11 09:25:35 crc kubenswrapper[4840]: E0311 09:25:35.116780 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d32f0678-e814-47b7-b976-62548a3fa240" containerName="oc" Mar 11 09:25:35 crc kubenswrapper[4840]: I0311 09:25:35.116807 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="d32f0678-e814-47b7-b976-62548a3fa240" containerName="oc" Mar 11 09:25:35 crc kubenswrapper[4840]: I0311 09:25:35.116995 4840 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="d32f0678-e814-47b7-b976-62548a3fa240" containerName="oc" Mar 11 09:25:35 crc kubenswrapper[4840]: I0311 09:25:35.118261 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-b8d9p" Mar 11 09:25:35 crc kubenswrapper[4840]: I0311 09:25:35.139611 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-b8d9p"] Mar 11 09:25:35 crc kubenswrapper[4840]: I0311 09:25:35.167823 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzhhf\" (UniqueName: \"kubernetes.io/projected/dc824dfc-233b-4613-a7ed-4cb6371a1404-kube-api-access-fzhhf\") pod \"redhat-operators-b8d9p\" (UID: \"dc824dfc-233b-4613-a7ed-4cb6371a1404\") " pod="openshift-marketplace/redhat-operators-b8d9p" Mar 11 09:25:35 crc kubenswrapper[4840]: I0311 09:25:35.167913 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc824dfc-233b-4613-a7ed-4cb6371a1404-utilities\") pod \"redhat-operators-b8d9p\" (UID: \"dc824dfc-233b-4613-a7ed-4cb6371a1404\") " pod="openshift-marketplace/redhat-operators-b8d9p" Mar 11 09:25:35 crc kubenswrapper[4840]: I0311 09:25:35.167991 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc824dfc-233b-4613-a7ed-4cb6371a1404-catalog-content\") pod \"redhat-operators-b8d9p\" (UID: \"dc824dfc-233b-4613-a7ed-4cb6371a1404\") " pod="openshift-marketplace/redhat-operators-b8d9p" Mar 11 09:25:35 crc kubenswrapper[4840]: I0311 09:25:35.269242 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc824dfc-233b-4613-a7ed-4cb6371a1404-catalog-content\") pod \"redhat-operators-b8d9p\" (UID: \"dc824dfc-233b-4613-a7ed-4cb6371a1404\") 
" pod="openshift-marketplace/redhat-operators-b8d9p" Mar 11 09:25:35 crc kubenswrapper[4840]: I0311 09:25:35.269340 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzhhf\" (UniqueName: \"kubernetes.io/projected/dc824dfc-233b-4613-a7ed-4cb6371a1404-kube-api-access-fzhhf\") pod \"redhat-operators-b8d9p\" (UID: \"dc824dfc-233b-4613-a7ed-4cb6371a1404\") " pod="openshift-marketplace/redhat-operators-b8d9p" Mar 11 09:25:35 crc kubenswrapper[4840]: I0311 09:25:35.269381 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc824dfc-233b-4613-a7ed-4cb6371a1404-utilities\") pod \"redhat-operators-b8d9p\" (UID: \"dc824dfc-233b-4613-a7ed-4cb6371a1404\") " pod="openshift-marketplace/redhat-operators-b8d9p" Mar 11 09:25:35 crc kubenswrapper[4840]: I0311 09:25:35.269936 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc824dfc-233b-4613-a7ed-4cb6371a1404-catalog-content\") pod \"redhat-operators-b8d9p\" (UID: \"dc824dfc-233b-4613-a7ed-4cb6371a1404\") " pod="openshift-marketplace/redhat-operators-b8d9p" Mar 11 09:25:35 crc kubenswrapper[4840]: I0311 09:25:35.270028 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc824dfc-233b-4613-a7ed-4cb6371a1404-utilities\") pod \"redhat-operators-b8d9p\" (UID: \"dc824dfc-233b-4613-a7ed-4cb6371a1404\") " pod="openshift-marketplace/redhat-operators-b8d9p" Mar 11 09:25:35 crc kubenswrapper[4840]: I0311 09:25:35.290607 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzhhf\" (UniqueName: \"kubernetes.io/projected/dc824dfc-233b-4613-a7ed-4cb6371a1404-kube-api-access-fzhhf\") pod \"redhat-operators-b8d9p\" (UID: \"dc824dfc-233b-4613-a7ed-4cb6371a1404\") " pod="openshift-marketplace/redhat-operators-b8d9p" Mar 11 09:25:35 
crc kubenswrapper[4840]: I0311 09:25:35.438455 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-b8d9p" Mar 11 09:25:35 crc kubenswrapper[4840]: I0311 09:25:35.944119 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-b8d9p"] Mar 11 09:25:36 crc kubenswrapper[4840]: I0311 09:25:36.887812 4840 generic.go:334] "Generic (PLEG): container finished" podID="dc824dfc-233b-4613-a7ed-4cb6371a1404" containerID="2e4cdbbd1a23343e3330449c076ade4a1724b39a0c95cb360e3d1b1d2ac061e2" exitCode=0 Mar 11 09:25:36 crc kubenswrapper[4840]: I0311 09:25:36.887875 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b8d9p" event={"ID":"dc824dfc-233b-4613-a7ed-4cb6371a1404","Type":"ContainerDied","Data":"2e4cdbbd1a23343e3330449c076ade4a1724b39a0c95cb360e3d1b1d2ac061e2"} Mar 11 09:25:36 crc kubenswrapper[4840]: I0311 09:25:36.887912 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b8d9p" event={"ID":"dc824dfc-233b-4613-a7ed-4cb6371a1404","Type":"ContainerStarted","Data":"0eff9d7dfec2ffb8ba768a38de668a0720d403f19b2f4571a2abd3d1501bbd68"} Mar 11 09:25:45 crc kubenswrapper[4840]: I0311 09:25:45.973848 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b8d9p" event={"ID":"dc824dfc-233b-4613-a7ed-4cb6371a1404","Type":"ContainerStarted","Data":"35a94e93431fbf3f7e45e5d9af30ce0b4575be57f3e9b7084334c413af548f66"} Mar 11 09:25:46 crc kubenswrapper[4840]: I0311 09:25:46.982073 4840 generic.go:334] "Generic (PLEG): container finished" podID="dc824dfc-233b-4613-a7ed-4cb6371a1404" containerID="35a94e93431fbf3f7e45e5d9af30ce0b4575be57f3e9b7084334c413af548f66" exitCode=0 Mar 11 09:25:46 crc kubenswrapper[4840]: I0311 09:25:46.982123 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b8d9p" 
event={"ID":"dc824dfc-233b-4613-a7ed-4cb6371a1404","Type":"ContainerDied","Data":"35a94e93431fbf3f7e45e5d9af30ce0b4575be57f3e9b7084334c413af548f66"} Mar 11 09:25:47 crc kubenswrapper[4840]: I0311 09:25:47.993060 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b8d9p" event={"ID":"dc824dfc-233b-4613-a7ed-4cb6371a1404","Type":"ContainerStarted","Data":"69c4aa36d7af45e1a0a704966a476937ccd8540547fb4761038cffc6837f6a74"} Mar 11 09:25:48 crc kubenswrapper[4840]: I0311 09:25:48.016832 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-b8d9p" podStartSLOduration=2.211133681 podStartE2EDuration="13.016805534s" podCreationTimestamp="2026-03-11 09:25:35 +0000 UTC" firstStartedPulling="2026-03-11 09:25:36.889412425 +0000 UTC m=+1735.555082240" lastFinishedPulling="2026-03-11 09:25:47.695084278 +0000 UTC m=+1746.360754093" observedRunningTime="2026-03-11 09:25:48.0122741 +0000 UTC m=+1746.677943915" watchObservedRunningTime="2026-03-11 09:25:48.016805534 +0000 UTC m=+1746.682475349" Mar 11 09:25:48 crc kubenswrapper[4840]: I0311 09:25:48.117270 4840 scope.go:117] "RemoveContainer" containerID="169d1d8ff4c62b0ae7bd3b2cf09a5479966d8e2e448439c79c97cc33b9792a4f" Mar 11 09:25:48 crc kubenswrapper[4840]: E0311 09:25:48.117508 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 09:25:55 crc kubenswrapper[4840]: I0311 09:25:55.439668 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-b8d9p" Mar 11 09:25:55 crc 
kubenswrapper[4840]: I0311 09:25:55.440277 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-b8d9p" Mar 11 09:25:56 crc kubenswrapper[4840]: I0311 09:25:56.493985 4840 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-b8d9p" podUID="dc824dfc-233b-4613-a7ed-4cb6371a1404" containerName="registry-server" probeResult="failure" output=< Mar 11 09:25:56 crc kubenswrapper[4840]: timeout: failed to connect service ":50051" within 1s Mar 11 09:25:56 crc kubenswrapper[4840]: > Mar 11 09:26:00 crc kubenswrapper[4840]: I0311 09:26:00.060332 4840 scope.go:117] "RemoveContainer" containerID="169d1d8ff4c62b0ae7bd3b2cf09a5479966d8e2e448439c79c97cc33b9792a4f" Mar 11 09:26:00 crc kubenswrapper[4840]: E0311 09:26:00.060958 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 09:26:00 crc kubenswrapper[4840]: I0311 09:26:00.166260 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29553686-wpkpx"] Mar 11 09:26:00 crc kubenswrapper[4840]: I0311 09:26:00.167482 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553686-wpkpx" Mar 11 09:26:00 crc kubenswrapper[4840]: I0311 09:26:00.170349 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-q6lwc" Mar 11 09:26:00 crc kubenswrapper[4840]: I0311 09:26:00.170722 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 11 09:26:00 crc kubenswrapper[4840]: I0311 09:26:00.214877 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 11 09:26:00 crc kubenswrapper[4840]: I0311 09:26:00.265635 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553686-wpkpx"] Mar 11 09:26:00 crc kubenswrapper[4840]: I0311 09:26:00.315413 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bn446\" (UniqueName: \"kubernetes.io/projected/a3d08069-fc56-45d2-bce0-246ce578db77-kube-api-access-bn446\") pod \"auto-csr-approver-29553686-wpkpx\" (UID: \"a3d08069-fc56-45d2-bce0-246ce578db77\") " pod="openshift-infra/auto-csr-approver-29553686-wpkpx" Mar 11 09:26:00 crc kubenswrapper[4840]: I0311 09:26:00.417070 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bn446\" (UniqueName: \"kubernetes.io/projected/a3d08069-fc56-45d2-bce0-246ce578db77-kube-api-access-bn446\") pod \"auto-csr-approver-29553686-wpkpx\" (UID: \"a3d08069-fc56-45d2-bce0-246ce578db77\") " pod="openshift-infra/auto-csr-approver-29553686-wpkpx" Mar 11 09:26:00 crc kubenswrapper[4840]: I0311 09:26:00.446512 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bn446\" (UniqueName: \"kubernetes.io/projected/a3d08069-fc56-45d2-bce0-246ce578db77-kube-api-access-bn446\") pod \"auto-csr-approver-29553686-wpkpx\" (UID: \"a3d08069-fc56-45d2-bce0-246ce578db77\") " 
pod="openshift-infra/auto-csr-approver-29553686-wpkpx" Mar 11 09:26:00 crc kubenswrapper[4840]: I0311 09:26:00.503990 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553686-wpkpx" Mar 11 09:26:00 crc kubenswrapper[4840]: I0311 09:26:00.934187 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553686-wpkpx"] Mar 11 09:26:01 crc kubenswrapper[4840]: I0311 09:26:01.847072 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553686-wpkpx" event={"ID":"a3d08069-fc56-45d2-bce0-246ce578db77","Type":"ContainerStarted","Data":"b35c9b009777f453e0ad537015222db701b96cf8a72e63e0cf4f150c9a0fa5c0"} Mar 11 09:26:05 crc kubenswrapper[4840]: I0311 09:26:05.489518 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-b8d9p" Mar 11 09:26:05 crc kubenswrapper[4840]: I0311 09:26:05.532416 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-b8d9p" Mar 11 09:26:06 crc kubenswrapper[4840]: I0311 09:26:06.223345 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-b8d9p"] Mar 11 09:26:06 crc kubenswrapper[4840]: I0311 09:26:06.320749 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-s4mg2"] Mar 11 09:26:06 crc kubenswrapper[4840]: I0311 09:26:06.321803 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-s4mg2" podUID="9b673938-72a6-421a-9e73-1b2c5226e039" containerName="registry-server" containerID="cri-o://0a29649f67dcc0577e422cc5d667be0bebcab1e1449a75ddf1795c37999fa935" gracePeriod=2 Mar 11 09:26:07 crc kubenswrapper[4840]: I0311 09:26:07.894778 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-infra/auto-csr-approver-29553686-wpkpx" event={"ID":"a3d08069-fc56-45d2-bce0-246ce578db77","Type":"ContainerStarted","Data":"f04903422aff0dbc18066a5f3bbc0944f55f41f875478cd0c5d9db5570bceddc"} Mar 11 09:26:07 crc kubenswrapper[4840]: I0311 09:26:07.898089 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s4mg2" Mar 11 09:26:07 crc kubenswrapper[4840]: I0311 09:26:07.898135 4840 generic.go:334] "Generic (PLEG): container finished" podID="9b673938-72a6-421a-9e73-1b2c5226e039" containerID="0a29649f67dcc0577e422cc5d667be0bebcab1e1449a75ddf1795c37999fa935" exitCode=0 Mar 11 09:26:07 crc kubenswrapper[4840]: I0311 09:26:07.898174 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s4mg2" event={"ID":"9b673938-72a6-421a-9e73-1b2c5226e039","Type":"ContainerDied","Data":"0a29649f67dcc0577e422cc5d667be0bebcab1e1449a75ddf1795c37999fa935"} Mar 11 09:26:07 crc kubenswrapper[4840]: I0311 09:26:07.898210 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s4mg2" event={"ID":"9b673938-72a6-421a-9e73-1b2c5226e039","Type":"ContainerDied","Data":"8675bf4a72bb365e68d22a80037215a18585ccd67521a37c7a23ef85a176b715"} Mar 11 09:26:07 crc kubenswrapper[4840]: I0311 09:26:07.898228 4840 scope.go:117] "RemoveContainer" containerID="0a29649f67dcc0577e422cc5d667be0bebcab1e1449a75ddf1795c37999fa935" Mar 11 09:26:07 crc kubenswrapper[4840]: I0311 09:26:07.909950 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29553686-wpkpx" podStartSLOduration=1.421697434 podStartE2EDuration="7.909924417s" podCreationTimestamp="2026-03-11 09:26:00 +0000 UTC" firstStartedPulling="2026-03-11 09:26:00.93871861 +0000 UTC m=+1759.604388445" lastFinishedPulling="2026-03-11 09:26:07.426945613 +0000 UTC m=+1766.092615428" observedRunningTime="2026-03-11 09:26:07.909832124 
+0000 UTC m=+1766.575501940" watchObservedRunningTime="2026-03-11 09:26:07.909924417 +0000 UTC m=+1766.575594232" Mar 11 09:26:07 crc kubenswrapper[4840]: I0311 09:26:07.918305 4840 scope.go:117] "RemoveContainer" containerID="1cf744cb5728082b3dde17ff7abae47cfaef9f27b480942c38a8cf20042dc85a" Mar 11 09:26:07 crc kubenswrapper[4840]: I0311 09:26:07.944540 4840 scope.go:117] "RemoveContainer" containerID="84fb23694c0178b7becd9dfda59d38252037cebc83c792ecd19eb6c5e9bd8c3e" Mar 11 09:26:07 crc kubenswrapper[4840]: I0311 09:26:07.980639 4840 scope.go:117] "RemoveContainer" containerID="0a29649f67dcc0577e422cc5d667be0bebcab1e1449a75ddf1795c37999fa935" Mar 11 09:26:07 crc kubenswrapper[4840]: E0311 09:26:07.981696 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a29649f67dcc0577e422cc5d667be0bebcab1e1449a75ddf1795c37999fa935\": container with ID starting with 0a29649f67dcc0577e422cc5d667be0bebcab1e1449a75ddf1795c37999fa935 not found: ID does not exist" containerID="0a29649f67dcc0577e422cc5d667be0bebcab1e1449a75ddf1795c37999fa935" Mar 11 09:26:07 crc kubenswrapper[4840]: I0311 09:26:07.981749 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a29649f67dcc0577e422cc5d667be0bebcab1e1449a75ddf1795c37999fa935"} err="failed to get container status \"0a29649f67dcc0577e422cc5d667be0bebcab1e1449a75ddf1795c37999fa935\": rpc error: code = NotFound desc = could not find container \"0a29649f67dcc0577e422cc5d667be0bebcab1e1449a75ddf1795c37999fa935\": container with ID starting with 0a29649f67dcc0577e422cc5d667be0bebcab1e1449a75ddf1795c37999fa935 not found: ID does not exist" Mar 11 09:26:07 crc kubenswrapper[4840]: I0311 09:26:07.981788 4840 scope.go:117] "RemoveContainer" containerID="1cf744cb5728082b3dde17ff7abae47cfaef9f27b480942c38a8cf20042dc85a" Mar 11 09:26:07 crc kubenswrapper[4840]: E0311 09:26:07.982288 4840 log.go:32] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = could not find container \"1cf744cb5728082b3dde17ff7abae47cfaef9f27b480942c38a8cf20042dc85a\": container with ID starting with 1cf744cb5728082b3dde17ff7abae47cfaef9f27b480942c38a8cf20042dc85a not found: ID does not exist" containerID="1cf744cb5728082b3dde17ff7abae47cfaef9f27b480942c38a8cf20042dc85a" Mar 11 09:26:07 crc kubenswrapper[4840]: I0311 09:26:07.982359 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1cf744cb5728082b3dde17ff7abae47cfaef9f27b480942c38a8cf20042dc85a"} err="failed to get container status \"1cf744cb5728082b3dde17ff7abae47cfaef9f27b480942c38a8cf20042dc85a\": rpc error: code = NotFound desc = could not find container \"1cf744cb5728082b3dde17ff7abae47cfaef9f27b480942c38a8cf20042dc85a\": container with ID starting with 1cf744cb5728082b3dde17ff7abae47cfaef9f27b480942c38a8cf20042dc85a not found: ID does not exist" Mar 11 09:26:07 crc kubenswrapper[4840]: I0311 09:26:07.982398 4840 scope.go:117] "RemoveContainer" containerID="84fb23694c0178b7becd9dfda59d38252037cebc83c792ecd19eb6c5e9bd8c3e" Mar 11 09:26:07 crc kubenswrapper[4840]: E0311 09:26:07.982978 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84fb23694c0178b7becd9dfda59d38252037cebc83c792ecd19eb6c5e9bd8c3e\": container with ID starting with 84fb23694c0178b7becd9dfda59d38252037cebc83c792ecd19eb6c5e9bd8c3e not found: ID does not exist" containerID="84fb23694c0178b7becd9dfda59d38252037cebc83c792ecd19eb6c5e9bd8c3e" Mar 11 09:26:07 crc kubenswrapper[4840]: I0311 09:26:07.983026 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84fb23694c0178b7becd9dfda59d38252037cebc83c792ecd19eb6c5e9bd8c3e"} err="failed to get container status \"84fb23694c0178b7becd9dfda59d38252037cebc83c792ecd19eb6c5e9bd8c3e\": rpc error: code = NotFound desc = could not find container 
\"84fb23694c0178b7becd9dfda59d38252037cebc83c792ecd19eb6c5e9bd8c3e\": container with ID starting with 84fb23694c0178b7becd9dfda59d38252037cebc83c792ecd19eb6c5e9bd8c3e not found: ID does not exist" Mar 11 09:26:08 crc kubenswrapper[4840]: I0311 09:26:08.070927 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b673938-72a6-421a-9e73-1b2c5226e039-catalog-content\") pod \"9b673938-72a6-421a-9e73-1b2c5226e039\" (UID: \"9b673938-72a6-421a-9e73-1b2c5226e039\") " Mar 11 09:26:08 crc kubenswrapper[4840]: I0311 09:26:08.071023 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b673938-72a6-421a-9e73-1b2c5226e039-utilities\") pod \"9b673938-72a6-421a-9e73-1b2c5226e039\" (UID: \"9b673938-72a6-421a-9e73-1b2c5226e039\") " Mar 11 09:26:08 crc kubenswrapper[4840]: I0311 09:26:08.071088 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5xpd9\" (UniqueName: \"kubernetes.io/projected/9b673938-72a6-421a-9e73-1b2c5226e039-kube-api-access-5xpd9\") pod \"9b673938-72a6-421a-9e73-1b2c5226e039\" (UID: \"9b673938-72a6-421a-9e73-1b2c5226e039\") " Mar 11 09:26:08 crc kubenswrapper[4840]: I0311 09:26:08.071622 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b673938-72a6-421a-9e73-1b2c5226e039-utilities" (OuterVolumeSpecName: "utilities") pod "9b673938-72a6-421a-9e73-1b2c5226e039" (UID: "9b673938-72a6-421a-9e73-1b2c5226e039"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:26:08 crc kubenswrapper[4840]: I0311 09:26:08.077577 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b673938-72a6-421a-9e73-1b2c5226e039-kube-api-access-5xpd9" (OuterVolumeSpecName: "kube-api-access-5xpd9") pod "9b673938-72a6-421a-9e73-1b2c5226e039" (UID: "9b673938-72a6-421a-9e73-1b2c5226e039"). InnerVolumeSpecName "kube-api-access-5xpd9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:26:08 crc kubenswrapper[4840]: I0311 09:26:08.172582 4840 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b673938-72a6-421a-9e73-1b2c5226e039-utilities\") on node \"crc\" DevicePath \"\"" Mar 11 09:26:08 crc kubenswrapper[4840]: I0311 09:26:08.172619 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5xpd9\" (UniqueName: \"kubernetes.io/projected/9b673938-72a6-421a-9e73-1b2c5226e039-kube-api-access-5xpd9\") on node \"crc\" DevicePath \"\"" Mar 11 09:26:08 crc kubenswrapper[4840]: I0311 09:26:08.186552 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b673938-72a6-421a-9e73-1b2c5226e039-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9b673938-72a6-421a-9e73-1b2c5226e039" (UID: "9b673938-72a6-421a-9e73-1b2c5226e039"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:26:08 crc kubenswrapper[4840]: I0311 09:26:08.274390 4840 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b673938-72a6-421a-9e73-1b2c5226e039-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 11 09:26:08 crc kubenswrapper[4840]: I0311 09:26:08.633615 4840 scope.go:117] "RemoveContainer" containerID="c78a34698d838d68e83ec9791eb567ee6047145f621deb1b5314394e0674c6c5" Mar 11 09:26:08 crc kubenswrapper[4840]: I0311 09:26:08.650997 4840 scope.go:117] "RemoveContainer" containerID="6e805737b5c9e7e0a90405eecbec7fbac37b33b617516eebfd9b44d4a63815b1" Mar 11 09:26:08 crc kubenswrapper[4840]: I0311 09:26:08.699329 4840 scope.go:117] "RemoveContainer" containerID="e8f1da4d0f3e7a2f8bc4b508dafa4919bf88cefa86b7bf10b4d0be8c2a589de6" Mar 11 09:26:08 crc kubenswrapper[4840]: I0311 09:26:08.719170 4840 scope.go:117] "RemoveContainer" containerID="48f647feaba3271860d14aae492462ec4fa6f0b34e03e667ec047f9b3e726377" Mar 11 09:26:08 crc kubenswrapper[4840]: I0311 09:26:08.757061 4840 scope.go:117] "RemoveContainer" containerID="a8b0caed3a79d6574e838b5ca30ae9f7dc0d2e8987551e440fb3bb935fcc2b90" Mar 11 09:26:08 crc kubenswrapper[4840]: I0311 09:26:08.909264 4840 generic.go:334] "Generic (PLEG): container finished" podID="a3d08069-fc56-45d2-bce0-246ce578db77" containerID="f04903422aff0dbc18066a5f3bbc0944f55f41f875478cd0c5d9db5570bceddc" exitCode=0 Mar 11 09:26:08 crc kubenswrapper[4840]: I0311 09:26:08.909422 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-s4mg2" Mar 11 09:26:08 crc kubenswrapper[4840]: I0311 09:26:08.909402 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553686-wpkpx" event={"ID":"a3d08069-fc56-45d2-bce0-246ce578db77","Type":"ContainerDied","Data":"f04903422aff0dbc18066a5f3bbc0944f55f41f875478cd0c5d9db5570bceddc"} Mar 11 09:26:08 crc kubenswrapper[4840]: I0311 09:26:08.997835 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-s4mg2"] Mar 11 09:26:09 crc kubenswrapper[4840]: I0311 09:26:09.008764 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-s4mg2"] Mar 11 09:26:10 crc kubenswrapper[4840]: I0311 09:26:10.073130 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b673938-72a6-421a-9e73-1b2c5226e039" path="/var/lib/kubelet/pods/9b673938-72a6-421a-9e73-1b2c5226e039/volumes" Mar 11 09:26:10 crc kubenswrapper[4840]: I0311 09:26:10.242398 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553686-wpkpx" Mar 11 09:26:10 crc kubenswrapper[4840]: I0311 09:26:10.403907 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bn446\" (UniqueName: \"kubernetes.io/projected/a3d08069-fc56-45d2-bce0-246ce578db77-kube-api-access-bn446\") pod \"a3d08069-fc56-45d2-bce0-246ce578db77\" (UID: \"a3d08069-fc56-45d2-bce0-246ce578db77\") " Mar 11 09:26:10 crc kubenswrapper[4840]: I0311 09:26:10.421135 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3d08069-fc56-45d2-bce0-246ce578db77-kube-api-access-bn446" (OuterVolumeSpecName: "kube-api-access-bn446") pod "a3d08069-fc56-45d2-bce0-246ce578db77" (UID: "a3d08069-fc56-45d2-bce0-246ce578db77"). InnerVolumeSpecName "kube-api-access-bn446". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:26:10 crc kubenswrapper[4840]: I0311 09:26:10.505832 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bn446\" (UniqueName: \"kubernetes.io/projected/a3d08069-fc56-45d2-bce0-246ce578db77-kube-api-access-bn446\") on node \"crc\" DevicePath \"\"" Mar 11 09:26:10 crc kubenswrapper[4840]: I0311 09:26:10.926524 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553686-wpkpx" event={"ID":"a3d08069-fc56-45d2-bce0-246ce578db77","Type":"ContainerDied","Data":"b35c9b009777f453e0ad537015222db701b96cf8a72e63e0cf4f150c9a0fa5c0"} Mar 11 09:26:10 crc kubenswrapper[4840]: I0311 09:26:10.926573 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b35c9b009777f453e0ad537015222db701b96cf8a72e63e0cf4f150c9a0fa5c0" Mar 11 09:26:10 crc kubenswrapper[4840]: I0311 09:26:10.926577 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553686-wpkpx" Mar 11 09:26:10 crc kubenswrapper[4840]: I0311 09:26:10.987107 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29553680-9qvzt"] Mar 11 09:26:10 crc kubenswrapper[4840]: I0311 09:26:10.993493 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29553680-9qvzt"] Mar 11 09:26:12 crc kubenswrapper[4840]: I0311 09:26:12.069194 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86f28307-7fbd-48fc-9d46-81c82922ff2a" path="/var/lib/kubelet/pods/86f28307-7fbd-48fc-9d46-81c82922ff2a/volumes" Mar 11 09:26:15 crc kubenswrapper[4840]: I0311 09:26:15.060146 4840 scope.go:117] "RemoveContainer" containerID="169d1d8ff4c62b0ae7bd3b2cf09a5479966d8e2e448439c79c97cc33b9792a4f" Mar 11 09:26:15 crc kubenswrapper[4840]: E0311 09:26:15.061743 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 09:26:30 crc kubenswrapper[4840]: I0311 09:26:30.060874 4840 scope.go:117] "RemoveContainer" containerID="169d1d8ff4c62b0ae7bd3b2cf09a5479966d8e2e448439c79c97cc33b9792a4f" Mar 11 09:26:30 crc kubenswrapper[4840]: E0311 09:26:30.061645 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 09:26:43 crc kubenswrapper[4840]: I0311 09:26:43.060538 4840 scope.go:117] "RemoveContainer" containerID="169d1d8ff4c62b0ae7bd3b2cf09a5479966d8e2e448439c79c97cc33b9792a4f" Mar 11 09:26:43 crc kubenswrapper[4840]: E0311 09:26:43.061338 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 09:26:54 crc kubenswrapper[4840]: I0311 09:26:54.060632 4840 scope.go:117] "RemoveContainer" containerID="169d1d8ff4c62b0ae7bd3b2cf09a5479966d8e2e448439c79c97cc33b9792a4f" Mar 11 09:26:54 crc kubenswrapper[4840]: E0311 09:26:54.062802 4840 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 09:27:08 crc kubenswrapper[4840]: I0311 09:27:08.905815 4840 scope.go:117] "RemoveContainer" containerID="79d6c360cf6670ce989cf158051911b571a246bf0cfafff3518f20c7eac6b15f" Mar 11 09:27:08 crc kubenswrapper[4840]: I0311 09:27:08.954160 4840 scope.go:117] "RemoveContainer" containerID="147a6e0fe52bd51fcc1ea39e1aa1513f8d5a2df2f4c665e22d2c4a5fbfada6cf" Mar 11 09:27:09 crc kubenswrapper[4840]: I0311 09:27:09.060572 4840 scope.go:117] "RemoveContainer" containerID="169d1d8ff4c62b0ae7bd3b2cf09a5479966d8e2e448439c79c97cc33b9792a4f" Mar 11 09:27:09 crc kubenswrapper[4840]: E0311 09:27:09.061004 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 09:27:20 crc kubenswrapper[4840]: I0311 09:27:20.060808 4840 scope.go:117] "RemoveContainer" containerID="169d1d8ff4c62b0ae7bd3b2cf09a5479966d8e2e448439c79c97cc33b9792a4f" Mar 11 09:27:20 crc kubenswrapper[4840]: E0311 09:27:20.061351 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 09:27:31 crc kubenswrapper[4840]: I0311 09:27:31.061408 4840 scope.go:117] "RemoveContainer" containerID="169d1d8ff4c62b0ae7bd3b2cf09a5479966d8e2e448439c79c97cc33b9792a4f" Mar 11 09:27:31 crc kubenswrapper[4840]: E0311 09:27:31.062290 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 09:27:42 crc kubenswrapper[4840]: I0311 09:27:42.064948 4840 scope.go:117] "RemoveContainer" containerID="169d1d8ff4c62b0ae7bd3b2cf09a5479966d8e2e448439c79c97cc33b9792a4f" Mar 11 09:27:42 crc kubenswrapper[4840]: E0311 09:27:42.065618 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 09:27:57 crc kubenswrapper[4840]: I0311 09:27:57.060170 4840 scope.go:117] "RemoveContainer" containerID="169d1d8ff4c62b0ae7bd3b2cf09a5479966d8e2e448439c79c97cc33b9792a4f" Mar 11 09:27:57 crc kubenswrapper[4840]: E0311 09:27:57.060954 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 09:28:00 crc kubenswrapper[4840]: I0311 09:28:00.144459 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29553688-wqhck"] Mar 11 09:28:00 crc kubenswrapper[4840]: E0311 09:28:00.145212 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3d08069-fc56-45d2-bce0-246ce578db77" containerName="oc" Mar 11 09:28:00 crc kubenswrapper[4840]: I0311 09:28:00.145227 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3d08069-fc56-45d2-bce0-246ce578db77" containerName="oc" Mar 11 09:28:00 crc kubenswrapper[4840]: E0311 09:28:00.145253 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b673938-72a6-421a-9e73-1b2c5226e039" containerName="extract-content" Mar 11 09:28:00 crc kubenswrapper[4840]: I0311 09:28:00.145261 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b673938-72a6-421a-9e73-1b2c5226e039" containerName="extract-content" Mar 11 09:28:00 crc kubenswrapper[4840]: E0311 09:28:00.145274 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b673938-72a6-421a-9e73-1b2c5226e039" containerName="extract-utilities" Mar 11 09:28:00 crc kubenswrapper[4840]: I0311 09:28:00.145281 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b673938-72a6-421a-9e73-1b2c5226e039" containerName="extract-utilities" Mar 11 09:28:00 crc kubenswrapper[4840]: E0311 09:28:00.145294 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b673938-72a6-421a-9e73-1b2c5226e039" containerName="registry-server" Mar 11 09:28:00 crc kubenswrapper[4840]: I0311 09:28:00.145300 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b673938-72a6-421a-9e73-1b2c5226e039" containerName="registry-server" Mar 11 09:28:00 crc kubenswrapper[4840]: 
I0311 09:28:00.145427 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3d08069-fc56-45d2-bce0-246ce578db77" containerName="oc" Mar 11 09:28:00 crc kubenswrapper[4840]: I0311 09:28:00.145445 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b673938-72a6-421a-9e73-1b2c5226e039" containerName="registry-server" Mar 11 09:28:00 crc kubenswrapper[4840]: I0311 09:28:00.146189 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553688-wqhck" Mar 11 09:28:00 crc kubenswrapper[4840]: I0311 09:28:00.150053 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 11 09:28:00 crc kubenswrapper[4840]: I0311 09:28:00.150378 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-q6lwc" Mar 11 09:28:00 crc kubenswrapper[4840]: I0311 09:28:00.150622 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 11 09:28:00 crc kubenswrapper[4840]: I0311 09:28:00.156714 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553688-wqhck"] Mar 11 09:28:00 crc kubenswrapper[4840]: I0311 09:28:00.292224 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlklz\" (UniqueName: \"kubernetes.io/projected/2296cd25-d175-4f2e-863c-ddd56b7b9c42-kube-api-access-dlklz\") pod \"auto-csr-approver-29553688-wqhck\" (UID: \"2296cd25-d175-4f2e-863c-ddd56b7b9c42\") " pod="openshift-infra/auto-csr-approver-29553688-wqhck" Mar 11 09:28:00 crc kubenswrapper[4840]: I0311 09:28:00.393644 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlklz\" (UniqueName: \"kubernetes.io/projected/2296cd25-d175-4f2e-863c-ddd56b7b9c42-kube-api-access-dlklz\") pod \"auto-csr-approver-29553688-wqhck\" 
(UID: \"2296cd25-d175-4f2e-863c-ddd56b7b9c42\") " pod="openshift-infra/auto-csr-approver-29553688-wqhck" Mar 11 09:28:00 crc kubenswrapper[4840]: I0311 09:28:00.413355 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlklz\" (UniqueName: \"kubernetes.io/projected/2296cd25-d175-4f2e-863c-ddd56b7b9c42-kube-api-access-dlklz\") pod \"auto-csr-approver-29553688-wqhck\" (UID: \"2296cd25-d175-4f2e-863c-ddd56b7b9c42\") " pod="openshift-infra/auto-csr-approver-29553688-wqhck" Mar 11 09:28:00 crc kubenswrapper[4840]: I0311 09:28:00.861051 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553688-wqhck" Mar 11 09:28:01 crc kubenswrapper[4840]: I0311 09:28:01.289422 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553688-wqhck"] Mar 11 09:28:01 crc kubenswrapper[4840]: I0311 09:28:01.310441 4840 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 11 09:28:02 crc kubenswrapper[4840]: I0311 09:28:02.123401 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553688-wqhck" event={"ID":"2296cd25-d175-4f2e-863c-ddd56b7b9c42","Type":"ContainerStarted","Data":"8cb83e8eb8d31ee7a558398921a198c316c1f520478be2d20a5259af3c0ee967"} Mar 11 09:28:03 crc kubenswrapper[4840]: I0311 09:28:03.135216 4840 generic.go:334] "Generic (PLEG): container finished" podID="2296cd25-d175-4f2e-863c-ddd56b7b9c42" containerID="bc2c527670bee77b724c800e5ac574cb32766e25648caad4ac95de83f4142a7f" exitCode=0 Mar 11 09:28:03 crc kubenswrapper[4840]: I0311 09:28:03.135280 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553688-wqhck" event={"ID":"2296cd25-d175-4f2e-863c-ddd56b7b9c42","Type":"ContainerDied","Data":"bc2c527670bee77b724c800e5ac574cb32766e25648caad4ac95de83f4142a7f"} Mar 11 09:28:04 crc kubenswrapper[4840]: I0311 
09:28:04.411781 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553688-wqhck" Mar 11 09:28:04 crc kubenswrapper[4840]: I0311 09:28:04.516087 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dlklz\" (UniqueName: \"kubernetes.io/projected/2296cd25-d175-4f2e-863c-ddd56b7b9c42-kube-api-access-dlklz\") pod \"2296cd25-d175-4f2e-863c-ddd56b7b9c42\" (UID: \"2296cd25-d175-4f2e-863c-ddd56b7b9c42\") " Mar 11 09:28:04 crc kubenswrapper[4840]: I0311 09:28:04.522673 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2296cd25-d175-4f2e-863c-ddd56b7b9c42-kube-api-access-dlklz" (OuterVolumeSpecName: "kube-api-access-dlklz") pod "2296cd25-d175-4f2e-863c-ddd56b7b9c42" (UID: "2296cd25-d175-4f2e-863c-ddd56b7b9c42"). InnerVolumeSpecName "kube-api-access-dlklz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:28:04 crc kubenswrapper[4840]: I0311 09:28:04.617603 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dlklz\" (UniqueName: \"kubernetes.io/projected/2296cd25-d175-4f2e-863c-ddd56b7b9c42-kube-api-access-dlklz\") on node \"crc\" DevicePath \"\"" Mar 11 09:28:05 crc kubenswrapper[4840]: I0311 09:28:05.151435 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553688-wqhck" event={"ID":"2296cd25-d175-4f2e-863c-ddd56b7b9c42","Type":"ContainerDied","Data":"8cb83e8eb8d31ee7a558398921a198c316c1f520478be2d20a5259af3c0ee967"} Mar 11 09:28:05 crc kubenswrapper[4840]: I0311 09:28:05.151502 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8cb83e8eb8d31ee7a558398921a198c316c1f520478be2d20a5259af3c0ee967" Mar 11 09:28:05 crc kubenswrapper[4840]: I0311 09:28:05.151572 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553688-wqhck" Mar 11 09:28:05 crc kubenswrapper[4840]: I0311 09:28:05.493779 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29553682-fct6w"] Mar 11 09:28:05 crc kubenswrapper[4840]: I0311 09:28:05.502148 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29553682-fct6w"] Mar 11 09:28:06 crc kubenswrapper[4840]: I0311 09:28:06.072970 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56b21cc0-0f51-4347-a0db-845f66ec531e" path="/var/lib/kubelet/pods/56b21cc0-0f51-4347-a0db-845f66ec531e/volumes" Mar 11 09:28:09 crc kubenswrapper[4840]: I0311 09:28:09.029396 4840 scope.go:117] "RemoveContainer" containerID="6cd7f23e0f6c16605b0e7d514cc4c9935f7c468c7c874b187448685d002cf147" Mar 11 09:28:10 crc kubenswrapper[4840]: I0311 09:28:10.059906 4840 scope.go:117] "RemoveContainer" containerID="169d1d8ff4c62b0ae7bd3b2cf09a5479966d8e2e448439c79c97cc33b9792a4f" Mar 11 09:28:11 crc kubenswrapper[4840]: I0311 09:28:11.200174 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-brtht" event={"ID":"8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d","Type":"ContainerStarted","Data":"51be40a909991ab2de96eb0cd37759289fa5a2f24f6847c6cdb91971787a0806"} Mar 11 09:30:00 crc kubenswrapper[4840]: I0311 09:30:00.155193 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29553690-bqhn5"] Mar 11 09:30:00 crc kubenswrapper[4840]: E0311 09:30:00.156225 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2296cd25-d175-4f2e-863c-ddd56b7b9c42" containerName="oc" Mar 11 09:30:00 crc kubenswrapper[4840]: I0311 09:30:00.156257 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="2296cd25-d175-4f2e-863c-ddd56b7b9c42" containerName="oc" Mar 11 09:30:00 crc kubenswrapper[4840]: I0311 09:30:00.156423 4840 
memory_manager.go:354] "RemoveStaleState removing state" podUID="2296cd25-d175-4f2e-863c-ddd56b7b9c42" containerName="oc" Mar 11 09:30:00 crc kubenswrapper[4840]: I0311 09:30:00.156940 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553690-bqhn5" Mar 11 09:30:00 crc kubenswrapper[4840]: I0311 09:30:00.160005 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 11 09:30:00 crc kubenswrapper[4840]: I0311 09:30:00.162120 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-q6lwc" Mar 11 09:30:00 crc kubenswrapper[4840]: I0311 09:30:00.167675 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29553690-xj2xk"] Mar 11 09:30:00 crc kubenswrapper[4840]: I0311 09:30:00.169103 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29553690-xj2xk" Mar 11 09:30:00 crc kubenswrapper[4840]: I0311 09:30:00.173594 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 11 09:30:00 crc kubenswrapper[4840]: I0311 09:30:00.173754 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 11 09:30:00 crc kubenswrapper[4840]: I0311 09:30:00.173597 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 11 09:30:00 crc kubenswrapper[4840]: I0311 09:30:00.178333 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553690-bqhn5"] Mar 11 09:30:00 crc kubenswrapper[4840]: I0311 09:30:00.186335 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operator-lifecycle-manager/collect-profiles-29553690-xj2xk"] Mar 11 09:30:00 crc kubenswrapper[4840]: I0311 09:30:00.202636 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c55cdb33-3e8f-4e53-94e4-e6355c63f05f-secret-volume\") pod \"collect-profiles-29553690-xj2xk\" (UID: \"c55cdb33-3e8f-4e53-94e4-e6355c63f05f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553690-xj2xk" Mar 11 09:30:00 crc kubenswrapper[4840]: I0311 09:30:00.202733 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vtgw\" (UniqueName: \"kubernetes.io/projected/c868a997-6786-459e-ad2c-293d1613ea1f-kube-api-access-6vtgw\") pod \"auto-csr-approver-29553690-bqhn5\" (UID: \"c868a997-6786-459e-ad2c-293d1613ea1f\") " pod="openshift-infra/auto-csr-approver-29553690-bqhn5" Mar 11 09:30:00 crc kubenswrapper[4840]: I0311 09:30:00.202767 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c55cdb33-3e8f-4e53-94e4-e6355c63f05f-config-volume\") pod \"collect-profiles-29553690-xj2xk\" (UID: \"c55cdb33-3e8f-4e53-94e4-e6355c63f05f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553690-xj2xk" Mar 11 09:30:00 crc kubenswrapper[4840]: I0311 09:30:00.202815 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bctbw\" (UniqueName: \"kubernetes.io/projected/c55cdb33-3e8f-4e53-94e4-e6355c63f05f-kube-api-access-bctbw\") pod \"collect-profiles-29553690-xj2xk\" (UID: \"c55cdb33-3e8f-4e53-94e4-e6355c63f05f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553690-xj2xk" Mar 11 09:30:00 crc kubenswrapper[4840]: I0311 09:30:00.305569 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"secret-volume\" (UniqueName: \"kubernetes.io/secret/c55cdb33-3e8f-4e53-94e4-e6355c63f05f-secret-volume\") pod \"collect-profiles-29553690-xj2xk\" (UID: \"c55cdb33-3e8f-4e53-94e4-e6355c63f05f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553690-xj2xk" Mar 11 09:30:00 crc kubenswrapper[4840]: I0311 09:30:00.305688 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vtgw\" (UniqueName: \"kubernetes.io/projected/c868a997-6786-459e-ad2c-293d1613ea1f-kube-api-access-6vtgw\") pod \"auto-csr-approver-29553690-bqhn5\" (UID: \"c868a997-6786-459e-ad2c-293d1613ea1f\") " pod="openshift-infra/auto-csr-approver-29553690-bqhn5" Mar 11 09:30:00 crc kubenswrapper[4840]: I0311 09:30:00.305727 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c55cdb33-3e8f-4e53-94e4-e6355c63f05f-config-volume\") pod \"collect-profiles-29553690-xj2xk\" (UID: \"c55cdb33-3e8f-4e53-94e4-e6355c63f05f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553690-xj2xk" Mar 11 09:30:00 crc kubenswrapper[4840]: I0311 09:30:00.305766 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bctbw\" (UniqueName: \"kubernetes.io/projected/c55cdb33-3e8f-4e53-94e4-e6355c63f05f-kube-api-access-bctbw\") pod \"collect-profiles-29553690-xj2xk\" (UID: \"c55cdb33-3e8f-4e53-94e4-e6355c63f05f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553690-xj2xk" Mar 11 09:30:00 crc kubenswrapper[4840]: I0311 09:30:00.307573 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c55cdb33-3e8f-4e53-94e4-e6355c63f05f-config-volume\") pod \"collect-profiles-29553690-xj2xk\" (UID: \"c55cdb33-3e8f-4e53-94e4-e6355c63f05f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553690-xj2xk" Mar 11 09:30:00 crc kubenswrapper[4840]: 
I0311 09:30:00.313035 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c55cdb33-3e8f-4e53-94e4-e6355c63f05f-secret-volume\") pod \"collect-profiles-29553690-xj2xk\" (UID: \"c55cdb33-3e8f-4e53-94e4-e6355c63f05f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553690-xj2xk" Mar 11 09:30:00 crc kubenswrapper[4840]: I0311 09:30:00.325165 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bctbw\" (UniqueName: \"kubernetes.io/projected/c55cdb33-3e8f-4e53-94e4-e6355c63f05f-kube-api-access-bctbw\") pod \"collect-profiles-29553690-xj2xk\" (UID: \"c55cdb33-3e8f-4e53-94e4-e6355c63f05f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553690-xj2xk" Mar 11 09:30:00 crc kubenswrapper[4840]: I0311 09:30:00.327650 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vtgw\" (UniqueName: \"kubernetes.io/projected/c868a997-6786-459e-ad2c-293d1613ea1f-kube-api-access-6vtgw\") pod \"auto-csr-approver-29553690-bqhn5\" (UID: \"c868a997-6786-459e-ad2c-293d1613ea1f\") " pod="openshift-infra/auto-csr-approver-29553690-bqhn5" Mar 11 09:30:00 crc kubenswrapper[4840]: I0311 09:30:00.476985 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553690-bqhn5" Mar 11 09:30:00 crc kubenswrapper[4840]: I0311 09:30:00.493173 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29553690-xj2xk" Mar 11 09:30:00 crc kubenswrapper[4840]: I0311 09:30:00.944091 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29553690-xj2xk"] Mar 11 09:30:01 crc kubenswrapper[4840]: W0311 09:30:01.015638 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc868a997_6786_459e_ad2c_293d1613ea1f.slice/crio-90c38c10e4b87274fbb7d33ef90d28f4951b8f3698738a70d0bcec0ffabbefb1 WatchSource:0}: Error finding container 90c38c10e4b87274fbb7d33ef90d28f4951b8f3698738a70d0bcec0ffabbefb1: Status 404 returned error can't find the container with id 90c38c10e4b87274fbb7d33ef90d28f4951b8f3698738a70d0bcec0ffabbefb1 Mar 11 09:30:01 crc kubenswrapper[4840]: I0311 09:30:01.015768 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553690-bqhn5"] Mar 11 09:30:01 crc kubenswrapper[4840]: I0311 09:30:01.263294 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553690-bqhn5" event={"ID":"c868a997-6786-459e-ad2c-293d1613ea1f","Type":"ContainerStarted","Data":"90c38c10e4b87274fbb7d33ef90d28f4951b8f3698738a70d0bcec0ffabbefb1"} Mar 11 09:30:01 crc kubenswrapper[4840]: I0311 09:30:01.266301 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29553690-xj2xk" event={"ID":"c55cdb33-3e8f-4e53-94e4-e6355c63f05f","Type":"ContainerStarted","Data":"1d4a46590805a5d818600a0dc78f39d1f2dbb075c4638ec1fdc3f4967fb8f548"} Mar 11 09:30:01 crc kubenswrapper[4840]: I0311 09:30:01.266421 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29553690-xj2xk" 
event={"ID":"c55cdb33-3e8f-4e53-94e4-e6355c63f05f","Type":"ContainerStarted","Data":"b34db349130372070689c16074c6c80c1041b24c2d61a96f8cf76ca7ab3fdcc4"} Mar 11 09:30:02 crc kubenswrapper[4840]: I0311 09:30:02.275797 4840 generic.go:334] "Generic (PLEG): container finished" podID="c55cdb33-3e8f-4e53-94e4-e6355c63f05f" containerID="1d4a46590805a5d818600a0dc78f39d1f2dbb075c4638ec1fdc3f4967fb8f548" exitCode=0 Mar 11 09:30:02 crc kubenswrapper[4840]: I0311 09:30:02.275897 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29553690-xj2xk" event={"ID":"c55cdb33-3e8f-4e53-94e4-e6355c63f05f","Type":"ContainerDied","Data":"1d4a46590805a5d818600a0dc78f39d1f2dbb075c4638ec1fdc3f4967fb8f548"} Mar 11 09:30:02 crc kubenswrapper[4840]: I0311 09:30:02.514859 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29553690-xj2xk" Mar 11 09:30:02 crc kubenswrapper[4840]: I0311 09:30:02.558248 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c55cdb33-3e8f-4e53-94e4-e6355c63f05f-config-volume\") pod \"c55cdb33-3e8f-4e53-94e4-e6355c63f05f\" (UID: \"c55cdb33-3e8f-4e53-94e4-e6355c63f05f\") " Mar 11 09:30:02 crc kubenswrapper[4840]: I0311 09:30:02.558459 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c55cdb33-3e8f-4e53-94e4-e6355c63f05f-secret-volume\") pod \"c55cdb33-3e8f-4e53-94e4-e6355c63f05f\" (UID: \"c55cdb33-3e8f-4e53-94e4-e6355c63f05f\") " Mar 11 09:30:02 crc kubenswrapper[4840]: I0311 09:30:02.558522 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bctbw\" (UniqueName: \"kubernetes.io/projected/c55cdb33-3e8f-4e53-94e4-e6355c63f05f-kube-api-access-bctbw\") pod \"c55cdb33-3e8f-4e53-94e4-e6355c63f05f\" (UID: 
\"c55cdb33-3e8f-4e53-94e4-e6355c63f05f\") " Mar 11 09:30:02 crc kubenswrapper[4840]: I0311 09:30:02.559823 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c55cdb33-3e8f-4e53-94e4-e6355c63f05f-config-volume" (OuterVolumeSpecName: "config-volume") pod "c55cdb33-3e8f-4e53-94e4-e6355c63f05f" (UID: "c55cdb33-3e8f-4e53-94e4-e6355c63f05f"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:30:02 crc kubenswrapper[4840]: I0311 09:30:02.563971 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c55cdb33-3e8f-4e53-94e4-e6355c63f05f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "c55cdb33-3e8f-4e53-94e4-e6355c63f05f" (UID: "c55cdb33-3e8f-4e53-94e4-e6355c63f05f"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:30:02 crc kubenswrapper[4840]: I0311 09:30:02.568795 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c55cdb33-3e8f-4e53-94e4-e6355c63f05f-kube-api-access-bctbw" (OuterVolumeSpecName: "kube-api-access-bctbw") pod "c55cdb33-3e8f-4e53-94e4-e6355c63f05f" (UID: "c55cdb33-3e8f-4e53-94e4-e6355c63f05f"). InnerVolumeSpecName "kube-api-access-bctbw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:30:02 crc kubenswrapper[4840]: I0311 09:30:02.659902 4840 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c55cdb33-3e8f-4e53-94e4-e6355c63f05f-config-volume\") on node \"crc\" DevicePath \"\"" Mar 11 09:30:02 crc kubenswrapper[4840]: I0311 09:30:02.659951 4840 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c55cdb33-3e8f-4e53-94e4-e6355c63f05f-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 11 09:30:02 crc kubenswrapper[4840]: I0311 09:30:02.659966 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bctbw\" (UniqueName: \"kubernetes.io/projected/c55cdb33-3e8f-4e53-94e4-e6355c63f05f-kube-api-access-bctbw\") on node \"crc\" DevicePath \"\"" Mar 11 09:30:03 crc kubenswrapper[4840]: I0311 09:30:03.284907 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29553690-xj2xk" event={"ID":"c55cdb33-3e8f-4e53-94e4-e6355c63f05f","Type":"ContainerDied","Data":"b34db349130372070689c16074c6c80c1041b24c2d61a96f8cf76ca7ab3fdcc4"} Mar 11 09:30:03 crc kubenswrapper[4840]: I0311 09:30:03.284956 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b34db349130372070689c16074c6c80c1041b24c2d61a96f8cf76ca7ab3fdcc4" Mar 11 09:30:03 crc kubenswrapper[4840]: I0311 09:30:03.285024 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29553690-xj2xk" Mar 11 09:30:03 crc kubenswrapper[4840]: I0311 09:30:03.592842 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29553645-gw77z"] Mar 11 09:30:03 crc kubenswrapper[4840]: I0311 09:30:03.602972 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29553645-gw77z"] Mar 11 09:30:04 crc kubenswrapper[4840]: I0311 09:30:04.080280 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a821fe36-3bdd-4b59-9dd4-004985404023" path="/var/lib/kubelet/pods/a821fe36-3bdd-4b59-9dd4-004985404023/volumes" Mar 11 09:30:05 crc kubenswrapper[4840]: I0311 09:30:05.302631 4840 generic.go:334] "Generic (PLEG): container finished" podID="c868a997-6786-459e-ad2c-293d1613ea1f" containerID="2e3dba699bf3fc1c8137f5035578a5b10f9857cd0cb3a34fdd111537a298f0c6" exitCode=0 Mar 11 09:30:05 crc kubenswrapper[4840]: I0311 09:30:05.302896 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553690-bqhn5" event={"ID":"c868a997-6786-459e-ad2c-293d1613ea1f","Type":"ContainerDied","Data":"2e3dba699bf3fc1c8137f5035578a5b10f9857cd0cb3a34fdd111537a298f0c6"} Mar 11 09:30:05 crc kubenswrapper[4840]: I0311 09:30:05.501419 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rpknw"] Mar 11 09:30:05 crc kubenswrapper[4840]: E0311 09:30:05.501795 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c55cdb33-3e8f-4e53-94e4-e6355c63f05f" containerName="collect-profiles" Mar 11 09:30:05 crc kubenswrapper[4840]: I0311 09:30:05.501810 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="c55cdb33-3e8f-4e53-94e4-e6355c63f05f" containerName="collect-profiles" Mar 11 09:30:05 crc kubenswrapper[4840]: I0311 09:30:05.501965 4840 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="c55cdb33-3e8f-4e53-94e4-e6355c63f05f" containerName="collect-profiles" Mar 11 09:30:05 crc kubenswrapper[4840]: I0311 09:30:05.503181 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rpknw" Mar 11 09:30:05 crc kubenswrapper[4840]: I0311 09:30:05.516647 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rpknw"] Mar 11 09:30:05 crc kubenswrapper[4840]: I0311 09:30:05.603552 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bccdd6d4-0ed7-4942-964e-0ce37ffbac6d-catalog-content\") pod \"certified-operators-rpknw\" (UID: \"bccdd6d4-0ed7-4942-964e-0ce37ffbac6d\") " pod="openshift-marketplace/certified-operators-rpknw" Mar 11 09:30:05 crc kubenswrapper[4840]: I0311 09:30:05.603770 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5g8lc\" (UniqueName: \"kubernetes.io/projected/bccdd6d4-0ed7-4942-964e-0ce37ffbac6d-kube-api-access-5g8lc\") pod \"certified-operators-rpknw\" (UID: \"bccdd6d4-0ed7-4942-964e-0ce37ffbac6d\") " pod="openshift-marketplace/certified-operators-rpknw" Mar 11 09:30:05 crc kubenswrapper[4840]: I0311 09:30:05.603902 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bccdd6d4-0ed7-4942-964e-0ce37ffbac6d-utilities\") pod \"certified-operators-rpknw\" (UID: \"bccdd6d4-0ed7-4942-964e-0ce37ffbac6d\") " pod="openshift-marketplace/certified-operators-rpknw" Mar 11 09:30:05 crc kubenswrapper[4840]: I0311 09:30:05.705938 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bccdd6d4-0ed7-4942-964e-0ce37ffbac6d-utilities\") pod \"certified-operators-rpknw\" (UID: 
\"bccdd6d4-0ed7-4942-964e-0ce37ffbac6d\") " pod="openshift-marketplace/certified-operators-rpknw" Mar 11 09:30:05 crc kubenswrapper[4840]: I0311 09:30:05.706035 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bccdd6d4-0ed7-4942-964e-0ce37ffbac6d-catalog-content\") pod \"certified-operators-rpknw\" (UID: \"bccdd6d4-0ed7-4942-964e-0ce37ffbac6d\") " pod="openshift-marketplace/certified-operators-rpknw" Mar 11 09:30:05 crc kubenswrapper[4840]: I0311 09:30:05.706094 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5g8lc\" (UniqueName: \"kubernetes.io/projected/bccdd6d4-0ed7-4942-964e-0ce37ffbac6d-kube-api-access-5g8lc\") pod \"certified-operators-rpknw\" (UID: \"bccdd6d4-0ed7-4942-964e-0ce37ffbac6d\") " pod="openshift-marketplace/certified-operators-rpknw" Mar 11 09:30:05 crc kubenswrapper[4840]: I0311 09:30:05.706564 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bccdd6d4-0ed7-4942-964e-0ce37ffbac6d-utilities\") pod \"certified-operators-rpknw\" (UID: \"bccdd6d4-0ed7-4942-964e-0ce37ffbac6d\") " pod="openshift-marketplace/certified-operators-rpknw" Mar 11 09:30:05 crc kubenswrapper[4840]: I0311 09:30:05.706677 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bccdd6d4-0ed7-4942-964e-0ce37ffbac6d-catalog-content\") pod \"certified-operators-rpknw\" (UID: \"bccdd6d4-0ed7-4942-964e-0ce37ffbac6d\") " pod="openshift-marketplace/certified-operators-rpknw" Mar 11 09:30:05 crc kubenswrapper[4840]: I0311 09:30:05.729047 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5g8lc\" (UniqueName: \"kubernetes.io/projected/bccdd6d4-0ed7-4942-964e-0ce37ffbac6d-kube-api-access-5g8lc\") pod \"certified-operators-rpknw\" (UID: 
\"bccdd6d4-0ed7-4942-964e-0ce37ffbac6d\") " pod="openshift-marketplace/certified-operators-rpknw" Mar 11 09:30:05 crc kubenswrapper[4840]: I0311 09:30:05.821688 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rpknw" Mar 11 09:30:06 crc kubenswrapper[4840]: I0311 09:30:06.102762 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rpknw"] Mar 11 09:30:06 crc kubenswrapper[4840]: I0311 09:30:06.311906 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rpknw" event={"ID":"bccdd6d4-0ed7-4942-964e-0ce37ffbac6d","Type":"ContainerStarted","Data":"dfcea16254ee8d71516e1d5383863f25bd4e672790be9325bbe263783cb1d602"} Mar 11 09:30:06 crc kubenswrapper[4840]: I0311 09:30:06.312272 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rpknw" event={"ID":"bccdd6d4-0ed7-4942-964e-0ce37ffbac6d","Type":"ContainerStarted","Data":"8547ae2cd464351aa6b71c37cf1c53ebb0c5d2b56eb2bfaac4bfb125219eafd5"} Mar 11 09:30:06 crc kubenswrapper[4840]: I0311 09:30:06.575066 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553690-bqhn5" Mar 11 09:30:06 crc kubenswrapper[4840]: I0311 09:30:06.624710 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6vtgw\" (UniqueName: \"kubernetes.io/projected/c868a997-6786-459e-ad2c-293d1613ea1f-kube-api-access-6vtgw\") pod \"c868a997-6786-459e-ad2c-293d1613ea1f\" (UID: \"c868a997-6786-459e-ad2c-293d1613ea1f\") " Mar 11 09:30:06 crc kubenswrapper[4840]: I0311 09:30:06.630988 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c868a997-6786-459e-ad2c-293d1613ea1f-kube-api-access-6vtgw" (OuterVolumeSpecName: "kube-api-access-6vtgw") pod "c868a997-6786-459e-ad2c-293d1613ea1f" (UID: "c868a997-6786-459e-ad2c-293d1613ea1f"). InnerVolumeSpecName "kube-api-access-6vtgw". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:30:06 crc kubenswrapper[4840]: I0311 09:30:06.726132 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6vtgw\" (UniqueName: \"kubernetes.io/projected/c868a997-6786-459e-ad2c-293d1613ea1f-kube-api-access-6vtgw\") on node \"crc\" DevicePath \"\"" Mar 11 09:30:07 crc kubenswrapper[4840]: I0311 09:30:07.321167 4840 generic.go:334] "Generic (PLEG): container finished" podID="bccdd6d4-0ed7-4942-964e-0ce37ffbac6d" containerID="dfcea16254ee8d71516e1d5383863f25bd4e672790be9325bbe263783cb1d602" exitCode=0 Mar 11 09:30:07 crc kubenswrapper[4840]: I0311 09:30:07.321259 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rpknw" event={"ID":"bccdd6d4-0ed7-4942-964e-0ce37ffbac6d","Type":"ContainerDied","Data":"dfcea16254ee8d71516e1d5383863f25bd4e672790be9325bbe263783cb1d602"} Mar 11 09:30:07 crc kubenswrapper[4840]: I0311 09:30:07.322848 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553690-bqhn5" 
event={"ID":"c868a997-6786-459e-ad2c-293d1613ea1f","Type":"ContainerDied","Data":"90c38c10e4b87274fbb7d33ef90d28f4951b8f3698738a70d0bcec0ffabbefb1"} Mar 11 09:30:07 crc kubenswrapper[4840]: I0311 09:30:07.322898 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="90c38c10e4b87274fbb7d33ef90d28f4951b8f3698738a70d0bcec0ffabbefb1" Mar 11 09:30:07 crc kubenswrapper[4840]: I0311 09:30:07.322900 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553690-bqhn5" Mar 11 09:30:07 crc kubenswrapper[4840]: I0311 09:30:07.639713 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29553684-h7krd"] Mar 11 09:30:07 crc kubenswrapper[4840]: I0311 09:30:07.659858 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29553684-h7krd"] Mar 11 09:30:08 crc kubenswrapper[4840]: I0311 09:30:08.069302 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d32f0678-e814-47b7-b976-62548a3fa240" path="/var/lib/kubelet/pods/d32f0678-e814-47b7-b976-62548a3fa240/volumes" Mar 11 09:30:08 crc kubenswrapper[4840]: I0311 09:30:08.331965 4840 generic.go:334] "Generic (PLEG): container finished" podID="bccdd6d4-0ed7-4942-964e-0ce37ffbac6d" containerID="7d4659403fc64efc7330f6c2e5c122d9f7132f7483433d794536bcbbd3fc38c1" exitCode=0 Mar 11 09:30:08 crc kubenswrapper[4840]: I0311 09:30:08.332017 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rpknw" event={"ID":"bccdd6d4-0ed7-4942-964e-0ce37ffbac6d","Type":"ContainerDied","Data":"7d4659403fc64efc7330f6c2e5c122d9f7132f7483433d794536bcbbd3fc38c1"} Mar 11 09:30:09 crc kubenswrapper[4840]: I0311 09:30:09.109126 4840 scope.go:117] "RemoveContainer" containerID="1c62c186fe302f42e736484571b96c27d7c45e50ba11521b771084700eb3ff0d" Mar 11 09:30:09 crc kubenswrapper[4840]: I0311 09:30:09.150694 4840 
scope.go:117] "RemoveContainer" containerID="51416be935c7d35e2e945c8d54b076b719c0e25edf4fc27069b07ab19ca93b38" Mar 11 09:30:09 crc kubenswrapper[4840]: I0311 09:30:09.341128 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rpknw" event={"ID":"bccdd6d4-0ed7-4942-964e-0ce37ffbac6d","Type":"ContainerStarted","Data":"894462cbd1f3a9f4e9756537becd1f889520a676b63167f282b3f3a92d6dba76"} Mar 11 09:30:09 crc kubenswrapper[4840]: I0311 09:30:09.368141 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rpknw" podStartSLOduration=2.890386541 podStartE2EDuration="4.368116453s" podCreationTimestamp="2026-03-11 09:30:05 +0000 UTC" firstStartedPulling="2026-03-11 09:30:07.323724528 +0000 UTC m=+2005.989394343" lastFinishedPulling="2026-03-11 09:30:08.80145444 +0000 UTC m=+2007.467124255" observedRunningTime="2026-03-11 09:30:09.363325722 +0000 UTC m=+2008.028995537" watchObservedRunningTime="2026-03-11 09:30:09.368116453 +0000 UTC m=+2008.033786258" Mar 11 09:30:15 crc kubenswrapper[4840]: I0311 09:30:15.822604 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rpknw" Mar 11 09:30:15 crc kubenswrapper[4840]: I0311 09:30:15.824648 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rpknw" Mar 11 09:30:15 crc kubenswrapper[4840]: I0311 09:30:15.864806 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rpknw" Mar 11 09:30:16 crc kubenswrapper[4840]: I0311 09:30:16.436140 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rpknw" Mar 11 09:30:16 crc kubenswrapper[4840]: I0311 09:30:16.487767 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-rpknw"] Mar 11 09:30:18 crc kubenswrapper[4840]: I0311 09:30:18.399603 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rpknw" podUID="bccdd6d4-0ed7-4942-964e-0ce37ffbac6d" containerName="registry-server" containerID="cri-o://894462cbd1f3a9f4e9756537becd1f889520a676b63167f282b3f3a92d6dba76" gracePeriod=2 Mar 11 09:30:18 crc kubenswrapper[4840]: I0311 09:30:18.760084 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rpknw" Mar 11 09:30:18 crc kubenswrapper[4840]: I0311 09:30:18.961792 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bccdd6d4-0ed7-4942-964e-0ce37ffbac6d-utilities\") pod \"bccdd6d4-0ed7-4942-964e-0ce37ffbac6d\" (UID: \"bccdd6d4-0ed7-4942-964e-0ce37ffbac6d\") " Mar 11 09:30:18 crc kubenswrapper[4840]: I0311 09:30:18.962019 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bccdd6d4-0ed7-4942-964e-0ce37ffbac6d-catalog-content\") pod \"bccdd6d4-0ed7-4942-964e-0ce37ffbac6d\" (UID: \"bccdd6d4-0ed7-4942-964e-0ce37ffbac6d\") " Mar 11 09:30:18 crc kubenswrapper[4840]: I0311 09:30:18.962089 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5g8lc\" (UniqueName: \"kubernetes.io/projected/bccdd6d4-0ed7-4942-964e-0ce37ffbac6d-kube-api-access-5g8lc\") pod \"bccdd6d4-0ed7-4942-964e-0ce37ffbac6d\" (UID: \"bccdd6d4-0ed7-4942-964e-0ce37ffbac6d\") " Mar 11 09:30:18 crc kubenswrapper[4840]: I0311 09:30:18.963293 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bccdd6d4-0ed7-4942-964e-0ce37ffbac6d-utilities" (OuterVolumeSpecName: "utilities") pod "bccdd6d4-0ed7-4942-964e-0ce37ffbac6d" (UID: 
"bccdd6d4-0ed7-4942-964e-0ce37ffbac6d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:30:18 crc kubenswrapper[4840]: I0311 09:30:18.968281 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bccdd6d4-0ed7-4942-964e-0ce37ffbac6d-kube-api-access-5g8lc" (OuterVolumeSpecName: "kube-api-access-5g8lc") pod "bccdd6d4-0ed7-4942-964e-0ce37ffbac6d" (UID: "bccdd6d4-0ed7-4942-964e-0ce37ffbac6d"). InnerVolumeSpecName "kube-api-access-5g8lc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:30:19 crc kubenswrapper[4840]: I0311 09:30:19.064455 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5g8lc\" (UniqueName: \"kubernetes.io/projected/bccdd6d4-0ed7-4942-964e-0ce37ffbac6d-kube-api-access-5g8lc\") on node \"crc\" DevicePath \"\"" Mar 11 09:30:19 crc kubenswrapper[4840]: I0311 09:30:19.064535 4840 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bccdd6d4-0ed7-4942-964e-0ce37ffbac6d-utilities\") on node \"crc\" DevicePath \"\"" Mar 11 09:30:19 crc kubenswrapper[4840]: I0311 09:30:19.408544 4840 generic.go:334] "Generic (PLEG): container finished" podID="bccdd6d4-0ed7-4942-964e-0ce37ffbac6d" containerID="894462cbd1f3a9f4e9756537becd1f889520a676b63167f282b3f3a92d6dba76" exitCode=0 Mar 11 09:30:19 crc kubenswrapper[4840]: I0311 09:30:19.408629 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rpknw" event={"ID":"bccdd6d4-0ed7-4942-964e-0ce37ffbac6d","Type":"ContainerDied","Data":"894462cbd1f3a9f4e9756537becd1f889520a676b63167f282b3f3a92d6dba76"} Mar 11 09:30:19 crc kubenswrapper[4840]: I0311 09:30:19.409703 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rpknw" 
event={"ID":"bccdd6d4-0ed7-4942-964e-0ce37ffbac6d","Type":"ContainerDied","Data":"8547ae2cd464351aa6b71c37cf1c53ebb0c5d2b56eb2bfaac4bfb125219eafd5"} Mar 11 09:30:19 crc kubenswrapper[4840]: I0311 09:30:19.409732 4840 scope.go:117] "RemoveContainer" containerID="894462cbd1f3a9f4e9756537becd1f889520a676b63167f282b3f3a92d6dba76" Mar 11 09:30:19 crc kubenswrapper[4840]: I0311 09:30:19.408746 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rpknw" Mar 11 09:30:19 crc kubenswrapper[4840]: I0311 09:30:19.429538 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bccdd6d4-0ed7-4942-964e-0ce37ffbac6d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bccdd6d4-0ed7-4942-964e-0ce37ffbac6d" (UID: "bccdd6d4-0ed7-4942-964e-0ce37ffbac6d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:30:19 crc kubenswrapper[4840]: I0311 09:30:19.430779 4840 scope.go:117] "RemoveContainer" containerID="7d4659403fc64efc7330f6c2e5c122d9f7132f7483433d794536bcbbd3fc38c1" Mar 11 09:30:19 crc kubenswrapper[4840]: I0311 09:30:19.448989 4840 scope.go:117] "RemoveContainer" containerID="dfcea16254ee8d71516e1d5383863f25bd4e672790be9325bbe263783cb1d602" Mar 11 09:30:19 crc kubenswrapper[4840]: I0311 09:30:19.471260 4840 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bccdd6d4-0ed7-4942-964e-0ce37ffbac6d-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 11 09:30:19 crc kubenswrapper[4840]: I0311 09:30:19.473225 4840 scope.go:117] "RemoveContainer" containerID="894462cbd1f3a9f4e9756537becd1f889520a676b63167f282b3f3a92d6dba76" Mar 11 09:30:19 crc kubenswrapper[4840]: E0311 09:30:19.473766 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"894462cbd1f3a9f4e9756537becd1f889520a676b63167f282b3f3a92d6dba76\": container with ID starting with 894462cbd1f3a9f4e9756537becd1f889520a676b63167f282b3f3a92d6dba76 not found: ID does not exist" containerID="894462cbd1f3a9f4e9756537becd1f889520a676b63167f282b3f3a92d6dba76" Mar 11 09:30:19 crc kubenswrapper[4840]: I0311 09:30:19.473801 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"894462cbd1f3a9f4e9756537becd1f889520a676b63167f282b3f3a92d6dba76"} err="failed to get container status \"894462cbd1f3a9f4e9756537becd1f889520a676b63167f282b3f3a92d6dba76\": rpc error: code = NotFound desc = could not find container \"894462cbd1f3a9f4e9756537becd1f889520a676b63167f282b3f3a92d6dba76\": container with ID starting with 894462cbd1f3a9f4e9756537becd1f889520a676b63167f282b3f3a92d6dba76 not found: ID does not exist" Mar 11 09:30:19 crc kubenswrapper[4840]: I0311 09:30:19.473821 4840 scope.go:117] "RemoveContainer" containerID="7d4659403fc64efc7330f6c2e5c122d9f7132f7483433d794536bcbbd3fc38c1" Mar 11 09:30:19 crc kubenswrapper[4840]: E0311 09:30:19.474285 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d4659403fc64efc7330f6c2e5c122d9f7132f7483433d794536bcbbd3fc38c1\": container with ID starting with 7d4659403fc64efc7330f6c2e5c122d9f7132f7483433d794536bcbbd3fc38c1 not found: ID does not exist" containerID="7d4659403fc64efc7330f6c2e5c122d9f7132f7483433d794536bcbbd3fc38c1" Mar 11 09:30:19 crc kubenswrapper[4840]: I0311 09:30:19.474315 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d4659403fc64efc7330f6c2e5c122d9f7132f7483433d794536bcbbd3fc38c1"} err="failed to get container status \"7d4659403fc64efc7330f6c2e5c122d9f7132f7483433d794536bcbbd3fc38c1\": rpc error: code = NotFound desc = could not find container \"7d4659403fc64efc7330f6c2e5c122d9f7132f7483433d794536bcbbd3fc38c1\": container with ID 
starting with 7d4659403fc64efc7330f6c2e5c122d9f7132f7483433d794536bcbbd3fc38c1 not found: ID does not exist" Mar 11 09:30:19 crc kubenswrapper[4840]: I0311 09:30:19.474335 4840 scope.go:117] "RemoveContainer" containerID="dfcea16254ee8d71516e1d5383863f25bd4e672790be9325bbe263783cb1d602" Mar 11 09:30:19 crc kubenswrapper[4840]: E0311 09:30:19.474750 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dfcea16254ee8d71516e1d5383863f25bd4e672790be9325bbe263783cb1d602\": container with ID starting with dfcea16254ee8d71516e1d5383863f25bd4e672790be9325bbe263783cb1d602 not found: ID does not exist" containerID="dfcea16254ee8d71516e1d5383863f25bd4e672790be9325bbe263783cb1d602" Mar 11 09:30:19 crc kubenswrapper[4840]: I0311 09:30:19.474776 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dfcea16254ee8d71516e1d5383863f25bd4e672790be9325bbe263783cb1d602"} err="failed to get container status \"dfcea16254ee8d71516e1d5383863f25bd4e672790be9325bbe263783cb1d602\": rpc error: code = NotFound desc = could not find container \"dfcea16254ee8d71516e1d5383863f25bd4e672790be9325bbe263783cb1d602\": container with ID starting with dfcea16254ee8d71516e1d5383863f25bd4e672790be9325bbe263783cb1d602 not found: ID does not exist" Mar 11 09:30:19 crc kubenswrapper[4840]: I0311 09:30:19.742875 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rpknw"] Mar 11 09:30:19 crc kubenswrapper[4840]: I0311 09:30:19.748623 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rpknw"] Mar 11 09:30:20 crc kubenswrapper[4840]: I0311 09:30:20.072311 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bccdd6d4-0ed7-4942-964e-0ce37ffbac6d" path="/var/lib/kubelet/pods/bccdd6d4-0ed7-4942-964e-0ce37ffbac6d/volumes" Mar 11 09:30:27 crc kubenswrapper[4840]: I0311 
09:30:27.445843 4840 patch_prober.go:28] interesting pod/machine-config-daemon-brtht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 11 09:30:27 crc kubenswrapper[4840]: I0311 09:30:27.446502 4840 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 11 09:30:57 crc kubenswrapper[4840]: I0311 09:30:57.446642 4840 patch_prober.go:28] interesting pod/machine-config-daemon-brtht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 11 09:30:57 crc kubenswrapper[4840]: I0311 09:30:57.447710 4840 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 11 09:31:27 crc kubenswrapper[4840]: I0311 09:31:27.446168 4840 patch_prober.go:28] interesting pod/machine-config-daemon-brtht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 11 09:31:27 crc kubenswrapper[4840]: I0311 09:31:27.446808 4840 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brtht" 
podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 11 09:31:27 crc kubenswrapper[4840]: I0311 09:31:27.446868 4840 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-brtht" Mar 11 09:31:27 crc kubenswrapper[4840]: I0311 09:31:27.448043 4840 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"51be40a909991ab2de96eb0cd37759289fa5a2f24f6847c6cdb91971787a0806"} pod="openshift-machine-config-operator/machine-config-daemon-brtht" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 11 09:31:27 crc kubenswrapper[4840]: I0311 09:31:27.448114 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" containerName="machine-config-daemon" containerID="cri-o://51be40a909991ab2de96eb0cd37759289fa5a2f24f6847c6cdb91971787a0806" gracePeriod=600 Mar 11 09:31:27 crc kubenswrapper[4840]: I0311 09:31:27.878329 4840 generic.go:334] "Generic (PLEG): container finished" podID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" containerID="51be40a909991ab2de96eb0cd37759289fa5a2f24f6847c6cdb91971787a0806" exitCode=0 Mar 11 09:31:27 crc kubenswrapper[4840]: I0311 09:31:27.878416 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-brtht" event={"ID":"8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d","Type":"ContainerDied","Data":"51be40a909991ab2de96eb0cd37759289fa5a2f24f6847c6cdb91971787a0806"} Mar 11 09:31:27 crc kubenswrapper[4840]: I0311 09:31:27.878740 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-brtht" event={"ID":"8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d","Type":"ContainerStarted","Data":"c710991db870febe1fd3f356ee1dfce0cb83fdbd36fda869f2a6eb6f7ae8c1a2"} Mar 11 09:31:27 crc kubenswrapper[4840]: I0311 09:31:27.878777 4840 scope.go:117] "RemoveContainer" containerID="169d1d8ff4c62b0ae7bd3b2cf09a5479966d8e2e448439c79c97cc33b9792a4f" Mar 11 09:32:00 crc kubenswrapper[4840]: I0311 09:32:00.139075 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29553692-c2k8x"] Mar 11 09:32:00 crc kubenswrapper[4840]: E0311 09:32:00.140034 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bccdd6d4-0ed7-4942-964e-0ce37ffbac6d" containerName="extract-content" Mar 11 09:32:00 crc kubenswrapper[4840]: I0311 09:32:00.140052 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="bccdd6d4-0ed7-4942-964e-0ce37ffbac6d" containerName="extract-content" Mar 11 09:32:00 crc kubenswrapper[4840]: E0311 09:32:00.140077 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bccdd6d4-0ed7-4942-964e-0ce37ffbac6d" containerName="extract-utilities" Mar 11 09:32:00 crc kubenswrapper[4840]: I0311 09:32:00.140085 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="bccdd6d4-0ed7-4942-964e-0ce37ffbac6d" containerName="extract-utilities" Mar 11 09:32:00 crc kubenswrapper[4840]: E0311 09:32:00.140101 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c868a997-6786-459e-ad2c-293d1613ea1f" containerName="oc" Mar 11 09:32:00 crc kubenswrapper[4840]: I0311 09:32:00.140110 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="c868a997-6786-459e-ad2c-293d1613ea1f" containerName="oc" Mar 11 09:32:00 crc kubenswrapper[4840]: E0311 09:32:00.140125 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bccdd6d4-0ed7-4942-964e-0ce37ffbac6d" containerName="registry-server" Mar 11 09:32:00 crc kubenswrapper[4840]: I0311 
09:32:00.140133 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="bccdd6d4-0ed7-4942-964e-0ce37ffbac6d" containerName="registry-server" Mar 11 09:32:00 crc kubenswrapper[4840]: I0311 09:32:00.140301 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="c868a997-6786-459e-ad2c-293d1613ea1f" containerName="oc" Mar 11 09:32:00 crc kubenswrapper[4840]: I0311 09:32:00.140314 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="bccdd6d4-0ed7-4942-964e-0ce37ffbac6d" containerName="registry-server" Mar 11 09:32:00 crc kubenswrapper[4840]: I0311 09:32:00.140895 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553692-c2k8x" Mar 11 09:32:00 crc kubenswrapper[4840]: I0311 09:32:00.147799 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553692-c2k8x"] Mar 11 09:32:00 crc kubenswrapper[4840]: I0311 09:32:00.149245 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 11 09:32:00 crc kubenswrapper[4840]: I0311 09:32:00.149483 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 11 09:32:00 crc kubenswrapper[4840]: I0311 09:32:00.149642 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-q6lwc" Mar 11 09:32:00 crc kubenswrapper[4840]: I0311 09:32:00.262659 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57zfs\" (UniqueName: \"kubernetes.io/projected/e3ca333e-8898-426e-8f56-ee17d810bc6f-kube-api-access-57zfs\") pod \"auto-csr-approver-29553692-c2k8x\" (UID: \"e3ca333e-8898-426e-8f56-ee17d810bc6f\") " pod="openshift-infra/auto-csr-approver-29553692-c2k8x" Mar 11 09:32:00 crc kubenswrapper[4840]: I0311 09:32:00.364734 4840 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-57zfs\" (UniqueName: \"kubernetes.io/projected/e3ca333e-8898-426e-8f56-ee17d810bc6f-kube-api-access-57zfs\") pod \"auto-csr-approver-29553692-c2k8x\" (UID: \"e3ca333e-8898-426e-8f56-ee17d810bc6f\") " pod="openshift-infra/auto-csr-approver-29553692-c2k8x" Mar 11 09:32:00 crc kubenswrapper[4840]: I0311 09:32:00.387448 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57zfs\" (UniqueName: \"kubernetes.io/projected/e3ca333e-8898-426e-8f56-ee17d810bc6f-kube-api-access-57zfs\") pod \"auto-csr-approver-29553692-c2k8x\" (UID: \"e3ca333e-8898-426e-8f56-ee17d810bc6f\") " pod="openshift-infra/auto-csr-approver-29553692-c2k8x" Mar 11 09:32:00 crc kubenswrapper[4840]: I0311 09:32:00.461229 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553692-c2k8x" Mar 11 09:32:00 crc kubenswrapper[4840]: I0311 09:32:00.894103 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553692-c2k8x"] Mar 11 09:32:01 crc kubenswrapper[4840]: I0311 09:32:01.103854 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553692-c2k8x" event={"ID":"e3ca333e-8898-426e-8f56-ee17d810bc6f","Type":"ContainerStarted","Data":"6bd3f201b49b7eaca8415d924e9a15016037459281e7c939f9cd81cf04a085b4"} Mar 11 09:32:03 crc kubenswrapper[4840]: I0311 09:32:03.150625 4840 generic.go:334] "Generic (PLEG): container finished" podID="e3ca333e-8898-426e-8f56-ee17d810bc6f" containerID="78d26a727e54ccccef444909c40cc731ae71d06771c5b3e5067ce654fa1019d5" exitCode=0 Mar 11 09:32:03 crc kubenswrapper[4840]: I0311 09:32:03.150672 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553692-c2k8x" event={"ID":"e3ca333e-8898-426e-8f56-ee17d810bc6f","Type":"ContainerDied","Data":"78d26a727e54ccccef444909c40cc731ae71d06771c5b3e5067ce654fa1019d5"} Mar 11 09:32:04 crc 
kubenswrapper[4840]: I0311 09:32:04.425560 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553692-c2k8x" Mar 11 09:32:04 crc kubenswrapper[4840]: I0311 09:32:04.528336 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-57zfs\" (UniqueName: \"kubernetes.io/projected/e3ca333e-8898-426e-8f56-ee17d810bc6f-kube-api-access-57zfs\") pod \"e3ca333e-8898-426e-8f56-ee17d810bc6f\" (UID: \"e3ca333e-8898-426e-8f56-ee17d810bc6f\") " Mar 11 09:32:04 crc kubenswrapper[4840]: I0311 09:32:04.534818 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3ca333e-8898-426e-8f56-ee17d810bc6f-kube-api-access-57zfs" (OuterVolumeSpecName: "kube-api-access-57zfs") pod "e3ca333e-8898-426e-8f56-ee17d810bc6f" (UID: "e3ca333e-8898-426e-8f56-ee17d810bc6f"). InnerVolumeSpecName "kube-api-access-57zfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:32:04 crc kubenswrapper[4840]: I0311 09:32:04.630557 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-57zfs\" (UniqueName: \"kubernetes.io/projected/e3ca333e-8898-426e-8f56-ee17d810bc6f-kube-api-access-57zfs\") on node \"crc\" DevicePath \"\"" Mar 11 09:32:05 crc kubenswrapper[4840]: I0311 09:32:05.166093 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553692-c2k8x" event={"ID":"e3ca333e-8898-426e-8f56-ee17d810bc6f","Type":"ContainerDied","Data":"6bd3f201b49b7eaca8415d924e9a15016037459281e7c939f9cd81cf04a085b4"} Mar 11 09:32:05 crc kubenswrapper[4840]: I0311 09:32:05.166139 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6bd3f201b49b7eaca8415d924e9a15016037459281e7c939f9cd81cf04a085b4" Mar 11 09:32:05 crc kubenswrapper[4840]: I0311 09:32:05.166156 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553692-c2k8x" Mar 11 09:32:05 crc kubenswrapper[4840]: I0311 09:32:05.513387 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29553686-wpkpx"] Mar 11 09:32:05 crc kubenswrapper[4840]: I0311 09:32:05.519838 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29553686-wpkpx"] Mar 11 09:32:06 crc kubenswrapper[4840]: I0311 09:32:06.071106 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3d08069-fc56-45d2-bce0-246ce578db77" path="/var/lib/kubelet/pods/a3d08069-fc56-45d2-bce0-246ce578db77/volumes" Mar 11 09:32:09 crc kubenswrapper[4840]: I0311 09:32:09.234678 4840 scope.go:117] "RemoveContainer" containerID="f04903422aff0dbc18066a5f3bbc0944f55f41f875478cd0c5d9db5570bceddc" Mar 11 09:33:02 crc kubenswrapper[4840]: I0311 09:33:02.587450 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-lfsdf"] Mar 11 09:33:02 crc kubenswrapper[4840]: E0311 09:33:02.588266 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3ca333e-8898-426e-8f56-ee17d810bc6f" containerName="oc" Mar 11 09:33:02 crc kubenswrapper[4840]: I0311 09:33:02.588278 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3ca333e-8898-426e-8f56-ee17d810bc6f" containerName="oc" Mar 11 09:33:02 crc kubenswrapper[4840]: I0311 09:33:02.588449 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3ca333e-8898-426e-8f56-ee17d810bc6f" containerName="oc" Mar 11 09:33:02 crc kubenswrapper[4840]: I0311 09:33:02.589482 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lfsdf" Mar 11 09:33:02 crc kubenswrapper[4840]: I0311 09:33:02.602053 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lfsdf"] Mar 11 09:33:02 crc kubenswrapper[4840]: I0311 09:33:02.761951 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00ea06eb-f368-4c88-ba2d-eb4a0600ae61-catalog-content\") pod \"redhat-marketplace-lfsdf\" (UID: \"00ea06eb-f368-4c88-ba2d-eb4a0600ae61\") " pod="openshift-marketplace/redhat-marketplace-lfsdf" Mar 11 09:33:02 crc kubenswrapper[4840]: I0311 09:33:02.762001 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00ea06eb-f368-4c88-ba2d-eb4a0600ae61-utilities\") pod \"redhat-marketplace-lfsdf\" (UID: \"00ea06eb-f368-4c88-ba2d-eb4a0600ae61\") " pod="openshift-marketplace/redhat-marketplace-lfsdf" Mar 11 09:33:02 crc kubenswrapper[4840]: I0311 09:33:02.762578 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dllf\" (UniqueName: \"kubernetes.io/projected/00ea06eb-f368-4c88-ba2d-eb4a0600ae61-kube-api-access-9dllf\") pod \"redhat-marketplace-lfsdf\" (UID: \"00ea06eb-f368-4c88-ba2d-eb4a0600ae61\") " pod="openshift-marketplace/redhat-marketplace-lfsdf" Mar 11 09:33:02 crc kubenswrapper[4840]: I0311 09:33:02.863620 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00ea06eb-f368-4c88-ba2d-eb4a0600ae61-catalog-content\") pod \"redhat-marketplace-lfsdf\" (UID: \"00ea06eb-f368-4c88-ba2d-eb4a0600ae61\") " pod="openshift-marketplace/redhat-marketplace-lfsdf" Mar 11 09:33:02 crc kubenswrapper[4840]: I0311 09:33:02.863675 4840 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00ea06eb-f368-4c88-ba2d-eb4a0600ae61-utilities\") pod \"redhat-marketplace-lfsdf\" (UID: \"00ea06eb-f368-4c88-ba2d-eb4a0600ae61\") " pod="openshift-marketplace/redhat-marketplace-lfsdf" Mar 11 09:33:02 crc kubenswrapper[4840]: I0311 09:33:02.863714 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dllf\" (UniqueName: \"kubernetes.io/projected/00ea06eb-f368-4c88-ba2d-eb4a0600ae61-kube-api-access-9dllf\") pod \"redhat-marketplace-lfsdf\" (UID: \"00ea06eb-f368-4c88-ba2d-eb4a0600ae61\") " pod="openshift-marketplace/redhat-marketplace-lfsdf" Mar 11 09:33:02 crc kubenswrapper[4840]: I0311 09:33:02.864283 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00ea06eb-f368-4c88-ba2d-eb4a0600ae61-catalog-content\") pod \"redhat-marketplace-lfsdf\" (UID: \"00ea06eb-f368-4c88-ba2d-eb4a0600ae61\") " pod="openshift-marketplace/redhat-marketplace-lfsdf" Mar 11 09:33:02 crc kubenswrapper[4840]: I0311 09:33:02.864311 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00ea06eb-f368-4c88-ba2d-eb4a0600ae61-utilities\") pod \"redhat-marketplace-lfsdf\" (UID: \"00ea06eb-f368-4c88-ba2d-eb4a0600ae61\") " pod="openshift-marketplace/redhat-marketplace-lfsdf" Mar 11 09:33:02 crc kubenswrapper[4840]: I0311 09:33:02.882229 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dllf\" (UniqueName: \"kubernetes.io/projected/00ea06eb-f368-4c88-ba2d-eb4a0600ae61-kube-api-access-9dllf\") pod \"redhat-marketplace-lfsdf\" (UID: \"00ea06eb-f368-4c88-ba2d-eb4a0600ae61\") " pod="openshift-marketplace/redhat-marketplace-lfsdf" Mar 11 09:33:02 crc kubenswrapper[4840]: I0311 09:33:02.906973 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lfsdf" Mar 11 09:33:03 crc kubenswrapper[4840]: I0311 09:33:03.332655 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lfsdf"] Mar 11 09:33:03 crc kubenswrapper[4840]: I0311 09:33:03.556196 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lfsdf" event={"ID":"00ea06eb-f368-4c88-ba2d-eb4a0600ae61","Type":"ContainerStarted","Data":"280ec40e18334e3254d3f44ef00e144faa82b47c0e89c7eb8661f88d1bd59b69"} Mar 11 09:33:04 crc kubenswrapper[4840]: I0311 09:33:04.564249 4840 generic.go:334] "Generic (PLEG): container finished" podID="00ea06eb-f368-4c88-ba2d-eb4a0600ae61" containerID="7432e35e91829e61e32b326733c2f3dcae3a5f61ccd999abffe8ee632c7423b3" exitCode=0 Mar 11 09:33:04 crc kubenswrapper[4840]: I0311 09:33:04.564582 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lfsdf" event={"ID":"00ea06eb-f368-4c88-ba2d-eb4a0600ae61","Type":"ContainerDied","Data":"7432e35e91829e61e32b326733c2f3dcae3a5f61ccd999abffe8ee632c7423b3"} Mar 11 09:33:04 crc kubenswrapper[4840]: I0311 09:33:04.568004 4840 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 11 09:33:05 crc kubenswrapper[4840]: I0311 09:33:05.573317 4840 generic.go:334] "Generic (PLEG): container finished" podID="00ea06eb-f368-4c88-ba2d-eb4a0600ae61" containerID="a9e97657375ce0349f0bf6e34d0494c3ab910c0d22bad5378ebf8cef6043127a" exitCode=0 Mar 11 09:33:05 crc kubenswrapper[4840]: I0311 09:33:05.573369 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lfsdf" event={"ID":"00ea06eb-f368-4c88-ba2d-eb4a0600ae61","Type":"ContainerDied","Data":"a9e97657375ce0349f0bf6e34d0494c3ab910c0d22bad5378ebf8cef6043127a"} Mar 11 09:33:06 crc kubenswrapper[4840]: I0311 09:33:06.582553 4840 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/redhat-marketplace-lfsdf" event={"ID":"00ea06eb-f368-4c88-ba2d-eb4a0600ae61","Type":"ContainerStarted","Data":"08ddbaff91cd0c042cb4d793cebb4a859a854334b53b7bd304b5c1cf4b42351d"} Mar 11 09:33:06 crc kubenswrapper[4840]: I0311 09:33:06.607106 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-lfsdf" podStartSLOduration=3.062572733 podStartE2EDuration="4.607077107s" podCreationTimestamp="2026-03-11 09:33:02 +0000 UTC" firstStartedPulling="2026-03-11 09:33:04.567740649 +0000 UTC m=+2183.233410464" lastFinishedPulling="2026-03-11 09:33:06.112245013 +0000 UTC m=+2184.777914838" observedRunningTime="2026-03-11 09:33:06.601041325 +0000 UTC m=+2185.266711140" watchObservedRunningTime="2026-03-11 09:33:06.607077107 +0000 UTC m=+2185.272746922" Mar 11 09:33:12 crc kubenswrapper[4840]: I0311 09:33:12.908178 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-lfsdf" Mar 11 09:33:12 crc kubenswrapper[4840]: I0311 09:33:12.908812 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-lfsdf" Mar 11 09:33:12 crc kubenswrapper[4840]: I0311 09:33:12.951394 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-lfsdf" Mar 11 09:33:13 crc kubenswrapper[4840]: I0311 09:33:13.677649 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-lfsdf" Mar 11 09:33:13 crc kubenswrapper[4840]: I0311 09:33:13.722323 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lfsdf"] Mar 11 09:33:15 crc kubenswrapper[4840]: I0311 09:33:15.650922 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-lfsdf" 
podUID="00ea06eb-f368-4c88-ba2d-eb4a0600ae61" containerName="registry-server" containerID="cri-o://08ddbaff91cd0c042cb4d793cebb4a859a854334b53b7bd304b5c1cf4b42351d" gracePeriod=2 Mar 11 09:33:16 crc kubenswrapper[4840]: I0311 09:33:16.288644 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lfsdf" Mar 11 09:33:16 crc kubenswrapper[4840]: I0311 09:33:16.459260 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9dllf\" (UniqueName: \"kubernetes.io/projected/00ea06eb-f368-4c88-ba2d-eb4a0600ae61-kube-api-access-9dllf\") pod \"00ea06eb-f368-4c88-ba2d-eb4a0600ae61\" (UID: \"00ea06eb-f368-4c88-ba2d-eb4a0600ae61\") " Mar 11 09:33:16 crc kubenswrapper[4840]: I0311 09:33:16.459353 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00ea06eb-f368-4c88-ba2d-eb4a0600ae61-catalog-content\") pod \"00ea06eb-f368-4c88-ba2d-eb4a0600ae61\" (UID: \"00ea06eb-f368-4c88-ba2d-eb4a0600ae61\") " Mar 11 09:33:16 crc kubenswrapper[4840]: I0311 09:33:16.459512 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00ea06eb-f368-4c88-ba2d-eb4a0600ae61-utilities\") pod \"00ea06eb-f368-4c88-ba2d-eb4a0600ae61\" (UID: \"00ea06eb-f368-4c88-ba2d-eb4a0600ae61\") " Mar 11 09:33:16 crc kubenswrapper[4840]: I0311 09:33:16.460349 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/00ea06eb-f368-4c88-ba2d-eb4a0600ae61-utilities" (OuterVolumeSpecName: "utilities") pod "00ea06eb-f368-4c88-ba2d-eb4a0600ae61" (UID: "00ea06eb-f368-4c88-ba2d-eb4a0600ae61"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:33:16 crc kubenswrapper[4840]: I0311 09:33:16.460645 4840 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00ea06eb-f368-4c88-ba2d-eb4a0600ae61-utilities\") on node \"crc\" DevicePath \"\"" Mar 11 09:33:16 crc kubenswrapper[4840]: I0311 09:33:16.464715 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00ea06eb-f368-4c88-ba2d-eb4a0600ae61-kube-api-access-9dllf" (OuterVolumeSpecName: "kube-api-access-9dllf") pod "00ea06eb-f368-4c88-ba2d-eb4a0600ae61" (UID: "00ea06eb-f368-4c88-ba2d-eb4a0600ae61"). InnerVolumeSpecName "kube-api-access-9dllf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:33:16 crc kubenswrapper[4840]: I0311 09:33:16.489693 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/00ea06eb-f368-4c88-ba2d-eb4a0600ae61-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "00ea06eb-f368-4c88-ba2d-eb4a0600ae61" (UID: "00ea06eb-f368-4c88-ba2d-eb4a0600ae61"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:33:16 crc kubenswrapper[4840]: I0311 09:33:16.561685 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9dllf\" (UniqueName: \"kubernetes.io/projected/00ea06eb-f368-4c88-ba2d-eb4a0600ae61-kube-api-access-9dllf\") on node \"crc\" DevicePath \"\"" Mar 11 09:33:16 crc kubenswrapper[4840]: I0311 09:33:16.561721 4840 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00ea06eb-f368-4c88-ba2d-eb4a0600ae61-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 11 09:33:16 crc kubenswrapper[4840]: I0311 09:33:16.660222 4840 generic.go:334] "Generic (PLEG): container finished" podID="00ea06eb-f368-4c88-ba2d-eb4a0600ae61" containerID="08ddbaff91cd0c042cb4d793cebb4a859a854334b53b7bd304b5c1cf4b42351d" exitCode=0 Mar 11 09:33:16 crc kubenswrapper[4840]: I0311 09:33:16.660269 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lfsdf" event={"ID":"00ea06eb-f368-4c88-ba2d-eb4a0600ae61","Type":"ContainerDied","Data":"08ddbaff91cd0c042cb4d793cebb4a859a854334b53b7bd304b5c1cf4b42351d"} Mar 11 09:33:16 crc kubenswrapper[4840]: I0311 09:33:16.660301 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lfsdf" Mar 11 09:33:16 crc kubenswrapper[4840]: I0311 09:33:16.660340 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lfsdf" event={"ID":"00ea06eb-f368-4c88-ba2d-eb4a0600ae61","Type":"ContainerDied","Data":"280ec40e18334e3254d3f44ef00e144faa82b47c0e89c7eb8661f88d1bd59b69"} Mar 11 09:33:16 crc kubenswrapper[4840]: I0311 09:33:16.660368 4840 scope.go:117] "RemoveContainer" containerID="08ddbaff91cd0c042cb4d793cebb4a859a854334b53b7bd304b5c1cf4b42351d" Mar 11 09:33:16 crc kubenswrapper[4840]: I0311 09:33:16.688886 4840 scope.go:117] "RemoveContainer" containerID="a9e97657375ce0349f0bf6e34d0494c3ab910c0d22bad5378ebf8cef6043127a" Mar 11 09:33:16 crc kubenswrapper[4840]: I0311 09:33:16.709132 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lfsdf"] Mar 11 09:33:16 crc kubenswrapper[4840]: I0311 09:33:16.713035 4840 scope.go:117] "RemoveContainer" containerID="7432e35e91829e61e32b326733c2f3dcae3a5f61ccd999abffe8ee632c7423b3" Mar 11 09:33:16 crc kubenswrapper[4840]: I0311 09:33:16.715669 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-lfsdf"] Mar 11 09:33:16 crc kubenswrapper[4840]: I0311 09:33:16.735883 4840 scope.go:117] "RemoveContainer" containerID="08ddbaff91cd0c042cb4d793cebb4a859a854334b53b7bd304b5c1cf4b42351d" Mar 11 09:33:16 crc kubenswrapper[4840]: E0311 09:33:16.736252 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08ddbaff91cd0c042cb4d793cebb4a859a854334b53b7bd304b5c1cf4b42351d\": container with ID starting with 08ddbaff91cd0c042cb4d793cebb4a859a854334b53b7bd304b5c1cf4b42351d not found: ID does not exist" containerID="08ddbaff91cd0c042cb4d793cebb4a859a854334b53b7bd304b5c1cf4b42351d" Mar 11 09:33:16 crc kubenswrapper[4840]: I0311 09:33:16.736289 4840 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08ddbaff91cd0c042cb4d793cebb4a859a854334b53b7bd304b5c1cf4b42351d"} err="failed to get container status \"08ddbaff91cd0c042cb4d793cebb4a859a854334b53b7bd304b5c1cf4b42351d\": rpc error: code = NotFound desc = could not find container \"08ddbaff91cd0c042cb4d793cebb4a859a854334b53b7bd304b5c1cf4b42351d\": container with ID starting with 08ddbaff91cd0c042cb4d793cebb4a859a854334b53b7bd304b5c1cf4b42351d not found: ID does not exist" Mar 11 09:33:16 crc kubenswrapper[4840]: I0311 09:33:16.736311 4840 scope.go:117] "RemoveContainer" containerID="a9e97657375ce0349f0bf6e34d0494c3ab910c0d22bad5378ebf8cef6043127a" Mar 11 09:33:16 crc kubenswrapper[4840]: E0311 09:33:16.736575 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a9e97657375ce0349f0bf6e34d0494c3ab910c0d22bad5378ebf8cef6043127a\": container with ID starting with a9e97657375ce0349f0bf6e34d0494c3ab910c0d22bad5378ebf8cef6043127a not found: ID does not exist" containerID="a9e97657375ce0349f0bf6e34d0494c3ab910c0d22bad5378ebf8cef6043127a" Mar 11 09:33:16 crc kubenswrapper[4840]: I0311 09:33:16.736620 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9e97657375ce0349f0bf6e34d0494c3ab910c0d22bad5378ebf8cef6043127a"} err="failed to get container status \"a9e97657375ce0349f0bf6e34d0494c3ab910c0d22bad5378ebf8cef6043127a\": rpc error: code = NotFound desc = could not find container \"a9e97657375ce0349f0bf6e34d0494c3ab910c0d22bad5378ebf8cef6043127a\": container with ID starting with a9e97657375ce0349f0bf6e34d0494c3ab910c0d22bad5378ebf8cef6043127a not found: ID does not exist" Mar 11 09:33:16 crc kubenswrapper[4840]: I0311 09:33:16.736649 4840 scope.go:117] "RemoveContainer" containerID="7432e35e91829e61e32b326733c2f3dcae3a5f61ccd999abffe8ee632c7423b3" Mar 11 09:33:16 crc kubenswrapper[4840]: E0311 
09:33:16.736978 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7432e35e91829e61e32b326733c2f3dcae3a5f61ccd999abffe8ee632c7423b3\": container with ID starting with 7432e35e91829e61e32b326733c2f3dcae3a5f61ccd999abffe8ee632c7423b3 not found: ID does not exist" containerID="7432e35e91829e61e32b326733c2f3dcae3a5f61ccd999abffe8ee632c7423b3" Mar 11 09:33:16 crc kubenswrapper[4840]: I0311 09:33:16.737084 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7432e35e91829e61e32b326733c2f3dcae3a5f61ccd999abffe8ee632c7423b3"} err="failed to get container status \"7432e35e91829e61e32b326733c2f3dcae3a5f61ccd999abffe8ee632c7423b3\": rpc error: code = NotFound desc = could not find container \"7432e35e91829e61e32b326733c2f3dcae3a5f61ccd999abffe8ee632c7423b3\": container with ID starting with 7432e35e91829e61e32b326733c2f3dcae3a5f61ccd999abffe8ee632c7423b3 not found: ID does not exist" Mar 11 09:33:18 crc kubenswrapper[4840]: I0311 09:33:18.069554 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00ea06eb-f368-4c88-ba2d-eb4a0600ae61" path="/var/lib/kubelet/pods/00ea06eb-f368-4c88-ba2d-eb4a0600ae61/volumes" Mar 11 09:33:27 crc kubenswrapper[4840]: I0311 09:33:27.446200 4840 patch_prober.go:28] interesting pod/machine-config-daemon-brtht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 11 09:33:27 crc kubenswrapper[4840]: I0311 09:33:27.446884 4840 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Mar 11 09:33:57 crc kubenswrapper[4840]: I0311 09:33:57.446739 4840 patch_prober.go:28] interesting pod/machine-config-daemon-brtht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 11 09:33:57 crc kubenswrapper[4840]: I0311 09:33:57.447383 4840 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 11 09:34:00 crc kubenswrapper[4840]: I0311 09:34:00.147898 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29553694-jc72q"] Mar 11 09:34:00 crc kubenswrapper[4840]: E0311 09:34:00.148607 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00ea06eb-f368-4c88-ba2d-eb4a0600ae61" containerName="extract-content" Mar 11 09:34:00 crc kubenswrapper[4840]: I0311 09:34:00.148621 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="00ea06eb-f368-4c88-ba2d-eb4a0600ae61" containerName="extract-content" Mar 11 09:34:00 crc kubenswrapper[4840]: E0311 09:34:00.148631 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00ea06eb-f368-4c88-ba2d-eb4a0600ae61" containerName="registry-server" Mar 11 09:34:00 crc kubenswrapper[4840]: I0311 09:34:00.148638 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="00ea06eb-f368-4c88-ba2d-eb4a0600ae61" containerName="registry-server" Mar 11 09:34:00 crc kubenswrapper[4840]: E0311 09:34:00.148674 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00ea06eb-f368-4c88-ba2d-eb4a0600ae61" containerName="extract-utilities" Mar 11 09:34:00 crc kubenswrapper[4840]: I0311 09:34:00.148685 4840 
state_mem.go:107] "Deleted CPUSet assignment" podUID="00ea06eb-f368-4c88-ba2d-eb4a0600ae61" containerName="extract-utilities" Mar 11 09:34:00 crc kubenswrapper[4840]: I0311 09:34:00.148816 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="00ea06eb-f368-4c88-ba2d-eb4a0600ae61" containerName="registry-server" Mar 11 09:34:00 crc kubenswrapper[4840]: I0311 09:34:00.149318 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553694-jc72q" Mar 11 09:34:00 crc kubenswrapper[4840]: I0311 09:34:00.152207 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-q6lwc" Mar 11 09:34:00 crc kubenswrapper[4840]: I0311 09:34:00.152403 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 11 09:34:00 crc kubenswrapper[4840]: I0311 09:34:00.159246 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553694-jc72q"] Mar 11 09:34:00 crc kubenswrapper[4840]: I0311 09:34:00.159397 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 11 09:34:00 crc kubenswrapper[4840]: I0311 09:34:00.316061 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pbps\" (UniqueName: \"kubernetes.io/projected/8f279dd8-0d1b-45b8-b61f-cbb1fc073c01-kube-api-access-9pbps\") pod \"auto-csr-approver-29553694-jc72q\" (UID: \"8f279dd8-0d1b-45b8-b61f-cbb1fc073c01\") " pod="openshift-infra/auto-csr-approver-29553694-jc72q" Mar 11 09:34:00 crc kubenswrapper[4840]: I0311 09:34:00.420849 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9pbps\" (UniqueName: \"kubernetes.io/projected/8f279dd8-0d1b-45b8-b61f-cbb1fc073c01-kube-api-access-9pbps\") pod \"auto-csr-approver-29553694-jc72q\" (UID: 
\"8f279dd8-0d1b-45b8-b61f-cbb1fc073c01\") " pod="openshift-infra/auto-csr-approver-29553694-jc72q" Mar 11 09:34:00 crc kubenswrapper[4840]: I0311 09:34:00.439177 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9pbps\" (UniqueName: \"kubernetes.io/projected/8f279dd8-0d1b-45b8-b61f-cbb1fc073c01-kube-api-access-9pbps\") pod \"auto-csr-approver-29553694-jc72q\" (UID: \"8f279dd8-0d1b-45b8-b61f-cbb1fc073c01\") " pod="openshift-infra/auto-csr-approver-29553694-jc72q" Mar 11 09:34:00 crc kubenswrapper[4840]: I0311 09:34:00.470811 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553694-jc72q" Mar 11 09:34:00 crc kubenswrapper[4840]: I0311 09:34:00.889671 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553694-jc72q"] Mar 11 09:34:00 crc kubenswrapper[4840]: I0311 09:34:00.970318 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553694-jc72q" event={"ID":"8f279dd8-0d1b-45b8-b61f-cbb1fc073c01","Type":"ContainerStarted","Data":"2be4cfd40bb65aa1b5f7f7aa72bba831fc4b30afd4b100c7e40087838e2a9789"} Mar 11 09:34:02 crc kubenswrapper[4840]: E0311 09:34:02.757097 4840 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8f279dd8_0d1b_45b8_b61f_cbb1fc073c01.slice/crio-conmon-10ccf11e09f6947344c93ed11e7a612ba981f1d8e40e76cf5b5c8495dfc6e282.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8f279dd8_0d1b_45b8_b61f_cbb1fc073c01.slice/crio-10ccf11e09f6947344c93ed11e7a612ba981f1d8e40e76cf5b5c8495dfc6e282.scope\": RecentStats: unable to find data in memory cache]" Mar 11 09:34:02 crc kubenswrapper[4840]: I0311 09:34:02.985848 4840 generic.go:334] "Generic (PLEG): container finished" 
podID="8f279dd8-0d1b-45b8-b61f-cbb1fc073c01" containerID="10ccf11e09f6947344c93ed11e7a612ba981f1d8e40e76cf5b5c8495dfc6e282" exitCode=0 Mar 11 09:34:02 crc kubenswrapper[4840]: I0311 09:34:02.985905 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553694-jc72q" event={"ID":"8f279dd8-0d1b-45b8-b61f-cbb1fc073c01","Type":"ContainerDied","Data":"10ccf11e09f6947344c93ed11e7a612ba981f1d8e40e76cf5b5c8495dfc6e282"} Mar 11 09:34:04 crc kubenswrapper[4840]: I0311 09:34:04.258659 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553694-jc72q" Mar 11 09:34:04 crc kubenswrapper[4840]: I0311 09:34:04.280221 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9pbps\" (UniqueName: \"kubernetes.io/projected/8f279dd8-0d1b-45b8-b61f-cbb1fc073c01-kube-api-access-9pbps\") pod \"8f279dd8-0d1b-45b8-b61f-cbb1fc073c01\" (UID: \"8f279dd8-0d1b-45b8-b61f-cbb1fc073c01\") " Mar 11 09:34:04 crc kubenswrapper[4840]: I0311 09:34:04.299824 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f279dd8-0d1b-45b8-b61f-cbb1fc073c01-kube-api-access-9pbps" (OuterVolumeSpecName: "kube-api-access-9pbps") pod "8f279dd8-0d1b-45b8-b61f-cbb1fc073c01" (UID: "8f279dd8-0d1b-45b8-b61f-cbb1fc073c01"). InnerVolumeSpecName "kube-api-access-9pbps". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:34:04 crc kubenswrapper[4840]: I0311 09:34:04.382293 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9pbps\" (UniqueName: \"kubernetes.io/projected/8f279dd8-0d1b-45b8-b61f-cbb1fc073c01-kube-api-access-9pbps\") on node \"crc\" DevicePath \"\"" Mar 11 09:34:05 crc kubenswrapper[4840]: I0311 09:34:05.002138 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553694-jc72q" event={"ID":"8f279dd8-0d1b-45b8-b61f-cbb1fc073c01","Type":"ContainerDied","Data":"2be4cfd40bb65aa1b5f7f7aa72bba831fc4b30afd4b100c7e40087838e2a9789"} Mar 11 09:34:05 crc kubenswrapper[4840]: I0311 09:34:05.002182 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2be4cfd40bb65aa1b5f7f7aa72bba831fc4b30afd4b100c7e40087838e2a9789" Mar 11 09:34:05 crc kubenswrapper[4840]: I0311 09:34:05.002204 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553694-jc72q" Mar 11 09:34:05 crc kubenswrapper[4840]: I0311 09:34:05.323542 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29553688-wqhck"] Mar 11 09:34:05 crc kubenswrapper[4840]: I0311 09:34:05.329115 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29553688-wqhck"] Mar 11 09:34:06 crc kubenswrapper[4840]: I0311 09:34:06.073193 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2296cd25-d175-4f2e-863c-ddd56b7b9c42" path="/var/lib/kubelet/pods/2296cd25-d175-4f2e-863c-ddd56b7b9c42/volumes" Mar 11 09:34:09 crc kubenswrapper[4840]: I0311 09:34:09.329074 4840 scope.go:117] "RemoveContainer" containerID="bc2c527670bee77b724c800e5ac574cb32766e25648caad4ac95de83f4142a7f" Mar 11 09:34:27 crc kubenswrapper[4840]: I0311 09:34:27.446424 4840 patch_prober.go:28] interesting pod/machine-config-daemon-brtht 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 11 09:34:27 crc kubenswrapper[4840]: I0311 09:34:27.447106 4840 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 11 09:34:27 crc kubenswrapper[4840]: I0311 09:34:27.447161 4840 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-brtht" Mar 11 09:34:27 crc kubenswrapper[4840]: I0311 09:34:27.447846 4840 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c710991db870febe1fd3f356ee1dfce0cb83fdbd36fda869f2a6eb6f7ae8c1a2"} pod="openshift-machine-config-operator/machine-config-daemon-brtht" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 11 09:34:27 crc kubenswrapper[4840]: I0311 09:34:27.447907 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" containerName="machine-config-daemon" containerID="cri-o://c710991db870febe1fd3f356ee1dfce0cb83fdbd36fda869f2a6eb6f7ae8c1a2" gracePeriod=600 Mar 11 09:34:27 crc kubenswrapper[4840]: E0311 09:34:27.570339 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 09:34:28 crc kubenswrapper[4840]: I0311 09:34:28.161739 4840 generic.go:334] "Generic (PLEG): container finished" podID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" containerID="c710991db870febe1fd3f356ee1dfce0cb83fdbd36fda869f2a6eb6f7ae8c1a2" exitCode=0 Mar 11 09:34:28 crc kubenswrapper[4840]: I0311 09:34:28.161827 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-brtht" event={"ID":"8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d","Type":"ContainerDied","Data":"c710991db870febe1fd3f356ee1dfce0cb83fdbd36fda869f2a6eb6f7ae8c1a2"} Mar 11 09:34:28 crc kubenswrapper[4840]: I0311 09:34:28.162217 4840 scope.go:117] "RemoveContainer" containerID="51be40a909991ab2de96eb0cd37759289fa5a2f24f6847c6cdb91971787a0806" Mar 11 09:34:28 crc kubenswrapper[4840]: I0311 09:34:28.163937 4840 scope.go:117] "RemoveContainer" containerID="c710991db870febe1fd3f356ee1dfce0cb83fdbd36fda869f2a6eb6f7ae8c1a2" Mar 11 09:34:28 crc kubenswrapper[4840]: E0311 09:34:28.164256 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 09:34:41 crc kubenswrapper[4840]: I0311 09:34:41.060853 4840 scope.go:117] "RemoveContainer" containerID="c710991db870febe1fd3f356ee1dfce0cb83fdbd36fda869f2a6eb6f7ae8c1a2" Mar 11 09:34:41 crc kubenswrapper[4840]: E0311 09:34:41.061932 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 09:34:53 crc kubenswrapper[4840]: I0311 09:34:53.061135 4840 scope.go:117] "RemoveContainer" containerID="c710991db870febe1fd3f356ee1dfce0cb83fdbd36fda869f2a6eb6f7ae8c1a2" Mar 11 09:34:53 crc kubenswrapper[4840]: E0311 09:34:53.061865 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 09:35:08 crc kubenswrapper[4840]: I0311 09:35:08.060355 4840 scope.go:117] "RemoveContainer" containerID="c710991db870febe1fd3f356ee1dfce0cb83fdbd36fda869f2a6eb6f7ae8c1a2" Mar 11 09:35:08 crc kubenswrapper[4840]: E0311 09:35:08.061222 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 09:35:20 crc kubenswrapper[4840]: I0311 09:35:20.060033 4840 scope.go:117] "RemoveContainer" containerID="c710991db870febe1fd3f356ee1dfce0cb83fdbd36fda869f2a6eb6f7ae8c1a2" Mar 11 09:35:20 crc kubenswrapper[4840]: E0311 09:35:20.060795 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 09:35:35 crc kubenswrapper[4840]: I0311 09:35:35.084310 4840 scope.go:117] "RemoveContainer" containerID="c710991db870febe1fd3f356ee1dfce0cb83fdbd36fda869f2a6eb6f7ae8c1a2" Mar 11 09:35:35 crc kubenswrapper[4840]: E0311 09:35:35.085121 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 09:35:49 crc kubenswrapper[4840]: I0311 09:35:49.060558 4840 scope.go:117] "RemoveContainer" containerID="c710991db870febe1fd3f356ee1dfce0cb83fdbd36fda869f2a6eb6f7ae8c1a2" Mar 11 09:35:49 crc kubenswrapper[4840]: E0311 09:35:49.062363 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 09:36:00 crc kubenswrapper[4840]: I0311 09:36:00.140943 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29553696-nq8kg"] Mar 11 09:36:00 crc kubenswrapper[4840]: E0311 09:36:00.141977 4840 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="8f279dd8-0d1b-45b8-b61f-cbb1fc073c01" containerName="oc" Mar 11 09:36:00 crc kubenswrapper[4840]: I0311 09:36:00.142021 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f279dd8-0d1b-45b8-b61f-cbb1fc073c01" containerName="oc" Mar 11 09:36:00 crc kubenswrapper[4840]: I0311 09:36:00.142209 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f279dd8-0d1b-45b8-b61f-cbb1fc073c01" containerName="oc" Mar 11 09:36:00 crc kubenswrapper[4840]: I0311 09:36:00.142849 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553696-nq8kg" Mar 11 09:36:00 crc kubenswrapper[4840]: I0311 09:36:00.145144 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 11 09:36:00 crc kubenswrapper[4840]: I0311 09:36:00.145336 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-q6lwc" Mar 11 09:36:00 crc kubenswrapper[4840]: I0311 09:36:00.147584 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 11 09:36:00 crc kubenswrapper[4840]: I0311 09:36:00.159936 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553696-nq8kg"] Mar 11 09:36:00 crc kubenswrapper[4840]: I0311 09:36:00.339991 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfl8w\" (UniqueName: \"kubernetes.io/projected/147d7062-9a1b-405b-822b-5e603c39fd0b-kube-api-access-nfl8w\") pod \"auto-csr-approver-29553696-nq8kg\" (UID: \"147d7062-9a1b-405b-822b-5e603c39fd0b\") " pod="openshift-infra/auto-csr-approver-29553696-nq8kg" Mar 11 09:36:00 crc kubenswrapper[4840]: I0311 09:36:00.441526 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfl8w\" (UniqueName: 
\"kubernetes.io/projected/147d7062-9a1b-405b-822b-5e603c39fd0b-kube-api-access-nfl8w\") pod \"auto-csr-approver-29553696-nq8kg\" (UID: \"147d7062-9a1b-405b-822b-5e603c39fd0b\") " pod="openshift-infra/auto-csr-approver-29553696-nq8kg" Mar 11 09:36:00 crc kubenswrapper[4840]: I0311 09:36:00.462996 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfl8w\" (UniqueName: \"kubernetes.io/projected/147d7062-9a1b-405b-822b-5e603c39fd0b-kube-api-access-nfl8w\") pod \"auto-csr-approver-29553696-nq8kg\" (UID: \"147d7062-9a1b-405b-822b-5e603c39fd0b\") " pod="openshift-infra/auto-csr-approver-29553696-nq8kg" Mar 11 09:36:00 crc kubenswrapper[4840]: I0311 09:36:00.464195 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553696-nq8kg" Mar 11 09:36:00 crc kubenswrapper[4840]: I0311 09:36:00.912841 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553696-nq8kg"] Mar 11 09:36:01 crc kubenswrapper[4840]: I0311 09:36:01.800162 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553696-nq8kg" event={"ID":"147d7062-9a1b-405b-822b-5e603c39fd0b","Type":"ContainerStarted","Data":"a58b6a30ece5f1c13b6e83bbd667126d5232de7c315152f41553b20fb6980f40"} Mar 11 09:36:02 crc kubenswrapper[4840]: I0311 09:36:02.808833 4840 generic.go:334] "Generic (PLEG): container finished" podID="147d7062-9a1b-405b-822b-5e603c39fd0b" containerID="d213b438574f4ebdbc0d377f2fda6298c6bfc441c886ecfee9a6526d819faead" exitCode=0 Mar 11 09:36:02 crc kubenswrapper[4840]: I0311 09:36:02.808924 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553696-nq8kg" event={"ID":"147d7062-9a1b-405b-822b-5e603c39fd0b","Type":"ContainerDied","Data":"d213b438574f4ebdbc0d377f2fda6298c6bfc441c886ecfee9a6526d819faead"} Mar 11 09:36:03 crc kubenswrapper[4840]: I0311 09:36:03.060249 4840 scope.go:117] 
"RemoveContainer" containerID="c710991db870febe1fd3f356ee1dfce0cb83fdbd36fda869f2a6eb6f7ae8c1a2" Mar 11 09:36:03 crc kubenswrapper[4840]: E0311 09:36:03.061017 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 09:36:04 crc kubenswrapper[4840]: I0311 09:36:04.093954 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553696-nq8kg" Mar 11 09:36:04 crc kubenswrapper[4840]: I0311 09:36:04.111150 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nfl8w\" (UniqueName: \"kubernetes.io/projected/147d7062-9a1b-405b-822b-5e603c39fd0b-kube-api-access-nfl8w\") pod \"147d7062-9a1b-405b-822b-5e603c39fd0b\" (UID: \"147d7062-9a1b-405b-822b-5e603c39fd0b\") " Mar 11 09:36:04 crc kubenswrapper[4840]: I0311 09:36:04.121557 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/147d7062-9a1b-405b-822b-5e603c39fd0b-kube-api-access-nfl8w" (OuterVolumeSpecName: "kube-api-access-nfl8w") pod "147d7062-9a1b-405b-822b-5e603c39fd0b" (UID: "147d7062-9a1b-405b-822b-5e603c39fd0b"). InnerVolumeSpecName "kube-api-access-nfl8w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:36:04 crc kubenswrapper[4840]: I0311 09:36:04.212451 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nfl8w\" (UniqueName: \"kubernetes.io/projected/147d7062-9a1b-405b-822b-5e603c39fd0b-kube-api-access-nfl8w\") on node \"crc\" DevicePath \"\"" Mar 11 09:36:04 crc kubenswrapper[4840]: I0311 09:36:04.824414 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553696-nq8kg" event={"ID":"147d7062-9a1b-405b-822b-5e603c39fd0b","Type":"ContainerDied","Data":"a58b6a30ece5f1c13b6e83bbd667126d5232de7c315152f41553b20fb6980f40"} Mar 11 09:36:04 crc kubenswrapper[4840]: I0311 09:36:04.824459 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a58b6a30ece5f1c13b6e83bbd667126d5232de7c315152f41553b20fb6980f40" Mar 11 09:36:04 crc kubenswrapper[4840]: I0311 09:36:04.824521 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553696-nq8kg" Mar 11 09:36:05 crc kubenswrapper[4840]: I0311 09:36:05.162416 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29553690-bqhn5"] Mar 11 09:36:05 crc kubenswrapper[4840]: I0311 09:36:05.168262 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29553690-bqhn5"] Mar 11 09:36:06 crc kubenswrapper[4840]: I0311 09:36:06.069205 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c868a997-6786-459e-ad2c-293d1613ea1f" path="/var/lib/kubelet/pods/c868a997-6786-459e-ad2c-293d1613ea1f/volumes" Mar 11 09:36:09 crc kubenswrapper[4840]: I0311 09:36:09.417168 4840 scope.go:117] "RemoveContainer" containerID="2e3dba699bf3fc1c8137f5035578a5b10f9857cd0cb3a34fdd111537a298f0c6" Mar 11 09:36:15 crc kubenswrapper[4840]: I0311 09:36:15.060734 4840 scope.go:117] "RemoveContainer" 
containerID="c710991db870febe1fd3f356ee1dfce0cb83fdbd36fda869f2a6eb6f7ae8c1a2" Mar 11 09:36:15 crc kubenswrapper[4840]: E0311 09:36:15.061510 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 09:36:26 crc kubenswrapper[4840]: I0311 09:36:26.059809 4840 scope.go:117] "RemoveContainer" containerID="c710991db870febe1fd3f356ee1dfce0cb83fdbd36fda869f2a6eb6f7ae8c1a2" Mar 11 09:36:26 crc kubenswrapper[4840]: E0311 09:36:26.060567 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 09:36:37 crc kubenswrapper[4840]: I0311 09:36:37.060050 4840 scope.go:117] "RemoveContainer" containerID="c710991db870febe1fd3f356ee1dfce0cb83fdbd36fda869f2a6eb6f7ae8c1a2" Mar 11 09:36:37 crc kubenswrapper[4840]: E0311 09:36:37.060829 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 09:36:51 crc kubenswrapper[4840]: I0311 09:36:51.061520 4840 scope.go:117] 
"RemoveContainer" containerID="c710991db870febe1fd3f356ee1dfce0cb83fdbd36fda869f2a6eb6f7ae8c1a2" Mar 11 09:36:51 crc kubenswrapper[4840]: E0311 09:36:51.063195 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 09:37:01 crc kubenswrapper[4840]: I0311 09:37:01.525683 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hmssw"] Mar 11 09:37:01 crc kubenswrapper[4840]: E0311 09:37:01.526768 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="147d7062-9a1b-405b-822b-5e603c39fd0b" containerName="oc" Mar 11 09:37:01 crc kubenswrapper[4840]: I0311 09:37:01.526786 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="147d7062-9a1b-405b-822b-5e603c39fd0b" containerName="oc" Mar 11 09:37:01 crc kubenswrapper[4840]: I0311 09:37:01.526962 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="147d7062-9a1b-405b-822b-5e603c39fd0b" containerName="oc" Mar 11 09:37:01 crc kubenswrapper[4840]: I0311 09:37:01.528259 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hmssw" Mar 11 09:37:01 crc kubenswrapper[4840]: I0311 09:37:01.543105 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hmssw"] Mar 11 09:37:01 crc kubenswrapper[4840]: I0311 09:37:01.585064 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktq89\" (UniqueName: \"kubernetes.io/projected/86df4a68-b65d-4882-924b-62393168ddc0-kube-api-access-ktq89\") pod \"redhat-operators-hmssw\" (UID: \"86df4a68-b65d-4882-924b-62393168ddc0\") " pod="openshift-marketplace/redhat-operators-hmssw" Mar 11 09:37:01 crc kubenswrapper[4840]: I0311 09:37:01.585600 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86df4a68-b65d-4882-924b-62393168ddc0-utilities\") pod \"redhat-operators-hmssw\" (UID: \"86df4a68-b65d-4882-924b-62393168ddc0\") " pod="openshift-marketplace/redhat-operators-hmssw" Mar 11 09:37:01 crc kubenswrapper[4840]: I0311 09:37:01.585649 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86df4a68-b65d-4882-924b-62393168ddc0-catalog-content\") pod \"redhat-operators-hmssw\" (UID: \"86df4a68-b65d-4882-924b-62393168ddc0\") " pod="openshift-marketplace/redhat-operators-hmssw" Mar 11 09:37:01 crc kubenswrapper[4840]: I0311 09:37:01.687835 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ktq89\" (UniqueName: \"kubernetes.io/projected/86df4a68-b65d-4882-924b-62393168ddc0-kube-api-access-ktq89\") pod \"redhat-operators-hmssw\" (UID: \"86df4a68-b65d-4882-924b-62393168ddc0\") " pod="openshift-marketplace/redhat-operators-hmssw" Mar 11 09:37:01 crc kubenswrapper[4840]: I0311 09:37:01.687902 4840 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86df4a68-b65d-4882-924b-62393168ddc0-utilities\") pod \"redhat-operators-hmssw\" (UID: \"86df4a68-b65d-4882-924b-62393168ddc0\") " pod="openshift-marketplace/redhat-operators-hmssw" Mar 11 09:37:01 crc kubenswrapper[4840]: I0311 09:37:01.687965 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86df4a68-b65d-4882-924b-62393168ddc0-catalog-content\") pod \"redhat-operators-hmssw\" (UID: \"86df4a68-b65d-4882-924b-62393168ddc0\") " pod="openshift-marketplace/redhat-operators-hmssw" Mar 11 09:37:01 crc kubenswrapper[4840]: I0311 09:37:01.688539 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86df4a68-b65d-4882-924b-62393168ddc0-utilities\") pod \"redhat-operators-hmssw\" (UID: \"86df4a68-b65d-4882-924b-62393168ddc0\") " pod="openshift-marketplace/redhat-operators-hmssw" Mar 11 09:37:01 crc kubenswrapper[4840]: I0311 09:37:01.688617 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86df4a68-b65d-4882-924b-62393168ddc0-catalog-content\") pod \"redhat-operators-hmssw\" (UID: \"86df4a68-b65d-4882-924b-62393168ddc0\") " pod="openshift-marketplace/redhat-operators-hmssw" Mar 11 09:37:01 crc kubenswrapper[4840]: I0311 09:37:01.710404 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ktq89\" (UniqueName: \"kubernetes.io/projected/86df4a68-b65d-4882-924b-62393168ddc0-kube-api-access-ktq89\") pod \"redhat-operators-hmssw\" (UID: \"86df4a68-b65d-4882-924b-62393168ddc0\") " pod="openshift-marketplace/redhat-operators-hmssw" Mar 11 09:37:01 crc kubenswrapper[4840]: I0311 09:37:01.852441 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hmssw" Mar 11 09:37:02 crc kubenswrapper[4840]: I0311 09:37:02.067609 4840 scope.go:117] "RemoveContainer" containerID="c710991db870febe1fd3f356ee1dfce0cb83fdbd36fda869f2a6eb6f7ae8c1a2" Mar 11 09:37:02 crc kubenswrapper[4840]: E0311 09:37:02.068394 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 09:37:02 crc kubenswrapper[4840]: I0311 09:37:02.320013 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hmssw"] Mar 11 09:37:02 crc kubenswrapper[4840]: I0311 09:37:02.589304 4840 generic.go:334] "Generic (PLEG): container finished" podID="86df4a68-b65d-4882-924b-62393168ddc0" containerID="40ca865b255558d0b5a4f3f16720ea46186f4fbb78304a405668fa4f48c80ef8" exitCode=0 Mar 11 09:37:02 crc kubenswrapper[4840]: I0311 09:37:02.589378 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hmssw" event={"ID":"86df4a68-b65d-4882-924b-62393168ddc0","Type":"ContainerDied","Data":"40ca865b255558d0b5a4f3f16720ea46186f4fbb78304a405668fa4f48c80ef8"} Mar 11 09:37:02 crc kubenswrapper[4840]: I0311 09:37:02.589422 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hmssw" event={"ID":"86df4a68-b65d-4882-924b-62393168ddc0","Type":"ContainerStarted","Data":"f3a03604f9f69c12dfc0dd111e7ed3f20541dbbdc4c4ca48c96a40dbc3eed273"} Mar 11 09:37:03 crc kubenswrapper[4840]: I0311 09:37:03.599289 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hmssw" 
event={"ID":"86df4a68-b65d-4882-924b-62393168ddc0","Type":"ContainerStarted","Data":"2a568534cfeddbbc4a36a91e88750915cad46a52417a2aa4518b2dfecbfd661a"} Mar 11 09:37:04 crc kubenswrapper[4840]: I0311 09:37:04.609183 4840 generic.go:334] "Generic (PLEG): container finished" podID="86df4a68-b65d-4882-924b-62393168ddc0" containerID="2a568534cfeddbbc4a36a91e88750915cad46a52417a2aa4518b2dfecbfd661a" exitCode=0 Mar 11 09:37:04 crc kubenswrapper[4840]: I0311 09:37:04.609444 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hmssw" event={"ID":"86df4a68-b65d-4882-924b-62393168ddc0","Type":"ContainerDied","Data":"2a568534cfeddbbc4a36a91e88750915cad46a52417a2aa4518b2dfecbfd661a"} Mar 11 09:37:05 crc kubenswrapper[4840]: I0311 09:37:05.619156 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hmssw" event={"ID":"86df4a68-b65d-4882-924b-62393168ddc0","Type":"ContainerStarted","Data":"6ddc0992b221a8c67fc0c01bda5a924fc4e1a4ca759566d8af0ec66538815caf"} Mar 11 09:37:05 crc kubenswrapper[4840]: I0311 09:37:05.638851 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hmssw" podStartSLOduration=2.234414127 podStartE2EDuration="4.638833586s" podCreationTimestamp="2026-03-11 09:37:01 +0000 UTC" firstStartedPulling="2026-03-11 09:37:02.591633649 +0000 UTC m=+2421.257303464" lastFinishedPulling="2026-03-11 09:37:04.996053108 +0000 UTC m=+2423.661722923" observedRunningTime="2026-03-11 09:37:05.636838846 +0000 UTC m=+2424.302508671" watchObservedRunningTime="2026-03-11 09:37:05.638833586 +0000 UTC m=+2424.304503411" Mar 11 09:37:11 crc kubenswrapper[4840]: I0311 09:37:11.852998 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hmssw" Mar 11 09:37:11 crc kubenswrapper[4840]: I0311 09:37:11.853715 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/redhat-operators-hmssw" Mar 11 09:37:11 crc kubenswrapper[4840]: I0311 09:37:11.899002 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hmssw" Mar 11 09:37:12 crc kubenswrapper[4840]: I0311 09:37:12.704327 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hmssw" Mar 11 09:37:12 crc kubenswrapper[4840]: I0311 09:37:12.745760 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hmssw"] Mar 11 09:37:13 crc kubenswrapper[4840]: I0311 09:37:13.060347 4840 scope.go:117] "RemoveContainer" containerID="c710991db870febe1fd3f356ee1dfce0cb83fdbd36fda869f2a6eb6f7ae8c1a2" Mar 11 09:37:13 crc kubenswrapper[4840]: E0311 09:37:13.060618 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 09:37:14 crc kubenswrapper[4840]: I0311 09:37:14.680126 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-hmssw" podUID="86df4a68-b65d-4882-924b-62393168ddc0" containerName="registry-server" containerID="cri-o://6ddc0992b221a8c67fc0c01bda5a924fc4e1a4ca759566d8af0ec66538815caf" gracePeriod=2 Mar 11 09:37:15 crc kubenswrapper[4840]: I0311 09:37:15.689073 4840 generic.go:334] "Generic (PLEG): container finished" podID="86df4a68-b65d-4882-924b-62393168ddc0" containerID="6ddc0992b221a8c67fc0c01bda5a924fc4e1a4ca759566d8af0ec66538815caf" exitCode=0 Mar 11 09:37:15 crc kubenswrapper[4840]: I0311 09:37:15.689150 4840 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hmssw" event={"ID":"86df4a68-b65d-4882-924b-62393168ddc0","Type":"ContainerDied","Data":"6ddc0992b221a8c67fc0c01bda5a924fc4e1a4ca759566d8af0ec66538815caf"} Mar 11 09:37:16 crc kubenswrapper[4840]: I0311 09:37:16.148422 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hmssw" Mar 11 09:37:16 crc kubenswrapper[4840]: I0311 09:37:16.225052 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ktq89\" (UniqueName: \"kubernetes.io/projected/86df4a68-b65d-4882-924b-62393168ddc0-kube-api-access-ktq89\") pod \"86df4a68-b65d-4882-924b-62393168ddc0\" (UID: \"86df4a68-b65d-4882-924b-62393168ddc0\") " Mar 11 09:37:16 crc kubenswrapper[4840]: I0311 09:37:16.225149 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86df4a68-b65d-4882-924b-62393168ddc0-utilities\") pod \"86df4a68-b65d-4882-924b-62393168ddc0\" (UID: \"86df4a68-b65d-4882-924b-62393168ddc0\") " Mar 11 09:37:16 crc kubenswrapper[4840]: I0311 09:37:16.225211 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86df4a68-b65d-4882-924b-62393168ddc0-catalog-content\") pod \"86df4a68-b65d-4882-924b-62393168ddc0\" (UID: \"86df4a68-b65d-4882-924b-62393168ddc0\") " Mar 11 09:37:16 crc kubenswrapper[4840]: I0311 09:37:16.226363 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86df4a68-b65d-4882-924b-62393168ddc0-utilities" (OuterVolumeSpecName: "utilities") pod "86df4a68-b65d-4882-924b-62393168ddc0" (UID: "86df4a68-b65d-4882-924b-62393168ddc0"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:37:16 crc kubenswrapper[4840]: I0311 09:37:16.232815 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86df4a68-b65d-4882-924b-62393168ddc0-kube-api-access-ktq89" (OuterVolumeSpecName: "kube-api-access-ktq89") pod "86df4a68-b65d-4882-924b-62393168ddc0" (UID: "86df4a68-b65d-4882-924b-62393168ddc0"). InnerVolumeSpecName "kube-api-access-ktq89". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:37:16 crc kubenswrapper[4840]: I0311 09:37:16.327396 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ktq89\" (UniqueName: \"kubernetes.io/projected/86df4a68-b65d-4882-924b-62393168ddc0-kube-api-access-ktq89\") on node \"crc\" DevicePath \"\"" Mar 11 09:37:16 crc kubenswrapper[4840]: I0311 09:37:16.327430 4840 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86df4a68-b65d-4882-924b-62393168ddc0-utilities\") on node \"crc\" DevicePath \"\"" Mar 11 09:37:16 crc kubenswrapper[4840]: I0311 09:37:16.361737 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86df4a68-b65d-4882-924b-62393168ddc0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "86df4a68-b65d-4882-924b-62393168ddc0" (UID: "86df4a68-b65d-4882-924b-62393168ddc0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:37:16 crc kubenswrapper[4840]: I0311 09:37:16.430330 4840 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86df4a68-b65d-4882-924b-62393168ddc0-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 11 09:37:16 crc kubenswrapper[4840]: I0311 09:37:16.698735 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hmssw" event={"ID":"86df4a68-b65d-4882-924b-62393168ddc0","Type":"ContainerDied","Data":"f3a03604f9f69c12dfc0dd111e7ed3f20541dbbdc4c4ca48c96a40dbc3eed273"} Mar 11 09:37:16 crc kubenswrapper[4840]: I0311 09:37:16.698799 4840 scope.go:117] "RemoveContainer" containerID="6ddc0992b221a8c67fc0c01bda5a924fc4e1a4ca759566d8af0ec66538815caf" Mar 11 09:37:16 crc kubenswrapper[4840]: I0311 09:37:16.698991 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hmssw" Mar 11 09:37:16 crc kubenswrapper[4840]: I0311 09:37:16.723711 4840 scope.go:117] "RemoveContainer" containerID="2a568534cfeddbbc4a36a91e88750915cad46a52417a2aa4518b2dfecbfd661a" Mar 11 09:37:16 crc kubenswrapper[4840]: I0311 09:37:16.733755 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hmssw"] Mar 11 09:37:16 crc kubenswrapper[4840]: I0311 09:37:16.741373 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-hmssw"] Mar 11 09:37:16 crc kubenswrapper[4840]: I0311 09:37:16.752774 4840 scope.go:117] "RemoveContainer" containerID="40ca865b255558d0b5a4f3f16720ea46186f4fbb78304a405668fa4f48c80ef8" Mar 11 09:37:18 crc kubenswrapper[4840]: I0311 09:37:18.073915 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86df4a68-b65d-4882-924b-62393168ddc0" path="/var/lib/kubelet/pods/86df4a68-b65d-4882-924b-62393168ddc0/volumes" Mar 11 09:37:27 crc 
kubenswrapper[4840]: I0311 09:37:27.060112 4840 scope.go:117] "RemoveContainer" containerID="c710991db870febe1fd3f356ee1dfce0cb83fdbd36fda869f2a6eb6f7ae8c1a2" Mar 11 09:37:27 crc kubenswrapper[4840]: E0311 09:37:27.060943 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 09:37:40 crc kubenswrapper[4840]: I0311 09:37:40.060312 4840 scope.go:117] "RemoveContainer" containerID="c710991db870febe1fd3f356ee1dfce0cb83fdbd36fda869f2a6eb6f7ae8c1a2" Mar 11 09:37:40 crc kubenswrapper[4840]: E0311 09:37:40.061053 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 09:37:53 crc kubenswrapper[4840]: I0311 09:37:53.060288 4840 scope.go:117] "RemoveContainer" containerID="c710991db870febe1fd3f356ee1dfce0cb83fdbd36fda869f2a6eb6f7ae8c1a2" Mar 11 09:37:53 crc kubenswrapper[4840]: E0311 09:37:53.060914 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 
11 09:38:00 crc kubenswrapper[4840]: I0311 09:38:00.138908 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29553698-528pg"] Mar 11 09:38:00 crc kubenswrapper[4840]: E0311 09:38:00.139962 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86df4a68-b65d-4882-924b-62393168ddc0" containerName="registry-server" Mar 11 09:38:00 crc kubenswrapper[4840]: I0311 09:38:00.139981 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="86df4a68-b65d-4882-924b-62393168ddc0" containerName="registry-server" Mar 11 09:38:00 crc kubenswrapper[4840]: E0311 09:38:00.140010 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86df4a68-b65d-4882-924b-62393168ddc0" containerName="extract-utilities" Mar 11 09:38:00 crc kubenswrapper[4840]: I0311 09:38:00.140019 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="86df4a68-b65d-4882-924b-62393168ddc0" containerName="extract-utilities" Mar 11 09:38:00 crc kubenswrapper[4840]: E0311 09:38:00.140038 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86df4a68-b65d-4882-924b-62393168ddc0" containerName="extract-content" Mar 11 09:38:00 crc kubenswrapper[4840]: I0311 09:38:00.140046 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="86df4a68-b65d-4882-924b-62393168ddc0" containerName="extract-content" Mar 11 09:38:00 crc kubenswrapper[4840]: I0311 09:38:00.140249 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="86df4a68-b65d-4882-924b-62393168ddc0" containerName="registry-server" Mar 11 09:38:00 crc kubenswrapper[4840]: I0311 09:38:00.140782 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553698-528pg" Mar 11 09:38:00 crc kubenswrapper[4840]: I0311 09:38:00.144870 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 11 09:38:00 crc kubenswrapper[4840]: I0311 09:38:00.152528 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553698-528pg"] Mar 11 09:38:00 crc kubenswrapper[4840]: I0311 09:38:00.153873 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-q6lwc" Mar 11 09:38:00 crc kubenswrapper[4840]: I0311 09:38:00.154029 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 11 09:38:00 crc kubenswrapper[4840]: I0311 09:38:00.190537 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v59mp\" (UniqueName: \"kubernetes.io/projected/8df0adb4-be88-4988-bed8-184d43f7984c-kube-api-access-v59mp\") pod \"auto-csr-approver-29553698-528pg\" (UID: \"8df0adb4-be88-4988-bed8-184d43f7984c\") " pod="openshift-infra/auto-csr-approver-29553698-528pg" Mar 11 09:38:00 crc kubenswrapper[4840]: I0311 09:38:00.292211 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v59mp\" (UniqueName: \"kubernetes.io/projected/8df0adb4-be88-4988-bed8-184d43f7984c-kube-api-access-v59mp\") pod \"auto-csr-approver-29553698-528pg\" (UID: \"8df0adb4-be88-4988-bed8-184d43f7984c\") " pod="openshift-infra/auto-csr-approver-29553698-528pg" Mar 11 09:38:00 crc kubenswrapper[4840]: I0311 09:38:00.313353 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v59mp\" (UniqueName: \"kubernetes.io/projected/8df0adb4-be88-4988-bed8-184d43f7984c-kube-api-access-v59mp\") pod \"auto-csr-approver-29553698-528pg\" (UID: \"8df0adb4-be88-4988-bed8-184d43f7984c\") " 
pod="openshift-infra/auto-csr-approver-29553698-528pg" Mar 11 09:38:00 crc kubenswrapper[4840]: I0311 09:38:00.460321 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553698-528pg" Mar 11 09:38:00 crc kubenswrapper[4840]: I0311 09:38:00.883670 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553698-528pg"] Mar 11 09:38:01 crc kubenswrapper[4840]: I0311 09:38:01.043115 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553698-528pg" event={"ID":"8df0adb4-be88-4988-bed8-184d43f7984c","Type":"ContainerStarted","Data":"3f382f79b890b12ed20cf0787aaeb835ebebca18fdcb22d206cc20d827a1302b"} Mar 11 09:38:03 crc kubenswrapper[4840]: I0311 09:38:03.064408 4840 generic.go:334] "Generic (PLEG): container finished" podID="8df0adb4-be88-4988-bed8-184d43f7984c" containerID="0a8c1072e43310f96923a9789350b012ce467d225bc7821c41c22b4976c48b8e" exitCode=0 Mar 11 09:38:03 crc kubenswrapper[4840]: I0311 09:38:03.064545 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553698-528pg" event={"ID":"8df0adb4-be88-4988-bed8-184d43f7984c","Type":"ContainerDied","Data":"0a8c1072e43310f96923a9789350b012ce467d225bc7821c41c22b4976c48b8e"} Mar 11 09:38:04 crc kubenswrapper[4840]: I0311 09:38:04.512133 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553698-528pg" Mar 11 09:38:04 crc kubenswrapper[4840]: I0311 09:38:04.650253 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v59mp\" (UniqueName: \"kubernetes.io/projected/8df0adb4-be88-4988-bed8-184d43f7984c-kube-api-access-v59mp\") pod \"8df0adb4-be88-4988-bed8-184d43f7984c\" (UID: \"8df0adb4-be88-4988-bed8-184d43f7984c\") " Mar 11 09:38:04 crc kubenswrapper[4840]: I0311 09:38:04.656376 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8df0adb4-be88-4988-bed8-184d43f7984c-kube-api-access-v59mp" (OuterVolumeSpecName: "kube-api-access-v59mp") pod "8df0adb4-be88-4988-bed8-184d43f7984c" (UID: "8df0adb4-be88-4988-bed8-184d43f7984c"). InnerVolumeSpecName "kube-api-access-v59mp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:38:04 crc kubenswrapper[4840]: I0311 09:38:04.751654 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v59mp\" (UniqueName: \"kubernetes.io/projected/8df0adb4-be88-4988-bed8-184d43f7984c-kube-api-access-v59mp\") on node \"crc\" DevicePath \"\"" Mar 11 09:38:05 crc kubenswrapper[4840]: I0311 09:38:05.060591 4840 scope.go:117] "RemoveContainer" containerID="c710991db870febe1fd3f356ee1dfce0cb83fdbd36fda869f2a6eb6f7ae8c1a2" Mar 11 09:38:05 crc kubenswrapper[4840]: E0311 09:38:05.060868 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 09:38:05 crc kubenswrapper[4840]: I0311 09:38:05.082977 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-infra/auto-csr-approver-29553698-528pg" event={"ID":"8df0adb4-be88-4988-bed8-184d43f7984c","Type":"ContainerDied","Data":"3f382f79b890b12ed20cf0787aaeb835ebebca18fdcb22d206cc20d827a1302b"} Mar 11 09:38:05 crc kubenswrapper[4840]: I0311 09:38:05.083009 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553698-528pg" Mar 11 09:38:05 crc kubenswrapper[4840]: I0311 09:38:05.083020 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3f382f79b890b12ed20cf0787aaeb835ebebca18fdcb22d206cc20d827a1302b" Mar 11 09:38:05 crc kubenswrapper[4840]: I0311 09:38:05.582098 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29553692-c2k8x"] Mar 11 09:38:05 crc kubenswrapper[4840]: I0311 09:38:05.587164 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29553692-c2k8x"] Mar 11 09:38:06 crc kubenswrapper[4840]: I0311 09:38:06.068879 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3ca333e-8898-426e-8f56-ee17d810bc6f" path="/var/lib/kubelet/pods/e3ca333e-8898-426e-8f56-ee17d810bc6f/volumes" Mar 11 09:38:09 crc kubenswrapper[4840]: I0311 09:38:09.500540 4840 scope.go:117] "RemoveContainer" containerID="78d26a727e54ccccef444909c40cc731ae71d06771c5b3e5067ce654fa1019d5" Mar 11 09:38:18 crc kubenswrapper[4840]: I0311 09:38:18.060750 4840 scope.go:117] "RemoveContainer" containerID="c710991db870febe1fd3f356ee1dfce0cb83fdbd36fda869f2a6eb6f7ae8c1a2" Mar 11 09:38:18 crc kubenswrapper[4840]: E0311 09:38:18.061442 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 09:38:31 crc kubenswrapper[4840]: I0311 09:38:31.060517 4840 scope.go:117] "RemoveContainer" containerID="c710991db870febe1fd3f356ee1dfce0cb83fdbd36fda869f2a6eb6f7ae8c1a2" Mar 11 09:38:31 crc kubenswrapper[4840]: E0311 09:38:31.061312 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 09:38:45 crc kubenswrapper[4840]: I0311 09:38:45.060129 4840 scope.go:117] "RemoveContainer" containerID="c710991db870febe1fd3f356ee1dfce0cb83fdbd36fda869f2a6eb6f7ae8c1a2" Mar 11 09:38:45 crc kubenswrapper[4840]: E0311 09:38:45.061549 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 09:38:58 crc kubenswrapper[4840]: I0311 09:38:58.061282 4840 scope.go:117] "RemoveContainer" containerID="c710991db870febe1fd3f356ee1dfce0cb83fdbd36fda869f2a6eb6f7ae8c1a2" Mar 11 09:38:58 crc kubenswrapper[4840]: E0311 09:38:58.062031 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 09:39:11 crc kubenswrapper[4840]: I0311 09:39:11.060280 4840 scope.go:117] "RemoveContainer" containerID="c710991db870febe1fd3f356ee1dfce0cb83fdbd36fda869f2a6eb6f7ae8c1a2" Mar 11 09:39:11 crc kubenswrapper[4840]: E0311 09:39:11.061328 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 09:39:23 crc kubenswrapper[4840]: I0311 09:39:23.060191 4840 scope.go:117] "RemoveContainer" containerID="c710991db870febe1fd3f356ee1dfce0cb83fdbd36fda869f2a6eb6f7ae8c1a2" Mar 11 09:39:23 crc kubenswrapper[4840]: E0311 09:39:23.061399 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 09:39:37 crc kubenswrapper[4840]: I0311 09:39:37.060240 4840 scope.go:117] "RemoveContainer" containerID="c710991db870febe1fd3f356ee1dfce0cb83fdbd36fda869f2a6eb6f7ae8c1a2" Mar 11 09:39:37 crc kubenswrapper[4840]: I0311 09:39:37.948206 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-brtht" 
event={"ID":"8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d","Type":"ContainerStarted","Data":"d03747e4fc9bc46a24602d6964eb7dac7d34cf189febc04713e199f66cc16972"} Mar 11 09:40:00 crc kubenswrapper[4840]: I0311 09:40:00.144335 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29553700-dg4x5"] Mar 11 09:40:00 crc kubenswrapper[4840]: E0311 09:40:00.145216 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8df0adb4-be88-4988-bed8-184d43f7984c" containerName="oc" Mar 11 09:40:00 crc kubenswrapper[4840]: I0311 09:40:00.145229 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="8df0adb4-be88-4988-bed8-184d43f7984c" containerName="oc" Mar 11 09:40:00 crc kubenswrapper[4840]: I0311 09:40:00.145376 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="8df0adb4-be88-4988-bed8-184d43f7984c" containerName="oc" Mar 11 09:40:00 crc kubenswrapper[4840]: I0311 09:40:00.145909 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553700-dg4x5" Mar 11 09:40:00 crc kubenswrapper[4840]: I0311 09:40:00.148805 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-q6lwc" Mar 11 09:40:00 crc kubenswrapper[4840]: I0311 09:40:00.150264 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 11 09:40:00 crc kubenswrapper[4840]: I0311 09:40:00.153841 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 11 09:40:00 crc kubenswrapper[4840]: I0311 09:40:00.160422 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553700-dg4x5"] Mar 11 09:40:00 crc kubenswrapper[4840]: I0311 09:40:00.293077 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjbtf\" (UniqueName: 
\"kubernetes.io/projected/cf092727-a72a-447d-a0fe-f62a283ee577-kube-api-access-rjbtf\") pod \"auto-csr-approver-29553700-dg4x5\" (UID: \"cf092727-a72a-447d-a0fe-f62a283ee577\") " pod="openshift-infra/auto-csr-approver-29553700-dg4x5" Mar 11 09:40:00 crc kubenswrapper[4840]: I0311 09:40:00.395135 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjbtf\" (UniqueName: \"kubernetes.io/projected/cf092727-a72a-447d-a0fe-f62a283ee577-kube-api-access-rjbtf\") pod \"auto-csr-approver-29553700-dg4x5\" (UID: \"cf092727-a72a-447d-a0fe-f62a283ee577\") " pod="openshift-infra/auto-csr-approver-29553700-dg4x5" Mar 11 09:40:00 crc kubenswrapper[4840]: I0311 09:40:00.418909 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjbtf\" (UniqueName: \"kubernetes.io/projected/cf092727-a72a-447d-a0fe-f62a283ee577-kube-api-access-rjbtf\") pod \"auto-csr-approver-29553700-dg4x5\" (UID: \"cf092727-a72a-447d-a0fe-f62a283ee577\") " pod="openshift-infra/auto-csr-approver-29553700-dg4x5" Mar 11 09:40:00 crc kubenswrapper[4840]: I0311 09:40:00.466337 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553700-dg4x5" Mar 11 09:40:00 crc kubenswrapper[4840]: I0311 09:40:00.904047 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553700-dg4x5"] Mar 11 09:40:00 crc kubenswrapper[4840]: I0311 09:40:00.905727 4840 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 11 09:40:01 crc kubenswrapper[4840]: I0311 09:40:01.103222 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553700-dg4x5" event={"ID":"cf092727-a72a-447d-a0fe-f62a283ee577","Type":"ContainerStarted","Data":"4431e439f08c261f4ae94b7bb11bce5e4791a7a231eeae1784b3931f56f38879"} Mar 11 09:40:03 crc kubenswrapper[4840]: I0311 09:40:03.116360 4840 generic.go:334] "Generic (PLEG): container finished" podID="cf092727-a72a-447d-a0fe-f62a283ee577" containerID="d6e45f630bb8c342ec86ae3ffa1b8e3926636bf79a82b6d034d57769100d31df" exitCode=0 Mar 11 09:40:03 crc kubenswrapper[4840]: I0311 09:40:03.116446 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553700-dg4x5" event={"ID":"cf092727-a72a-447d-a0fe-f62a283ee577","Type":"ContainerDied","Data":"d6e45f630bb8c342ec86ae3ffa1b8e3926636bf79a82b6d034d57769100d31df"} Mar 11 09:40:04 crc kubenswrapper[4840]: I0311 09:40:04.416976 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553700-dg4x5" Mar 11 09:40:04 crc kubenswrapper[4840]: I0311 09:40:04.550757 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rjbtf\" (UniqueName: \"kubernetes.io/projected/cf092727-a72a-447d-a0fe-f62a283ee577-kube-api-access-rjbtf\") pod \"cf092727-a72a-447d-a0fe-f62a283ee577\" (UID: \"cf092727-a72a-447d-a0fe-f62a283ee577\") " Mar 11 09:40:04 crc kubenswrapper[4840]: I0311 09:40:04.556599 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf092727-a72a-447d-a0fe-f62a283ee577-kube-api-access-rjbtf" (OuterVolumeSpecName: "kube-api-access-rjbtf") pod "cf092727-a72a-447d-a0fe-f62a283ee577" (UID: "cf092727-a72a-447d-a0fe-f62a283ee577"). InnerVolumeSpecName "kube-api-access-rjbtf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:40:04 crc kubenswrapper[4840]: I0311 09:40:04.652746 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rjbtf\" (UniqueName: \"kubernetes.io/projected/cf092727-a72a-447d-a0fe-f62a283ee577-kube-api-access-rjbtf\") on node \"crc\" DevicePath \"\"" Mar 11 09:40:05 crc kubenswrapper[4840]: I0311 09:40:05.148321 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553700-dg4x5" event={"ID":"cf092727-a72a-447d-a0fe-f62a283ee577","Type":"ContainerDied","Data":"4431e439f08c261f4ae94b7bb11bce5e4791a7a231eeae1784b3931f56f38879"} Mar 11 09:40:05 crc kubenswrapper[4840]: I0311 09:40:05.148729 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4431e439f08c261f4ae94b7bb11bce5e4791a7a231eeae1784b3931f56f38879" Mar 11 09:40:05 crc kubenswrapper[4840]: I0311 09:40:05.148808 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553700-dg4x5" Mar 11 09:40:05 crc kubenswrapper[4840]: I0311 09:40:05.479915 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29553694-jc72q"] Mar 11 09:40:05 crc kubenswrapper[4840]: I0311 09:40:05.485384 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29553694-jc72q"] Mar 11 09:40:06 crc kubenswrapper[4840]: I0311 09:40:06.070520 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f279dd8-0d1b-45b8-b61f-cbb1fc073c01" path="/var/lib/kubelet/pods/8f279dd8-0d1b-45b8-b61f-cbb1fc073c01/volumes" Mar 11 09:40:09 crc kubenswrapper[4840]: I0311 09:40:09.586698 4840 scope.go:117] "RemoveContainer" containerID="10ccf11e09f6947344c93ed11e7a612ba981f1d8e40e76cf5b5c8495dfc6e282" Mar 11 09:40:38 crc kubenswrapper[4840]: I0311 09:40:38.151158 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-r6lcr"] Mar 11 09:40:38 crc kubenswrapper[4840]: E0311 09:40:38.152432 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf092727-a72a-447d-a0fe-f62a283ee577" containerName="oc" Mar 11 09:40:38 crc kubenswrapper[4840]: I0311 09:40:38.152449 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf092727-a72a-447d-a0fe-f62a283ee577" containerName="oc" Mar 11 09:40:38 crc kubenswrapper[4840]: I0311 09:40:38.153078 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf092727-a72a-447d-a0fe-f62a283ee577" containerName="oc" Mar 11 09:40:38 crc kubenswrapper[4840]: I0311 09:40:38.179563 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-r6lcr" Mar 11 09:40:38 crc kubenswrapper[4840]: I0311 09:40:38.194911 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-r6lcr"] Mar 11 09:40:38 crc kubenswrapper[4840]: I0311 09:40:38.282040 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2tvm\" (UniqueName: \"kubernetes.io/projected/3c1769da-3e60-4c96-9f09-485ba5ab47ba-kube-api-access-g2tvm\") pod \"certified-operators-r6lcr\" (UID: \"3c1769da-3e60-4c96-9f09-485ba5ab47ba\") " pod="openshift-marketplace/certified-operators-r6lcr" Mar 11 09:40:38 crc kubenswrapper[4840]: I0311 09:40:38.282243 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c1769da-3e60-4c96-9f09-485ba5ab47ba-catalog-content\") pod \"certified-operators-r6lcr\" (UID: \"3c1769da-3e60-4c96-9f09-485ba5ab47ba\") " pod="openshift-marketplace/certified-operators-r6lcr" Mar 11 09:40:38 crc kubenswrapper[4840]: I0311 09:40:38.282276 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c1769da-3e60-4c96-9f09-485ba5ab47ba-utilities\") pod \"certified-operators-r6lcr\" (UID: \"3c1769da-3e60-4c96-9f09-485ba5ab47ba\") " pod="openshift-marketplace/certified-operators-r6lcr" Mar 11 09:40:38 crc kubenswrapper[4840]: I0311 09:40:38.383122 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2tvm\" (UniqueName: \"kubernetes.io/projected/3c1769da-3e60-4c96-9f09-485ba5ab47ba-kube-api-access-g2tvm\") pod \"certified-operators-r6lcr\" (UID: \"3c1769da-3e60-4c96-9f09-485ba5ab47ba\") " pod="openshift-marketplace/certified-operators-r6lcr" Mar 11 09:40:38 crc kubenswrapper[4840]: I0311 09:40:38.383249 4840 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c1769da-3e60-4c96-9f09-485ba5ab47ba-catalog-content\") pod \"certified-operators-r6lcr\" (UID: \"3c1769da-3e60-4c96-9f09-485ba5ab47ba\") " pod="openshift-marketplace/certified-operators-r6lcr" Mar 11 09:40:38 crc kubenswrapper[4840]: I0311 09:40:38.383276 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c1769da-3e60-4c96-9f09-485ba5ab47ba-utilities\") pod \"certified-operators-r6lcr\" (UID: \"3c1769da-3e60-4c96-9f09-485ba5ab47ba\") " pod="openshift-marketplace/certified-operators-r6lcr" Mar 11 09:40:38 crc kubenswrapper[4840]: I0311 09:40:38.383863 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c1769da-3e60-4c96-9f09-485ba5ab47ba-utilities\") pod \"certified-operators-r6lcr\" (UID: \"3c1769da-3e60-4c96-9f09-485ba5ab47ba\") " pod="openshift-marketplace/certified-operators-r6lcr" Mar 11 09:40:38 crc kubenswrapper[4840]: I0311 09:40:38.383967 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c1769da-3e60-4c96-9f09-485ba5ab47ba-catalog-content\") pod \"certified-operators-r6lcr\" (UID: \"3c1769da-3e60-4c96-9f09-485ba5ab47ba\") " pod="openshift-marketplace/certified-operators-r6lcr" Mar 11 09:40:38 crc kubenswrapper[4840]: I0311 09:40:38.405724 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2tvm\" (UniqueName: \"kubernetes.io/projected/3c1769da-3e60-4c96-9f09-485ba5ab47ba-kube-api-access-g2tvm\") pod \"certified-operators-r6lcr\" (UID: \"3c1769da-3e60-4c96-9f09-485ba5ab47ba\") " pod="openshift-marketplace/certified-operators-r6lcr" Mar 11 09:40:38 crc kubenswrapper[4840]: I0311 09:40:38.514796 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-r6lcr" Mar 11 09:40:38 crc kubenswrapper[4840]: I0311 09:40:38.765743 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-r6lcr"] Mar 11 09:40:39 crc kubenswrapper[4840]: I0311 09:40:39.393058 4840 generic.go:334] "Generic (PLEG): container finished" podID="3c1769da-3e60-4c96-9f09-485ba5ab47ba" containerID="dc82ab827afcb8112eb9e1c6e33e97132262461ac1dba19bb182472e38eb26de" exitCode=0 Mar 11 09:40:39 crc kubenswrapper[4840]: I0311 09:40:39.393156 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r6lcr" event={"ID":"3c1769da-3e60-4c96-9f09-485ba5ab47ba","Type":"ContainerDied","Data":"dc82ab827afcb8112eb9e1c6e33e97132262461ac1dba19bb182472e38eb26de"} Mar 11 09:40:39 crc kubenswrapper[4840]: I0311 09:40:39.393388 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r6lcr" event={"ID":"3c1769da-3e60-4c96-9f09-485ba5ab47ba","Type":"ContainerStarted","Data":"fd2becbf4d0bbfa4b9cc680cedc3d9a2a597830211a60983f4b720f614fc8d69"} Mar 11 09:40:43 crc kubenswrapper[4840]: I0311 09:40:43.425145 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r6lcr" event={"ID":"3c1769da-3e60-4c96-9f09-485ba5ab47ba","Type":"ContainerStarted","Data":"88da82c8f25aa77dc61727abd036e74df67153bf82b2c52d9e123a08eadf48ad"} Mar 11 09:40:44 crc kubenswrapper[4840]: I0311 09:40:44.437161 4840 generic.go:334] "Generic (PLEG): container finished" podID="3c1769da-3e60-4c96-9f09-485ba5ab47ba" containerID="88da82c8f25aa77dc61727abd036e74df67153bf82b2c52d9e123a08eadf48ad" exitCode=0 Mar 11 09:40:44 crc kubenswrapper[4840]: I0311 09:40:44.437254 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r6lcr" 
event={"ID":"3c1769da-3e60-4c96-9f09-485ba5ab47ba","Type":"ContainerDied","Data":"88da82c8f25aa77dc61727abd036e74df67153bf82b2c52d9e123a08eadf48ad"} Mar 11 09:40:45 crc kubenswrapper[4840]: I0311 09:40:45.450957 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r6lcr" event={"ID":"3c1769da-3e60-4c96-9f09-485ba5ab47ba","Type":"ContainerStarted","Data":"f1fa1a9ccf89e292c07fb1bf9bce10aa53ac355781f823642260506e03c0fd63"} Mar 11 09:40:45 crc kubenswrapper[4840]: I0311 09:40:45.474968 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-r6lcr" podStartSLOduration=2.001619174 podStartE2EDuration="7.47494147s" podCreationTimestamp="2026-03-11 09:40:38 +0000 UTC" firstStartedPulling="2026-03-11 09:40:39.395557856 +0000 UTC m=+2638.061227671" lastFinishedPulling="2026-03-11 09:40:44.868880152 +0000 UTC m=+2643.534549967" observedRunningTime="2026-03-11 09:40:45.471323598 +0000 UTC m=+2644.136993413" watchObservedRunningTime="2026-03-11 09:40:45.47494147 +0000 UTC m=+2644.140611285" Mar 11 09:40:48 crc kubenswrapper[4840]: I0311 09:40:48.515315 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-r6lcr" Mar 11 09:40:48 crc kubenswrapper[4840]: I0311 09:40:48.515760 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-r6lcr" Mar 11 09:40:48 crc kubenswrapper[4840]: I0311 09:40:48.564686 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-r6lcr" Mar 11 09:40:58 crc kubenswrapper[4840]: I0311 09:40:58.558025 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-r6lcr" Mar 11 09:40:58 crc kubenswrapper[4840]: I0311 09:40:58.622311 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/certified-operators-r6lcr"] Mar 11 09:40:58 crc kubenswrapper[4840]: I0311 09:40:58.678215 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dnnpl"] Mar 11 09:40:58 crc kubenswrapper[4840]: I0311 09:40:58.678456 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-dnnpl" podUID="c93238da-07d7-42ab-8b86-59e30ebfe3e5" containerName="registry-server" containerID="cri-o://c27710c2b6b14777ff15bfdc637b0f5f1177a87cac6ef578df93405779ce8860" gracePeriod=2 Mar 11 09:40:59 crc kubenswrapper[4840]: I0311 09:40:59.079181 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dnnpl" Mar 11 09:40:59 crc kubenswrapper[4840]: I0311 09:40:59.196685 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c93238da-07d7-42ab-8b86-59e30ebfe3e5-utilities\") pod \"c93238da-07d7-42ab-8b86-59e30ebfe3e5\" (UID: \"c93238da-07d7-42ab-8b86-59e30ebfe3e5\") " Mar 11 09:40:59 crc kubenswrapper[4840]: I0311 09:40:59.196859 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c93238da-07d7-42ab-8b86-59e30ebfe3e5-catalog-content\") pod \"c93238da-07d7-42ab-8b86-59e30ebfe3e5\" (UID: \"c93238da-07d7-42ab-8b86-59e30ebfe3e5\") " Mar 11 09:40:59 crc kubenswrapper[4840]: I0311 09:40:59.196914 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4qch\" (UniqueName: \"kubernetes.io/projected/c93238da-07d7-42ab-8b86-59e30ebfe3e5-kube-api-access-q4qch\") pod \"c93238da-07d7-42ab-8b86-59e30ebfe3e5\" (UID: \"c93238da-07d7-42ab-8b86-59e30ebfe3e5\") " Mar 11 09:40:59 crc kubenswrapper[4840]: I0311 09:40:59.197222 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/empty-dir/c93238da-07d7-42ab-8b86-59e30ebfe3e5-utilities" (OuterVolumeSpecName: "utilities") pod "c93238da-07d7-42ab-8b86-59e30ebfe3e5" (UID: "c93238da-07d7-42ab-8b86-59e30ebfe3e5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:40:59 crc kubenswrapper[4840]: I0311 09:40:59.205803 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c93238da-07d7-42ab-8b86-59e30ebfe3e5-kube-api-access-q4qch" (OuterVolumeSpecName: "kube-api-access-q4qch") pod "c93238da-07d7-42ab-8b86-59e30ebfe3e5" (UID: "c93238da-07d7-42ab-8b86-59e30ebfe3e5"). InnerVolumeSpecName "kube-api-access-q4qch". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:40:59 crc kubenswrapper[4840]: I0311 09:40:59.253969 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c93238da-07d7-42ab-8b86-59e30ebfe3e5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c93238da-07d7-42ab-8b86-59e30ebfe3e5" (UID: "c93238da-07d7-42ab-8b86-59e30ebfe3e5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:40:59 crc kubenswrapper[4840]: I0311 09:40:59.298401 4840 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c93238da-07d7-42ab-8b86-59e30ebfe3e5-utilities\") on node \"crc\" DevicePath \"\"" Mar 11 09:40:59 crc kubenswrapper[4840]: I0311 09:40:59.298439 4840 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c93238da-07d7-42ab-8b86-59e30ebfe3e5-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 11 09:40:59 crc kubenswrapper[4840]: I0311 09:40:59.298455 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q4qch\" (UniqueName: \"kubernetes.io/projected/c93238da-07d7-42ab-8b86-59e30ebfe3e5-kube-api-access-q4qch\") on node \"crc\" DevicePath \"\"" Mar 11 09:40:59 crc kubenswrapper[4840]: I0311 09:40:59.893216 4840 generic.go:334] "Generic (PLEG): container finished" podID="c93238da-07d7-42ab-8b86-59e30ebfe3e5" containerID="c27710c2b6b14777ff15bfdc637b0f5f1177a87cac6ef578df93405779ce8860" exitCode=0 Mar 11 09:40:59 crc kubenswrapper[4840]: I0311 09:40:59.894343 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dnnpl" Mar 11 09:40:59 crc kubenswrapper[4840]: I0311 09:40:59.899532 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dnnpl" event={"ID":"c93238da-07d7-42ab-8b86-59e30ebfe3e5","Type":"ContainerDied","Data":"c27710c2b6b14777ff15bfdc637b0f5f1177a87cac6ef578df93405779ce8860"} Mar 11 09:40:59 crc kubenswrapper[4840]: I0311 09:40:59.899597 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dnnpl" event={"ID":"c93238da-07d7-42ab-8b86-59e30ebfe3e5","Type":"ContainerDied","Data":"e716aca91b9f6452e4c3de12e06c8e8df528a56c03ed03cd1da0080d202f8991"} Mar 11 09:40:59 crc kubenswrapper[4840]: I0311 09:40:59.899625 4840 scope.go:117] "RemoveContainer" containerID="c27710c2b6b14777ff15bfdc637b0f5f1177a87cac6ef578df93405779ce8860" Mar 11 09:40:59 crc kubenswrapper[4840]: I0311 09:40:59.930618 4840 scope.go:117] "RemoveContainer" containerID="f69da28932e3a19f51664d91ad48e74abb671ae9a83c8fc1044d25c3c4ae2919" Mar 11 09:40:59 crc kubenswrapper[4840]: I0311 09:40:59.946029 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dnnpl"] Mar 11 09:40:59 crc kubenswrapper[4840]: I0311 09:40:59.956707 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-dnnpl"] Mar 11 09:40:59 crc kubenswrapper[4840]: I0311 09:40:59.963987 4840 scope.go:117] "RemoveContainer" containerID="32ca9123015f8b75edfbb48d3b3ae242b6cabee2010cdd1930f43a0242ee8283" Mar 11 09:40:59 crc kubenswrapper[4840]: I0311 09:40:59.984624 4840 scope.go:117] "RemoveContainer" containerID="c27710c2b6b14777ff15bfdc637b0f5f1177a87cac6ef578df93405779ce8860" Mar 11 09:40:59 crc kubenswrapper[4840]: E0311 09:40:59.985082 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"c27710c2b6b14777ff15bfdc637b0f5f1177a87cac6ef578df93405779ce8860\": container with ID starting with c27710c2b6b14777ff15bfdc637b0f5f1177a87cac6ef578df93405779ce8860 not found: ID does not exist" containerID="c27710c2b6b14777ff15bfdc637b0f5f1177a87cac6ef578df93405779ce8860" Mar 11 09:40:59 crc kubenswrapper[4840]: I0311 09:40:59.985127 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c27710c2b6b14777ff15bfdc637b0f5f1177a87cac6ef578df93405779ce8860"} err="failed to get container status \"c27710c2b6b14777ff15bfdc637b0f5f1177a87cac6ef578df93405779ce8860\": rpc error: code = NotFound desc = could not find container \"c27710c2b6b14777ff15bfdc637b0f5f1177a87cac6ef578df93405779ce8860\": container with ID starting with c27710c2b6b14777ff15bfdc637b0f5f1177a87cac6ef578df93405779ce8860 not found: ID does not exist" Mar 11 09:40:59 crc kubenswrapper[4840]: I0311 09:40:59.985157 4840 scope.go:117] "RemoveContainer" containerID="f69da28932e3a19f51664d91ad48e74abb671ae9a83c8fc1044d25c3c4ae2919" Mar 11 09:40:59 crc kubenswrapper[4840]: E0311 09:40:59.985399 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f69da28932e3a19f51664d91ad48e74abb671ae9a83c8fc1044d25c3c4ae2919\": container with ID starting with f69da28932e3a19f51664d91ad48e74abb671ae9a83c8fc1044d25c3c4ae2919 not found: ID does not exist" containerID="f69da28932e3a19f51664d91ad48e74abb671ae9a83c8fc1044d25c3c4ae2919" Mar 11 09:40:59 crc kubenswrapper[4840]: I0311 09:40:59.985424 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f69da28932e3a19f51664d91ad48e74abb671ae9a83c8fc1044d25c3c4ae2919"} err="failed to get container status \"f69da28932e3a19f51664d91ad48e74abb671ae9a83c8fc1044d25c3c4ae2919\": rpc error: code = NotFound desc = could not find container \"f69da28932e3a19f51664d91ad48e74abb671ae9a83c8fc1044d25c3c4ae2919\": container with ID 
starting with f69da28932e3a19f51664d91ad48e74abb671ae9a83c8fc1044d25c3c4ae2919 not found: ID does not exist" Mar 11 09:40:59 crc kubenswrapper[4840]: I0311 09:40:59.985447 4840 scope.go:117] "RemoveContainer" containerID="32ca9123015f8b75edfbb48d3b3ae242b6cabee2010cdd1930f43a0242ee8283" Mar 11 09:40:59 crc kubenswrapper[4840]: E0311 09:40:59.985754 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32ca9123015f8b75edfbb48d3b3ae242b6cabee2010cdd1930f43a0242ee8283\": container with ID starting with 32ca9123015f8b75edfbb48d3b3ae242b6cabee2010cdd1930f43a0242ee8283 not found: ID does not exist" containerID="32ca9123015f8b75edfbb48d3b3ae242b6cabee2010cdd1930f43a0242ee8283" Mar 11 09:40:59 crc kubenswrapper[4840]: I0311 09:40:59.985807 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32ca9123015f8b75edfbb48d3b3ae242b6cabee2010cdd1930f43a0242ee8283"} err="failed to get container status \"32ca9123015f8b75edfbb48d3b3ae242b6cabee2010cdd1930f43a0242ee8283\": rpc error: code = NotFound desc = could not find container \"32ca9123015f8b75edfbb48d3b3ae242b6cabee2010cdd1930f43a0242ee8283\": container with ID starting with 32ca9123015f8b75edfbb48d3b3ae242b6cabee2010cdd1930f43a0242ee8283 not found: ID does not exist" Mar 11 09:41:00 crc kubenswrapper[4840]: E0311 09:41:00.064374 4840 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc93238da_07d7_42ab_8b86_59e30ebfe3e5.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc93238da_07d7_42ab_8b86_59e30ebfe3e5.slice/crio-e716aca91b9f6452e4c3de12e06c8e8df528a56c03ed03cd1da0080d202f8991\": RecentStats: unable to find data in memory cache]" Mar 11 09:41:00 crc kubenswrapper[4840]: I0311 09:41:00.074221 4840 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c93238da-07d7-42ab-8b86-59e30ebfe3e5" path="/var/lib/kubelet/pods/c93238da-07d7-42ab-8b86-59e30ebfe3e5/volumes" Mar 11 09:41:03 crc kubenswrapper[4840]: I0311 09:41:03.925792 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-lb99z"] Mar 11 09:41:03 crc kubenswrapper[4840]: E0311 09:41:03.926719 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c93238da-07d7-42ab-8b86-59e30ebfe3e5" containerName="registry-server" Mar 11 09:41:03 crc kubenswrapper[4840]: I0311 09:41:03.926734 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="c93238da-07d7-42ab-8b86-59e30ebfe3e5" containerName="registry-server" Mar 11 09:41:03 crc kubenswrapper[4840]: E0311 09:41:03.926752 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c93238da-07d7-42ab-8b86-59e30ebfe3e5" containerName="extract-utilities" Mar 11 09:41:03 crc kubenswrapper[4840]: I0311 09:41:03.926759 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="c93238da-07d7-42ab-8b86-59e30ebfe3e5" containerName="extract-utilities" Mar 11 09:41:03 crc kubenswrapper[4840]: E0311 09:41:03.926780 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c93238da-07d7-42ab-8b86-59e30ebfe3e5" containerName="extract-content" Mar 11 09:41:03 crc kubenswrapper[4840]: I0311 09:41:03.926787 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="c93238da-07d7-42ab-8b86-59e30ebfe3e5" containerName="extract-content" Mar 11 09:41:03 crc kubenswrapper[4840]: I0311 09:41:03.926920 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="c93238da-07d7-42ab-8b86-59e30ebfe3e5" containerName="registry-server" Mar 11 09:41:03 crc kubenswrapper[4840]: I0311 09:41:03.928020 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lb99z" Mar 11 09:41:03 crc kubenswrapper[4840]: I0311 09:41:03.934912 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lb99z"] Mar 11 09:41:03 crc kubenswrapper[4840]: I0311 09:41:03.962391 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ftvj\" (UniqueName: \"kubernetes.io/projected/00d35b54-5aea-425e-8218-f7ac74106b6c-kube-api-access-2ftvj\") pod \"community-operators-lb99z\" (UID: \"00d35b54-5aea-425e-8218-f7ac74106b6c\") " pod="openshift-marketplace/community-operators-lb99z" Mar 11 09:41:03 crc kubenswrapper[4840]: I0311 09:41:03.962501 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00d35b54-5aea-425e-8218-f7ac74106b6c-catalog-content\") pod \"community-operators-lb99z\" (UID: \"00d35b54-5aea-425e-8218-f7ac74106b6c\") " pod="openshift-marketplace/community-operators-lb99z" Mar 11 09:41:03 crc kubenswrapper[4840]: I0311 09:41:03.962712 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00d35b54-5aea-425e-8218-f7ac74106b6c-utilities\") pod \"community-operators-lb99z\" (UID: \"00d35b54-5aea-425e-8218-f7ac74106b6c\") " pod="openshift-marketplace/community-operators-lb99z" Mar 11 09:41:04 crc kubenswrapper[4840]: I0311 09:41:04.064224 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00d35b54-5aea-425e-8218-f7ac74106b6c-utilities\") pod \"community-operators-lb99z\" (UID: \"00d35b54-5aea-425e-8218-f7ac74106b6c\") " pod="openshift-marketplace/community-operators-lb99z" Mar 11 09:41:04 crc kubenswrapper[4840]: I0311 09:41:04.064313 4840 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-2ftvj\" (UniqueName: \"kubernetes.io/projected/00d35b54-5aea-425e-8218-f7ac74106b6c-kube-api-access-2ftvj\") pod \"community-operators-lb99z\" (UID: \"00d35b54-5aea-425e-8218-f7ac74106b6c\") " pod="openshift-marketplace/community-operators-lb99z" Mar 11 09:41:04 crc kubenswrapper[4840]: I0311 09:41:04.064393 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00d35b54-5aea-425e-8218-f7ac74106b6c-catalog-content\") pod \"community-operators-lb99z\" (UID: \"00d35b54-5aea-425e-8218-f7ac74106b6c\") " pod="openshift-marketplace/community-operators-lb99z" Mar 11 09:41:04 crc kubenswrapper[4840]: I0311 09:41:04.064803 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00d35b54-5aea-425e-8218-f7ac74106b6c-utilities\") pod \"community-operators-lb99z\" (UID: \"00d35b54-5aea-425e-8218-f7ac74106b6c\") " pod="openshift-marketplace/community-operators-lb99z" Mar 11 09:41:04 crc kubenswrapper[4840]: I0311 09:41:04.064912 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00d35b54-5aea-425e-8218-f7ac74106b6c-catalog-content\") pod \"community-operators-lb99z\" (UID: \"00d35b54-5aea-425e-8218-f7ac74106b6c\") " pod="openshift-marketplace/community-operators-lb99z" Mar 11 09:41:04 crc kubenswrapper[4840]: I0311 09:41:04.088718 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ftvj\" (UniqueName: \"kubernetes.io/projected/00d35b54-5aea-425e-8218-f7ac74106b6c-kube-api-access-2ftvj\") pod \"community-operators-lb99z\" (UID: \"00d35b54-5aea-425e-8218-f7ac74106b6c\") " pod="openshift-marketplace/community-operators-lb99z" Mar 11 09:41:04 crc kubenswrapper[4840]: I0311 09:41:04.258937 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lb99z" Mar 11 09:41:04 crc kubenswrapper[4840]: I0311 09:41:04.762606 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lb99z"] Mar 11 09:41:04 crc kubenswrapper[4840]: I0311 09:41:04.946490 4840 generic.go:334] "Generic (PLEG): container finished" podID="00d35b54-5aea-425e-8218-f7ac74106b6c" containerID="8d09e44368b7d5aa0770eb38527b125ad4515e6b85512b67571503cfc2da38c1" exitCode=0 Mar 11 09:41:04 crc kubenswrapper[4840]: I0311 09:41:04.946541 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lb99z" event={"ID":"00d35b54-5aea-425e-8218-f7ac74106b6c","Type":"ContainerDied","Data":"8d09e44368b7d5aa0770eb38527b125ad4515e6b85512b67571503cfc2da38c1"} Mar 11 09:41:04 crc kubenswrapper[4840]: I0311 09:41:04.946589 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lb99z" event={"ID":"00d35b54-5aea-425e-8218-f7ac74106b6c","Type":"ContainerStarted","Data":"2b64636cd2427eb4923817c144df8d1b59e7b2ee441ca849ec7e2c15340eb82b"} Mar 11 09:41:05 crc kubenswrapper[4840]: I0311 09:41:05.956727 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lb99z" event={"ID":"00d35b54-5aea-425e-8218-f7ac74106b6c","Type":"ContainerStarted","Data":"f3173ba1ffdb1f15de347266acfa2b5c9b4ee0685f779136d39ccff7660287ec"} Mar 11 09:41:06 crc kubenswrapper[4840]: I0311 09:41:06.969069 4840 generic.go:334] "Generic (PLEG): container finished" podID="00d35b54-5aea-425e-8218-f7ac74106b6c" containerID="f3173ba1ffdb1f15de347266acfa2b5c9b4ee0685f779136d39ccff7660287ec" exitCode=0 Mar 11 09:41:06 crc kubenswrapper[4840]: I0311 09:41:06.969187 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lb99z" 
event={"ID":"00d35b54-5aea-425e-8218-f7ac74106b6c","Type":"ContainerDied","Data":"f3173ba1ffdb1f15de347266acfa2b5c9b4ee0685f779136d39ccff7660287ec"} Mar 11 09:41:07 crc kubenswrapper[4840]: I0311 09:41:07.981328 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lb99z" event={"ID":"00d35b54-5aea-425e-8218-f7ac74106b6c","Type":"ContainerStarted","Data":"2a67dbee12ac6c8ab766ba20ba438eb5298c130a5cf8cc6478b41a1c51f47fe6"} Mar 11 09:41:09 crc kubenswrapper[4840]: I0311 09:41:09.014712 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-lb99z" podStartSLOduration=3.300104365 podStartE2EDuration="6.014688859s" podCreationTimestamp="2026-03-11 09:41:03 +0000 UTC" firstStartedPulling="2026-03-11 09:41:04.948003004 +0000 UTC m=+2663.613672819" lastFinishedPulling="2026-03-11 09:41:07.662587498 +0000 UTC m=+2666.328257313" observedRunningTime="2026-03-11 09:41:09.008287317 +0000 UTC m=+2667.673957142" watchObservedRunningTime="2026-03-11 09:41:09.014688859 +0000 UTC m=+2667.680358664" Mar 11 09:41:14 crc kubenswrapper[4840]: I0311 09:41:14.259905 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-lb99z" Mar 11 09:41:14 crc kubenswrapper[4840]: I0311 09:41:14.260530 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-lb99z" Mar 11 09:41:14 crc kubenswrapper[4840]: I0311 09:41:14.307085 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-lb99z" Mar 11 09:41:15 crc kubenswrapper[4840]: I0311 09:41:15.078262 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-lb99z" Mar 11 09:41:15 crc kubenswrapper[4840]: I0311 09:41:15.129789 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-lb99z"] Mar 11 09:41:17 crc kubenswrapper[4840]: I0311 09:41:17.048377 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-lb99z" podUID="00d35b54-5aea-425e-8218-f7ac74106b6c" containerName="registry-server" containerID="cri-o://2a67dbee12ac6c8ab766ba20ba438eb5298c130a5cf8cc6478b41a1c51f47fe6" gracePeriod=2 Mar 11 09:41:18 crc kubenswrapper[4840]: I0311 09:41:18.024026 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lb99z" Mar 11 09:41:18 crc kubenswrapper[4840]: I0311 09:41:18.069448 4840 generic.go:334] "Generic (PLEG): container finished" podID="00d35b54-5aea-425e-8218-f7ac74106b6c" containerID="2a67dbee12ac6c8ab766ba20ba438eb5298c130a5cf8cc6478b41a1c51f47fe6" exitCode=0 Mar 11 09:41:18 crc kubenswrapper[4840]: I0311 09:41:18.069545 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lb99z" Mar 11 09:41:18 crc kubenswrapper[4840]: I0311 09:41:18.074428 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lb99z" event={"ID":"00d35b54-5aea-425e-8218-f7ac74106b6c","Type":"ContainerDied","Data":"2a67dbee12ac6c8ab766ba20ba438eb5298c130a5cf8cc6478b41a1c51f47fe6"} Mar 11 09:41:18 crc kubenswrapper[4840]: I0311 09:41:18.074487 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lb99z" event={"ID":"00d35b54-5aea-425e-8218-f7ac74106b6c","Type":"ContainerDied","Data":"2b64636cd2427eb4923817c144df8d1b59e7b2ee441ca849ec7e2c15340eb82b"} Mar 11 09:41:18 crc kubenswrapper[4840]: I0311 09:41:18.074510 4840 scope.go:117] "RemoveContainer" containerID="2a67dbee12ac6c8ab766ba20ba438eb5298c130a5cf8cc6478b41a1c51f47fe6" Mar 11 09:41:18 crc kubenswrapper[4840]: I0311 09:41:18.093278 4840 scope.go:117] "RemoveContainer" containerID="f3173ba1ffdb1f15de347266acfa2b5c9b4ee0685f779136d39ccff7660287ec" Mar 11 09:41:18 crc kubenswrapper[4840]: I0311 09:41:18.097593 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00d35b54-5aea-425e-8218-f7ac74106b6c-catalog-content\") pod \"00d35b54-5aea-425e-8218-f7ac74106b6c\" (UID: \"00d35b54-5aea-425e-8218-f7ac74106b6c\") " Mar 11 09:41:18 crc kubenswrapper[4840]: I0311 09:41:18.097705 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00d35b54-5aea-425e-8218-f7ac74106b6c-utilities\") pod \"00d35b54-5aea-425e-8218-f7ac74106b6c\" (UID: \"00d35b54-5aea-425e-8218-f7ac74106b6c\") " Mar 11 09:41:18 crc kubenswrapper[4840]: I0311 09:41:18.097775 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2ftvj\" (UniqueName: 
\"kubernetes.io/projected/00d35b54-5aea-425e-8218-f7ac74106b6c-kube-api-access-2ftvj\") pod \"00d35b54-5aea-425e-8218-f7ac74106b6c\" (UID: \"00d35b54-5aea-425e-8218-f7ac74106b6c\") " Mar 11 09:41:18 crc kubenswrapper[4840]: I0311 09:41:18.098969 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/00d35b54-5aea-425e-8218-f7ac74106b6c-utilities" (OuterVolumeSpecName: "utilities") pod "00d35b54-5aea-425e-8218-f7ac74106b6c" (UID: "00d35b54-5aea-425e-8218-f7ac74106b6c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:41:18 crc kubenswrapper[4840]: I0311 09:41:18.104061 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00d35b54-5aea-425e-8218-f7ac74106b6c-kube-api-access-2ftvj" (OuterVolumeSpecName: "kube-api-access-2ftvj") pod "00d35b54-5aea-425e-8218-f7ac74106b6c" (UID: "00d35b54-5aea-425e-8218-f7ac74106b6c"). InnerVolumeSpecName "kube-api-access-2ftvj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:41:18 crc kubenswrapper[4840]: I0311 09:41:18.114703 4840 scope.go:117] "RemoveContainer" containerID="8d09e44368b7d5aa0770eb38527b125ad4515e6b85512b67571503cfc2da38c1" Mar 11 09:41:18 crc kubenswrapper[4840]: I0311 09:41:18.153669 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/00d35b54-5aea-425e-8218-f7ac74106b6c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "00d35b54-5aea-425e-8218-f7ac74106b6c" (UID: "00d35b54-5aea-425e-8218-f7ac74106b6c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:41:18 crc kubenswrapper[4840]: I0311 09:41:18.158795 4840 scope.go:117] "RemoveContainer" containerID="2a67dbee12ac6c8ab766ba20ba438eb5298c130a5cf8cc6478b41a1c51f47fe6" Mar 11 09:41:18 crc kubenswrapper[4840]: E0311 09:41:18.159310 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a67dbee12ac6c8ab766ba20ba438eb5298c130a5cf8cc6478b41a1c51f47fe6\": container with ID starting with 2a67dbee12ac6c8ab766ba20ba438eb5298c130a5cf8cc6478b41a1c51f47fe6 not found: ID does not exist" containerID="2a67dbee12ac6c8ab766ba20ba438eb5298c130a5cf8cc6478b41a1c51f47fe6" Mar 11 09:41:18 crc kubenswrapper[4840]: I0311 09:41:18.159342 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a67dbee12ac6c8ab766ba20ba438eb5298c130a5cf8cc6478b41a1c51f47fe6"} err="failed to get container status \"2a67dbee12ac6c8ab766ba20ba438eb5298c130a5cf8cc6478b41a1c51f47fe6\": rpc error: code = NotFound desc = could not find container \"2a67dbee12ac6c8ab766ba20ba438eb5298c130a5cf8cc6478b41a1c51f47fe6\": container with ID starting with 2a67dbee12ac6c8ab766ba20ba438eb5298c130a5cf8cc6478b41a1c51f47fe6 not found: ID does not exist" Mar 11 09:41:18 crc kubenswrapper[4840]: I0311 09:41:18.159365 4840 scope.go:117] "RemoveContainer" containerID="f3173ba1ffdb1f15de347266acfa2b5c9b4ee0685f779136d39ccff7660287ec" Mar 11 09:41:18 crc kubenswrapper[4840]: E0311 09:41:18.159586 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3173ba1ffdb1f15de347266acfa2b5c9b4ee0685f779136d39ccff7660287ec\": container with ID starting with f3173ba1ffdb1f15de347266acfa2b5c9b4ee0685f779136d39ccff7660287ec not found: ID does not exist" containerID="f3173ba1ffdb1f15de347266acfa2b5c9b4ee0685f779136d39ccff7660287ec" Mar 11 09:41:18 crc kubenswrapper[4840]: I0311 09:41:18.159665 
4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3173ba1ffdb1f15de347266acfa2b5c9b4ee0685f779136d39ccff7660287ec"} err="failed to get container status \"f3173ba1ffdb1f15de347266acfa2b5c9b4ee0685f779136d39ccff7660287ec\": rpc error: code = NotFound desc = could not find container \"f3173ba1ffdb1f15de347266acfa2b5c9b4ee0685f779136d39ccff7660287ec\": container with ID starting with f3173ba1ffdb1f15de347266acfa2b5c9b4ee0685f779136d39ccff7660287ec not found: ID does not exist" Mar 11 09:41:18 crc kubenswrapper[4840]: I0311 09:41:18.159737 4840 scope.go:117] "RemoveContainer" containerID="8d09e44368b7d5aa0770eb38527b125ad4515e6b85512b67571503cfc2da38c1" Mar 11 09:41:18 crc kubenswrapper[4840]: E0311 09:41:18.160043 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d09e44368b7d5aa0770eb38527b125ad4515e6b85512b67571503cfc2da38c1\": container with ID starting with 8d09e44368b7d5aa0770eb38527b125ad4515e6b85512b67571503cfc2da38c1 not found: ID does not exist" containerID="8d09e44368b7d5aa0770eb38527b125ad4515e6b85512b67571503cfc2da38c1" Mar 11 09:41:18 crc kubenswrapper[4840]: I0311 09:41:18.160123 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d09e44368b7d5aa0770eb38527b125ad4515e6b85512b67571503cfc2da38c1"} err="failed to get container status \"8d09e44368b7d5aa0770eb38527b125ad4515e6b85512b67571503cfc2da38c1\": rpc error: code = NotFound desc = could not find container \"8d09e44368b7d5aa0770eb38527b125ad4515e6b85512b67571503cfc2da38c1\": container with ID starting with 8d09e44368b7d5aa0770eb38527b125ad4515e6b85512b67571503cfc2da38c1 not found: ID does not exist" Mar 11 09:41:18 crc kubenswrapper[4840]: I0311 09:41:18.199205 4840 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/00d35b54-5aea-425e-8218-f7ac74106b6c-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 11 09:41:18 crc kubenswrapper[4840]: I0311 09:41:18.199240 4840 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00d35b54-5aea-425e-8218-f7ac74106b6c-utilities\") on node \"crc\" DevicePath \"\"" Mar 11 09:41:18 crc kubenswrapper[4840]: I0311 09:41:18.199253 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2ftvj\" (UniqueName: \"kubernetes.io/projected/00d35b54-5aea-425e-8218-f7ac74106b6c-kube-api-access-2ftvj\") on node \"crc\" DevicePath \"\"" Mar 11 09:41:18 crc kubenswrapper[4840]: I0311 09:41:18.406595 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lb99z"] Mar 11 09:41:18 crc kubenswrapper[4840]: I0311 09:41:18.411589 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-lb99z"] Mar 11 09:41:20 crc kubenswrapper[4840]: I0311 09:41:20.069989 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00d35b54-5aea-425e-8218-f7ac74106b6c" path="/var/lib/kubelet/pods/00d35b54-5aea-425e-8218-f7ac74106b6c/volumes" Mar 11 09:41:57 crc kubenswrapper[4840]: I0311 09:41:57.445843 4840 patch_prober.go:28] interesting pod/machine-config-daemon-brtht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 11 09:41:57 crc kubenswrapper[4840]: I0311 09:41:57.446646 4840 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
Mar 11 09:42:00 crc kubenswrapper[4840]: I0311 09:42:00.147615 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29553702-m9966"] Mar 11 09:42:00 crc kubenswrapper[4840]: E0311 09:42:00.148625 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00d35b54-5aea-425e-8218-f7ac74106b6c" containerName="registry-server" Mar 11 09:42:00 crc kubenswrapper[4840]: I0311 09:42:00.148642 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="00d35b54-5aea-425e-8218-f7ac74106b6c" containerName="registry-server" Mar 11 09:42:00 crc kubenswrapper[4840]: E0311 09:42:00.148658 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00d35b54-5aea-425e-8218-f7ac74106b6c" containerName="extract-utilities" Mar 11 09:42:00 crc kubenswrapper[4840]: I0311 09:42:00.148665 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="00d35b54-5aea-425e-8218-f7ac74106b6c" containerName="extract-utilities" Mar 11 09:42:00 crc kubenswrapper[4840]: E0311 09:42:00.148678 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00d35b54-5aea-425e-8218-f7ac74106b6c" containerName="extract-content" Mar 11 09:42:00 crc kubenswrapper[4840]: I0311 09:42:00.148685 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="00d35b54-5aea-425e-8218-f7ac74106b6c" containerName="extract-content" Mar 11 09:42:00 crc kubenswrapper[4840]: I0311 09:42:00.148837 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="00d35b54-5aea-425e-8218-f7ac74106b6c" containerName="registry-server" Mar 11 09:42:00 crc kubenswrapper[4840]: I0311 09:42:00.150391 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553702-m9966" Mar 11 09:42:00 crc kubenswrapper[4840]: I0311 09:42:00.153409 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-q6lwc" Mar 11 09:42:00 crc kubenswrapper[4840]: I0311 09:42:00.154026 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 11 09:42:00 crc kubenswrapper[4840]: I0311 09:42:00.154028 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 11 09:42:00 crc kubenswrapper[4840]: I0311 09:42:00.162063 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553702-m9966"] Mar 11 09:42:00 crc kubenswrapper[4840]: I0311 09:42:00.230207 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzsk4\" (UniqueName: \"kubernetes.io/projected/e9cb42d3-d732-4da7-912e-5b9859e38d18-kube-api-access-tzsk4\") pod \"auto-csr-approver-29553702-m9966\" (UID: \"e9cb42d3-d732-4da7-912e-5b9859e38d18\") " pod="openshift-infra/auto-csr-approver-29553702-m9966" Mar 11 09:42:00 crc kubenswrapper[4840]: I0311 09:42:00.331315 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzsk4\" (UniqueName: \"kubernetes.io/projected/e9cb42d3-d732-4da7-912e-5b9859e38d18-kube-api-access-tzsk4\") pod \"auto-csr-approver-29553702-m9966\" (UID: \"e9cb42d3-d732-4da7-912e-5b9859e38d18\") " pod="openshift-infra/auto-csr-approver-29553702-m9966" Mar 11 09:42:00 crc kubenswrapper[4840]: I0311 09:42:00.357700 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzsk4\" (UniqueName: \"kubernetes.io/projected/e9cb42d3-d732-4da7-912e-5b9859e38d18-kube-api-access-tzsk4\") pod \"auto-csr-approver-29553702-m9966\" (UID: \"e9cb42d3-d732-4da7-912e-5b9859e38d18\") " 
pod="openshift-infra/auto-csr-approver-29553702-m9966" Mar 11 09:42:00 crc kubenswrapper[4840]: I0311 09:42:00.475546 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553702-m9966" Mar 11 09:42:00 crc kubenswrapper[4840]: I0311 09:42:00.926435 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553702-m9966"] Mar 11 09:42:01 crc kubenswrapper[4840]: I0311 09:42:01.961304 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553702-m9966" event={"ID":"e9cb42d3-d732-4da7-912e-5b9859e38d18","Type":"ContainerStarted","Data":"8071f619937aa2744d9a5e2a5e89f5db726f691d4f76e26ef7d19554f5657df9"} Mar 11 09:42:02 crc kubenswrapper[4840]: I0311 09:42:02.971246 4840 generic.go:334] "Generic (PLEG): container finished" podID="e9cb42d3-d732-4da7-912e-5b9859e38d18" containerID="a151016a8d2d5016cad3c52bff3bfcd5babe212846dc6bc6f0620db22ea46a53" exitCode=0 Mar 11 09:42:02 crc kubenswrapper[4840]: I0311 09:42:02.971328 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553702-m9966" event={"ID":"e9cb42d3-d732-4da7-912e-5b9859e38d18","Type":"ContainerDied","Data":"a151016a8d2d5016cad3c52bff3bfcd5babe212846dc6bc6f0620db22ea46a53"} Mar 11 09:42:04 crc kubenswrapper[4840]: I0311 09:42:04.284278 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553702-m9966" Mar 11 09:42:04 crc kubenswrapper[4840]: I0311 09:42:04.291972 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tzsk4\" (UniqueName: \"kubernetes.io/projected/e9cb42d3-d732-4da7-912e-5b9859e38d18-kube-api-access-tzsk4\") pod \"e9cb42d3-d732-4da7-912e-5b9859e38d18\" (UID: \"e9cb42d3-d732-4da7-912e-5b9859e38d18\") " Mar 11 09:42:04 crc kubenswrapper[4840]: I0311 09:42:04.300923 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9cb42d3-d732-4da7-912e-5b9859e38d18-kube-api-access-tzsk4" (OuterVolumeSpecName: "kube-api-access-tzsk4") pod "e9cb42d3-d732-4da7-912e-5b9859e38d18" (UID: "e9cb42d3-d732-4da7-912e-5b9859e38d18"). InnerVolumeSpecName "kube-api-access-tzsk4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:42:04 crc kubenswrapper[4840]: I0311 09:42:04.393222 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tzsk4\" (UniqueName: \"kubernetes.io/projected/e9cb42d3-d732-4da7-912e-5b9859e38d18-kube-api-access-tzsk4\") on node \"crc\" DevicePath \"\"" Mar 11 09:42:04 crc kubenswrapper[4840]: I0311 09:42:04.989225 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553702-m9966" event={"ID":"e9cb42d3-d732-4da7-912e-5b9859e38d18","Type":"ContainerDied","Data":"8071f619937aa2744d9a5e2a5e89f5db726f691d4f76e26ef7d19554f5657df9"} Mar 11 09:42:04 crc kubenswrapper[4840]: I0311 09:42:04.989270 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553702-m9966" Mar 11 09:42:04 crc kubenswrapper[4840]: I0311 09:42:04.989291 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8071f619937aa2744d9a5e2a5e89f5db726f691d4f76e26ef7d19554f5657df9" Mar 11 09:42:05 crc kubenswrapper[4840]: I0311 09:42:05.363208 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29553696-nq8kg"] Mar 11 09:42:05 crc kubenswrapper[4840]: I0311 09:42:05.369908 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29553696-nq8kg"] Mar 11 09:42:06 crc kubenswrapper[4840]: I0311 09:42:06.069971 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="147d7062-9a1b-405b-822b-5e603c39fd0b" path="/var/lib/kubelet/pods/147d7062-9a1b-405b-822b-5e603c39fd0b/volumes" Mar 11 09:42:09 crc kubenswrapper[4840]: I0311 09:42:09.682418 4840 scope.go:117] "RemoveContainer" containerID="d213b438574f4ebdbc0d377f2fda6298c6bfc441c886ecfee9a6526d819faead" Mar 11 09:42:27 crc kubenswrapper[4840]: I0311 09:42:27.445406 4840 patch_prober.go:28] interesting pod/machine-config-daemon-brtht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 11 09:42:27 crc kubenswrapper[4840]: I0311 09:42:27.446031 4840 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 11 09:42:57 crc kubenswrapper[4840]: I0311 09:42:57.445853 4840 patch_prober.go:28] interesting pod/machine-config-daemon-brtht container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 11 09:42:57 crc kubenswrapper[4840]: I0311 09:42:57.446874 4840 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 11 09:42:57 crc kubenswrapper[4840]: I0311 09:42:57.446941 4840 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-brtht" Mar 11 09:42:57 crc kubenswrapper[4840]: I0311 09:42:57.447781 4840 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d03747e4fc9bc46a24602d6964eb7dac7d34cf189febc04713e199f66cc16972"} pod="openshift-machine-config-operator/machine-config-daemon-brtht" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 11 09:42:57 crc kubenswrapper[4840]: I0311 09:42:57.447866 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" containerName="machine-config-daemon" containerID="cri-o://d03747e4fc9bc46a24602d6964eb7dac7d34cf189febc04713e199f66cc16972" gracePeriod=600 Mar 11 09:42:58 crc kubenswrapper[4840]: I0311 09:42:58.392609 4840 generic.go:334] "Generic (PLEG): container finished" podID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" containerID="d03747e4fc9bc46a24602d6964eb7dac7d34cf189febc04713e199f66cc16972" exitCode=0 Mar 11 09:42:58 crc kubenswrapper[4840]: I0311 09:42:58.392701 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-brtht" event={"ID":"8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d","Type":"ContainerDied","Data":"d03747e4fc9bc46a24602d6964eb7dac7d34cf189febc04713e199f66cc16972"} Mar 11 09:42:58 crc kubenswrapper[4840]: I0311 09:42:58.393546 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-brtht" event={"ID":"8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d","Type":"ContainerStarted","Data":"65b1f852cfe8ac96b44bb220dbaccfa816f23ff9b34e1410947313ed9ce7a62c"} Mar 11 09:42:58 crc kubenswrapper[4840]: I0311 09:42:58.393585 4840 scope.go:117] "RemoveContainer" containerID="c710991db870febe1fd3f356ee1dfce0cb83fdbd36fda869f2a6eb6f7ae8c1a2" Mar 11 09:43:36 crc kubenswrapper[4840]: I0311 09:43:36.526347 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rrxqd"] Mar 11 09:43:36 crc kubenswrapper[4840]: E0311 09:43:36.528632 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9cb42d3-d732-4da7-912e-5b9859e38d18" containerName="oc" Mar 11 09:43:36 crc kubenswrapper[4840]: I0311 09:43:36.528758 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9cb42d3-d732-4da7-912e-5b9859e38d18" containerName="oc" Mar 11 09:43:36 crc kubenswrapper[4840]: I0311 09:43:36.529810 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9cb42d3-d732-4da7-912e-5b9859e38d18" containerName="oc" Mar 11 09:43:36 crc kubenswrapper[4840]: I0311 09:43:36.532494 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rrxqd" Mar 11 09:43:36 crc kubenswrapper[4840]: I0311 09:43:36.539866 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rrxqd"] Mar 11 09:43:36 crc kubenswrapper[4840]: I0311 09:43:36.643011 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4457b503-8616-4c9c-bb61-ee9371857f19-catalog-content\") pod \"redhat-marketplace-rrxqd\" (UID: \"4457b503-8616-4c9c-bb61-ee9371857f19\") " pod="openshift-marketplace/redhat-marketplace-rrxqd" Mar 11 09:43:36 crc kubenswrapper[4840]: I0311 09:43:36.643071 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4457b503-8616-4c9c-bb61-ee9371857f19-utilities\") pod \"redhat-marketplace-rrxqd\" (UID: \"4457b503-8616-4c9c-bb61-ee9371857f19\") " pod="openshift-marketplace/redhat-marketplace-rrxqd" Mar 11 09:43:36 crc kubenswrapper[4840]: I0311 09:43:36.643192 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmht2\" (UniqueName: \"kubernetes.io/projected/4457b503-8616-4c9c-bb61-ee9371857f19-kube-api-access-zmht2\") pod \"redhat-marketplace-rrxqd\" (UID: \"4457b503-8616-4c9c-bb61-ee9371857f19\") " pod="openshift-marketplace/redhat-marketplace-rrxqd" Mar 11 09:43:36 crc kubenswrapper[4840]: I0311 09:43:36.744320 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4457b503-8616-4c9c-bb61-ee9371857f19-catalog-content\") pod \"redhat-marketplace-rrxqd\" (UID: \"4457b503-8616-4c9c-bb61-ee9371857f19\") " pod="openshift-marketplace/redhat-marketplace-rrxqd" Mar 11 09:43:36 crc kubenswrapper[4840]: I0311 09:43:36.744375 4840 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4457b503-8616-4c9c-bb61-ee9371857f19-utilities\") pod \"redhat-marketplace-rrxqd\" (UID: \"4457b503-8616-4c9c-bb61-ee9371857f19\") " pod="openshift-marketplace/redhat-marketplace-rrxqd" Mar 11 09:43:36 crc kubenswrapper[4840]: I0311 09:43:36.744447 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmht2\" (UniqueName: \"kubernetes.io/projected/4457b503-8616-4c9c-bb61-ee9371857f19-kube-api-access-zmht2\") pod \"redhat-marketplace-rrxqd\" (UID: \"4457b503-8616-4c9c-bb61-ee9371857f19\") " pod="openshift-marketplace/redhat-marketplace-rrxqd" Mar 11 09:43:36 crc kubenswrapper[4840]: I0311 09:43:36.745238 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4457b503-8616-4c9c-bb61-ee9371857f19-utilities\") pod \"redhat-marketplace-rrxqd\" (UID: \"4457b503-8616-4c9c-bb61-ee9371857f19\") " pod="openshift-marketplace/redhat-marketplace-rrxqd" Mar 11 09:43:36 crc kubenswrapper[4840]: I0311 09:43:36.745257 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4457b503-8616-4c9c-bb61-ee9371857f19-catalog-content\") pod \"redhat-marketplace-rrxqd\" (UID: \"4457b503-8616-4c9c-bb61-ee9371857f19\") " pod="openshift-marketplace/redhat-marketplace-rrxqd" Mar 11 09:43:36 crc kubenswrapper[4840]: I0311 09:43:36.763234 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmht2\" (UniqueName: \"kubernetes.io/projected/4457b503-8616-4c9c-bb61-ee9371857f19-kube-api-access-zmht2\") pod \"redhat-marketplace-rrxqd\" (UID: \"4457b503-8616-4c9c-bb61-ee9371857f19\") " pod="openshift-marketplace/redhat-marketplace-rrxqd" Mar 11 09:43:36 crc kubenswrapper[4840]: I0311 09:43:36.867322 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rrxqd" Mar 11 09:43:37 crc kubenswrapper[4840]: I0311 09:43:37.369599 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rrxqd"] Mar 11 09:43:37 crc kubenswrapper[4840]: I0311 09:43:37.782890 4840 generic.go:334] "Generic (PLEG): container finished" podID="4457b503-8616-4c9c-bb61-ee9371857f19" containerID="63f89c046244fd48e4b76f462fa0296b0280e5399250368ce5d28a9994811170" exitCode=0 Mar 11 09:43:37 crc kubenswrapper[4840]: I0311 09:43:37.782958 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rrxqd" event={"ID":"4457b503-8616-4c9c-bb61-ee9371857f19","Type":"ContainerDied","Data":"63f89c046244fd48e4b76f462fa0296b0280e5399250368ce5d28a9994811170"} Mar 11 09:43:37 crc kubenswrapper[4840]: I0311 09:43:37.783004 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rrxqd" event={"ID":"4457b503-8616-4c9c-bb61-ee9371857f19","Type":"ContainerStarted","Data":"5e86196bb317da0e2f44578e9adb96caaf640c75d78774f5e54d05fb70d99109"} Mar 11 09:43:38 crc kubenswrapper[4840]: I0311 09:43:38.793184 4840 generic.go:334] "Generic (PLEG): container finished" podID="4457b503-8616-4c9c-bb61-ee9371857f19" containerID="fa7610e6f01999bea991138fab1815624f90f5bb2278e3e75d7ed0b85367c6e3" exitCode=0 Mar 11 09:43:38 crc kubenswrapper[4840]: I0311 09:43:38.793316 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rrxqd" event={"ID":"4457b503-8616-4c9c-bb61-ee9371857f19","Type":"ContainerDied","Data":"fa7610e6f01999bea991138fab1815624f90f5bb2278e3e75d7ed0b85367c6e3"} Mar 11 09:43:39 crc kubenswrapper[4840]: I0311 09:43:39.801804 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rrxqd" 
event={"ID":"4457b503-8616-4c9c-bb61-ee9371857f19","Type":"ContainerStarted","Data":"abb209a8ae3206d3a5d5b780a05543a993e7786a040944ccf4de1f1b2e4808e9"} Mar 11 09:43:39 crc kubenswrapper[4840]: I0311 09:43:39.833851 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rrxqd" podStartSLOduration=2.06305824 podStartE2EDuration="3.833829311s" podCreationTimestamp="2026-03-11 09:43:36 +0000 UTC" firstStartedPulling="2026-03-11 09:43:37.785254874 +0000 UTC m=+2816.450924689" lastFinishedPulling="2026-03-11 09:43:39.556025935 +0000 UTC m=+2818.221695760" observedRunningTime="2026-03-11 09:43:39.830505447 +0000 UTC m=+2818.496175262" watchObservedRunningTime="2026-03-11 09:43:39.833829311 +0000 UTC m=+2818.499499126" Mar 11 09:43:46 crc kubenswrapper[4840]: I0311 09:43:46.868432 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rrxqd" Mar 11 09:43:46 crc kubenswrapper[4840]: I0311 09:43:46.868950 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rrxqd" Mar 11 09:43:46 crc kubenswrapper[4840]: I0311 09:43:46.910523 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rrxqd" Mar 11 09:43:47 crc kubenswrapper[4840]: I0311 09:43:47.893740 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rrxqd" Mar 11 09:43:47 crc kubenswrapper[4840]: I0311 09:43:47.951172 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rrxqd"] Mar 11 09:43:49 crc kubenswrapper[4840]: I0311 09:43:49.865685 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rrxqd" podUID="4457b503-8616-4c9c-bb61-ee9371857f19" containerName="registry-server" 
containerID="cri-o://abb209a8ae3206d3a5d5b780a05543a993e7786a040944ccf4de1f1b2e4808e9" gracePeriod=2 Mar 11 09:43:50 crc kubenswrapper[4840]: I0311 09:43:50.293963 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rrxqd" Mar 11 09:43:50 crc kubenswrapper[4840]: I0311 09:43:50.452346 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4457b503-8616-4c9c-bb61-ee9371857f19-catalog-content\") pod \"4457b503-8616-4c9c-bb61-ee9371857f19\" (UID: \"4457b503-8616-4c9c-bb61-ee9371857f19\") " Mar 11 09:43:50 crc kubenswrapper[4840]: I0311 09:43:50.452584 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zmht2\" (UniqueName: \"kubernetes.io/projected/4457b503-8616-4c9c-bb61-ee9371857f19-kube-api-access-zmht2\") pod \"4457b503-8616-4c9c-bb61-ee9371857f19\" (UID: \"4457b503-8616-4c9c-bb61-ee9371857f19\") " Mar 11 09:43:50 crc kubenswrapper[4840]: I0311 09:43:50.453666 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4457b503-8616-4c9c-bb61-ee9371857f19-utilities" (OuterVolumeSpecName: "utilities") pod "4457b503-8616-4c9c-bb61-ee9371857f19" (UID: "4457b503-8616-4c9c-bb61-ee9371857f19"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:43:50 crc kubenswrapper[4840]: I0311 09:43:50.452621 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4457b503-8616-4c9c-bb61-ee9371857f19-utilities\") pod \"4457b503-8616-4c9c-bb61-ee9371857f19\" (UID: \"4457b503-8616-4c9c-bb61-ee9371857f19\") " Mar 11 09:43:50 crc kubenswrapper[4840]: I0311 09:43:50.454253 4840 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4457b503-8616-4c9c-bb61-ee9371857f19-utilities\") on node \"crc\" DevicePath \"\"" Mar 11 09:43:50 crc kubenswrapper[4840]: I0311 09:43:50.467409 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4457b503-8616-4c9c-bb61-ee9371857f19-kube-api-access-zmht2" (OuterVolumeSpecName: "kube-api-access-zmht2") pod "4457b503-8616-4c9c-bb61-ee9371857f19" (UID: "4457b503-8616-4c9c-bb61-ee9371857f19"). InnerVolumeSpecName "kube-api-access-zmht2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:43:50 crc kubenswrapper[4840]: I0311 09:43:50.478505 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4457b503-8616-4c9c-bb61-ee9371857f19-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4457b503-8616-4c9c-bb61-ee9371857f19" (UID: "4457b503-8616-4c9c-bb61-ee9371857f19"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:43:50 crc kubenswrapper[4840]: I0311 09:43:50.554933 4840 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4457b503-8616-4c9c-bb61-ee9371857f19-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 11 09:43:50 crc kubenswrapper[4840]: I0311 09:43:50.554970 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zmht2\" (UniqueName: \"kubernetes.io/projected/4457b503-8616-4c9c-bb61-ee9371857f19-kube-api-access-zmht2\") on node \"crc\" DevicePath \"\"" Mar 11 09:43:50 crc kubenswrapper[4840]: I0311 09:43:50.885939 4840 generic.go:334] "Generic (PLEG): container finished" podID="4457b503-8616-4c9c-bb61-ee9371857f19" containerID="abb209a8ae3206d3a5d5b780a05543a993e7786a040944ccf4de1f1b2e4808e9" exitCode=0 Mar 11 09:43:50 crc kubenswrapper[4840]: I0311 09:43:50.885994 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rrxqd" event={"ID":"4457b503-8616-4c9c-bb61-ee9371857f19","Type":"ContainerDied","Data":"abb209a8ae3206d3a5d5b780a05543a993e7786a040944ccf4de1f1b2e4808e9"} Mar 11 09:43:50 crc kubenswrapper[4840]: I0311 09:43:50.886030 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rrxqd" event={"ID":"4457b503-8616-4c9c-bb61-ee9371857f19","Type":"ContainerDied","Data":"5e86196bb317da0e2f44578e9adb96caaf640c75d78774f5e54d05fb70d99109"} Mar 11 09:43:50 crc kubenswrapper[4840]: I0311 09:43:50.886053 4840 scope.go:117] "RemoveContainer" containerID="abb209a8ae3206d3a5d5b780a05543a993e7786a040944ccf4de1f1b2e4808e9" Mar 11 09:43:50 crc kubenswrapper[4840]: I0311 09:43:50.886195 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rrxqd" Mar 11 09:43:50 crc kubenswrapper[4840]: I0311 09:43:50.935574 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rrxqd"] Mar 11 09:43:50 crc kubenswrapper[4840]: I0311 09:43:50.938625 4840 scope.go:117] "RemoveContainer" containerID="fa7610e6f01999bea991138fab1815624f90f5bb2278e3e75d7ed0b85367c6e3" Mar 11 09:43:50 crc kubenswrapper[4840]: I0311 09:43:50.948720 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rrxqd"] Mar 11 09:43:50 crc kubenswrapper[4840]: I0311 09:43:50.969788 4840 scope.go:117] "RemoveContainer" containerID="63f89c046244fd48e4b76f462fa0296b0280e5399250368ce5d28a9994811170" Mar 11 09:43:51 crc kubenswrapper[4840]: I0311 09:43:51.012293 4840 scope.go:117] "RemoveContainer" containerID="abb209a8ae3206d3a5d5b780a05543a993e7786a040944ccf4de1f1b2e4808e9" Mar 11 09:43:51 crc kubenswrapper[4840]: E0311 09:43:51.013319 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"abb209a8ae3206d3a5d5b780a05543a993e7786a040944ccf4de1f1b2e4808e9\": container with ID starting with abb209a8ae3206d3a5d5b780a05543a993e7786a040944ccf4de1f1b2e4808e9 not found: ID does not exist" containerID="abb209a8ae3206d3a5d5b780a05543a993e7786a040944ccf4de1f1b2e4808e9" Mar 11 09:43:51 crc kubenswrapper[4840]: I0311 09:43:51.013373 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"abb209a8ae3206d3a5d5b780a05543a993e7786a040944ccf4de1f1b2e4808e9"} err="failed to get container status \"abb209a8ae3206d3a5d5b780a05543a993e7786a040944ccf4de1f1b2e4808e9\": rpc error: code = NotFound desc = could not find container \"abb209a8ae3206d3a5d5b780a05543a993e7786a040944ccf4de1f1b2e4808e9\": container with ID starting with abb209a8ae3206d3a5d5b780a05543a993e7786a040944ccf4de1f1b2e4808e9 not found: 
ID does not exist" Mar 11 09:43:51 crc kubenswrapper[4840]: I0311 09:43:51.013403 4840 scope.go:117] "RemoveContainer" containerID="fa7610e6f01999bea991138fab1815624f90f5bb2278e3e75d7ed0b85367c6e3" Mar 11 09:43:51 crc kubenswrapper[4840]: E0311 09:43:51.013814 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa7610e6f01999bea991138fab1815624f90f5bb2278e3e75d7ed0b85367c6e3\": container with ID starting with fa7610e6f01999bea991138fab1815624f90f5bb2278e3e75d7ed0b85367c6e3 not found: ID does not exist" containerID="fa7610e6f01999bea991138fab1815624f90f5bb2278e3e75d7ed0b85367c6e3" Mar 11 09:43:51 crc kubenswrapper[4840]: I0311 09:43:51.013854 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa7610e6f01999bea991138fab1815624f90f5bb2278e3e75d7ed0b85367c6e3"} err="failed to get container status \"fa7610e6f01999bea991138fab1815624f90f5bb2278e3e75d7ed0b85367c6e3\": rpc error: code = NotFound desc = could not find container \"fa7610e6f01999bea991138fab1815624f90f5bb2278e3e75d7ed0b85367c6e3\": container with ID starting with fa7610e6f01999bea991138fab1815624f90f5bb2278e3e75d7ed0b85367c6e3 not found: ID does not exist" Mar 11 09:43:51 crc kubenswrapper[4840]: I0311 09:43:51.013883 4840 scope.go:117] "RemoveContainer" containerID="63f89c046244fd48e4b76f462fa0296b0280e5399250368ce5d28a9994811170" Mar 11 09:43:51 crc kubenswrapper[4840]: E0311 09:43:51.014357 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"63f89c046244fd48e4b76f462fa0296b0280e5399250368ce5d28a9994811170\": container with ID starting with 63f89c046244fd48e4b76f462fa0296b0280e5399250368ce5d28a9994811170 not found: ID does not exist" containerID="63f89c046244fd48e4b76f462fa0296b0280e5399250368ce5d28a9994811170" Mar 11 09:43:51 crc kubenswrapper[4840]: I0311 09:43:51.014389 4840 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63f89c046244fd48e4b76f462fa0296b0280e5399250368ce5d28a9994811170"} err="failed to get container status \"63f89c046244fd48e4b76f462fa0296b0280e5399250368ce5d28a9994811170\": rpc error: code = NotFound desc = could not find container \"63f89c046244fd48e4b76f462fa0296b0280e5399250368ce5d28a9994811170\": container with ID starting with 63f89c046244fd48e4b76f462fa0296b0280e5399250368ce5d28a9994811170 not found: ID does not exist" Mar 11 09:43:52 crc kubenswrapper[4840]: I0311 09:43:52.069151 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4457b503-8616-4c9c-bb61-ee9371857f19" path="/var/lib/kubelet/pods/4457b503-8616-4c9c-bb61-ee9371857f19/volumes" Mar 11 09:44:00 crc kubenswrapper[4840]: I0311 09:44:00.171006 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29553704-k57pp"] Mar 11 09:44:00 crc kubenswrapper[4840]: E0311 09:44:00.172046 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4457b503-8616-4c9c-bb61-ee9371857f19" containerName="extract-utilities" Mar 11 09:44:00 crc kubenswrapper[4840]: I0311 09:44:00.172065 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="4457b503-8616-4c9c-bb61-ee9371857f19" containerName="extract-utilities" Mar 11 09:44:00 crc kubenswrapper[4840]: E0311 09:44:00.172075 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4457b503-8616-4c9c-bb61-ee9371857f19" containerName="extract-content" Mar 11 09:44:00 crc kubenswrapper[4840]: I0311 09:44:00.172083 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="4457b503-8616-4c9c-bb61-ee9371857f19" containerName="extract-content" Mar 11 09:44:00 crc kubenswrapper[4840]: E0311 09:44:00.172108 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4457b503-8616-4c9c-bb61-ee9371857f19" containerName="registry-server" Mar 11 09:44:00 crc kubenswrapper[4840]: I0311 09:44:00.172119 4840 
state_mem.go:107] "Deleted CPUSet assignment" podUID="4457b503-8616-4c9c-bb61-ee9371857f19" containerName="registry-server" Mar 11 09:44:00 crc kubenswrapper[4840]: I0311 09:44:00.172296 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="4457b503-8616-4c9c-bb61-ee9371857f19" containerName="registry-server" Mar 11 09:44:00 crc kubenswrapper[4840]: I0311 09:44:00.172965 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553704-k57pp" Mar 11 09:44:00 crc kubenswrapper[4840]: I0311 09:44:00.175454 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 11 09:44:00 crc kubenswrapper[4840]: I0311 09:44:00.176189 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 11 09:44:00 crc kubenswrapper[4840]: I0311 09:44:00.176320 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-q6lwc" Mar 11 09:44:00 crc kubenswrapper[4840]: I0311 09:44:00.181325 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k94d8\" (UniqueName: \"kubernetes.io/projected/f60adbd0-ebd3-461a-a058-6889bb3278dc-kube-api-access-k94d8\") pod \"auto-csr-approver-29553704-k57pp\" (UID: \"f60adbd0-ebd3-461a-a058-6889bb3278dc\") " pod="openshift-infra/auto-csr-approver-29553704-k57pp" Mar 11 09:44:00 crc kubenswrapper[4840]: I0311 09:44:00.191318 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553704-k57pp"] Mar 11 09:44:00 crc kubenswrapper[4840]: I0311 09:44:00.282893 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k94d8\" (UniqueName: \"kubernetes.io/projected/f60adbd0-ebd3-461a-a058-6889bb3278dc-kube-api-access-k94d8\") pod \"auto-csr-approver-29553704-k57pp\" (UID: 
\"f60adbd0-ebd3-461a-a058-6889bb3278dc\") " pod="openshift-infra/auto-csr-approver-29553704-k57pp" Mar 11 09:44:00 crc kubenswrapper[4840]: I0311 09:44:00.302701 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k94d8\" (UniqueName: \"kubernetes.io/projected/f60adbd0-ebd3-461a-a058-6889bb3278dc-kube-api-access-k94d8\") pod \"auto-csr-approver-29553704-k57pp\" (UID: \"f60adbd0-ebd3-461a-a058-6889bb3278dc\") " pod="openshift-infra/auto-csr-approver-29553704-k57pp" Mar 11 09:44:00 crc kubenswrapper[4840]: I0311 09:44:00.493965 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553704-k57pp" Mar 11 09:44:00 crc kubenswrapper[4840]: I0311 09:44:00.904114 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553704-k57pp"] Mar 11 09:44:00 crc kubenswrapper[4840]: I0311 09:44:00.966653 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553704-k57pp" event={"ID":"f60adbd0-ebd3-461a-a058-6889bb3278dc","Type":"ContainerStarted","Data":"70c1a42c6173aaa7022bdfa3a4c36c37772f80df3467b1faf3701d7c4a004312"} Mar 11 09:44:02 crc kubenswrapper[4840]: I0311 09:44:02.980014 4840 generic.go:334] "Generic (PLEG): container finished" podID="f60adbd0-ebd3-461a-a058-6889bb3278dc" containerID="934fe5e55580841fa40966cca06a9ab8e4d64a68357b9cf4530e60eaff2a2146" exitCode=0 Mar 11 09:44:02 crc kubenswrapper[4840]: I0311 09:44:02.980086 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553704-k57pp" event={"ID":"f60adbd0-ebd3-461a-a058-6889bb3278dc","Type":"ContainerDied","Data":"934fe5e55580841fa40966cca06a9ab8e4d64a68357b9cf4530e60eaff2a2146"} Mar 11 09:44:04 crc kubenswrapper[4840]: I0311 09:44:04.273118 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553704-k57pp" Mar 11 09:44:04 crc kubenswrapper[4840]: I0311 09:44:04.445087 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k94d8\" (UniqueName: \"kubernetes.io/projected/f60adbd0-ebd3-461a-a058-6889bb3278dc-kube-api-access-k94d8\") pod \"f60adbd0-ebd3-461a-a058-6889bb3278dc\" (UID: \"f60adbd0-ebd3-461a-a058-6889bb3278dc\") " Mar 11 09:44:04 crc kubenswrapper[4840]: I0311 09:44:04.466842 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f60adbd0-ebd3-461a-a058-6889bb3278dc-kube-api-access-k94d8" (OuterVolumeSpecName: "kube-api-access-k94d8") pod "f60adbd0-ebd3-461a-a058-6889bb3278dc" (UID: "f60adbd0-ebd3-461a-a058-6889bb3278dc"). InnerVolumeSpecName "kube-api-access-k94d8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:44:04 crc kubenswrapper[4840]: I0311 09:44:04.547208 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k94d8\" (UniqueName: \"kubernetes.io/projected/f60adbd0-ebd3-461a-a058-6889bb3278dc-kube-api-access-k94d8\") on node \"crc\" DevicePath \"\"" Mar 11 09:44:04 crc kubenswrapper[4840]: I0311 09:44:04.995084 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553704-k57pp" event={"ID":"f60adbd0-ebd3-461a-a058-6889bb3278dc","Type":"ContainerDied","Data":"70c1a42c6173aaa7022bdfa3a4c36c37772f80df3467b1faf3701d7c4a004312"} Mar 11 09:44:04 crc kubenswrapper[4840]: I0311 09:44:04.995121 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553704-k57pp" Mar 11 09:44:04 crc kubenswrapper[4840]: I0311 09:44:04.995129 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="70c1a42c6173aaa7022bdfa3a4c36c37772f80df3467b1faf3701d7c4a004312" Mar 11 09:44:05 crc kubenswrapper[4840]: I0311 09:44:05.342400 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29553698-528pg"] Mar 11 09:44:05 crc kubenswrapper[4840]: I0311 09:44:05.349556 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29553698-528pg"] Mar 11 09:44:06 crc kubenswrapper[4840]: I0311 09:44:06.068136 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8df0adb4-be88-4988-bed8-184d43f7984c" path="/var/lib/kubelet/pods/8df0adb4-be88-4988-bed8-184d43f7984c/volumes" Mar 11 09:44:09 crc kubenswrapper[4840]: I0311 09:44:09.770643 4840 scope.go:117] "RemoveContainer" containerID="0a8c1072e43310f96923a9789350b012ce467d225bc7821c41c22b4976c48b8e" Mar 11 09:44:25 crc kubenswrapper[4840]: I0311 09:44:25.584789 4840 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/manila-operator-controller-manager-68f45f9d9f-pqcxj" podUID="6d12563d-1416-4fd9-b38d-40bdadc53b40" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.85:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 11 09:44:57 crc kubenswrapper[4840]: I0311 09:44:57.446630 4840 patch_prober.go:28] interesting pod/machine-config-daemon-brtht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 11 09:44:57 crc kubenswrapper[4840]: I0311 09:44:57.447900 4840 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 11 09:45:00 crc kubenswrapper[4840]: I0311 09:45:00.155426 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29553705-mqh9q"] Mar 11 09:45:00 crc kubenswrapper[4840]: E0311 09:45:00.157330 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f60adbd0-ebd3-461a-a058-6889bb3278dc" containerName="oc" Mar 11 09:45:00 crc kubenswrapper[4840]: I0311 09:45:00.157426 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="f60adbd0-ebd3-461a-a058-6889bb3278dc" containerName="oc" Mar 11 09:45:00 crc kubenswrapper[4840]: I0311 09:45:00.157644 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="f60adbd0-ebd3-461a-a058-6889bb3278dc" containerName="oc" Mar 11 09:45:00 crc kubenswrapper[4840]: I0311 09:45:00.158538 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29553705-mqh9q" Mar 11 09:45:00 crc kubenswrapper[4840]: I0311 09:45:00.166641 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29553705-mqh9q"] Mar 11 09:45:00 crc kubenswrapper[4840]: I0311 09:45:00.166798 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 11 09:45:00 crc kubenswrapper[4840]: I0311 09:45:00.167228 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 11 09:45:00 crc kubenswrapper[4840]: I0311 09:45:00.294735 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5663935d-2c68-4f3e-872c-c950bd096bd8-secret-volume\") pod \"collect-profiles-29553705-mqh9q\" (UID: \"5663935d-2c68-4f3e-872c-c950bd096bd8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553705-mqh9q" Mar 11 09:45:00 crc kubenswrapper[4840]: I0311 09:45:00.295060 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vchbg\" (UniqueName: \"kubernetes.io/projected/5663935d-2c68-4f3e-872c-c950bd096bd8-kube-api-access-vchbg\") pod \"collect-profiles-29553705-mqh9q\" (UID: \"5663935d-2c68-4f3e-872c-c950bd096bd8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553705-mqh9q" Mar 11 09:45:00 crc kubenswrapper[4840]: I0311 09:45:00.295184 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5663935d-2c68-4f3e-872c-c950bd096bd8-config-volume\") pod \"collect-profiles-29553705-mqh9q\" (UID: \"5663935d-2c68-4f3e-872c-c950bd096bd8\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29553705-mqh9q" Mar 11 09:45:00 crc kubenswrapper[4840]: I0311 09:45:00.396375 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5663935d-2c68-4f3e-872c-c950bd096bd8-secret-volume\") pod \"collect-profiles-29553705-mqh9q\" (UID: \"5663935d-2c68-4f3e-872c-c950bd096bd8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553705-mqh9q" Mar 11 09:45:00 crc kubenswrapper[4840]: I0311 09:45:00.396868 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vchbg\" (UniqueName: \"kubernetes.io/projected/5663935d-2c68-4f3e-872c-c950bd096bd8-kube-api-access-vchbg\") pod \"collect-profiles-29553705-mqh9q\" (UID: \"5663935d-2c68-4f3e-872c-c950bd096bd8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553705-mqh9q" Mar 11 09:45:00 crc kubenswrapper[4840]: I0311 09:45:00.397041 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5663935d-2c68-4f3e-872c-c950bd096bd8-config-volume\") pod \"collect-profiles-29553705-mqh9q\" (UID: \"5663935d-2c68-4f3e-872c-c950bd096bd8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553705-mqh9q" Mar 11 09:45:00 crc kubenswrapper[4840]: I0311 09:45:00.398254 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5663935d-2c68-4f3e-872c-c950bd096bd8-config-volume\") pod \"collect-profiles-29553705-mqh9q\" (UID: \"5663935d-2c68-4f3e-872c-c950bd096bd8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553705-mqh9q" Mar 11 09:45:00 crc kubenswrapper[4840]: I0311 09:45:00.402736 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/5663935d-2c68-4f3e-872c-c950bd096bd8-secret-volume\") pod \"collect-profiles-29553705-mqh9q\" (UID: \"5663935d-2c68-4f3e-872c-c950bd096bd8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553705-mqh9q" Mar 11 09:45:00 crc kubenswrapper[4840]: I0311 09:45:00.412912 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vchbg\" (UniqueName: \"kubernetes.io/projected/5663935d-2c68-4f3e-872c-c950bd096bd8-kube-api-access-vchbg\") pod \"collect-profiles-29553705-mqh9q\" (UID: \"5663935d-2c68-4f3e-872c-c950bd096bd8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553705-mqh9q" Mar 11 09:45:00 crc kubenswrapper[4840]: I0311 09:45:00.485908 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29553705-mqh9q" Mar 11 09:45:01 crc kubenswrapper[4840]: I0311 09:45:01.219430 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29553705-mqh9q"] Mar 11 09:45:02 crc kubenswrapper[4840]: I0311 09:45:02.008892 4840 generic.go:334] "Generic (PLEG): container finished" podID="5663935d-2c68-4f3e-872c-c950bd096bd8" containerID="9ff08548cc5d4779be39cae9261d6d7416e2fdc4a83eebd34617a094df78b036" exitCode=0 Mar 11 09:45:02 crc kubenswrapper[4840]: I0311 09:45:02.008969 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29553705-mqh9q" event={"ID":"5663935d-2c68-4f3e-872c-c950bd096bd8","Type":"ContainerDied","Data":"9ff08548cc5d4779be39cae9261d6d7416e2fdc4a83eebd34617a094df78b036"} Mar 11 09:45:02 crc kubenswrapper[4840]: I0311 09:45:02.009222 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29553705-mqh9q" 
event={"ID":"5663935d-2c68-4f3e-872c-c950bd096bd8","Type":"ContainerStarted","Data":"1fa139f7bd5f8f6321b0fb03cf119647802e41d82b38a44045cb205e7e059d3f"} Mar 11 09:45:03 crc kubenswrapper[4840]: I0311 09:45:03.288597 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29553705-mqh9q" Mar 11 09:45:03 crc kubenswrapper[4840]: I0311 09:45:03.342588 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5663935d-2c68-4f3e-872c-c950bd096bd8-secret-volume\") pod \"5663935d-2c68-4f3e-872c-c950bd096bd8\" (UID: \"5663935d-2c68-4f3e-872c-c950bd096bd8\") " Mar 11 09:45:03 crc kubenswrapper[4840]: I0311 09:45:03.342703 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vchbg\" (UniqueName: \"kubernetes.io/projected/5663935d-2c68-4f3e-872c-c950bd096bd8-kube-api-access-vchbg\") pod \"5663935d-2c68-4f3e-872c-c950bd096bd8\" (UID: \"5663935d-2c68-4f3e-872c-c950bd096bd8\") " Mar 11 09:45:03 crc kubenswrapper[4840]: I0311 09:45:03.342768 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5663935d-2c68-4f3e-872c-c950bd096bd8-config-volume\") pod \"5663935d-2c68-4f3e-872c-c950bd096bd8\" (UID: \"5663935d-2c68-4f3e-872c-c950bd096bd8\") " Mar 11 09:45:03 crc kubenswrapper[4840]: I0311 09:45:03.343961 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5663935d-2c68-4f3e-872c-c950bd096bd8-config-volume" (OuterVolumeSpecName: "config-volume") pod "5663935d-2c68-4f3e-872c-c950bd096bd8" (UID: "5663935d-2c68-4f3e-872c-c950bd096bd8"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 09:45:03 crc kubenswrapper[4840]: I0311 09:45:03.349009 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5663935d-2c68-4f3e-872c-c950bd096bd8-kube-api-access-vchbg" (OuterVolumeSpecName: "kube-api-access-vchbg") pod "5663935d-2c68-4f3e-872c-c950bd096bd8" (UID: "5663935d-2c68-4f3e-872c-c950bd096bd8"). InnerVolumeSpecName "kube-api-access-vchbg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:45:03 crc kubenswrapper[4840]: I0311 09:45:03.349574 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5663935d-2c68-4f3e-872c-c950bd096bd8-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "5663935d-2c68-4f3e-872c-c950bd096bd8" (UID: "5663935d-2c68-4f3e-872c-c950bd096bd8"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 09:45:03 crc kubenswrapper[4840]: I0311 09:45:03.444432 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vchbg\" (UniqueName: \"kubernetes.io/projected/5663935d-2c68-4f3e-872c-c950bd096bd8-kube-api-access-vchbg\") on node \"crc\" DevicePath \"\"" Mar 11 09:45:03 crc kubenswrapper[4840]: I0311 09:45:03.444561 4840 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5663935d-2c68-4f3e-872c-c950bd096bd8-config-volume\") on node \"crc\" DevicePath \"\"" Mar 11 09:45:03 crc kubenswrapper[4840]: I0311 09:45:03.444572 4840 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5663935d-2c68-4f3e-872c-c950bd096bd8-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 11 09:45:04 crc kubenswrapper[4840]: I0311 09:45:04.028235 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29553705-mqh9q" 
event={"ID":"5663935d-2c68-4f3e-872c-c950bd096bd8","Type":"ContainerDied","Data":"1fa139f7bd5f8f6321b0fb03cf119647802e41d82b38a44045cb205e7e059d3f"} Mar 11 09:45:04 crc kubenswrapper[4840]: I0311 09:45:04.028635 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1fa139f7bd5f8f6321b0fb03cf119647802e41d82b38a44045cb205e7e059d3f" Mar 11 09:45:04 crc kubenswrapper[4840]: I0311 09:45:04.028298 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29553705-mqh9q" Mar 11 09:45:04 crc kubenswrapper[4840]: I0311 09:45:04.355140 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29553660-ftnls"] Mar 11 09:45:04 crc kubenswrapper[4840]: I0311 09:45:04.362275 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29553660-ftnls"] Mar 11 09:45:06 crc kubenswrapper[4840]: I0311 09:45:06.072103 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8567275-d6d0-4690-a442-76581bbcf793" path="/var/lib/kubelet/pods/e8567275-d6d0-4690-a442-76581bbcf793/volumes" Mar 11 09:45:09 crc kubenswrapper[4840]: I0311 09:45:09.839792 4840 scope.go:117] "RemoveContainer" containerID="390e185053bf789329840250d721b084ea7b2658b30b610c5944f1c0387b6ead" Mar 11 09:45:27 crc kubenswrapper[4840]: I0311 09:45:27.445670 4840 patch_prober.go:28] interesting pod/machine-config-daemon-brtht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 11 09:45:27 crc kubenswrapper[4840]: I0311 09:45:27.446205 4840 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 11 09:45:57 crc kubenswrapper[4840]: I0311 09:45:57.445833 4840 patch_prober.go:28] interesting pod/machine-config-daemon-brtht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 11 09:45:57 crc kubenswrapper[4840]: I0311 09:45:57.446532 4840 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 11 09:45:57 crc kubenswrapper[4840]: I0311 09:45:57.446601 4840 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-brtht" Mar 11 09:45:57 crc kubenswrapper[4840]: I0311 09:45:57.447347 4840 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"65b1f852cfe8ac96b44bb220dbaccfa816f23ff9b34e1410947313ed9ce7a62c"} pod="openshift-machine-config-operator/machine-config-daemon-brtht" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 11 09:45:57 crc kubenswrapper[4840]: I0311 09:45:57.447422 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" containerName="machine-config-daemon" containerID="cri-o://65b1f852cfe8ac96b44bb220dbaccfa816f23ff9b34e1410947313ed9ce7a62c" gracePeriod=600 Mar 11 09:45:57 crc kubenswrapper[4840]: E0311 
09:45:57.570658 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 09:45:58 crc kubenswrapper[4840]: I0311 09:45:58.404685 4840 generic.go:334] "Generic (PLEG): container finished" podID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" containerID="65b1f852cfe8ac96b44bb220dbaccfa816f23ff9b34e1410947313ed9ce7a62c" exitCode=0 Mar 11 09:45:58 crc kubenswrapper[4840]: I0311 09:45:58.404782 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-brtht" event={"ID":"8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d","Type":"ContainerDied","Data":"65b1f852cfe8ac96b44bb220dbaccfa816f23ff9b34e1410947313ed9ce7a62c"} Mar 11 09:45:58 crc kubenswrapper[4840]: I0311 09:45:58.405295 4840 scope.go:117] "RemoveContainer" containerID="d03747e4fc9bc46a24602d6964eb7dac7d34cf189febc04713e199f66cc16972" Mar 11 09:45:58 crc kubenswrapper[4840]: I0311 09:45:58.405801 4840 scope.go:117] "RemoveContainer" containerID="65b1f852cfe8ac96b44bb220dbaccfa816f23ff9b34e1410947313ed9ce7a62c" Mar 11 09:45:58 crc kubenswrapper[4840]: E0311 09:45:58.406072 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 09:46:00 crc kubenswrapper[4840]: I0311 09:46:00.143920 4840 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-infra/auto-csr-approver-29553706-sf2fd"] Mar 11 09:46:00 crc kubenswrapper[4840]: E0311 09:46:00.144699 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5663935d-2c68-4f3e-872c-c950bd096bd8" containerName="collect-profiles" Mar 11 09:46:00 crc kubenswrapper[4840]: I0311 09:46:00.144717 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="5663935d-2c68-4f3e-872c-c950bd096bd8" containerName="collect-profiles" Mar 11 09:46:00 crc kubenswrapper[4840]: I0311 09:46:00.144905 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="5663935d-2c68-4f3e-872c-c950bd096bd8" containerName="collect-profiles" Mar 11 09:46:00 crc kubenswrapper[4840]: I0311 09:46:00.145537 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553706-sf2fd" Mar 11 09:46:00 crc kubenswrapper[4840]: I0311 09:46:00.147520 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 11 09:46:00 crc kubenswrapper[4840]: I0311 09:46:00.148879 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-q6lwc" Mar 11 09:46:00 crc kubenswrapper[4840]: I0311 09:46:00.149139 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 11 09:46:00 crc kubenswrapper[4840]: I0311 09:46:00.152478 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553706-sf2fd"] Mar 11 09:46:00 crc kubenswrapper[4840]: I0311 09:46:00.200622 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tc4cf\" (UniqueName: \"kubernetes.io/projected/af84bb79-450e-4071-bd0b-86b7a92e2e4c-kube-api-access-tc4cf\") pod \"auto-csr-approver-29553706-sf2fd\" (UID: \"af84bb79-450e-4071-bd0b-86b7a92e2e4c\") " 
pod="openshift-infra/auto-csr-approver-29553706-sf2fd" Mar 11 09:46:00 crc kubenswrapper[4840]: I0311 09:46:00.302325 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tc4cf\" (UniqueName: \"kubernetes.io/projected/af84bb79-450e-4071-bd0b-86b7a92e2e4c-kube-api-access-tc4cf\") pod \"auto-csr-approver-29553706-sf2fd\" (UID: \"af84bb79-450e-4071-bd0b-86b7a92e2e4c\") " pod="openshift-infra/auto-csr-approver-29553706-sf2fd" Mar 11 09:46:00 crc kubenswrapper[4840]: I0311 09:46:00.319943 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tc4cf\" (UniqueName: \"kubernetes.io/projected/af84bb79-450e-4071-bd0b-86b7a92e2e4c-kube-api-access-tc4cf\") pod \"auto-csr-approver-29553706-sf2fd\" (UID: \"af84bb79-450e-4071-bd0b-86b7a92e2e4c\") " pod="openshift-infra/auto-csr-approver-29553706-sf2fd" Mar 11 09:46:00 crc kubenswrapper[4840]: I0311 09:46:00.469903 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553706-sf2fd" Mar 11 09:46:00 crc kubenswrapper[4840]: I0311 09:46:00.871943 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553706-sf2fd"] Mar 11 09:46:00 crc kubenswrapper[4840]: I0311 09:46:00.878540 4840 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 11 09:46:01 crc kubenswrapper[4840]: I0311 09:46:01.428364 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553706-sf2fd" event={"ID":"af84bb79-450e-4071-bd0b-86b7a92e2e4c","Type":"ContainerStarted","Data":"556760462b2e62b62bbfc1136a461d35cdf8cffc4d9f9704c2de876acf5b76eb"} Mar 11 09:46:03 crc kubenswrapper[4840]: I0311 09:46:03.441926 4840 generic.go:334] "Generic (PLEG): container finished" podID="af84bb79-450e-4071-bd0b-86b7a92e2e4c" containerID="0b293d0066dfef96c11219dc804c0252c6376f07b41b53528fc3bd12ff7bd375" exitCode=0 Mar 
11 09:46:03 crc kubenswrapper[4840]: I0311 09:46:03.442003 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553706-sf2fd" event={"ID":"af84bb79-450e-4071-bd0b-86b7a92e2e4c","Type":"ContainerDied","Data":"0b293d0066dfef96c11219dc804c0252c6376f07b41b53528fc3bd12ff7bd375"} Mar 11 09:46:04 crc kubenswrapper[4840]: I0311 09:46:04.733258 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553706-sf2fd" Mar 11 09:46:04 crc kubenswrapper[4840]: I0311 09:46:04.775866 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tc4cf\" (UniqueName: \"kubernetes.io/projected/af84bb79-450e-4071-bd0b-86b7a92e2e4c-kube-api-access-tc4cf\") pod \"af84bb79-450e-4071-bd0b-86b7a92e2e4c\" (UID: \"af84bb79-450e-4071-bd0b-86b7a92e2e4c\") " Mar 11 09:46:04 crc kubenswrapper[4840]: I0311 09:46:04.781499 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af84bb79-450e-4071-bd0b-86b7a92e2e4c-kube-api-access-tc4cf" (OuterVolumeSpecName: "kube-api-access-tc4cf") pod "af84bb79-450e-4071-bd0b-86b7a92e2e4c" (UID: "af84bb79-450e-4071-bd0b-86b7a92e2e4c"). InnerVolumeSpecName "kube-api-access-tc4cf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:46:04 crc kubenswrapper[4840]: I0311 09:46:04.877398 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tc4cf\" (UniqueName: \"kubernetes.io/projected/af84bb79-450e-4071-bd0b-86b7a92e2e4c-kube-api-access-tc4cf\") on node \"crc\" DevicePath \"\"" Mar 11 09:46:05 crc kubenswrapper[4840]: I0311 09:46:05.467129 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553706-sf2fd" event={"ID":"af84bb79-450e-4071-bd0b-86b7a92e2e4c","Type":"ContainerDied","Data":"556760462b2e62b62bbfc1136a461d35cdf8cffc4d9f9704c2de876acf5b76eb"} Mar 11 09:46:05 crc kubenswrapper[4840]: I0311 09:46:05.467181 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="556760462b2e62b62bbfc1136a461d35cdf8cffc4d9f9704c2de876acf5b76eb" Mar 11 09:46:05 crc kubenswrapper[4840]: I0311 09:46:05.467783 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553706-sf2fd" Mar 11 09:46:05 crc kubenswrapper[4840]: I0311 09:46:05.794913 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29553700-dg4x5"] Mar 11 09:46:05 crc kubenswrapper[4840]: I0311 09:46:05.802495 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29553700-dg4x5"] Mar 11 09:46:06 crc kubenswrapper[4840]: I0311 09:46:06.068370 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf092727-a72a-447d-a0fe-f62a283ee577" path="/var/lib/kubelet/pods/cf092727-a72a-447d-a0fe-f62a283ee577/volumes" Mar 11 09:46:09 crc kubenswrapper[4840]: I0311 09:46:09.893431 4840 scope.go:117] "RemoveContainer" containerID="d6e45f630bb8c342ec86ae3ffa1b8e3926636bf79a82b6d034d57769100d31df" Mar 11 09:46:11 crc kubenswrapper[4840]: I0311 09:46:11.060453 4840 scope.go:117] "RemoveContainer" 
containerID="65b1f852cfe8ac96b44bb220dbaccfa816f23ff9b34e1410947313ed9ce7a62c" Mar 11 09:46:11 crc kubenswrapper[4840]: E0311 09:46:11.060944 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 09:46:22 crc kubenswrapper[4840]: I0311 09:46:22.064592 4840 scope.go:117] "RemoveContainer" containerID="65b1f852cfe8ac96b44bb220dbaccfa816f23ff9b34e1410947313ed9ce7a62c" Mar 11 09:46:22 crc kubenswrapper[4840]: E0311 09:46:22.067011 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 09:46:35 crc kubenswrapper[4840]: I0311 09:46:35.061061 4840 scope.go:117] "RemoveContainer" containerID="65b1f852cfe8ac96b44bb220dbaccfa816f23ff9b34e1410947313ed9ce7a62c" Mar 11 09:46:35 crc kubenswrapper[4840]: E0311 09:46:35.061925 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 09:46:50 crc kubenswrapper[4840]: I0311 09:46:50.062186 4840 scope.go:117] 
"RemoveContainer" containerID="65b1f852cfe8ac96b44bb220dbaccfa816f23ff9b34e1410947313ed9ce7a62c" Mar 11 09:46:50 crc kubenswrapper[4840]: E0311 09:46:50.063126 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 09:47:01 crc kubenswrapper[4840]: I0311 09:47:01.060213 4840 scope.go:117] "RemoveContainer" containerID="65b1f852cfe8ac96b44bb220dbaccfa816f23ff9b34e1410947313ed9ce7a62c" Mar 11 09:47:01 crc kubenswrapper[4840]: E0311 09:47:01.061846 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 09:47:01 crc kubenswrapper[4840]: I0311 09:47:01.783681 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qxffp"] Mar 11 09:47:01 crc kubenswrapper[4840]: E0311 09:47:01.784105 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af84bb79-450e-4071-bd0b-86b7a92e2e4c" containerName="oc" Mar 11 09:47:01 crc kubenswrapper[4840]: I0311 09:47:01.784126 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="af84bb79-450e-4071-bd0b-86b7a92e2e4c" containerName="oc" Mar 11 09:47:01 crc kubenswrapper[4840]: I0311 09:47:01.784261 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="af84bb79-450e-4071-bd0b-86b7a92e2e4c" containerName="oc" Mar 11 
09:47:01 crc kubenswrapper[4840]: I0311 09:47:01.785298 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qxffp" Mar 11 09:47:01 crc kubenswrapper[4840]: I0311 09:47:01.803796 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qxffp"] Mar 11 09:47:01 crc kubenswrapper[4840]: I0311 09:47:01.868439 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32ed2643-af71-4014-a139-dae481809959-catalog-content\") pod \"redhat-operators-qxffp\" (UID: \"32ed2643-af71-4014-a139-dae481809959\") " pod="openshift-marketplace/redhat-operators-qxffp" Mar 11 09:47:01 crc kubenswrapper[4840]: I0311 09:47:01.868508 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9c57\" (UniqueName: \"kubernetes.io/projected/32ed2643-af71-4014-a139-dae481809959-kube-api-access-z9c57\") pod \"redhat-operators-qxffp\" (UID: \"32ed2643-af71-4014-a139-dae481809959\") " pod="openshift-marketplace/redhat-operators-qxffp" Mar 11 09:47:01 crc kubenswrapper[4840]: I0311 09:47:01.868531 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32ed2643-af71-4014-a139-dae481809959-utilities\") pod \"redhat-operators-qxffp\" (UID: \"32ed2643-af71-4014-a139-dae481809959\") " pod="openshift-marketplace/redhat-operators-qxffp" Mar 11 09:47:01 crc kubenswrapper[4840]: I0311 09:47:01.969900 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32ed2643-af71-4014-a139-dae481809959-catalog-content\") pod \"redhat-operators-qxffp\" (UID: \"32ed2643-af71-4014-a139-dae481809959\") " pod="openshift-marketplace/redhat-operators-qxffp" Mar 11 09:47:01 crc 
kubenswrapper[4840]: I0311 09:47:01.969964 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9c57\" (UniqueName: \"kubernetes.io/projected/32ed2643-af71-4014-a139-dae481809959-kube-api-access-z9c57\") pod \"redhat-operators-qxffp\" (UID: \"32ed2643-af71-4014-a139-dae481809959\") " pod="openshift-marketplace/redhat-operators-qxffp" Mar 11 09:47:01 crc kubenswrapper[4840]: I0311 09:47:01.969997 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32ed2643-af71-4014-a139-dae481809959-utilities\") pod \"redhat-operators-qxffp\" (UID: \"32ed2643-af71-4014-a139-dae481809959\") " pod="openshift-marketplace/redhat-operators-qxffp" Mar 11 09:47:01 crc kubenswrapper[4840]: I0311 09:47:01.970608 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32ed2643-af71-4014-a139-dae481809959-utilities\") pod \"redhat-operators-qxffp\" (UID: \"32ed2643-af71-4014-a139-dae481809959\") " pod="openshift-marketplace/redhat-operators-qxffp" Mar 11 09:47:01 crc kubenswrapper[4840]: I0311 09:47:01.970834 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32ed2643-af71-4014-a139-dae481809959-catalog-content\") pod \"redhat-operators-qxffp\" (UID: \"32ed2643-af71-4014-a139-dae481809959\") " pod="openshift-marketplace/redhat-operators-qxffp" Mar 11 09:47:01 crc kubenswrapper[4840]: I0311 09:47:01.990506 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9c57\" (UniqueName: \"kubernetes.io/projected/32ed2643-af71-4014-a139-dae481809959-kube-api-access-z9c57\") pod \"redhat-operators-qxffp\" (UID: \"32ed2643-af71-4014-a139-dae481809959\") " pod="openshift-marketplace/redhat-operators-qxffp" Mar 11 09:47:02 crc kubenswrapper[4840]: I0311 09:47:02.106279 4840 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qxffp" Mar 11 09:47:02 crc kubenswrapper[4840]: I0311 09:47:02.548227 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qxffp"] Mar 11 09:47:03 crc kubenswrapper[4840]: I0311 09:47:03.182281 4840 generic.go:334] "Generic (PLEG): container finished" podID="32ed2643-af71-4014-a139-dae481809959" containerID="71a685ec76c0191310e4983869194a236c612c00900015b3ec1183c98697dedb" exitCode=0 Mar 11 09:47:03 crc kubenswrapper[4840]: I0311 09:47:03.182327 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qxffp" event={"ID":"32ed2643-af71-4014-a139-dae481809959","Type":"ContainerDied","Data":"71a685ec76c0191310e4983869194a236c612c00900015b3ec1183c98697dedb"} Mar 11 09:47:03 crc kubenswrapper[4840]: I0311 09:47:03.182357 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qxffp" event={"ID":"32ed2643-af71-4014-a139-dae481809959","Type":"ContainerStarted","Data":"6fccf73e716f6f49fbc3886ed8eaebc356206ca286cda485ea8c48e046b12d06"} Mar 11 09:47:04 crc kubenswrapper[4840]: I0311 09:47:04.190281 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qxffp" event={"ID":"32ed2643-af71-4014-a139-dae481809959","Type":"ContainerStarted","Data":"f82b2c088f8cbfdb1d975c51e269c87ce14dcbbeca084d165a3d51fc3b23a770"} Mar 11 09:47:05 crc kubenswrapper[4840]: I0311 09:47:05.196811 4840 generic.go:334] "Generic (PLEG): container finished" podID="32ed2643-af71-4014-a139-dae481809959" containerID="f82b2c088f8cbfdb1d975c51e269c87ce14dcbbeca084d165a3d51fc3b23a770" exitCode=0 Mar 11 09:47:05 crc kubenswrapper[4840]: I0311 09:47:05.196885 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qxffp" 
event={"ID":"32ed2643-af71-4014-a139-dae481809959","Type":"ContainerDied","Data":"f82b2c088f8cbfdb1d975c51e269c87ce14dcbbeca084d165a3d51fc3b23a770"} Mar 11 09:47:08 crc kubenswrapper[4840]: I0311 09:47:08.526728 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qxffp" event={"ID":"32ed2643-af71-4014-a139-dae481809959","Type":"ContainerStarted","Data":"21d21de3ccdbf0acac065892d5e32e519a48e09e27dd2c344c731adf1ee089f1"} Mar 11 09:47:08 crc kubenswrapper[4840]: I0311 09:47:08.562750 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qxffp" podStartSLOduration=2.6051192050000003 podStartE2EDuration="7.562729379s" podCreationTimestamp="2026-03-11 09:47:01 +0000 UTC" firstStartedPulling="2026-03-11 09:47:03.183643956 +0000 UTC m=+3021.849313771" lastFinishedPulling="2026-03-11 09:47:08.14125413 +0000 UTC m=+3026.806923945" observedRunningTime="2026-03-11 09:47:08.552977093 +0000 UTC m=+3027.218646908" watchObservedRunningTime="2026-03-11 09:47:08.562729379 +0000 UTC m=+3027.228399194" Mar 11 09:47:12 crc kubenswrapper[4840]: I0311 09:47:12.106948 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qxffp" Mar 11 09:47:12 crc kubenswrapper[4840]: I0311 09:47:12.107285 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qxffp" Mar 11 09:47:13 crc kubenswrapper[4840]: I0311 09:47:13.059983 4840 scope.go:117] "RemoveContainer" containerID="65b1f852cfe8ac96b44bb220dbaccfa816f23ff9b34e1410947313ed9ce7a62c" Mar 11 09:47:13 crc kubenswrapper[4840]: E0311 09:47:13.060526 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 09:47:13 crc kubenswrapper[4840]: I0311 09:47:13.152910 4840 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qxffp" podUID="32ed2643-af71-4014-a139-dae481809959" containerName="registry-server" probeResult="failure" output=< Mar 11 09:47:13 crc kubenswrapper[4840]: timeout: failed to connect service ":50051" within 1s Mar 11 09:47:13 crc kubenswrapper[4840]: > Mar 11 09:47:22 crc kubenswrapper[4840]: I0311 09:47:22.149379 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qxffp" Mar 11 09:47:22 crc kubenswrapper[4840]: I0311 09:47:22.199347 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qxffp" Mar 11 09:47:22 crc kubenswrapper[4840]: I0311 09:47:22.423373 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qxffp"] Mar 11 09:47:23 crc kubenswrapper[4840]: I0311 09:47:23.627851 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qxffp" podUID="32ed2643-af71-4014-a139-dae481809959" containerName="registry-server" containerID="cri-o://21d21de3ccdbf0acac065892d5e32e519a48e09e27dd2c344c731adf1ee089f1" gracePeriod=2 Mar 11 09:47:24 crc kubenswrapper[4840]: I0311 09:47:24.028705 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qxffp" Mar 11 09:47:24 crc kubenswrapper[4840]: I0311 09:47:24.183367 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32ed2643-af71-4014-a139-dae481809959-utilities\") pod \"32ed2643-af71-4014-a139-dae481809959\" (UID: \"32ed2643-af71-4014-a139-dae481809959\") " Mar 11 09:47:24 crc kubenswrapper[4840]: I0311 09:47:24.183553 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z9c57\" (UniqueName: \"kubernetes.io/projected/32ed2643-af71-4014-a139-dae481809959-kube-api-access-z9c57\") pod \"32ed2643-af71-4014-a139-dae481809959\" (UID: \"32ed2643-af71-4014-a139-dae481809959\") " Mar 11 09:47:24 crc kubenswrapper[4840]: I0311 09:47:24.183606 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32ed2643-af71-4014-a139-dae481809959-catalog-content\") pod \"32ed2643-af71-4014-a139-dae481809959\" (UID: \"32ed2643-af71-4014-a139-dae481809959\") " Mar 11 09:47:24 crc kubenswrapper[4840]: I0311 09:47:24.184364 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32ed2643-af71-4014-a139-dae481809959-utilities" (OuterVolumeSpecName: "utilities") pod "32ed2643-af71-4014-a139-dae481809959" (UID: "32ed2643-af71-4014-a139-dae481809959"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:47:24 crc kubenswrapper[4840]: I0311 09:47:24.188931 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32ed2643-af71-4014-a139-dae481809959-kube-api-access-z9c57" (OuterVolumeSpecName: "kube-api-access-z9c57") pod "32ed2643-af71-4014-a139-dae481809959" (UID: "32ed2643-af71-4014-a139-dae481809959"). InnerVolumeSpecName "kube-api-access-z9c57". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:47:24 crc kubenswrapper[4840]: I0311 09:47:24.286527 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z9c57\" (UniqueName: \"kubernetes.io/projected/32ed2643-af71-4014-a139-dae481809959-kube-api-access-z9c57\") on node \"crc\" DevicePath \"\"" Mar 11 09:47:24 crc kubenswrapper[4840]: I0311 09:47:24.286562 4840 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32ed2643-af71-4014-a139-dae481809959-utilities\") on node \"crc\" DevicePath \"\"" Mar 11 09:47:24 crc kubenswrapper[4840]: I0311 09:47:24.322915 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32ed2643-af71-4014-a139-dae481809959-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "32ed2643-af71-4014-a139-dae481809959" (UID: "32ed2643-af71-4014-a139-dae481809959"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:47:24 crc kubenswrapper[4840]: I0311 09:47:24.387427 4840 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32ed2643-af71-4014-a139-dae481809959-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 11 09:47:24 crc kubenswrapper[4840]: I0311 09:47:24.635608 4840 generic.go:334] "Generic (PLEG): container finished" podID="32ed2643-af71-4014-a139-dae481809959" containerID="21d21de3ccdbf0acac065892d5e32e519a48e09e27dd2c344c731adf1ee089f1" exitCode=0 Mar 11 09:47:24 crc kubenswrapper[4840]: I0311 09:47:24.635677 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qxffp" Mar 11 09:47:24 crc kubenswrapper[4840]: I0311 09:47:24.635662 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qxffp" event={"ID":"32ed2643-af71-4014-a139-dae481809959","Type":"ContainerDied","Data":"21d21de3ccdbf0acac065892d5e32e519a48e09e27dd2c344c731adf1ee089f1"} Mar 11 09:47:24 crc kubenswrapper[4840]: I0311 09:47:24.635870 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qxffp" event={"ID":"32ed2643-af71-4014-a139-dae481809959","Type":"ContainerDied","Data":"6fccf73e716f6f49fbc3886ed8eaebc356206ca286cda485ea8c48e046b12d06"} Mar 11 09:47:24 crc kubenswrapper[4840]: I0311 09:47:24.635905 4840 scope.go:117] "RemoveContainer" containerID="21d21de3ccdbf0acac065892d5e32e519a48e09e27dd2c344c731adf1ee089f1" Mar 11 09:47:24 crc kubenswrapper[4840]: I0311 09:47:24.670251 4840 scope.go:117] "RemoveContainer" containerID="f82b2c088f8cbfdb1d975c51e269c87ce14dcbbeca084d165a3d51fc3b23a770" Mar 11 09:47:24 crc kubenswrapper[4840]: I0311 09:47:24.672995 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qxffp"] Mar 11 09:47:24 crc kubenswrapper[4840]: I0311 09:47:24.678732 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qxffp"] Mar 11 09:47:24 crc kubenswrapper[4840]: I0311 09:47:24.690021 4840 scope.go:117] "RemoveContainer" containerID="71a685ec76c0191310e4983869194a236c612c00900015b3ec1183c98697dedb" Mar 11 09:47:24 crc kubenswrapper[4840]: I0311 09:47:24.715790 4840 scope.go:117] "RemoveContainer" containerID="21d21de3ccdbf0acac065892d5e32e519a48e09e27dd2c344c731adf1ee089f1" Mar 11 09:47:24 crc kubenswrapper[4840]: E0311 09:47:24.716269 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"21d21de3ccdbf0acac065892d5e32e519a48e09e27dd2c344c731adf1ee089f1\": container with ID starting with 21d21de3ccdbf0acac065892d5e32e519a48e09e27dd2c344c731adf1ee089f1 not found: ID does not exist" containerID="21d21de3ccdbf0acac065892d5e32e519a48e09e27dd2c344c731adf1ee089f1" Mar 11 09:47:24 crc kubenswrapper[4840]: I0311 09:47:24.716305 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21d21de3ccdbf0acac065892d5e32e519a48e09e27dd2c344c731adf1ee089f1"} err="failed to get container status \"21d21de3ccdbf0acac065892d5e32e519a48e09e27dd2c344c731adf1ee089f1\": rpc error: code = NotFound desc = could not find container \"21d21de3ccdbf0acac065892d5e32e519a48e09e27dd2c344c731adf1ee089f1\": container with ID starting with 21d21de3ccdbf0acac065892d5e32e519a48e09e27dd2c344c731adf1ee089f1 not found: ID does not exist" Mar 11 09:47:24 crc kubenswrapper[4840]: I0311 09:47:24.716332 4840 scope.go:117] "RemoveContainer" containerID="f82b2c088f8cbfdb1d975c51e269c87ce14dcbbeca084d165a3d51fc3b23a770" Mar 11 09:47:24 crc kubenswrapper[4840]: E0311 09:47:24.716784 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f82b2c088f8cbfdb1d975c51e269c87ce14dcbbeca084d165a3d51fc3b23a770\": container with ID starting with f82b2c088f8cbfdb1d975c51e269c87ce14dcbbeca084d165a3d51fc3b23a770 not found: ID does not exist" containerID="f82b2c088f8cbfdb1d975c51e269c87ce14dcbbeca084d165a3d51fc3b23a770" Mar 11 09:47:24 crc kubenswrapper[4840]: I0311 09:47:24.716835 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f82b2c088f8cbfdb1d975c51e269c87ce14dcbbeca084d165a3d51fc3b23a770"} err="failed to get container status \"f82b2c088f8cbfdb1d975c51e269c87ce14dcbbeca084d165a3d51fc3b23a770\": rpc error: code = NotFound desc = could not find container \"f82b2c088f8cbfdb1d975c51e269c87ce14dcbbeca084d165a3d51fc3b23a770\": container with ID 
starting with f82b2c088f8cbfdb1d975c51e269c87ce14dcbbeca084d165a3d51fc3b23a770 not found: ID does not exist" Mar 11 09:47:24 crc kubenswrapper[4840]: I0311 09:47:24.716918 4840 scope.go:117] "RemoveContainer" containerID="71a685ec76c0191310e4983869194a236c612c00900015b3ec1183c98697dedb" Mar 11 09:47:24 crc kubenswrapper[4840]: E0311 09:47:24.717484 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71a685ec76c0191310e4983869194a236c612c00900015b3ec1183c98697dedb\": container with ID starting with 71a685ec76c0191310e4983869194a236c612c00900015b3ec1183c98697dedb not found: ID does not exist" containerID="71a685ec76c0191310e4983869194a236c612c00900015b3ec1183c98697dedb" Mar 11 09:47:24 crc kubenswrapper[4840]: I0311 09:47:24.717538 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71a685ec76c0191310e4983869194a236c612c00900015b3ec1183c98697dedb"} err="failed to get container status \"71a685ec76c0191310e4983869194a236c612c00900015b3ec1183c98697dedb\": rpc error: code = NotFound desc = could not find container \"71a685ec76c0191310e4983869194a236c612c00900015b3ec1183c98697dedb\": container with ID starting with 71a685ec76c0191310e4983869194a236c612c00900015b3ec1183c98697dedb not found: ID does not exist" Mar 11 09:47:25 crc kubenswrapper[4840]: I0311 09:47:25.059834 4840 scope.go:117] "RemoveContainer" containerID="65b1f852cfe8ac96b44bb220dbaccfa816f23ff9b34e1410947313ed9ce7a62c" Mar 11 09:47:25 crc kubenswrapper[4840]: E0311 09:47:25.060055 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" 
podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 09:47:26 crc kubenswrapper[4840]: I0311 09:47:26.070073 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32ed2643-af71-4014-a139-dae481809959" path="/var/lib/kubelet/pods/32ed2643-af71-4014-a139-dae481809959/volumes" Mar 11 09:47:36 crc kubenswrapper[4840]: I0311 09:47:36.060778 4840 scope.go:117] "RemoveContainer" containerID="65b1f852cfe8ac96b44bb220dbaccfa816f23ff9b34e1410947313ed9ce7a62c" Mar 11 09:47:36 crc kubenswrapper[4840]: E0311 09:47:36.061657 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 09:47:47 crc kubenswrapper[4840]: I0311 09:47:47.059609 4840 scope.go:117] "RemoveContainer" containerID="65b1f852cfe8ac96b44bb220dbaccfa816f23ff9b34e1410947313ed9ce7a62c" Mar 11 09:47:47 crc kubenswrapper[4840]: E0311 09:47:47.060300 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 09:48:00 crc kubenswrapper[4840]: I0311 09:48:00.138625 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29553708-5z7xq"] Mar 11 09:48:00 crc kubenswrapper[4840]: E0311 09:48:00.139486 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32ed2643-af71-4014-a139-dae481809959" 
containerName="extract-utilities" Mar 11 09:48:00 crc kubenswrapper[4840]: I0311 09:48:00.139499 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="32ed2643-af71-4014-a139-dae481809959" containerName="extract-utilities" Mar 11 09:48:00 crc kubenswrapper[4840]: E0311 09:48:00.139516 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32ed2643-af71-4014-a139-dae481809959" containerName="extract-content" Mar 11 09:48:00 crc kubenswrapper[4840]: I0311 09:48:00.139523 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="32ed2643-af71-4014-a139-dae481809959" containerName="extract-content" Mar 11 09:48:00 crc kubenswrapper[4840]: E0311 09:48:00.139539 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32ed2643-af71-4014-a139-dae481809959" containerName="registry-server" Mar 11 09:48:00 crc kubenswrapper[4840]: I0311 09:48:00.139545 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="32ed2643-af71-4014-a139-dae481809959" containerName="registry-server" Mar 11 09:48:00 crc kubenswrapper[4840]: I0311 09:48:00.139679 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="32ed2643-af71-4014-a139-dae481809959" containerName="registry-server" Mar 11 09:48:00 crc kubenswrapper[4840]: I0311 09:48:00.140155 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553708-5z7xq" Mar 11 09:48:00 crc kubenswrapper[4840]: I0311 09:48:00.142385 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-q6lwc" Mar 11 09:48:00 crc kubenswrapper[4840]: I0311 09:48:00.149347 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553708-5z7xq"] Mar 11 09:48:00 crc kubenswrapper[4840]: I0311 09:48:00.150148 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 11 09:48:00 crc kubenswrapper[4840]: I0311 09:48:00.150320 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 11 09:48:00 crc kubenswrapper[4840]: I0311 09:48:00.295290 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lq7f2\" (UniqueName: \"kubernetes.io/projected/3130eb87-1f5d-4af6-a2ff-462df6014c86-kube-api-access-lq7f2\") pod \"auto-csr-approver-29553708-5z7xq\" (UID: \"3130eb87-1f5d-4af6-a2ff-462df6014c86\") " pod="openshift-infra/auto-csr-approver-29553708-5z7xq" Mar 11 09:48:00 crc kubenswrapper[4840]: I0311 09:48:00.396688 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lq7f2\" (UniqueName: \"kubernetes.io/projected/3130eb87-1f5d-4af6-a2ff-462df6014c86-kube-api-access-lq7f2\") pod \"auto-csr-approver-29553708-5z7xq\" (UID: \"3130eb87-1f5d-4af6-a2ff-462df6014c86\") " pod="openshift-infra/auto-csr-approver-29553708-5z7xq" Mar 11 09:48:00 crc kubenswrapper[4840]: I0311 09:48:00.417763 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lq7f2\" (UniqueName: \"kubernetes.io/projected/3130eb87-1f5d-4af6-a2ff-462df6014c86-kube-api-access-lq7f2\") pod \"auto-csr-approver-29553708-5z7xq\" (UID: \"3130eb87-1f5d-4af6-a2ff-462df6014c86\") " 
pod="openshift-infra/auto-csr-approver-29553708-5z7xq" Mar 11 09:48:00 crc kubenswrapper[4840]: I0311 09:48:00.690786 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553708-5z7xq" Mar 11 09:48:01 crc kubenswrapper[4840]: I0311 09:48:01.059994 4840 scope.go:117] "RemoveContainer" containerID="65b1f852cfe8ac96b44bb220dbaccfa816f23ff9b34e1410947313ed9ce7a62c" Mar 11 09:48:01 crc kubenswrapper[4840]: E0311 09:48:01.060683 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 09:48:01 crc kubenswrapper[4840]: I0311 09:48:01.112826 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553708-5z7xq"] Mar 11 09:48:01 crc kubenswrapper[4840]: I0311 09:48:01.387858 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553708-5z7xq" event={"ID":"3130eb87-1f5d-4af6-a2ff-462df6014c86","Type":"ContainerStarted","Data":"255bb666d95f6db4b77ff91812a1684d8226efc706e171873c5e4c537cbbb544"} Mar 11 09:48:03 crc kubenswrapper[4840]: I0311 09:48:03.405314 4840 generic.go:334] "Generic (PLEG): container finished" podID="3130eb87-1f5d-4af6-a2ff-462df6014c86" containerID="94dc0378620b93b23c351d8384d57c6d92a7084fab8ba4ce60e44842b59b3360" exitCode=0 Mar 11 09:48:03 crc kubenswrapper[4840]: I0311 09:48:03.405397 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553708-5z7xq" event={"ID":"3130eb87-1f5d-4af6-a2ff-462df6014c86","Type":"ContainerDied","Data":"94dc0378620b93b23c351d8384d57c6d92a7084fab8ba4ce60e44842b59b3360"} 
Mar 11 09:48:04 crc kubenswrapper[4840]: I0311 09:48:04.676424 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553708-5z7xq" Mar 11 09:48:04 crc kubenswrapper[4840]: I0311 09:48:04.873722 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lq7f2\" (UniqueName: \"kubernetes.io/projected/3130eb87-1f5d-4af6-a2ff-462df6014c86-kube-api-access-lq7f2\") pod \"3130eb87-1f5d-4af6-a2ff-462df6014c86\" (UID: \"3130eb87-1f5d-4af6-a2ff-462df6014c86\") " Mar 11 09:48:04 crc kubenswrapper[4840]: I0311 09:48:04.879371 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3130eb87-1f5d-4af6-a2ff-462df6014c86-kube-api-access-lq7f2" (OuterVolumeSpecName: "kube-api-access-lq7f2") pod "3130eb87-1f5d-4af6-a2ff-462df6014c86" (UID: "3130eb87-1f5d-4af6-a2ff-462df6014c86"). InnerVolumeSpecName "kube-api-access-lq7f2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:48:04 crc kubenswrapper[4840]: I0311 09:48:04.975640 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lq7f2\" (UniqueName: \"kubernetes.io/projected/3130eb87-1f5d-4af6-a2ff-462df6014c86-kube-api-access-lq7f2\") on node \"crc\" DevicePath \"\"" Mar 11 09:48:05 crc kubenswrapper[4840]: I0311 09:48:05.420665 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553708-5z7xq" event={"ID":"3130eb87-1f5d-4af6-a2ff-462df6014c86","Type":"ContainerDied","Data":"255bb666d95f6db4b77ff91812a1684d8226efc706e171873c5e4c537cbbb544"} Mar 11 09:48:05 crc kubenswrapper[4840]: I0311 09:48:05.420716 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="255bb666d95f6db4b77ff91812a1684d8226efc706e171873c5e4c537cbbb544" Mar 11 09:48:05 crc kubenswrapper[4840]: I0311 09:48:05.420739 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553708-5z7xq" Mar 11 09:48:05 crc kubenswrapper[4840]: I0311 09:48:05.738014 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29553702-m9966"] Mar 11 09:48:05 crc kubenswrapper[4840]: I0311 09:48:05.742818 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29553702-m9966"] Mar 11 09:48:06 crc kubenswrapper[4840]: I0311 09:48:06.069428 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9cb42d3-d732-4da7-912e-5b9859e38d18" path="/var/lib/kubelet/pods/e9cb42d3-d732-4da7-912e-5b9859e38d18/volumes" Mar 11 09:48:09 crc kubenswrapper[4840]: I0311 09:48:09.976826 4840 scope.go:117] "RemoveContainer" containerID="a151016a8d2d5016cad3c52bff3bfcd5babe212846dc6bc6f0620db22ea46a53" Mar 11 09:48:14 crc kubenswrapper[4840]: I0311 09:48:14.060124 4840 scope.go:117] "RemoveContainer" containerID="65b1f852cfe8ac96b44bb220dbaccfa816f23ff9b34e1410947313ed9ce7a62c" Mar 11 09:48:14 crc kubenswrapper[4840]: E0311 09:48:14.060833 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 09:48:27 crc kubenswrapper[4840]: I0311 09:48:27.060819 4840 scope.go:117] "RemoveContainer" containerID="65b1f852cfe8ac96b44bb220dbaccfa816f23ff9b34e1410947313ed9ce7a62c" Mar 11 09:48:27 crc kubenswrapper[4840]: E0311 09:48:27.061624 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 09:48:41 crc kubenswrapper[4840]: I0311 09:48:41.060344 4840 scope.go:117] "RemoveContainer" containerID="65b1f852cfe8ac96b44bb220dbaccfa816f23ff9b34e1410947313ed9ce7a62c" Mar 11 09:48:41 crc kubenswrapper[4840]: E0311 09:48:41.061139 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 09:48:56 crc kubenswrapper[4840]: I0311 09:48:56.060052 4840 scope.go:117] "RemoveContainer" containerID="65b1f852cfe8ac96b44bb220dbaccfa816f23ff9b34e1410947313ed9ce7a62c" Mar 11 09:48:56 crc kubenswrapper[4840]: E0311 09:48:56.060789 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 09:49:08 crc kubenswrapper[4840]: I0311 09:49:08.060305 4840 scope.go:117] "RemoveContainer" containerID="65b1f852cfe8ac96b44bb220dbaccfa816f23ff9b34e1410947313ed9ce7a62c" Mar 11 09:49:08 crc kubenswrapper[4840]: E0311 09:49:08.060932 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 09:49:19 crc kubenswrapper[4840]: I0311 09:49:19.060020 4840 scope.go:117] "RemoveContainer" containerID="65b1f852cfe8ac96b44bb220dbaccfa816f23ff9b34e1410947313ed9ce7a62c" Mar 11 09:49:19 crc kubenswrapper[4840]: E0311 09:49:19.060789 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 09:49:33 crc kubenswrapper[4840]: I0311 09:49:33.060071 4840 scope.go:117] "RemoveContainer" containerID="65b1f852cfe8ac96b44bb220dbaccfa816f23ff9b34e1410947313ed9ce7a62c" Mar 11 09:49:33 crc kubenswrapper[4840]: E0311 09:49:33.060930 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 09:49:48 crc kubenswrapper[4840]: I0311 09:49:48.060535 4840 scope.go:117] "RemoveContainer" containerID="65b1f852cfe8ac96b44bb220dbaccfa816f23ff9b34e1410947313ed9ce7a62c" Mar 11 09:49:48 crc kubenswrapper[4840]: E0311 09:49:48.061279 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 09:50:00 crc kubenswrapper[4840]: I0311 09:50:00.170537 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29553710-zm8nt"] Mar 11 09:50:00 crc kubenswrapper[4840]: E0311 09:50:00.171395 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3130eb87-1f5d-4af6-a2ff-462df6014c86" containerName="oc" Mar 11 09:50:00 crc kubenswrapper[4840]: I0311 09:50:00.171407 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="3130eb87-1f5d-4af6-a2ff-462df6014c86" containerName="oc" Mar 11 09:50:00 crc kubenswrapper[4840]: I0311 09:50:00.171580 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="3130eb87-1f5d-4af6-a2ff-462df6014c86" containerName="oc" Mar 11 09:50:00 crc kubenswrapper[4840]: I0311 09:50:00.172309 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553710-zm8nt" Mar 11 09:50:00 crc kubenswrapper[4840]: I0311 09:50:00.174520 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 11 09:50:00 crc kubenswrapper[4840]: I0311 09:50:00.174650 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-q6lwc" Mar 11 09:50:00 crc kubenswrapper[4840]: I0311 09:50:00.174788 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 11 09:50:00 crc kubenswrapper[4840]: I0311 09:50:00.184021 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553710-zm8nt"] Mar 11 09:50:00 crc kubenswrapper[4840]: I0311 09:50:00.313638 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clfgx\" (UniqueName: \"kubernetes.io/projected/3ed9ce44-8536-46d5-9c77-1a10b6cfd91f-kube-api-access-clfgx\") pod \"auto-csr-approver-29553710-zm8nt\" (UID: \"3ed9ce44-8536-46d5-9c77-1a10b6cfd91f\") " pod="openshift-infra/auto-csr-approver-29553710-zm8nt" Mar 11 09:50:00 crc kubenswrapper[4840]: I0311 09:50:00.415303 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clfgx\" (UniqueName: \"kubernetes.io/projected/3ed9ce44-8536-46d5-9c77-1a10b6cfd91f-kube-api-access-clfgx\") pod \"auto-csr-approver-29553710-zm8nt\" (UID: \"3ed9ce44-8536-46d5-9c77-1a10b6cfd91f\") " pod="openshift-infra/auto-csr-approver-29553710-zm8nt" Mar 11 09:50:00 crc kubenswrapper[4840]: I0311 09:50:00.433134 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clfgx\" (UniqueName: \"kubernetes.io/projected/3ed9ce44-8536-46d5-9c77-1a10b6cfd91f-kube-api-access-clfgx\") pod \"auto-csr-approver-29553710-zm8nt\" (UID: \"3ed9ce44-8536-46d5-9c77-1a10b6cfd91f\") " 
pod="openshift-infra/auto-csr-approver-29553710-zm8nt" Mar 11 09:50:00 crc kubenswrapper[4840]: I0311 09:50:00.489688 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553710-zm8nt" Mar 11 09:50:00 crc kubenswrapper[4840]: I0311 09:50:00.922343 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553710-zm8nt"] Mar 11 09:50:01 crc kubenswrapper[4840]: I0311 09:50:01.266819 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553710-zm8nt" event={"ID":"3ed9ce44-8536-46d5-9c77-1a10b6cfd91f","Type":"ContainerStarted","Data":"13e2a5b564cc4abf26420ab784c0df067038e31ad8039cda6d411f25090a2e8a"} Mar 11 09:50:02 crc kubenswrapper[4840]: I0311 09:50:02.275179 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553710-zm8nt" event={"ID":"3ed9ce44-8536-46d5-9c77-1a10b6cfd91f","Type":"ContainerStarted","Data":"e22c81a423cb06fdf76a9f188391eb4b070e7c3ea0a1fb30ccd573ebffdd9909"} Mar 11 09:50:02 crc kubenswrapper[4840]: I0311 09:50:02.295881 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29553710-zm8nt" podStartSLOduration=1.36403292 podStartE2EDuration="2.295857845s" podCreationTimestamp="2026-03-11 09:50:00 +0000 UTC" firstStartedPulling="2026-03-11 09:50:00.932870586 +0000 UTC m=+3199.598540401" lastFinishedPulling="2026-03-11 09:50:01.864695511 +0000 UTC m=+3200.530365326" observedRunningTime="2026-03-11 09:50:02.293780523 +0000 UTC m=+3200.959450348" watchObservedRunningTime="2026-03-11 09:50:02.295857845 +0000 UTC m=+3200.961527670" Mar 11 09:50:03 crc kubenswrapper[4840]: I0311 09:50:03.060544 4840 scope.go:117] "RemoveContainer" containerID="65b1f852cfe8ac96b44bb220dbaccfa816f23ff9b34e1410947313ed9ce7a62c" Mar 11 09:50:03 crc kubenswrapper[4840]: E0311 09:50:03.060767 4840 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 09:50:03 crc kubenswrapper[4840]: I0311 09:50:03.282811 4840 generic.go:334] "Generic (PLEG): container finished" podID="3ed9ce44-8536-46d5-9c77-1a10b6cfd91f" containerID="e22c81a423cb06fdf76a9f188391eb4b070e7c3ea0a1fb30ccd573ebffdd9909" exitCode=0 Mar 11 09:50:03 crc kubenswrapper[4840]: I0311 09:50:03.282845 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553710-zm8nt" event={"ID":"3ed9ce44-8536-46d5-9c77-1a10b6cfd91f","Type":"ContainerDied","Data":"e22c81a423cb06fdf76a9f188391eb4b070e7c3ea0a1fb30ccd573ebffdd9909"} Mar 11 09:50:04 crc kubenswrapper[4840]: I0311 09:50:04.604825 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553710-zm8nt" Mar 11 09:50:04 crc kubenswrapper[4840]: I0311 09:50:04.681410 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-clfgx\" (UniqueName: \"kubernetes.io/projected/3ed9ce44-8536-46d5-9c77-1a10b6cfd91f-kube-api-access-clfgx\") pod \"3ed9ce44-8536-46d5-9c77-1a10b6cfd91f\" (UID: \"3ed9ce44-8536-46d5-9c77-1a10b6cfd91f\") " Mar 11 09:50:04 crc kubenswrapper[4840]: I0311 09:50:04.691838 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ed9ce44-8536-46d5-9c77-1a10b6cfd91f-kube-api-access-clfgx" (OuterVolumeSpecName: "kube-api-access-clfgx") pod "3ed9ce44-8536-46d5-9c77-1a10b6cfd91f" (UID: "3ed9ce44-8536-46d5-9c77-1a10b6cfd91f"). InnerVolumeSpecName "kube-api-access-clfgx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:50:04 crc kubenswrapper[4840]: I0311 09:50:04.783383 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-clfgx\" (UniqueName: \"kubernetes.io/projected/3ed9ce44-8536-46d5-9c77-1a10b6cfd91f-kube-api-access-clfgx\") on node \"crc\" DevicePath \"\"" Mar 11 09:50:05 crc kubenswrapper[4840]: I0311 09:50:05.172863 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29553704-k57pp"] Mar 11 09:50:05 crc kubenswrapper[4840]: I0311 09:50:05.182549 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29553704-k57pp"] Mar 11 09:50:05 crc kubenswrapper[4840]: I0311 09:50:05.297955 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553710-zm8nt" event={"ID":"3ed9ce44-8536-46d5-9c77-1a10b6cfd91f","Type":"ContainerDied","Data":"13e2a5b564cc4abf26420ab784c0df067038e31ad8039cda6d411f25090a2e8a"} Mar 11 09:50:05 crc kubenswrapper[4840]: I0311 09:50:05.298008 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="13e2a5b564cc4abf26420ab784c0df067038e31ad8039cda6d411f25090a2e8a" Mar 11 09:50:05 crc kubenswrapper[4840]: I0311 09:50:05.298057 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553710-zm8nt" Mar 11 09:50:06 crc kubenswrapper[4840]: I0311 09:50:06.070198 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f60adbd0-ebd3-461a-a058-6889bb3278dc" path="/var/lib/kubelet/pods/f60adbd0-ebd3-461a-a058-6889bb3278dc/volumes" Mar 11 09:50:10 crc kubenswrapper[4840]: I0311 09:50:10.074329 4840 scope.go:117] "RemoveContainer" containerID="934fe5e55580841fa40966cca06a9ab8e4d64a68357b9cf4530e60eaff2a2146" Mar 11 09:50:17 crc kubenswrapper[4840]: I0311 09:50:17.060310 4840 scope.go:117] "RemoveContainer" containerID="65b1f852cfe8ac96b44bb220dbaccfa816f23ff9b34e1410947313ed9ce7a62c" Mar 11 09:50:17 crc kubenswrapper[4840]: E0311 09:50:17.061141 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 09:50:30 crc kubenswrapper[4840]: I0311 09:50:30.060718 4840 scope.go:117] "RemoveContainer" containerID="65b1f852cfe8ac96b44bb220dbaccfa816f23ff9b34e1410947313ed9ce7a62c" Mar 11 09:50:30 crc kubenswrapper[4840]: E0311 09:50:30.061655 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 09:50:44 crc kubenswrapper[4840]: I0311 09:50:44.060733 4840 scope.go:117] "RemoveContainer" 
containerID="65b1f852cfe8ac96b44bb220dbaccfa816f23ff9b34e1410947313ed9ce7a62c" Mar 11 09:50:44 crc kubenswrapper[4840]: E0311 09:50:44.061522 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 09:50:59 crc kubenswrapper[4840]: I0311 09:50:59.060132 4840 scope.go:117] "RemoveContainer" containerID="65b1f852cfe8ac96b44bb220dbaccfa816f23ff9b34e1410947313ed9ce7a62c" Mar 11 09:51:01 crc kubenswrapper[4840]: I0311 09:51:01.226893 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-brtht" event={"ID":"8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d","Type":"ContainerStarted","Data":"8bf282d749dcad120b80e750e64632fb5b1fd3d2b7112d2dccabeed39f466957"} Mar 11 09:52:00 crc kubenswrapper[4840]: I0311 09:52:00.151170 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29553712-t888v"] Mar 11 09:52:00 crc kubenswrapper[4840]: E0311 09:52:00.152291 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ed9ce44-8536-46d5-9c77-1a10b6cfd91f" containerName="oc" Mar 11 09:52:00 crc kubenswrapper[4840]: I0311 09:52:00.152313 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ed9ce44-8536-46d5-9c77-1a10b6cfd91f" containerName="oc" Mar 11 09:52:00 crc kubenswrapper[4840]: I0311 09:52:00.152544 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ed9ce44-8536-46d5-9c77-1a10b6cfd91f" containerName="oc" Mar 11 09:52:00 crc kubenswrapper[4840]: I0311 09:52:00.153225 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553712-t888v" Mar 11 09:52:00 crc kubenswrapper[4840]: I0311 09:52:00.155925 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-q6lwc" Mar 11 09:52:00 crc kubenswrapper[4840]: I0311 09:52:00.156170 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 11 09:52:00 crc kubenswrapper[4840]: I0311 09:52:00.156368 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 11 09:52:00 crc kubenswrapper[4840]: I0311 09:52:00.158549 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553712-t888v"] Mar 11 09:52:00 crc kubenswrapper[4840]: I0311 09:52:00.273461 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzmgb\" (UniqueName: \"kubernetes.io/projected/cc24323f-57c7-4d5c-b436-c3e6cfdcaa87-kube-api-access-mzmgb\") pod \"auto-csr-approver-29553712-t888v\" (UID: \"cc24323f-57c7-4d5c-b436-c3e6cfdcaa87\") " pod="openshift-infra/auto-csr-approver-29553712-t888v" Mar 11 09:52:00 crc kubenswrapper[4840]: I0311 09:52:00.374972 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzmgb\" (UniqueName: \"kubernetes.io/projected/cc24323f-57c7-4d5c-b436-c3e6cfdcaa87-kube-api-access-mzmgb\") pod \"auto-csr-approver-29553712-t888v\" (UID: \"cc24323f-57c7-4d5c-b436-c3e6cfdcaa87\") " pod="openshift-infra/auto-csr-approver-29553712-t888v" Mar 11 09:52:00 crc kubenswrapper[4840]: I0311 09:52:00.398378 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzmgb\" (UniqueName: \"kubernetes.io/projected/cc24323f-57c7-4d5c-b436-c3e6cfdcaa87-kube-api-access-mzmgb\") pod \"auto-csr-approver-29553712-t888v\" (UID: \"cc24323f-57c7-4d5c-b436-c3e6cfdcaa87\") " 
pod="openshift-infra/auto-csr-approver-29553712-t888v" Mar 11 09:52:00 crc kubenswrapper[4840]: I0311 09:52:00.490553 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553712-t888v" Mar 11 09:52:00 crc kubenswrapper[4840]: I0311 09:52:00.926998 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553712-t888v"] Mar 11 09:52:00 crc kubenswrapper[4840]: I0311 09:52:00.931087 4840 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 11 09:52:01 crc kubenswrapper[4840]: I0311 09:52:01.676253 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553712-t888v" event={"ID":"cc24323f-57c7-4d5c-b436-c3e6cfdcaa87","Type":"ContainerStarted","Data":"fcd6fba281bb99dbc528759fa2f4299efbe93900eb7b6becd83361c3a62474d3"} Mar 11 09:52:02 crc kubenswrapper[4840]: I0311 09:52:02.686013 4840 generic.go:334] "Generic (PLEG): container finished" podID="cc24323f-57c7-4d5c-b436-c3e6cfdcaa87" containerID="86737aeb9bee26f3c4e4ab889de92ac99272b12aede3b8f74d359bc7b5edf625" exitCode=0 Mar 11 09:52:02 crc kubenswrapper[4840]: I0311 09:52:02.686149 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553712-t888v" event={"ID":"cc24323f-57c7-4d5c-b436-c3e6cfdcaa87","Type":"ContainerDied","Data":"86737aeb9bee26f3c4e4ab889de92ac99272b12aede3b8f74d359bc7b5edf625"} Mar 11 09:52:03 crc kubenswrapper[4840]: I0311 09:52:03.954817 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553712-t888v" Mar 11 09:52:04 crc kubenswrapper[4840]: I0311 09:52:04.130637 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mzmgb\" (UniqueName: \"kubernetes.io/projected/cc24323f-57c7-4d5c-b436-c3e6cfdcaa87-kube-api-access-mzmgb\") pod \"cc24323f-57c7-4d5c-b436-c3e6cfdcaa87\" (UID: \"cc24323f-57c7-4d5c-b436-c3e6cfdcaa87\") " Mar 11 09:52:04 crc kubenswrapper[4840]: I0311 09:52:04.135995 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc24323f-57c7-4d5c-b436-c3e6cfdcaa87-kube-api-access-mzmgb" (OuterVolumeSpecName: "kube-api-access-mzmgb") pod "cc24323f-57c7-4d5c-b436-c3e6cfdcaa87" (UID: "cc24323f-57c7-4d5c-b436-c3e6cfdcaa87"). InnerVolumeSpecName "kube-api-access-mzmgb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:52:04 crc kubenswrapper[4840]: I0311 09:52:04.232707 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mzmgb\" (UniqueName: \"kubernetes.io/projected/cc24323f-57c7-4d5c-b436-c3e6cfdcaa87-kube-api-access-mzmgb\") on node \"crc\" DevicePath \"\"" Mar 11 09:52:04 crc kubenswrapper[4840]: I0311 09:52:04.701489 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553712-t888v" event={"ID":"cc24323f-57c7-4d5c-b436-c3e6cfdcaa87","Type":"ContainerDied","Data":"fcd6fba281bb99dbc528759fa2f4299efbe93900eb7b6becd83361c3a62474d3"} Mar 11 09:52:04 crc kubenswrapper[4840]: I0311 09:52:04.701538 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fcd6fba281bb99dbc528759fa2f4299efbe93900eb7b6becd83361c3a62474d3" Mar 11 09:52:04 crc kubenswrapper[4840]: I0311 09:52:04.701980 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553712-t888v" Mar 11 09:52:05 crc kubenswrapper[4840]: I0311 09:52:05.032430 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29553706-sf2fd"] Mar 11 09:52:05 crc kubenswrapper[4840]: I0311 09:52:05.039424 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29553706-sf2fd"] Mar 11 09:52:06 crc kubenswrapper[4840]: I0311 09:52:06.070139 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af84bb79-450e-4071-bd0b-86b7a92e2e4c" path="/var/lib/kubelet/pods/af84bb79-450e-4071-bd0b-86b7a92e2e4c/volumes" Mar 11 09:52:10 crc kubenswrapper[4840]: I0311 09:52:10.191070 4840 scope.go:117] "RemoveContainer" containerID="0b293d0066dfef96c11219dc804c0252c6376f07b41b53528fc3bd12ff7bd375" Mar 11 09:53:27 crc kubenswrapper[4840]: I0311 09:53:27.446248 4840 patch_prober.go:28] interesting pod/machine-config-daemon-brtht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 11 09:53:27 crc kubenswrapper[4840]: I0311 09:53:27.446949 4840 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 11 09:53:57 crc kubenswrapper[4840]: I0311 09:53:57.445897 4840 patch_prober.go:28] interesting pod/machine-config-daemon-brtht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 11 09:53:57 crc kubenswrapper[4840]: 
I0311 09:53:57.446539 4840 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 11 09:54:00 crc kubenswrapper[4840]: I0311 09:54:00.145932 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29553714-4x95t"] Mar 11 09:54:00 crc kubenswrapper[4840]: E0311 09:54:00.146622 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc24323f-57c7-4d5c-b436-c3e6cfdcaa87" containerName="oc" Mar 11 09:54:00 crc kubenswrapper[4840]: I0311 09:54:00.146635 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc24323f-57c7-4d5c-b436-c3e6cfdcaa87" containerName="oc" Mar 11 09:54:00 crc kubenswrapper[4840]: I0311 09:54:00.146767 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc24323f-57c7-4d5c-b436-c3e6cfdcaa87" containerName="oc" Mar 11 09:54:00 crc kubenswrapper[4840]: I0311 09:54:00.147313 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553714-4x95t" Mar 11 09:54:00 crc kubenswrapper[4840]: I0311 09:54:00.156846 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553714-4x95t"] Mar 11 09:54:00 crc kubenswrapper[4840]: I0311 09:54:00.157673 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-q6lwc" Mar 11 09:54:00 crc kubenswrapper[4840]: I0311 09:54:00.157952 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 11 09:54:00 crc kubenswrapper[4840]: I0311 09:54:00.158456 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 11 09:54:00 crc kubenswrapper[4840]: I0311 09:54:00.415771 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vq74\" (UniqueName: \"kubernetes.io/projected/1d69b973-15ad-4a6a-a6a7-0896ccd3466f-kube-api-access-6vq74\") pod \"auto-csr-approver-29553714-4x95t\" (UID: \"1d69b973-15ad-4a6a-a6a7-0896ccd3466f\") " pod="openshift-infra/auto-csr-approver-29553714-4x95t" Mar 11 09:54:00 crc kubenswrapper[4840]: I0311 09:54:00.517211 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vq74\" (UniqueName: \"kubernetes.io/projected/1d69b973-15ad-4a6a-a6a7-0896ccd3466f-kube-api-access-6vq74\") pod \"auto-csr-approver-29553714-4x95t\" (UID: \"1d69b973-15ad-4a6a-a6a7-0896ccd3466f\") " pod="openshift-infra/auto-csr-approver-29553714-4x95t" Mar 11 09:54:00 crc kubenswrapper[4840]: I0311 09:54:00.541723 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vq74\" (UniqueName: \"kubernetes.io/projected/1d69b973-15ad-4a6a-a6a7-0896ccd3466f-kube-api-access-6vq74\") pod \"auto-csr-approver-29553714-4x95t\" (UID: \"1d69b973-15ad-4a6a-a6a7-0896ccd3466f\") " 
pod="openshift-infra/auto-csr-approver-29553714-4x95t" Mar 11 09:54:00 crc kubenswrapper[4840]: I0311 09:54:00.778016 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553714-4x95t" Mar 11 09:54:01 crc kubenswrapper[4840]: I0311 09:54:01.176400 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553714-4x95t"] Mar 11 09:54:01 crc kubenswrapper[4840]: I0311 09:54:01.563151 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553714-4x95t" event={"ID":"1d69b973-15ad-4a6a-a6a7-0896ccd3466f","Type":"ContainerStarted","Data":"2359935ac6f21544a305cda70f802aa95e428213ce977efd7b12bd95c1471d73"} Mar 11 09:54:02 crc kubenswrapper[4840]: I0311 09:54:02.570840 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553714-4x95t" event={"ID":"1d69b973-15ad-4a6a-a6a7-0896ccd3466f","Type":"ContainerStarted","Data":"9e50b8b810dfd572487cc1bc42d2149c95897fd308da0164b04493fef056d6c4"} Mar 11 09:54:02 crc kubenswrapper[4840]: I0311 09:54:02.584873 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29553714-4x95t" podStartSLOduration=1.55109739 podStartE2EDuration="2.584850228s" podCreationTimestamp="2026-03-11 09:54:00 +0000 UTC" firstStartedPulling="2026-03-11 09:54:01.179786527 +0000 UTC m=+3439.845456342" lastFinishedPulling="2026-03-11 09:54:02.213539365 +0000 UTC m=+3440.879209180" observedRunningTime="2026-03-11 09:54:02.582832248 +0000 UTC m=+3441.248502063" watchObservedRunningTime="2026-03-11 09:54:02.584850228 +0000 UTC m=+3441.250520043" Mar 11 09:54:03 crc kubenswrapper[4840]: I0311 09:54:03.580489 4840 generic.go:334] "Generic (PLEG): container finished" podID="1d69b973-15ad-4a6a-a6a7-0896ccd3466f" containerID="9e50b8b810dfd572487cc1bc42d2149c95897fd308da0164b04493fef056d6c4" exitCode=0 Mar 11 09:54:03 crc 
kubenswrapper[4840]: I0311 09:54:03.580547 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553714-4x95t" event={"ID":"1d69b973-15ad-4a6a-a6a7-0896ccd3466f","Type":"ContainerDied","Data":"9e50b8b810dfd572487cc1bc42d2149c95897fd308da0164b04493fef056d6c4"} Mar 11 09:54:04 crc kubenswrapper[4840]: I0311 09:54:04.861515 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553714-4x95t" Mar 11 09:54:04 crc kubenswrapper[4840]: I0311 09:54:04.974707 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6vq74\" (UniqueName: \"kubernetes.io/projected/1d69b973-15ad-4a6a-a6a7-0896ccd3466f-kube-api-access-6vq74\") pod \"1d69b973-15ad-4a6a-a6a7-0896ccd3466f\" (UID: \"1d69b973-15ad-4a6a-a6a7-0896ccd3466f\") " Mar 11 09:54:04 crc kubenswrapper[4840]: I0311 09:54:04.980677 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d69b973-15ad-4a6a-a6a7-0896ccd3466f-kube-api-access-6vq74" (OuterVolumeSpecName: "kube-api-access-6vq74") pod "1d69b973-15ad-4a6a-a6a7-0896ccd3466f" (UID: "1d69b973-15ad-4a6a-a6a7-0896ccd3466f"). InnerVolumeSpecName "kube-api-access-6vq74". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:54:05 crc kubenswrapper[4840]: I0311 09:54:05.076358 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6vq74\" (UniqueName: \"kubernetes.io/projected/1d69b973-15ad-4a6a-a6a7-0896ccd3466f-kube-api-access-6vq74\") on node \"crc\" DevicePath \"\"" Mar 11 09:54:05 crc kubenswrapper[4840]: I0311 09:54:05.620970 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553714-4x95t" event={"ID":"1d69b973-15ad-4a6a-a6a7-0896ccd3466f","Type":"ContainerDied","Data":"2359935ac6f21544a305cda70f802aa95e428213ce977efd7b12bd95c1471d73"} Mar 11 09:54:05 crc kubenswrapper[4840]: I0311 09:54:05.621028 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2359935ac6f21544a305cda70f802aa95e428213ce977efd7b12bd95c1471d73" Mar 11 09:54:05 crc kubenswrapper[4840]: I0311 09:54:05.621058 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553714-4x95t" Mar 11 09:54:05 crc kubenswrapper[4840]: I0311 09:54:05.652255 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29553708-5z7xq"] Mar 11 09:54:05 crc kubenswrapper[4840]: I0311 09:54:05.658052 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29553708-5z7xq"] Mar 11 09:54:06 crc kubenswrapper[4840]: I0311 09:54:06.067565 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3130eb87-1f5d-4af6-a2ff-462df6014c86" path="/var/lib/kubelet/pods/3130eb87-1f5d-4af6-a2ff-462df6014c86/volumes" Mar 11 09:54:10 crc kubenswrapper[4840]: I0311 09:54:10.269399 4840 scope.go:117] "RemoveContainer" containerID="94dc0378620b93b23c351d8384d57c6d92a7084fab8ba4ce60e44842b59b3360" Mar 11 09:54:22 crc kubenswrapper[4840]: I0311 09:54:22.564673 4840 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-marketplace-99rds"] Mar 11 09:54:22 crc kubenswrapper[4840]: E0311 09:54:22.568716 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d69b973-15ad-4a6a-a6a7-0896ccd3466f" containerName="oc" Mar 11 09:54:22 crc kubenswrapper[4840]: I0311 09:54:22.568888 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d69b973-15ad-4a6a-a6a7-0896ccd3466f" containerName="oc" Mar 11 09:54:22 crc kubenswrapper[4840]: I0311 09:54:22.569240 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d69b973-15ad-4a6a-a6a7-0896ccd3466f" containerName="oc" Mar 11 09:54:22 crc kubenswrapper[4840]: I0311 09:54:22.591133 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-99rds"] Mar 11 09:54:22 crc kubenswrapper[4840]: I0311 09:54:22.591312 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-99rds" Mar 11 09:54:22 crc kubenswrapper[4840]: I0311 09:54:22.619768 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpsnb\" (UniqueName: \"kubernetes.io/projected/4d9de42c-76e5-4258-8c35-e0518d099124-kube-api-access-jpsnb\") pod \"redhat-marketplace-99rds\" (UID: \"4d9de42c-76e5-4258-8c35-e0518d099124\") " pod="openshift-marketplace/redhat-marketplace-99rds" Mar 11 09:54:22 crc kubenswrapper[4840]: I0311 09:54:22.619812 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d9de42c-76e5-4258-8c35-e0518d099124-catalog-content\") pod \"redhat-marketplace-99rds\" (UID: \"4d9de42c-76e5-4258-8c35-e0518d099124\") " pod="openshift-marketplace/redhat-marketplace-99rds" Mar 11 09:54:22 crc kubenswrapper[4840]: I0311 09:54:22.619938 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/4d9de42c-76e5-4258-8c35-e0518d099124-utilities\") pod \"redhat-marketplace-99rds\" (UID: \"4d9de42c-76e5-4258-8c35-e0518d099124\") " pod="openshift-marketplace/redhat-marketplace-99rds" Mar 11 09:54:22 crc kubenswrapper[4840]: I0311 09:54:22.722560 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jpsnb\" (UniqueName: \"kubernetes.io/projected/4d9de42c-76e5-4258-8c35-e0518d099124-kube-api-access-jpsnb\") pod \"redhat-marketplace-99rds\" (UID: \"4d9de42c-76e5-4258-8c35-e0518d099124\") " pod="openshift-marketplace/redhat-marketplace-99rds" Mar 11 09:54:22 crc kubenswrapper[4840]: I0311 09:54:22.722633 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d9de42c-76e5-4258-8c35-e0518d099124-catalog-content\") pod \"redhat-marketplace-99rds\" (UID: \"4d9de42c-76e5-4258-8c35-e0518d099124\") " pod="openshift-marketplace/redhat-marketplace-99rds" Mar 11 09:54:22 crc kubenswrapper[4840]: I0311 09:54:22.722712 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d9de42c-76e5-4258-8c35-e0518d099124-utilities\") pod \"redhat-marketplace-99rds\" (UID: \"4d9de42c-76e5-4258-8c35-e0518d099124\") " pod="openshift-marketplace/redhat-marketplace-99rds" Mar 11 09:54:22 crc kubenswrapper[4840]: I0311 09:54:22.723327 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d9de42c-76e5-4258-8c35-e0518d099124-utilities\") pod \"redhat-marketplace-99rds\" (UID: \"4d9de42c-76e5-4258-8c35-e0518d099124\") " pod="openshift-marketplace/redhat-marketplace-99rds" Mar 11 09:54:22 crc kubenswrapper[4840]: I0311 09:54:22.723419 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/4d9de42c-76e5-4258-8c35-e0518d099124-catalog-content\") pod \"redhat-marketplace-99rds\" (UID: \"4d9de42c-76e5-4258-8c35-e0518d099124\") " pod="openshift-marketplace/redhat-marketplace-99rds" Mar 11 09:54:22 crc kubenswrapper[4840]: I0311 09:54:22.742311 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpsnb\" (UniqueName: \"kubernetes.io/projected/4d9de42c-76e5-4258-8c35-e0518d099124-kube-api-access-jpsnb\") pod \"redhat-marketplace-99rds\" (UID: \"4d9de42c-76e5-4258-8c35-e0518d099124\") " pod="openshift-marketplace/redhat-marketplace-99rds" Mar 11 09:54:22 crc kubenswrapper[4840]: I0311 09:54:22.921135 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-99rds" Mar 11 09:54:23 crc kubenswrapper[4840]: I0311 09:54:23.336107 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-99rds"] Mar 11 09:54:23 crc kubenswrapper[4840]: I0311 09:54:23.747850 4840 generic.go:334] "Generic (PLEG): container finished" podID="4d9de42c-76e5-4258-8c35-e0518d099124" containerID="58465efd2eff085fa414cbbb5ee50c6c2da0e98439862283cf00edec7260b7e4" exitCode=0 Mar 11 09:54:23 crc kubenswrapper[4840]: I0311 09:54:23.747895 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-99rds" event={"ID":"4d9de42c-76e5-4258-8c35-e0518d099124","Type":"ContainerDied","Data":"58465efd2eff085fa414cbbb5ee50c6c2da0e98439862283cf00edec7260b7e4"} Mar 11 09:54:23 crc kubenswrapper[4840]: I0311 09:54:23.747924 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-99rds" event={"ID":"4d9de42c-76e5-4258-8c35-e0518d099124","Type":"ContainerStarted","Data":"e6fc5fe841a2c69a915bed64bba575e78ed1a43e1314db90aa8e8b70c9dd062e"} Mar 11 09:54:25 crc kubenswrapper[4840]: I0311 09:54:25.766078 4840 generic.go:334] "Generic (PLEG): container 
finished" podID="4d9de42c-76e5-4258-8c35-e0518d099124" containerID="17bd120c1e88651d0ae1ae2c0aa5926dd06267233ee70dd8bcbba73433663ab6" exitCode=0 Mar 11 09:54:25 crc kubenswrapper[4840]: I0311 09:54:25.766193 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-99rds" event={"ID":"4d9de42c-76e5-4258-8c35-e0518d099124","Type":"ContainerDied","Data":"17bd120c1e88651d0ae1ae2c0aa5926dd06267233ee70dd8bcbba73433663ab6"} Mar 11 09:54:26 crc kubenswrapper[4840]: I0311 09:54:26.777482 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-99rds" event={"ID":"4d9de42c-76e5-4258-8c35-e0518d099124","Type":"ContainerStarted","Data":"b7e1ee5816278bd97e3e3b43f721d7209c4db34e05e25c477eca43f556d15fe4"} Mar 11 09:54:26 crc kubenswrapper[4840]: I0311 09:54:26.803911 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-99rds" podStartSLOduration=2.329925705 podStartE2EDuration="4.80389121s" podCreationTimestamp="2026-03-11 09:54:22 +0000 UTC" firstStartedPulling="2026-03-11 09:54:23.749228601 +0000 UTC m=+3462.414898416" lastFinishedPulling="2026-03-11 09:54:26.223194106 +0000 UTC m=+3464.888863921" observedRunningTime="2026-03-11 09:54:26.801183221 +0000 UTC m=+3465.466853056" watchObservedRunningTime="2026-03-11 09:54:26.80389121 +0000 UTC m=+3465.469561025" Mar 11 09:54:27 crc kubenswrapper[4840]: I0311 09:54:27.445928 4840 patch_prober.go:28] interesting pod/machine-config-daemon-brtht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 11 09:54:27 crc kubenswrapper[4840]: I0311 09:54:27.446330 4840 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brtht" 
podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 11 09:54:27 crc kubenswrapper[4840]: I0311 09:54:27.446390 4840 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-brtht" Mar 11 09:54:27 crc kubenswrapper[4840]: I0311 09:54:27.447048 4840 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8bf282d749dcad120b80e750e64632fb5b1fd3d2b7112d2dccabeed39f466957"} pod="openshift-machine-config-operator/machine-config-daemon-brtht" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 11 09:54:27 crc kubenswrapper[4840]: I0311 09:54:27.447102 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" containerName="machine-config-daemon" containerID="cri-o://8bf282d749dcad120b80e750e64632fb5b1fd3d2b7112d2dccabeed39f466957" gracePeriod=600 Mar 11 09:54:27 crc kubenswrapper[4840]: I0311 09:54:27.788728 4840 generic.go:334] "Generic (PLEG): container finished" podID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" containerID="8bf282d749dcad120b80e750e64632fb5b1fd3d2b7112d2dccabeed39f466957" exitCode=0 Mar 11 09:54:27 crc kubenswrapper[4840]: I0311 09:54:27.788750 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-brtht" event={"ID":"8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d","Type":"ContainerDied","Data":"8bf282d749dcad120b80e750e64632fb5b1fd3d2b7112d2dccabeed39f466957"} Mar 11 09:54:27 crc kubenswrapper[4840]: I0311 09:54:27.788816 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-brtht" event={"ID":"8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d","Type":"ContainerStarted","Data":"4efd0bd0ab9fb4f04fed07de60d73ee6a34850ff14fa42aed3595aad9fe76e03"} Mar 11 09:54:27 crc kubenswrapper[4840]: I0311 09:54:27.788842 4840 scope.go:117] "RemoveContainer" containerID="65b1f852cfe8ac96b44bb220dbaccfa816f23ff9b34e1410947313ed9ce7a62c" Mar 11 09:54:32 crc kubenswrapper[4840]: I0311 09:54:32.922427 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-99rds" Mar 11 09:54:32 crc kubenswrapper[4840]: I0311 09:54:32.923081 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-99rds" Mar 11 09:54:32 crc kubenswrapper[4840]: I0311 09:54:32.964496 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-99rds" Mar 11 09:54:33 crc kubenswrapper[4840]: I0311 09:54:33.869740 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-99rds" Mar 11 09:54:33 crc kubenswrapper[4840]: I0311 09:54:33.922817 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-99rds"] Mar 11 09:54:37 crc kubenswrapper[4840]: I0311 09:54:37.902277 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-99rds" podUID="4d9de42c-76e5-4258-8c35-e0518d099124" containerName="registry-server" containerID="cri-o://b7e1ee5816278bd97e3e3b43f721d7209c4db34e05e25c477eca43f556d15fe4" gracePeriod=2 Mar 11 09:54:39 crc kubenswrapper[4840]: I0311 09:54:39.687596 4840 generic.go:334] "Generic (PLEG): container finished" podID="4d9de42c-76e5-4258-8c35-e0518d099124" containerID="b7e1ee5816278bd97e3e3b43f721d7209c4db34e05e25c477eca43f556d15fe4" exitCode=0 Mar 11 09:54:39 crc kubenswrapper[4840]: 
I0311 09:54:39.687932 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-99rds" event={"ID":"4d9de42c-76e5-4258-8c35-e0518d099124","Type":"ContainerDied","Data":"b7e1ee5816278bd97e3e3b43f721d7209c4db34e05e25c477eca43f556d15fe4"} Mar 11 09:54:39 crc kubenswrapper[4840]: I0311 09:54:39.763701 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-99rds" Mar 11 09:54:39 crc kubenswrapper[4840]: I0311 09:54:39.960455 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d9de42c-76e5-4258-8c35-e0518d099124-catalog-content\") pod \"4d9de42c-76e5-4258-8c35-e0518d099124\" (UID: \"4d9de42c-76e5-4258-8c35-e0518d099124\") " Mar 11 09:54:39 crc kubenswrapper[4840]: I0311 09:54:39.960686 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d9de42c-76e5-4258-8c35-e0518d099124-utilities\") pod \"4d9de42c-76e5-4258-8c35-e0518d099124\" (UID: \"4d9de42c-76e5-4258-8c35-e0518d099124\") " Mar 11 09:54:39 crc kubenswrapper[4840]: I0311 09:54:39.960801 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jpsnb\" (UniqueName: \"kubernetes.io/projected/4d9de42c-76e5-4258-8c35-e0518d099124-kube-api-access-jpsnb\") pod \"4d9de42c-76e5-4258-8c35-e0518d099124\" (UID: \"4d9de42c-76e5-4258-8c35-e0518d099124\") " Mar 11 09:54:39 crc kubenswrapper[4840]: I0311 09:54:39.961936 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d9de42c-76e5-4258-8c35-e0518d099124-utilities" (OuterVolumeSpecName: "utilities") pod "4d9de42c-76e5-4258-8c35-e0518d099124" (UID: "4d9de42c-76e5-4258-8c35-e0518d099124"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:54:39 crc kubenswrapper[4840]: I0311 09:54:39.966643 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d9de42c-76e5-4258-8c35-e0518d099124-kube-api-access-jpsnb" (OuterVolumeSpecName: "kube-api-access-jpsnb") pod "4d9de42c-76e5-4258-8c35-e0518d099124" (UID: "4d9de42c-76e5-4258-8c35-e0518d099124"). InnerVolumeSpecName "kube-api-access-jpsnb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:54:39 crc kubenswrapper[4840]: I0311 09:54:39.986818 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d9de42c-76e5-4258-8c35-e0518d099124-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4d9de42c-76e5-4258-8c35-e0518d099124" (UID: "4d9de42c-76e5-4258-8c35-e0518d099124"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:54:40 crc kubenswrapper[4840]: I0311 09:54:40.063096 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jpsnb\" (UniqueName: \"kubernetes.io/projected/4d9de42c-76e5-4258-8c35-e0518d099124-kube-api-access-jpsnb\") on node \"crc\" DevicePath \"\"" Mar 11 09:54:40 crc kubenswrapper[4840]: I0311 09:54:40.063162 4840 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d9de42c-76e5-4258-8c35-e0518d099124-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 11 09:54:40 crc kubenswrapper[4840]: I0311 09:54:40.063191 4840 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d9de42c-76e5-4258-8c35-e0518d099124-utilities\") on node \"crc\" DevicePath \"\"" Mar 11 09:54:41 crc kubenswrapper[4840]: I0311 09:54:41.224096 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-99rds" 
event={"ID":"4d9de42c-76e5-4258-8c35-e0518d099124","Type":"ContainerDied","Data":"e6fc5fe841a2c69a915bed64bba575e78ed1a43e1314db90aa8e8b70c9dd062e"} Mar 11 09:54:41 crc kubenswrapper[4840]: I0311 09:54:41.224417 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-99rds" Mar 11 09:54:41 crc kubenswrapper[4840]: I0311 09:54:41.224587 4840 scope.go:117] "RemoveContainer" containerID="b7e1ee5816278bd97e3e3b43f721d7209c4db34e05e25c477eca43f556d15fe4" Mar 11 09:54:41 crc kubenswrapper[4840]: I0311 09:54:41.249652 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-99rds"] Mar 11 09:54:41 crc kubenswrapper[4840]: I0311 09:54:41.256975 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-99rds"] Mar 11 09:54:41 crc kubenswrapper[4840]: I0311 09:54:41.269979 4840 scope.go:117] "RemoveContainer" containerID="17bd120c1e88651d0ae1ae2c0aa5926dd06267233ee70dd8bcbba73433663ab6" Mar 11 09:54:41 crc kubenswrapper[4840]: I0311 09:54:41.311872 4840 scope.go:117] "RemoveContainer" containerID="58465efd2eff085fa414cbbb5ee50c6c2da0e98439862283cf00edec7260b7e4" Mar 11 09:54:42 crc kubenswrapper[4840]: I0311 09:54:42.970854 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d9de42c-76e5-4258-8c35-e0518d099124" path="/var/lib/kubelet/pods/4d9de42c-76e5-4258-8c35-e0518d099124/volumes" Mar 11 09:56:00 crc kubenswrapper[4840]: I0311 09:56:00.149443 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29553716-tt5hc"] Mar 11 09:56:00 crc kubenswrapper[4840]: E0311 09:56:00.150412 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d9de42c-76e5-4258-8c35-e0518d099124" containerName="extract-content" Mar 11 09:56:00 crc kubenswrapper[4840]: I0311 09:56:00.150428 4840 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="4d9de42c-76e5-4258-8c35-e0518d099124" containerName="extract-content" Mar 11 09:56:00 crc kubenswrapper[4840]: E0311 09:56:00.150445 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d9de42c-76e5-4258-8c35-e0518d099124" containerName="registry-server" Mar 11 09:56:00 crc kubenswrapper[4840]: I0311 09:56:00.150453 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d9de42c-76e5-4258-8c35-e0518d099124" containerName="registry-server" Mar 11 09:56:00 crc kubenswrapper[4840]: E0311 09:56:00.150501 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d9de42c-76e5-4258-8c35-e0518d099124" containerName="extract-utilities" Mar 11 09:56:00 crc kubenswrapper[4840]: I0311 09:56:00.150512 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d9de42c-76e5-4258-8c35-e0518d099124" containerName="extract-utilities" Mar 11 09:56:00 crc kubenswrapper[4840]: I0311 09:56:00.150707 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d9de42c-76e5-4258-8c35-e0518d099124" containerName="registry-server" Mar 11 09:56:00 crc kubenswrapper[4840]: I0311 09:56:00.151279 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553716-tt5hc" Mar 11 09:56:00 crc kubenswrapper[4840]: I0311 09:56:00.153992 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 11 09:56:00 crc kubenswrapper[4840]: I0311 09:56:00.154913 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-q6lwc" Mar 11 09:56:00 crc kubenswrapper[4840]: I0311 09:56:00.155167 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 11 09:56:00 crc kubenswrapper[4840]: I0311 09:56:00.166237 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553716-tt5hc"] Mar 11 09:56:00 crc kubenswrapper[4840]: I0311 09:56:00.171805 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzq52\" (UniqueName: \"kubernetes.io/projected/733473e0-4889-47a6-8344-22c23936157e-kube-api-access-dzq52\") pod \"auto-csr-approver-29553716-tt5hc\" (UID: \"733473e0-4889-47a6-8344-22c23936157e\") " pod="openshift-infra/auto-csr-approver-29553716-tt5hc" Mar 11 09:56:00 crc kubenswrapper[4840]: I0311 09:56:00.273183 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzq52\" (UniqueName: \"kubernetes.io/projected/733473e0-4889-47a6-8344-22c23936157e-kube-api-access-dzq52\") pod \"auto-csr-approver-29553716-tt5hc\" (UID: \"733473e0-4889-47a6-8344-22c23936157e\") " pod="openshift-infra/auto-csr-approver-29553716-tt5hc" Mar 11 09:56:00 crc kubenswrapper[4840]: I0311 09:56:00.291556 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzq52\" (UniqueName: \"kubernetes.io/projected/733473e0-4889-47a6-8344-22c23936157e-kube-api-access-dzq52\") pod \"auto-csr-approver-29553716-tt5hc\" (UID: \"733473e0-4889-47a6-8344-22c23936157e\") " 
pod="openshift-infra/auto-csr-approver-29553716-tt5hc" Mar 11 09:56:00 crc kubenswrapper[4840]: I0311 09:56:00.470945 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553716-tt5hc" Mar 11 09:56:00 crc kubenswrapper[4840]: I0311 09:56:00.930201 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553716-tt5hc"] Mar 11 09:56:01 crc kubenswrapper[4840]: I0311 09:56:01.536817 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553716-tt5hc" event={"ID":"733473e0-4889-47a6-8344-22c23936157e","Type":"ContainerStarted","Data":"b540af6339ecf27a9386f3dd4b9a555ab13429754479a77299c5c7763c587952"} Mar 11 09:56:02 crc kubenswrapper[4840]: I0311 09:56:02.547497 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553716-tt5hc" event={"ID":"733473e0-4889-47a6-8344-22c23936157e","Type":"ContainerStarted","Data":"db74873c0746fc08efd730afccc0641320802aec66ea0f88f3cffb07a4e3a408"} Mar 11 09:56:02 crc kubenswrapper[4840]: I0311 09:56:02.566389 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29553716-tt5hc" podStartSLOduration=1.186068782 podStartE2EDuration="2.566367458s" podCreationTimestamp="2026-03-11 09:56:00 +0000 UTC" firstStartedPulling="2026-03-11 09:56:00.920254329 +0000 UTC m=+3559.585924134" lastFinishedPulling="2026-03-11 09:56:02.300552995 +0000 UTC m=+3560.966222810" observedRunningTime="2026-03-11 09:56:02.560059839 +0000 UTC m=+3561.225729664" watchObservedRunningTime="2026-03-11 09:56:02.566367458 +0000 UTC m=+3561.232037263" Mar 11 09:56:03 crc kubenswrapper[4840]: I0311 09:56:03.556894 4840 generic.go:334] "Generic (PLEG): container finished" podID="733473e0-4889-47a6-8344-22c23936157e" containerID="db74873c0746fc08efd730afccc0641320802aec66ea0f88f3cffb07a4e3a408" exitCode=0 Mar 11 09:56:03 crc 
kubenswrapper[4840]: I0311 09:56:03.556986 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553716-tt5hc" event={"ID":"733473e0-4889-47a6-8344-22c23936157e","Type":"ContainerDied","Data":"db74873c0746fc08efd730afccc0641320802aec66ea0f88f3cffb07a4e3a408"} Mar 11 09:56:04 crc kubenswrapper[4840]: I0311 09:56:04.831900 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553716-tt5hc" Mar 11 09:56:04 crc kubenswrapper[4840]: I0311 09:56:04.846013 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dzq52\" (UniqueName: \"kubernetes.io/projected/733473e0-4889-47a6-8344-22c23936157e-kube-api-access-dzq52\") pod \"733473e0-4889-47a6-8344-22c23936157e\" (UID: \"733473e0-4889-47a6-8344-22c23936157e\") " Mar 11 09:56:04 crc kubenswrapper[4840]: I0311 09:56:04.860351 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/733473e0-4889-47a6-8344-22c23936157e-kube-api-access-dzq52" (OuterVolumeSpecName: "kube-api-access-dzq52") pod "733473e0-4889-47a6-8344-22c23936157e" (UID: "733473e0-4889-47a6-8344-22c23936157e"). InnerVolumeSpecName "kube-api-access-dzq52". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:56:04 crc kubenswrapper[4840]: I0311 09:56:04.947321 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dzq52\" (UniqueName: \"kubernetes.io/projected/733473e0-4889-47a6-8344-22c23936157e-kube-api-access-dzq52\") on node \"crc\" DevicePath \"\"" Mar 11 09:56:05 crc kubenswrapper[4840]: I0311 09:56:05.145047 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29553710-zm8nt"] Mar 11 09:56:05 crc kubenswrapper[4840]: I0311 09:56:05.150925 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29553710-zm8nt"] Mar 11 09:56:05 crc kubenswrapper[4840]: I0311 09:56:05.572985 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553716-tt5hc" event={"ID":"733473e0-4889-47a6-8344-22c23936157e","Type":"ContainerDied","Data":"b540af6339ecf27a9386f3dd4b9a555ab13429754479a77299c5c7763c587952"} Mar 11 09:56:05 crc kubenswrapper[4840]: I0311 09:56:05.573037 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b540af6339ecf27a9386f3dd4b9a555ab13429754479a77299c5c7763c587952" Mar 11 09:56:05 crc kubenswrapper[4840]: I0311 09:56:05.573110 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553716-tt5hc" Mar 11 09:56:06 crc kubenswrapper[4840]: I0311 09:56:06.084820 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ed9ce44-8536-46d5-9c77-1a10b6cfd91f" path="/var/lib/kubelet/pods/3ed9ce44-8536-46d5-9c77-1a10b6cfd91f/volumes" Mar 11 09:56:10 crc kubenswrapper[4840]: I0311 09:56:10.364163 4840 scope.go:117] "RemoveContainer" containerID="e22c81a423cb06fdf76a9f188391eb4b070e7c3ea0a1fb30ccd573ebffdd9909" Mar 11 09:56:27 crc kubenswrapper[4840]: I0311 09:56:27.446371 4840 patch_prober.go:28] interesting pod/machine-config-daemon-brtht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 11 09:56:27 crc kubenswrapper[4840]: I0311 09:56:27.448445 4840 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 11 09:56:57 crc kubenswrapper[4840]: I0311 09:56:57.446275 4840 patch_prober.go:28] interesting pod/machine-config-daemon-brtht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 11 09:56:57 crc kubenswrapper[4840]: I0311 09:56:57.446895 4840 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Mar 11 09:57:05 crc kubenswrapper[4840]: I0311 09:57:05.754051 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-88ns5"] Mar 11 09:57:05 crc kubenswrapper[4840]: E0311 09:57:05.755333 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="733473e0-4889-47a6-8344-22c23936157e" containerName="oc" Mar 11 09:57:05 crc kubenswrapper[4840]: I0311 09:57:05.755349 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="733473e0-4889-47a6-8344-22c23936157e" containerName="oc" Mar 11 09:57:05 crc kubenswrapper[4840]: I0311 09:57:05.755558 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="733473e0-4889-47a6-8344-22c23936157e" containerName="oc" Mar 11 09:57:05 crc kubenswrapper[4840]: I0311 09:57:05.767818 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-88ns5" Mar 11 09:57:05 crc kubenswrapper[4840]: I0311 09:57:05.804508 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-88ns5"] Mar 11 09:57:05 crc kubenswrapper[4840]: I0311 09:57:05.883594 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34606e72-7fbd-447f-b7e8-811aded0abd4-utilities\") pod \"community-operators-88ns5\" (UID: \"34606e72-7fbd-447f-b7e8-811aded0abd4\") " pod="openshift-marketplace/community-operators-88ns5" Mar 11 09:57:05 crc kubenswrapper[4840]: I0311 09:57:05.883685 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34606e72-7fbd-447f-b7e8-811aded0abd4-catalog-content\") pod \"community-operators-88ns5\" (UID: \"34606e72-7fbd-447f-b7e8-811aded0abd4\") " pod="openshift-marketplace/community-operators-88ns5" Mar 11 09:57:05 crc kubenswrapper[4840]: I0311 09:57:05.883789 4840 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dg5r\" (UniqueName: \"kubernetes.io/projected/34606e72-7fbd-447f-b7e8-811aded0abd4-kube-api-access-6dg5r\") pod \"community-operators-88ns5\" (UID: \"34606e72-7fbd-447f-b7e8-811aded0abd4\") " pod="openshift-marketplace/community-operators-88ns5" Mar 11 09:57:05 crc kubenswrapper[4840]: I0311 09:57:05.985233 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34606e72-7fbd-447f-b7e8-811aded0abd4-utilities\") pod \"community-operators-88ns5\" (UID: \"34606e72-7fbd-447f-b7e8-811aded0abd4\") " pod="openshift-marketplace/community-operators-88ns5" Mar 11 09:57:05 crc kubenswrapper[4840]: I0311 09:57:05.985309 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34606e72-7fbd-447f-b7e8-811aded0abd4-catalog-content\") pod \"community-operators-88ns5\" (UID: \"34606e72-7fbd-447f-b7e8-811aded0abd4\") " pod="openshift-marketplace/community-operators-88ns5" Mar 11 09:57:05 crc kubenswrapper[4840]: I0311 09:57:05.985403 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6dg5r\" (UniqueName: \"kubernetes.io/projected/34606e72-7fbd-447f-b7e8-811aded0abd4-kube-api-access-6dg5r\") pod \"community-operators-88ns5\" (UID: \"34606e72-7fbd-447f-b7e8-811aded0abd4\") " pod="openshift-marketplace/community-operators-88ns5" Mar 11 09:57:05 crc kubenswrapper[4840]: I0311 09:57:05.986263 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34606e72-7fbd-447f-b7e8-811aded0abd4-utilities\") pod \"community-operators-88ns5\" (UID: \"34606e72-7fbd-447f-b7e8-811aded0abd4\") " pod="openshift-marketplace/community-operators-88ns5" Mar 11 09:57:05 crc kubenswrapper[4840]: I0311 09:57:05.986564 4840 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34606e72-7fbd-447f-b7e8-811aded0abd4-catalog-content\") pod \"community-operators-88ns5\" (UID: \"34606e72-7fbd-447f-b7e8-811aded0abd4\") " pod="openshift-marketplace/community-operators-88ns5" Mar 11 09:57:06 crc kubenswrapper[4840]: I0311 09:57:06.006231 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dg5r\" (UniqueName: \"kubernetes.io/projected/34606e72-7fbd-447f-b7e8-811aded0abd4-kube-api-access-6dg5r\") pod \"community-operators-88ns5\" (UID: \"34606e72-7fbd-447f-b7e8-811aded0abd4\") " pod="openshift-marketplace/community-operators-88ns5" Mar 11 09:57:06 crc kubenswrapper[4840]: I0311 09:57:06.104733 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-88ns5" Mar 11 09:57:06 crc kubenswrapper[4840]: I0311 09:57:06.370385 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-g2z7k"] Mar 11 09:57:06 crc kubenswrapper[4840]: I0311 09:57:06.372398 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-g2z7k" Mar 11 09:57:06 crc kubenswrapper[4840]: I0311 09:57:06.386116 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-g2z7k"] Mar 11 09:57:06 crc kubenswrapper[4840]: I0311 09:57:06.397373 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xqs2\" (UniqueName: \"kubernetes.io/projected/b84f178d-5e52-4634-938c-fd57298ffddf-kube-api-access-9xqs2\") pod \"certified-operators-g2z7k\" (UID: \"b84f178d-5e52-4634-938c-fd57298ffddf\") " pod="openshift-marketplace/certified-operators-g2z7k" Mar 11 09:57:06 crc kubenswrapper[4840]: I0311 09:57:06.397415 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b84f178d-5e52-4634-938c-fd57298ffddf-utilities\") pod \"certified-operators-g2z7k\" (UID: \"b84f178d-5e52-4634-938c-fd57298ffddf\") " pod="openshift-marketplace/certified-operators-g2z7k" Mar 11 09:57:06 crc kubenswrapper[4840]: I0311 09:57:06.397444 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b84f178d-5e52-4634-938c-fd57298ffddf-catalog-content\") pod \"certified-operators-g2z7k\" (UID: \"b84f178d-5e52-4634-938c-fd57298ffddf\") " pod="openshift-marketplace/certified-operators-g2z7k" Mar 11 09:57:06 crc kubenswrapper[4840]: I0311 09:57:06.498498 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xqs2\" (UniqueName: \"kubernetes.io/projected/b84f178d-5e52-4634-938c-fd57298ffddf-kube-api-access-9xqs2\") pod \"certified-operators-g2z7k\" (UID: \"b84f178d-5e52-4634-938c-fd57298ffddf\") " pod="openshift-marketplace/certified-operators-g2z7k" Mar 11 09:57:06 crc kubenswrapper[4840]: I0311 09:57:06.498553 4840 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b84f178d-5e52-4634-938c-fd57298ffddf-utilities\") pod \"certified-operators-g2z7k\" (UID: \"b84f178d-5e52-4634-938c-fd57298ffddf\") " pod="openshift-marketplace/certified-operators-g2z7k" Mar 11 09:57:06 crc kubenswrapper[4840]: I0311 09:57:06.498584 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b84f178d-5e52-4634-938c-fd57298ffddf-catalog-content\") pod \"certified-operators-g2z7k\" (UID: \"b84f178d-5e52-4634-938c-fd57298ffddf\") " pod="openshift-marketplace/certified-operators-g2z7k" Mar 11 09:57:06 crc kubenswrapper[4840]: I0311 09:57:06.499182 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b84f178d-5e52-4634-938c-fd57298ffddf-catalog-content\") pod \"certified-operators-g2z7k\" (UID: \"b84f178d-5e52-4634-938c-fd57298ffddf\") " pod="openshift-marketplace/certified-operators-g2z7k" Mar 11 09:57:06 crc kubenswrapper[4840]: I0311 09:57:06.499295 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b84f178d-5e52-4634-938c-fd57298ffddf-utilities\") pod \"certified-operators-g2z7k\" (UID: \"b84f178d-5e52-4634-938c-fd57298ffddf\") " pod="openshift-marketplace/certified-operators-g2z7k" Mar 11 09:57:06 crc kubenswrapper[4840]: I0311 09:57:06.516897 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xqs2\" (UniqueName: \"kubernetes.io/projected/b84f178d-5e52-4634-938c-fd57298ffddf-kube-api-access-9xqs2\") pod \"certified-operators-g2z7k\" (UID: \"b84f178d-5e52-4634-938c-fd57298ffddf\") " pod="openshift-marketplace/certified-operators-g2z7k" Mar 11 09:57:06 crc kubenswrapper[4840]: I0311 09:57:06.582181 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/community-operators-88ns5"] Mar 11 09:57:06 crc kubenswrapper[4840]: I0311 09:57:06.702130 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g2z7k" Mar 11 09:57:07 crc kubenswrapper[4840]: I0311 09:57:07.022691 4840 generic.go:334] "Generic (PLEG): container finished" podID="34606e72-7fbd-447f-b7e8-811aded0abd4" containerID="e5b811b40fad1f7dc3f5fe917736e709f57c9e6693554bf9b12f9c7e13866d38" exitCode=0 Mar 11 09:57:07 crc kubenswrapper[4840]: I0311 09:57:07.022744 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-88ns5" event={"ID":"34606e72-7fbd-447f-b7e8-811aded0abd4","Type":"ContainerDied","Data":"e5b811b40fad1f7dc3f5fe917736e709f57c9e6693554bf9b12f9c7e13866d38"} Mar 11 09:57:07 crc kubenswrapper[4840]: I0311 09:57:07.022778 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-88ns5" event={"ID":"34606e72-7fbd-447f-b7e8-811aded0abd4","Type":"ContainerStarted","Data":"49090bf2b17ea8b514d8b9315a281be89fb40c838c9f72b25d9f00b52c00fba2"} Mar 11 09:57:07 crc kubenswrapper[4840]: I0311 09:57:07.026029 4840 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 11 09:57:07 crc kubenswrapper[4840]: I0311 09:57:07.169683 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-g2z7k"] Mar 11 09:57:07 crc kubenswrapper[4840]: W0311 09:57:07.170222 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb84f178d_5e52_4634_938c_fd57298ffddf.slice/crio-3bb16b6f32634bbc94cd512770e88fe4bc7dc26d86fe0321c38db342b6664f2c WatchSource:0}: Error finding container 3bb16b6f32634bbc94cd512770e88fe4bc7dc26d86fe0321c38db342b6664f2c: Status 404 returned error can't find the container with id 
3bb16b6f32634bbc94cd512770e88fe4bc7dc26d86fe0321c38db342b6664f2c Mar 11 09:57:08 crc kubenswrapper[4840]: I0311 09:57:08.032111 4840 generic.go:334] "Generic (PLEG): container finished" podID="34606e72-7fbd-447f-b7e8-811aded0abd4" containerID="9b46af4e793f5b3e3010e83aa3f753ec682023269485a547c516e9ae3081a040" exitCode=0 Mar 11 09:57:08 crc kubenswrapper[4840]: I0311 09:57:08.032220 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-88ns5" event={"ID":"34606e72-7fbd-447f-b7e8-811aded0abd4","Type":"ContainerDied","Data":"9b46af4e793f5b3e3010e83aa3f753ec682023269485a547c516e9ae3081a040"} Mar 11 09:57:08 crc kubenswrapper[4840]: I0311 09:57:08.034707 4840 generic.go:334] "Generic (PLEG): container finished" podID="b84f178d-5e52-4634-938c-fd57298ffddf" containerID="35d6ad2482f2c7c94a57b7aecee25f496122d1e828af81c3467d9b9c47417ae8" exitCode=0 Mar 11 09:57:08 crc kubenswrapper[4840]: I0311 09:57:08.034735 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g2z7k" event={"ID":"b84f178d-5e52-4634-938c-fd57298ffddf","Type":"ContainerDied","Data":"35d6ad2482f2c7c94a57b7aecee25f496122d1e828af81c3467d9b9c47417ae8"} Mar 11 09:57:08 crc kubenswrapper[4840]: I0311 09:57:08.034751 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g2z7k" event={"ID":"b84f178d-5e52-4634-938c-fd57298ffddf","Type":"ContainerStarted","Data":"3bb16b6f32634bbc94cd512770e88fe4bc7dc26d86fe0321c38db342b6664f2c"} Mar 11 09:57:08 crc kubenswrapper[4840]: I0311 09:57:08.751378 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-94lmc"] Mar 11 09:57:08 crc kubenswrapper[4840]: I0311 09:57:08.753234 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-94lmc" Mar 11 09:57:08 crc kubenswrapper[4840]: I0311 09:57:08.799535 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-94lmc"] Mar 11 09:57:08 crc kubenswrapper[4840]: I0311 09:57:08.932889 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zf6ht\" (UniqueName: \"kubernetes.io/projected/e79584ef-0c16-4c8e-8419-125218a0508d-kube-api-access-zf6ht\") pod \"redhat-operators-94lmc\" (UID: \"e79584ef-0c16-4c8e-8419-125218a0508d\") " pod="openshift-marketplace/redhat-operators-94lmc" Mar 11 09:57:08 crc kubenswrapper[4840]: I0311 09:57:08.932975 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e79584ef-0c16-4c8e-8419-125218a0508d-utilities\") pod \"redhat-operators-94lmc\" (UID: \"e79584ef-0c16-4c8e-8419-125218a0508d\") " pod="openshift-marketplace/redhat-operators-94lmc" Mar 11 09:57:08 crc kubenswrapper[4840]: I0311 09:57:08.933005 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e79584ef-0c16-4c8e-8419-125218a0508d-catalog-content\") pod \"redhat-operators-94lmc\" (UID: \"e79584ef-0c16-4c8e-8419-125218a0508d\") " pod="openshift-marketplace/redhat-operators-94lmc" Mar 11 09:57:09 crc kubenswrapper[4840]: I0311 09:57:09.034173 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zf6ht\" (UniqueName: \"kubernetes.io/projected/e79584ef-0c16-4c8e-8419-125218a0508d-kube-api-access-zf6ht\") pod \"redhat-operators-94lmc\" (UID: \"e79584ef-0c16-4c8e-8419-125218a0508d\") " pod="openshift-marketplace/redhat-operators-94lmc" Mar 11 09:57:09 crc kubenswrapper[4840]: I0311 09:57:09.034249 4840 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e79584ef-0c16-4c8e-8419-125218a0508d-utilities\") pod \"redhat-operators-94lmc\" (UID: \"e79584ef-0c16-4c8e-8419-125218a0508d\") " pod="openshift-marketplace/redhat-operators-94lmc" Mar 11 09:57:09 crc kubenswrapper[4840]: I0311 09:57:09.034282 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e79584ef-0c16-4c8e-8419-125218a0508d-catalog-content\") pod \"redhat-operators-94lmc\" (UID: \"e79584ef-0c16-4c8e-8419-125218a0508d\") " pod="openshift-marketplace/redhat-operators-94lmc" Mar 11 09:57:09 crc kubenswrapper[4840]: I0311 09:57:09.034904 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e79584ef-0c16-4c8e-8419-125218a0508d-utilities\") pod \"redhat-operators-94lmc\" (UID: \"e79584ef-0c16-4c8e-8419-125218a0508d\") " pod="openshift-marketplace/redhat-operators-94lmc" Mar 11 09:57:09 crc kubenswrapper[4840]: I0311 09:57:09.034950 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e79584ef-0c16-4c8e-8419-125218a0508d-catalog-content\") pod \"redhat-operators-94lmc\" (UID: \"e79584ef-0c16-4c8e-8419-125218a0508d\") " pod="openshift-marketplace/redhat-operators-94lmc" Mar 11 09:57:09 crc kubenswrapper[4840]: I0311 09:57:09.047536 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-88ns5" event={"ID":"34606e72-7fbd-447f-b7e8-811aded0abd4","Type":"ContainerStarted","Data":"e61b2035395bc6ded1f927f3e527d45388e1019f39130cfdc64ff93549ff127f"} Mar 11 09:57:09 crc kubenswrapper[4840]: I0311 09:57:09.065844 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zf6ht\" (UniqueName: 
\"kubernetes.io/projected/e79584ef-0c16-4c8e-8419-125218a0508d-kube-api-access-zf6ht\") pod \"redhat-operators-94lmc\" (UID: \"e79584ef-0c16-4c8e-8419-125218a0508d\") " pod="openshift-marketplace/redhat-operators-94lmc" Mar 11 09:57:09 crc kubenswrapper[4840]: I0311 09:57:09.076490 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-88ns5" podStartSLOduration=2.6163522280000002 podStartE2EDuration="4.076449437s" podCreationTimestamp="2026-03-11 09:57:05 +0000 UTC" firstStartedPulling="2026-03-11 09:57:07.025706313 +0000 UTC m=+3625.691376128" lastFinishedPulling="2026-03-11 09:57:08.485803522 +0000 UTC m=+3627.151473337" observedRunningTime="2026-03-11 09:57:09.072343193 +0000 UTC m=+3627.738013008" watchObservedRunningTime="2026-03-11 09:57:09.076449437 +0000 UTC m=+3627.742119252" Mar 11 09:57:09 crc kubenswrapper[4840]: I0311 09:57:09.078434 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-94lmc" Mar 11 09:57:09 crc kubenswrapper[4840]: I0311 09:57:09.602592 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-94lmc"] Mar 11 09:57:10 crc kubenswrapper[4840]: I0311 09:57:10.056850 4840 generic.go:334] "Generic (PLEG): container finished" podID="e79584ef-0c16-4c8e-8419-125218a0508d" containerID="24e1e4fc7e9ce3a74f7ecf4c056423ed2c7af7a325157e19a40bf1f2241fee3a" exitCode=0 Mar 11 09:57:10 crc kubenswrapper[4840]: I0311 09:57:10.056951 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-94lmc" event={"ID":"e79584ef-0c16-4c8e-8419-125218a0508d","Type":"ContainerDied","Data":"24e1e4fc7e9ce3a74f7ecf4c056423ed2c7af7a325157e19a40bf1f2241fee3a"} Mar 11 09:57:10 crc kubenswrapper[4840]: I0311 09:57:10.057237 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-94lmc" 
event={"ID":"e79584ef-0c16-4c8e-8419-125218a0508d","Type":"ContainerStarted","Data":"77e60bb5f9d23060eac62f233cd76f8dafc521cde2c5fba924614ce5a17e699f"} Mar 11 09:57:10 crc kubenswrapper[4840]: I0311 09:57:10.060574 4840 generic.go:334] "Generic (PLEG): container finished" podID="b84f178d-5e52-4634-938c-fd57298ffddf" containerID="cd23bc0ff98c6a71cfd95b681de99a3845c10fa55e4599f7ecbd0b419a519007" exitCode=0 Mar 11 09:57:10 crc kubenswrapper[4840]: I0311 09:57:10.070229 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g2z7k" event={"ID":"b84f178d-5e52-4634-938c-fd57298ffddf","Type":"ContainerDied","Data":"cd23bc0ff98c6a71cfd95b681de99a3845c10fa55e4599f7ecbd0b419a519007"} Mar 11 09:57:11 crc kubenswrapper[4840]: I0311 09:57:11.068110 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-94lmc" event={"ID":"e79584ef-0c16-4c8e-8419-125218a0508d","Type":"ContainerStarted","Data":"0773aa7f59a8429904ddd8c8eccbd5cd15eeff760b833e8eaa47c605eb7bf6cd"} Mar 11 09:57:11 crc kubenswrapper[4840]: I0311 09:57:11.069824 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g2z7k" event={"ID":"b84f178d-5e52-4634-938c-fd57298ffddf","Type":"ContainerStarted","Data":"8b45353424429797fc2d3d44245d362465631ad04ba98903ce80467894706b32"} Mar 11 09:57:12 crc kubenswrapper[4840]: I0311 09:57:12.077486 4840 generic.go:334] "Generic (PLEG): container finished" podID="e79584ef-0c16-4c8e-8419-125218a0508d" containerID="0773aa7f59a8429904ddd8c8eccbd5cd15eeff760b833e8eaa47c605eb7bf6cd" exitCode=0 Mar 11 09:57:12 crc kubenswrapper[4840]: I0311 09:57:12.077584 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-94lmc" event={"ID":"e79584ef-0c16-4c8e-8419-125218a0508d","Type":"ContainerDied","Data":"0773aa7f59a8429904ddd8c8eccbd5cd15eeff760b833e8eaa47c605eb7bf6cd"} Mar 11 09:57:12 crc kubenswrapper[4840]: I0311 
09:57:12.109180 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-g2z7k" podStartSLOduration=3.7140022139999997 podStartE2EDuration="6.109150861s" podCreationTimestamp="2026-03-11 09:57:06 +0000 UTC" firstStartedPulling="2026-03-11 09:57:08.036131223 +0000 UTC m=+3626.701801038" lastFinishedPulling="2026-03-11 09:57:10.43127987 +0000 UTC m=+3629.096949685" observedRunningTime="2026-03-11 09:57:11.104678722 +0000 UTC m=+3629.770348537" watchObservedRunningTime="2026-03-11 09:57:12.109150861 +0000 UTC m=+3630.774820706" Mar 11 09:57:13 crc kubenswrapper[4840]: I0311 09:57:13.098424 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-94lmc" event={"ID":"e79584ef-0c16-4c8e-8419-125218a0508d","Type":"ContainerStarted","Data":"c6152f3795da79f31f7386a28449ecc3cc8e64b951801272b9c3a79eeb68d483"} Mar 11 09:57:13 crc kubenswrapper[4840]: I0311 09:57:13.117614 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-94lmc" podStartSLOduration=2.710209624 podStartE2EDuration="5.117595101s" podCreationTimestamp="2026-03-11 09:57:08 +0000 UTC" firstStartedPulling="2026-03-11 09:57:10.060036339 +0000 UTC m=+3628.725706144" lastFinishedPulling="2026-03-11 09:57:12.467421806 +0000 UTC m=+3631.133091621" observedRunningTime="2026-03-11 09:57:13.116795071 +0000 UTC m=+3631.782464896" watchObservedRunningTime="2026-03-11 09:57:13.117595101 +0000 UTC m=+3631.783264916" Mar 11 09:57:16 crc kubenswrapper[4840]: I0311 09:57:16.105296 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-88ns5" Mar 11 09:57:16 crc kubenswrapper[4840]: I0311 09:57:16.105688 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-88ns5" Mar 11 09:57:16 crc kubenswrapper[4840]: I0311 09:57:16.149746 4840 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-88ns5" Mar 11 09:57:16 crc kubenswrapper[4840]: I0311 09:57:16.195717 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-88ns5" Mar 11 09:57:16 crc kubenswrapper[4840]: I0311 09:57:16.703209 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-g2z7k" Mar 11 09:57:16 crc kubenswrapper[4840]: I0311 09:57:16.703540 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-g2z7k" Mar 11 09:57:16 crc kubenswrapper[4840]: I0311 09:57:16.741621 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-g2z7k" Mar 11 09:57:16 crc kubenswrapper[4840]: I0311 09:57:16.953583 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-88ns5"] Mar 11 09:57:17 crc kubenswrapper[4840]: I0311 09:57:17.174625 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-g2z7k" Mar 11 09:57:18 crc kubenswrapper[4840]: I0311 09:57:18.133856 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-88ns5" podUID="34606e72-7fbd-447f-b7e8-811aded0abd4" containerName="registry-server" containerID="cri-o://e61b2035395bc6ded1f927f3e527d45388e1019f39130cfdc64ff93549ff127f" gracePeriod=2 Mar 11 09:57:19 crc kubenswrapper[4840]: I0311 09:57:19.079018 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-94lmc" Mar 11 09:57:19 crc kubenswrapper[4840]: I0311 09:57:19.079497 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-94lmc" Mar 11 
09:57:19 crc kubenswrapper[4840]: I0311 09:57:19.132808 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-94lmc" Mar 11 09:57:19 crc kubenswrapper[4840]: I0311 09:57:19.150067 4840 generic.go:334] "Generic (PLEG): container finished" podID="34606e72-7fbd-447f-b7e8-811aded0abd4" containerID="e61b2035395bc6ded1f927f3e527d45388e1019f39130cfdc64ff93549ff127f" exitCode=0 Mar 11 09:57:19 crc kubenswrapper[4840]: I0311 09:57:19.151082 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-88ns5" event={"ID":"34606e72-7fbd-447f-b7e8-811aded0abd4","Type":"ContainerDied","Data":"e61b2035395bc6ded1f927f3e527d45388e1019f39130cfdc64ff93549ff127f"} Mar 11 09:57:19 crc kubenswrapper[4840]: I0311 09:57:19.154719 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-g2z7k"] Mar 11 09:57:19 crc kubenswrapper[4840]: I0311 09:57:19.154887 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-g2z7k" podUID="b84f178d-5e52-4634-938c-fd57298ffddf" containerName="registry-server" containerID="cri-o://8b45353424429797fc2d3d44245d362465631ad04ba98903ce80467894706b32" gracePeriod=2 Mar 11 09:57:19 crc kubenswrapper[4840]: I0311 09:57:19.205364 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-94lmc" Mar 11 09:57:19 crc kubenswrapper[4840]: I0311 09:57:19.317344 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-88ns5" Mar 11 09:57:19 crc kubenswrapper[4840]: I0311 09:57:19.383208 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34606e72-7fbd-447f-b7e8-811aded0abd4-catalog-content\") pod \"34606e72-7fbd-447f-b7e8-811aded0abd4\" (UID: \"34606e72-7fbd-447f-b7e8-811aded0abd4\") " Mar 11 09:57:19 crc kubenswrapper[4840]: I0311 09:57:19.383619 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34606e72-7fbd-447f-b7e8-811aded0abd4-utilities\") pod \"34606e72-7fbd-447f-b7e8-811aded0abd4\" (UID: \"34606e72-7fbd-447f-b7e8-811aded0abd4\") " Mar 11 09:57:19 crc kubenswrapper[4840]: I0311 09:57:19.383655 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dg5r\" (UniqueName: \"kubernetes.io/projected/34606e72-7fbd-447f-b7e8-811aded0abd4-kube-api-access-6dg5r\") pod \"34606e72-7fbd-447f-b7e8-811aded0abd4\" (UID: \"34606e72-7fbd-447f-b7e8-811aded0abd4\") " Mar 11 09:57:19 crc kubenswrapper[4840]: I0311 09:57:19.385401 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34606e72-7fbd-447f-b7e8-811aded0abd4-utilities" (OuterVolumeSpecName: "utilities") pod "34606e72-7fbd-447f-b7e8-811aded0abd4" (UID: "34606e72-7fbd-447f-b7e8-811aded0abd4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:57:19 crc kubenswrapper[4840]: I0311 09:57:19.389392 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34606e72-7fbd-447f-b7e8-811aded0abd4-kube-api-access-6dg5r" (OuterVolumeSpecName: "kube-api-access-6dg5r") pod "34606e72-7fbd-447f-b7e8-811aded0abd4" (UID: "34606e72-7fbd-447f-b7e8-811aded0abd4"). InnerVolumeSpecName "kube-api-access-6dg5r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:57:19 crc kubenswrapper[4840]: I0311 09:57:19.442885 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34606e72-7fbd-447f-b7e8-811aded0abd4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "34606e72-7fbd-447f-b7e8-811aded0abd4" (UID: "34606e72-7fbd-447f-b7e8-811aded0abd4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:57:19 crc kubenswrapper[4840]: I0311 09:57:19.489881 4840 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34606e72-7fbd-447f-b7e8-811aded0abd4-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 11 09:57:19 crc kubenswrapper[4840]: I0311 09:57:19.489925 4840 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34606e72-7fbd-447f-b7e8-811aded0abd4-utilities\") on node \"crc\" DevicePath \"\"" Mar 11 09:57:19 crc kubenswrapper[4840]: I0311 09:57:19.489940 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6dg5r\" (UniqueName: \"kubernetes.io/projected/34606e72-7fbd-447f-b7e8-811aded0abd4-kube-api-access-6dg5r\") on node \"crc\" DevicePath \"\"" Mar 11 09:57:19 crc kubenswrapper[4840]: I0311 09:57:19.563901 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-g2z7k" Mar 11 09:57:19 crc kubenswrapper[4840]: I0311 09:57:19.692026 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xqs2\" (UniqueName: \"kubernetes.io/projected/b84f178d-5e52-4634-938c-fd57298ffddf-kube-api-access-9xqs2\") pod \"b84f178d-5e52-4634-938c-fd57298ffddf\" (UID: \"b84f178d-5e52-4634-938c-fd57298ffddf\") " Mar 11 09:57:19 crc kubenswrapper[4840]: I0311 09:57:19.692108 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b84f178d-5e52-4634-938c-fd57298ffddf-utilities\") pod \"b84f178d-5e52-4634-938c-fd57298ffddf\" (UID: \"b84f178d-5e52-4634-938c-fd57298ffddf\") " Mar 11 09:57:19 crc kubenswrapper[4840]: I0311 09:57:19.692206 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b84f178d-5e52-4634-938c-fd57298ffddf-catalog-content\") pod \"b84f178d-5e52-4634-938c-fd57298ffddf\" (UID: \"b84f178d-5e52-4634-938c-fd57298ffddf\") " Mar 11 09:57:19 crc kubenswrapper[4840]: I0311 09:57:19.692959 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b84f178d-5e52-4634-938c-fd57298ffddf-utilities" (OuterVolumeSpecName: "utilities") pod "b84f178d-5e52-4634-938c-fd57298ffddf" (UID: "b84f178d-5e52-4634-938c-fd57298ffddf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:57:19 crc kubenswrapper[4840]: I0311 09:57:19.694805 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b84f178d-5e52-4634-938c-fd57298ffddf-kube-api-access-9xqs2" (OuterVolumeSpecName: "kube-api-access-9xqs2") pod "b84f178d-5e52-4634-938c-fd57298ffddf" (UID: "b84f178d-5e52-4634-938c-fd57298ffddf"). InnerVolumeSpecName "kube-api-access-9xqs2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:57:19 crc kubenswrapper[4840]: I0311 09:57:19.794271 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xqs2\" (UniqueName: \"kubernetes.io/projected/b84f178d-5e52-4634-938c-fd57298ffddf-kube-api-access-9xqs2\") on node \"crc\" DevicePath \"\"" Mar 11 09:57:19 crc kubenswrapper[4840]: I0311 09:57:19.794299 4840 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b84f178d-5e52-4634-938c-fd57298ffddf-utilities\") on node \"crc\" DevicePath \"\"" Mar 11 09:57:20 crc kubenswrapper[4840]: I0311 09:57:20.160275 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-88ns5" event={"ID":"34606e72-7fbd-447f-b7e8-811aded0abd4","Type":"ContainerDied","Data":"49090bf2b17ea8b514d8b9315a281be89fb40c838c9f72b25d9f00b52c00fba2"} Mar 11 09:57:20 crc kubenswrapper[4840]: I0311 09:57:20.160320 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-88ns5" Mar 11 09:57:20 crc kubenswrapper[4840]: I0311 09:57:20.160373 4840 scope.go:117] "RemoveContainer" containerID="e61b2035395bc6ded1f927f3e527d45388e1019f39130cfdc64ff93549ff127f" Mar 11 09:57:20 crc kubenswrapper[4840]: I0311 09:57:20.167601 4840 generic.go:334] "Generic (PLEG): container finished" podID="b84f178d-5e52-4634-938c-fd57298ffddf" containerID="8b45353424429797fc2d3d44245d362465631ad04ba98903ce80467894706b32" exitCode=0 Mar 11 09:57:20 crc kubenswrapper[4840]: I0311 09:57:20.168116 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-g2z7k" Mar 11 09:57:20 crc kubenswrapper[4840]: I0311 09:57:20.168518 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g2z7k" event={"ID":"b84f178d-5e52-4634-938c-fd57298ffddf","Type":"ContainerDied","Data":"8b45353424429797fc2d3d44245d362465631ad04ba98903ce80467894706b32"} Mar 11 09:57:20 crc kubenswrapper[4840]: I0311 09:57:20.168585 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g2z7k" event={"ID":"b84f178d-5e52-4634-938c-fd57298ffddf","Type":"ContainerDied","Data":"3bb16b6f32634bbc94cd512770e88fe4bc7dc26d86fe0321c38db342b6664f2c"} Mar 11 09:57:20 crc kubenswrapper[4840]: I0311 09:57:20.184651 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-88ns5"] Mar 11 09:57:20 crc kubenswrapper[4840]: I0311 09:57:20.188915 4840 scope.go:117] "RemoveContainer" containerID="9b46af4e793f5b3e3010e83aa3f753ec682023269485a547c516e9ae3081a040" Mar 11 09:57:20 crc kubenswrapper[4840]: I0311 09:57:20.190431 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-88ns5"] Mar 11 09:57:20 crc kubenswrapper[4840]: I0311 09:57:20.210736 4840 scope.go:117] "RemoveContainer" containerID="e5b811b40fad1f7dc3f5fe917736e709f57c9e6693554bf9b12f9c7e13866d38" Mar 11 09:57:20 crc kubenswrapper[4840]: I0311 09:57:20.216865 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b84f178d-5e52-4634-938c-fd57298ffddf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b84f178d-5e52-4634-938c-fd57298ffddf" (UID: "b84f178d-5e52-4634-938c-fd57298ffddf"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:57:20 crc kubenswrapper[4840]: I0311 09:57:20.235406 4840 scope.go:117] "RemoveContainer" containerID="8b45353424429797fc2d3d44245d362465631ad04ba98903ce80467894706b32" Mar 11 09:57:20 crc kubenswrapper[4840]: I0311 09:57:20.257422 4840 scope.go:117] "RemoveContainer" containerID="cd23bc0ff98c6a71cfd95b681de99a3845c10fa55e4599f7ecbd0b419a519007" Mar 11 09:57:20 crc kubenswrapper[4840]: I0311 09:57:20.275032 4840 scope.go:117] "RemoveContainer" containerID="35d6ad2482f2c7c94a57b7aecee25f496122d1e828af81c3467d9b9c47417ae8" Mar 11 09:57:20 crc kubenswrapper[4840]: I0311 09:57:20.293439 4840 scope.go:117] "RemoveContainer" containerID="8b45353424429797fc2d3d44245d362465631ad04ba98903ce80467894706b32" Mar 11 09:57:20 crc kubenswrapper[4840]: E0311 09:57:20.293981 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b45353424429797fc2d3d44245d362465631ad04ba98903ce80467894706b32\": container with ID starting with 8b45353424429797fc2d3d44245d362465631ad04ba98903ce80467894706b32 not found: ID does not exist" containerID="8b45353424429797fc2d3d44245d362465631ad04ba98903ce80467894706b32" Mar 11 09:57:20 crc kubenswrapper[4840]: I0311 09:57:20.294025 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b45353424429797fc2d3d44245d362465631ad04ba98903ce80467894706b32"} err="failed to get container status \"8b45353424429797fc2d3d44245d362465631ad04ba98903ce80467894706b32\": rpc error: code = NotFound desc = could not find container \"8b45353424429797fc2d3d44245d362465631ad04ba98903ce80467894706b32\": container with ID starting with 8b45353424429797fc2d3d44245d362465631ad04ba98903ce80467894706b32 not found: ID does not exist" Mar 11 09:57:20 crc kubenswrapper[4840]: I0311 09:57:20.294063 4840 scope.go:117] "RemoveContainer" 
containerID="cd23bc0ff98c6a71cfd95b681de99a3845c10fa55e4599f7ecbd0b419a519007" Mar 11 09:57:20 crc kubenswrapper[4840]: E0311 09:57:20.294347 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd23bc0ff98c6a71cfd95b681de99a3845c10fa55e4599f7ecbd0b419a519007\": container with ID starting with cd23bc0ff98c6a71cfd95b681de99a3845c10fa55e4599f7ecbd0b419a519007 not found: ID does not exist" containerID="cd23bc0ff98c6a71cfd95b681de99a3845c10fa55e4599f7ecbd0b419a519007" Mar 11 09:57:20 crc kubenswrapper[4840]: I0311 09:57:20.294452 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd23bc0ff98c6a71cfd95b681de99a3845c10fa55e4599f7ecbd0b419a519007"} err="failed to get container status \"cd23bc0ff98c6a71cfd95b681de99a3845c10fa55e4599f7ecbd0b419a519007\": rpc error: code = NotFound desc = could not find container \"cd23bc0ff98c6a71cfd95b681de99a3845c10fa55e4599f7ecbd0b419a519007\": container with ID starting with cd23bc0ff98c6a71cfd95b681de99a3845c10fa55e4599f7ecbd0b419a519007 not found: ID does not exist" Mar 11 09:57:20 crc kubenswrapper[4840]: I0311 09:57:20.294488 4840 scope.go:117] "RemoveContainer" containerID="35d6ad2482f2c7c94a57b7aecee25f496122d1e828af81c3467d9b9c47417ae8" Mar 11 09:57:20 crc kubenswrapper[4840]: E0311 09:57:20.294763 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35d6ad2482f2c7c94a57b7aecee25f496122d1e828af81c3467d9b9c47417ae8\": container with ID starting with 35d6ad2482f2c7c94a57b7aecee25f496122d1e828af81c3467d9b9c47417ae8 not found: ID does not exist" containerID="35d6ad2482f2c7c94a57b7aecee25f496122d1e828af81c3467d9b9c47417ae8" Mar 11 09:57:20 crc kubenswrapper[4840]: I0311 09:57:20.294786 4840 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"35d6ad2482f2c7c94a57b7aecee25f496122d1e828af81c3467d9b9c47417ae8"} err="failed to get container status \"35d6ad2482f2c7c94a57b7aecee25f496122d1e828af81c3467d9b9c47417ae8\": rpc error: code = NotFound desc = could not find container \"35d6ad2482f2c7c94a57b7aecee25f496122d1e828af81c3467d9b9c47417ae8\": container with ID starting with 35d6ad2482f2c7c94a57b7aecee25f496122d1e828af81c3467d9b9c47417ae8 not found: ID does not exist" Mar 11 09:57:20 crc kubenswrapper[4840]: I0311 09:57:20.300932 4840 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b84f178d-5e52-4634-938c-fd57298ffddf-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 11 09:57:20 crc kubenswrapper[4840]: I0311 09:57:20.518510 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-g2z7k"] Mar 11 09:57:20 crc kubenswrapper[4840]: I0311 09:57:20.533534 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-g2z7k"] Mar 11 09:57:21 crc kubenswrapper[4840]: I0311 09:57:21.742444 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-94lmc"] Mar 11 09:57:21 crc kubenswrapper[4840]: I0311 09:57:21.743124 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-94lmc" podUID="e79584ef-0c16-4c8e-8419-125218a0508d" containerName="registry-server" containerID="cri-o://c6152f3795da79f31f7386a28449ecc3cc8e64b951801272b9c3a79eeb68d483" gracePeriod=2 Mar 11 09:57:22 crc kubenswrapper[4840]: I0311 09:57:22.076363 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34606e72-7fbd-447f-b7e8-811aded0abd4" path="/var/lib/kubelet/pods/34606e72-7fbd-447f-b7e8-811aded0abd4/volumes" Mar 11 09:57:22 crc kubenswrapper[4840]: I0311 09:57:22.077247 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="b84f178d-5e52-4634-938c-fd57298ffddf" path="/var/lib/kubelet/pods/b84f178d-5e52-4634-938c-fd57298ffddf/volumes" Mar 11 09:57:23 crc kubenswrapper[4840]: I0311 09:57:23.195506 4840 generic.go:334] "Generic (PLEG): container finished" podID="e79584ef-0c16-4c8e-8419-125218a0508d" containerID="c6152f3795da79f31f7386a28449ecc3cc8e64b951801272b9c3a79eeb68d483" exitCode=0 Mar 11 09:57:23 crc kubenswrapper[4840]: I0311 09:57:23.195559 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-94lmc" event={"ID":"e79584ef-0c16-4c8e-8419-125218a0508d","Type":"ContainerDied","Data":"c6152f3795da79f31f7386a28449ecc3cc8e64b951801272b9c3a79eeb68d483"} Mar 11 09:57:23 crc kubenswrapper[4840]: I0311 09:57:23.195590 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-94lmc" event={"ID":"e79584ef-0c16-4c8e-8419-125218a0508d","Type":"ContainerDied","Data":"77e60bb5f9d23060eac62f233cd76f8dafc521cde2c5fba924614ce5a17e699f"} Mar 11 09:57:23 crc kubenswrapper[4840]: I0311 09:57:23.195605 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="77e60bb5f9d23060eac62f233cd76f8dafc521cde2c5fba924614ce5a17e699f" Mar 11 09:57:23 crc kubenswrapper[4840]: I0311 09:57:23.232887 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-94lmc" Mar 11 09:57:23 crc kubenswrapper[4840]: I0311 09:57:23.347490 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e79584ef-0c16-4c8e-8419-125218a0508d-utilities\") pod \"e79584ef-0c16-4c8e-8419-125218a0508d\" (UID: \"e79584ef-0c16-4c8e-8419-125218a0508d\") " Mar 11 09:57:23 crc kubenswrapper[4840]: I0311 09:57:23.347654 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zf6ht\" (UniqueName: \"kubernetes.io/projected/e79584ef-0c16-4c8e-8419-125218a0508d-kube-api-access-zf6ht\") pod \"e79584ef-0c16-4c8e-8419-125218a0508d\" (UID: \"e79584ef-0c16-4c8e-8419-125218a0508d\") " Mar 11 09:57:23 crc kubenswrapper[4840]: I0311 09:57:23.347709 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e79584ef-0c16-4c8e-8419-125218a0508d-catalog-content\") pod \"e79584ef-0c16-4c8e-8419-125218a0508d\" (UID: \"e79584ef-0c16-4c8e-8419-125218a0508d\") " Mar 11 09:57:23 crc kubenswrapper[4840]: I0311 09:57:23.348915 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e79584ef-0c16-4c8e-8419-125218a0508d-utilities" (OuterVolumeSpecName: "utilities") pod "e79584ef-0c16-4c8e-8419-125218a0508d" (UID: "e79584ef-0c16-4c8e-8419-125218a0508d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:57:23 crc kubenswrapper[4840]: I0311 09:57:23.353630 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e79584ef-0c16-4c8e-8419-125218a0508d-kube-api-access-zf6ht" (OuterVolumeSpecName: "kube-api-access-zf6ht") pod "e79584ef-0c16-4c8e-8419-125218a0508d" (UID: "e79584ef-0c16-4c8e-8419-125218a0508d"). InnerVolumeSpecName "kube-api-access-zf6ht". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:57:23 crc kubenswrapper[4840]: I0311 09:57:23.449367 4840 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e79584ef-0c16-4c8e-8419-125218a0508d-utilities\") on node \"crc\" DevicePath \"\"" Mar 11 09:57:23 crc kubenswrapper[4840]: I0311 09:57:23.449401 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zf6ht\" (UniqueName: \"kubernetes.io/projected/e79584ef-0c16-4c8e-8419-125218a0508d-kube-api-access-zf6ht\") on node \"crc\" DevicePath \"\"" Mar 11 09:57:23 crc kubenswrapper[4840]: I0311 09:57:23.482911 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e79584ef-0c16-4c8e-8419-125218a0508d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e79584ef-0c16-4c8e-8419-125218a0508d" (UID: "e79584ef-0c16-4c8e-8419-125218a0508d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 09:57:23 crc kubenswrapper[4840]: I0311 09:57:23.550602 4840 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e79584ef-0c16-4c8e-8419-125218a0508d-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 11 09:57:24 crc kubenswrapper[4840]: I0311 09:57:24.202622 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-94lmc" Mar 11 09:57:24 crc kubenswrapper[4840]: I0311 09:57:24.228646 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-94lmc"] Mar 11 09:57:24 crc kubenswrapper[4840]: I0311 09:57:24.235691 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-94lmc"] Mar 11 09:57:26 crc kubenswrapper[4840]: I0311 09:57:26.074836 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e79584ef-0c16-4c8e-8419-125218a0508d" path="/var/lib/kubelet/pods/e79584ef-0c16-4c8e-8419-125218a0508d/volumes" Mar 11 09:57:27 crc kubenswrapper[4840]: I0311 09:57:27.445405 4840 patch_prober.go:28] interesting pod/machine-config-daemon-brtht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 11 09:57:27 crc kubenswrapper[4840]: I0311 09:57:27.445475 4840 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 11 09:57:27 crc kubenswrapper[4840]: I0311 09:57:27.445513 4840 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-brtht" Mar 11 09:57:27 crc kubenswrapper[4840]: I0311 09:57:27.446057 4840 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4efd0bd0ab9fb4f04fed07de60d73ee6a34850ff14fa42aed3595aad9fe76e03"} pod="openshift-machine-config-operator/machine-config-daemon-brtht" containerMessage="Container 
machine-config-daemon failed liveness probe, will be restarted" Mar 11 09:57:27 crc kubenswrapper[4840]: I0311 09:57:27.446134 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" containerName="machine-config-daemon" containerID="cri-o://4efd0bd0ab9fb4f04fed07de60d73ee6a34850ff14fa42aed3595aad9fe76e03" gracePeriod=600 Mar 11 09:57:27 crc kubenswrapper[4840]: E0311 09:57:27.568420 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 09:57:28 crc kubenswrapper[4840]: I0311 09:57:28.233433 4840 generic.go:334] "Generic (PLEG): container finished" podID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" containerID="4efd0bd0ab9fb4f04fed07de60d73ee6a34850ff14fa42aed3595aad9fe76e03" exitCode=0 Mar 11 09:57:28 crc kubenswrapper[4840]: I0311 09:57:28.233515 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-brtht" event={"ID":"8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d","Type":"ContainerDied","Data":"4efd0bd0ab9fb4f04fed07de60d73ee6a34850ff14fa42aed3595aad9fe76e03"} Mar 11 09:57:28 crc kubenswrapper[4840]: I0311 09:57:28.233806 4840 scope.go:117] "RemoveContainer" containerID="8bf282d749dcad120b80e750e64632fb5b1fd3d2b7112d2dccabeed39f466957" Mar 11 09:57:28 crc kubenswrapper[4840]: I0311 09:57:28.234447 4840 scope.go:117] "RemoveContainer" containerID="4efd0bd0ab9fb4f04fed07de60d73ee6a34850ff14fa42aed3595aad9fe76e03" Mar 11 09:57:28 crc kubenswrapper[4840]: E0311 09:57:28.234764 4840 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 09:57:40 crc kubenswrapper[4840]: I0311 09:57:40.060243 4840 scope.go:117] "RemoveContainer" containerID="4efd0bd0ab9fb4f04fed07de60d73ee6a34850ff14fa42aed3595aad9fe76e03" Mar 11 09:57:40 crc kubenswrapper[4840]: E0311 09:57:40.061158 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 09:57:54 crc kubenswrapper[4840]: I0311 09:57:54.060262 4840 scope.go:117] "RemoveContainer" containerID="4efd0bd0ab9fb4f04fed07de60d73ee6a34850ff14fa42aed3595aad9fe76e03" Mar 11 09:57:54 crc kubenswrapper[4840]: E0311 09:57:54.060931 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 09:58:00 crc kubenswrapper[4840]: I0311 09:58:00.975793 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29553718-kzsk9"] Mar 11 09:58:00 crc kubenswrapper[4840]: E0311 09:58:00.976552 4840 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="b84f178d-5e52-4634-938c-fd57298ffddf" containerName="registry-server" Mar 11 09:58:00 crc kubenswrapper[4840]: I0311 09:58:00.976564 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="b84f178d-5e52-4634-938c-fd57298ffddf" containerName="registry-server" Mar 11 09:58:00 crc kubenswrapper[4840]: E0311 09:58:00.976576 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e79584ef-0c16-4c8e-8419-125218a0508d" containerName="extract-content" Mar 11 09:58:00 crc kubenswrapper[4840]: I0311 09:58:00.976582 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="e79584ef-0c16-4c8e-8419-125218a0508d" containerName="extract-content" Mar 11 09:58:00 crc kubenswrapper[4840]: E0311 09:58:00.976593 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e79584ef-0c16-4c8e-8419-125218a0508d" containerName="registry-server" Mar 11 09:58:00 crc kubenswrapper[4840]: I0311 09:58:00.976599 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="e79584ef-0c16-4c8e-8419-125218a0508d" containerName="registry-server" Mar 11 09:58:00 crc kubenswrapper[4840]: E0311 09:58:00.976609 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34606e72-7fbd-447f-b7e8-811aded0abd4" containerName="extract-content" Mar 11 09:58:00 crc kubenswrapper[4840]: I0311 09:58:00.976615 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="34606e72-7fbd-447f-b7e8-811aded0abd4" containerName="extract-content" Mar 11 09:58:00 crc kubenswrapper[4840]: E0311 09:58:00.976625 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b84f178d-5e52-4634-938c-fd57298ffddf" containerName="extract-content" Mar 11 09:58:00 crc kubenswrapper[4840]: I0311 09:58:00.976631 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="b84f178d-5e52-4634-938c-fd57298ffddf" containerName="extract-content" Mar 11 09:58:00 crc kubenswrapper[4840]: E0311 09:58:00.976645 4840 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="34606e72-7fbd-447f-b7e8-811aded0abd4" containerName="registry-server" Mar 11 09:58:00 crc kubenswrapper[4840]: I0311 09:58:00.976651 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="34606e72-7fbd-447f-b7e8-811aded0abd4" containerName="registry-server" Mar 11 09:58:00 crc kubenswrapper[4840]: E0311 09:58:00.976665 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34606e72-7fbd-447f-b7e8-811aded0abd4" containerName="extract-utilities" Mar 11 09:58:00 crc kubenswrapper[4840]: I0311 09:58:00.976670 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="34606e72-7fbd-447f-b7e8-811aded0abd4" containerName="extract-utilities" Mar 11 09:58:00 crc kubenswrapper[4840]: E0311 09:58:00.976678 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e79584ef-0c16-4c8e-8419-125218a0508d" containerName="extract-utilities" Mar 11 09:58:00 crc kubenswrapper[4840]: I0311 09:58:00.976684 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="e79584ef-0c16-4c8e-8419-125218a0508d" containerName="extract-utilities" Mar 11 09:58:00 crc kubenswrapper[4840]: E0311 09:58:00.976700 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b84f178d-5e52-4634-938c-fd57298ffddf" containerName="extract-utilities" Mar 11 09:58:00 crc kubenswrapper[4840]: I0311 09:58:00.976706 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="b84f178d-5e52-4634-938c-fd57298ffddf" containerName="extract-utilities" Mar 11 09:58:00 crc kubenswrapper[4840]: I0311 09:58:00.976836 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="34606e72-7fbd-447f-b7e8-811aded0abd4" containerName="registry-server" Mar 11 09:58:00 crc kubenswrapper[4840]: I0311 09:58:00.976850 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="e79584ef-0c16-4c8e-8419-125218a0508d" containerName="registry-server" Mar 11 09:58:00 crc kubenswrapper[4840]: I0311 09:58:00.976858 4840 
memory_manager.go:354] "RemoveStaleState removing state" podUID="b84f178d-5e52-4634-938c-fd57298ffddf" containerName="registry-server" Mar 11 09:58:00 crc kubenswrapper[4840]: I0311 09:58:00.977295 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553718-kzsk9" Mar 11 09:58:00 crc kubenswrapper[4840]: I0311 09:58:00.980108 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 11 09:58:00 crc kubenswrapper[4840]: I0311 09:58:00.980375 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-q6lwc" Mar 11 09:58:00 crc kubenswrapper[4840]: I0311 09:58:00.981798 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 11 09:58:00 crc kubenswrapper[4840]: I0311 09:58:00.984138 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553718-kzsk9"] Mar 11 09:58:01 crc kubenswrapper[4840]: I0311 09:58:01.073755 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4vzr\" (UniqueName: \"kubernetes.io/projected/13edae4a-b5de-44f5-9006-74864ab07ffc-kube-api-access-d4vzr\") pod \"auto-csr-approver-29553718-kzsk9\" (UID: \"13edae4a-b5de-44f5-9006-74864ab07ffc\") " pod="openshift-infra/auto-csr-approver-29553718-kzsk9" Mar 11 09:58:01 crc kubenswrapper[4840]: I0311 09:58:01.175142 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4vzr\" (UniqueName: \"kubernetes.io/projected/13edae4a-b5de-44f5-9006-74864ab07ffc-kube-api-access-d4vzr\") pod \"auto-csr-approver-29553718-kzsk9\" (UID: \"13edae4a-b5de-44f5-9006-74864ab07ffc\") " pod="openshift-infra/auto-csr-approver-29553718-kzsk9" Mar 11 09:58:01 crc kubenswrapper[4840]: I0311 09:58:01.196279 4840 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-d4vzr\" (UniqueName: \"kubernetes.io/projected/13edae4a-b5de-44f5-9006-74864ab07ffc-kube-api-access-d4vzr\") pod \"auto-csr-approver-29553718-kzsk9\" (UID: \"13edae4a-b5de-44f5-9006-74864ab07ffc\") " pod="openshift-infra/auto-csr-approver-29553718-kzsk9" Mar 11 09:58:01 crc kubenswrapper[4840]: I0311 09:58:01.295742 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553718-kzsk9" Mar 11 09:58:01 crc kubenswrapper[4840]: I0311 09:58:01.697830 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553718-kzsk9"] Mar 11 09:58:02 crc kubenswrapper[4840]: I0311 09:58:02.475197 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553718-kzsk9" event={"ID":"13edae4a-b5de-44f5-9006-74864ab07ffc","Type":"ContainerStarted","Data":"7545acd6f2c71556290e02a36f6e418ec28314d39864cd5524a67c80b606b0ca"} Mar 11 09:58:03 crc kubenswrapper[4840]: I0311 09:58:03.486874 4840 generic.go:334] "Generic (PLEG): container finished" podID="13edae4a-b5de-44f5-9006-74864ab07ffc" containerID="690355669a789fdc82640a462fc791f10e35fe384577216fb3176547ebcaf9fe" exitCode=0 Mar 11 09:58:03 crc kubenswrapper[4840]: I0311 09:58:03.486959 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553718-kzsk9" event={"ID":"13edae4a-b5de-44f5-9006-74864ab07ffc","Type":"ContainerDied","Data":"690355669a789fdc82640a462fc791f10e35fe384577216fb3176547ebcaf9fe"} Mar 11 09:58:04 crc kubenswrapper[4840]: I0311 09:58:04.766174 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553718-kzsk9" Mar 11 09:58:04 crc kubenswrapper[4840]: I0311 09:58:04.841853 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4vzr\" (UniqueName: \"kubernetes.io/projected/13edae4a-b5de-44f5-9006-74864ab07ffc-kube-api-access-d4vzr\") pod \"13edae4a-b5de-44f5-9006-74864ab07ffc\" (UID: \"13edae4a-b5de-44f5-9006-74864ab07ffc\") " Mar 11 09:58:04 crc kubenswrapper[4840]: I0311 09:58:04.848630 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13edae4a-b5de-44f5-9006-74864ab07ffc-kube-api-access-d4vzr" (OuterVolumeSpecName: "kube-api-access-d4vzr") pod "13edae4a-b5de-44f5-9006-74864ab07ffc" (UID: "13edae4a-b5de-44f5-9006-74864ab07ffc"). InnerVolumeSpecName "kube-api-access-d4vzr". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 09:58:04 crc kubenswrapper[4840]: I0311 09:58:04.944248 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4vzr\" (UniqueName: \"kubernetes.io/projected/13edae4a-b5de-44f5-9006-74864ab07ffc-kube-api-access-d4vzr\") on node \"crc\" DevicePath \"\"" Mar 11 09:58:05 crc kubenswrapper[4840]: I0311 09:58:05.514507 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553718-kzsk9" event={"ID":"13edae4a-b5de-44f5-9006-74864ab07ffc","Type":"ContainerDied","Data":"7545acd6f2c71556290e02a36f6e418ec28314d39864cd5524a67c80b606b0ca"} Mar 11 09:58:05 crc kubenswrapper[4840]: I0311 09:58:05.514576 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7545acd6f2c71556290e02a36f6e418ec28314d39864cd5524a67c80b606b0ca" Mar 11 09:58:05 crc kubenswrapper[4840]: I0311 09:58:05.514526 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553718-kzsk9" Mar 11 09:58:05 crc kubenswrapper[4840]: I0311 09:58:05.832183 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29553712-t888v"] Mar 11 09:58:05 crc kubenswrapper[4840]: I0311 09:58:05.837037 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29553712-t888v"] Mar 11 09:58:06 crc kubenswrapper[4840]: I0311 09:58:06.069034 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc24323f-57c7-4d5c-b436-c3e6cfdcaa87" path="/var/lib/kubelet/pods/cc24323f-57c7-4d5c-b436-c3e6cfdcaa87/volumes" Mar 11 09:58:08 crc kubenswrapper[4840]: I0311 09:58:08.062313 4840 scope.go:117] "RemoveContainer" containerID="4efd0bd0ab9fb4f04fed07de60d73ee6a34850ff14fa42aed3595aad9fe76e03" Mar 11 09:58:08 crc kubenswrapper[4840]: E0311 09:58:08.062887 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 09:58:10 crc kubenswrapper[4840]: I0311 09:58:10.444231 4840 scope.go:117] "RemoveContainer" containerID="86737aeb9bee26f3c4e4ab889de92ac99272b12aede3b8f74d359bc7b5edf625" Mar 11 09:58:21 crc kubenswrapper[4840]: I0311 09:58:21.061679 4840 scope.go:117] "RemoveContainer" containerID="4efd0bd0ab9fb4f04fed07de60d73ee6a34850ff14fa42aed3595aad9fe76e03" Mar 11 09:58:21 crc kubenswrapper[4840]: E0311 09:58:21.063035 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 09:58:35 crc kubenswrapper[4840]: I0311 09:58:35.060633 4840 scope.go:117] "RemoveContainer" containerID="4efd0bd0ab9fb4f04fed07de60d73ee6a34850ff14fa42aed3595aad9fe76e03" Mar 11 09:58:35 crc kubenswrapper[4840]: E0311 09:58:35.061488 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 09:58:49 crc kubenswrapper[4840]: I0311 09:58:49.060740 4840 scope.go:117] "RemoveContainer" containerID="4efd0bd0ab9fb4f04fed07de60d73ee6a34850ff14fa42aed3595aad9fe76e03" Mar 11 09:58:49 crc kubenswrapper[4840]: E0311 09:58:49.061585 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 09:59:00 crc kubenswrapper[4840]: I0311 09:59:00.060720 4840 scope.go:117] "RemoveContainer" containerID="4efd0bd0ab9fb4f04fed07de60d73ee6a34850ff14fa42aed3595aad9fe76e03" Mar 11 09:59:00 crc kubenswrapper[4840]: E0311 09:59:00.061596 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 09:59:11 crc kubenswrapper[4840]: I0311 09:59:11.060433 4840 scope.go:117] "RemoveContainer" containerID="4efd0bd0ab9fb4f04fed07de60d73ee6a34850ff14fa42aed3595aad9fe76e03" Mar 11 09:59:11 crc kubenswrapper[4840]: E0311 09:59:11.061369 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 09:59:24 crc kubenswrapper[4840]: I0311 09:59:24.061923 4840 scope.go:117] "RemoveContainer" containerID="4efd0bd0ab9fb4f04fed07de60d73ee6a34850ff14fa42aed3595aad9fe76e03" Mar 11 09:59:24 crc kubenswrapper[4840]: E0311 09:59:24.063042 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 09:59:39 crc kubenswrapper[4840]: I0311 09:59:39.059970 4840 scope.go:117] "RemoveContainer" containerID="4efd0bd0ab9fb4f04fed07de60d73ee6a34850ff14fa42aed3595aad9fe76e03" Mar 11 09:59:39 crc kubenswrapper[4840]: E0311 09:59:39.060677 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 09:59:51 crc kubenswrapper[4840]: I0311 09:59:51.060706 4840 scope.go:117] "RemoveContainer" containerID="4efd0bd0ab9fb4f04fed07de60d73ee6a34850ff14fa42aed3595aad9fe76e03" Mar 11 09:59:51 crc kubenswrapper[4840]: E0311 09:59:51.061586 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 10:00:00 crc kubenswrapper[4840]: I0311 10:00:00.325632 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29553720-knnwh"] Mar 11 10:00:00 crc kubenswrapper[4840]: E0311 10:00:00.326627 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13edae4a-b5de-44f5-9006-74864ab07ffc" containerName="oc" Mar 11 10:00:00 crc kubenswrapper[4840]: I0311 10:00:00.326642 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="13edae4a-b5de-44f5-9006-74864ab07ffc" containerName="oc" Mar 11 10:00:00 crc kubenswrapper[4840]: I0311 10:00:00.326811 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="13edae4a-b5de-44f5-9006-74864ab07ffc" containerName="oc" Mar 11 10:00:00 crc kubenswrapper[4840]: I0311 10:00:00.327315 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553720-knnwh" Mar 11 10:00:00 crc kubenswrapper[4840]: I0311 10:00:00.329415 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-q6lwc" Mar 11 10:00:00 crc kubenswrapper[4840]: I0311 10:00:00.330663 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 11 10:00:00 crc kubenswrapper[4840]: I0311 10:00:00.330708 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 11 10:00:00 crc kubenswrapper[4840]: I0311 10:00:00.334267 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29553720-jgjh5"] Mar 11 10:00:00 crc kubenswrapper[4840]: I0311 10:00:00.335807 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29553720-jgjh5" Mar 11 10:00:00 crc kubenswrapper[4840]: I0311 10:00:00.337822 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 11 10:00:00 crc kubenswrapper[4840]: I0311 10:00:00.338894 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 11 10:00:00 crc kubenswrapper[4840]: I0311 10:00:00.343156 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29553720-jgjh5"] Mar 11 10:00:00 crc kubenswrapper[4840]: I0311 10:00:00.354884 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553720-knnwh"] Mar 11 10:00:00 crc kubenswrapper[4840]: I0311 10:00:00.424272 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjlnq\" (UniqueName: 
\"kubernetes.io/projected/2e41004e-87d6-4023-9792-55b2930786ba-kube-api-access-fjlnq\") pod \"collect-profiles-29553720-jgjh5\" (UID: \"2e41004e-87d6-4023-9792-55b2930786ba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553720-jgjh5" Mar 11 10:00:00 crc kubenswrapper[4840]: I0311 10:00:00.425223 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2e41004e-87d6-4023-9792-55b2930786ba-config-volume\") pod \"collect-profiles-29553720-jgjh5\" (UID: \"2e41004e-87d6-4023-9792-55b2930786ba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553720-jgjh5" Mar 11 10:00:00 crc kubenswrapper[4840]: I0311 10:00:00.425423 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krvx4\" (UniqueName: \"kubernetes.io/projected/b693f7cf-8e48-48dd-af7e-156913aad645-kube-api-access-krvx4\") pod \"auto-csr-approver-29553720-knnwh\" (UID: \"b693f7cf-8e48-48dd-af7e-156913aad645\") " pod="openshift-infra/auto-csr-approver-29553720-knnwh" Mar 11 10:00:00 crc kubenswrapper[4840]: I0311 10:00:00.425613 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2e41004e-87d6-4023-9792-55b2930786ba-secret-volume\") pod \"collect-profiles-29553720-jgjh5\" (UID: \"2e41004e-87d6-4023-9792-55b2930786ba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553720-jgjh5" Mar 11 10:00:00 crc kubenswrapper[4840]: I0311 10:00:00.527361 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2e41004e-87d6-4023-9792-55b2930786ba-config-volume\") pod \"collect-profiles-29553720-jgjh5\" (UID: \"2e41004e-87d6-4023-9792-55b2930786ba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553720-jgjh5" Mar 11 10:00:00 
crc kubenswrapper[4840]: I0311 10:00:00.528547 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krvx4\" (UniqueName: \"kubernetes.io/projected/b693f7cf-8e48-48dd-af7e-156913aad645-kube-api-access-krvx4\") pod \"auto-csr-approver-29553720-knnwh\" (UID: \"b693f7cf-8e48-48dd-af7e-156913aad645\") " pod="openshift-infra/auto-csr-approver-29553720-knnwh" Mar 11 10:00:00 crc kubenswrapper[4840]: I0311 10:00:00.528618 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2e41004e-87d6-4023-9792-55b2930786ba-secret-volume\") pod \"collect-profiles-29553720-jgjh5\" (UID: \"2e41004e-87d6-4023-9792-55b2930786ba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553720-jgjh5" Mar 11 10:00:00 crc kubenswrapper[4840]: I0311 10:00:00.528678 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjlnq\" (UniqueName: \"kubernetes.io/projected/2e41004e-87d6-4023-9792-55b2930786ba-kube-api-access-fjlnq\") pod \"collect-profiles-29553720-jgjh5\" (UID: \"2e41004e-87d6-4023-9792-55b2930786ba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553720-jgjh5" Mar 11 10:00:00 crc kubenswrapper[4840]: I0311 10:00:00.528431 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2e41004e-87d6-4023-9792-55b2930786ba-config-volume\") pod \"collect-profiles-29553720-jgjh5\" (UID: \"2e41004e-87d6-4023-9792-55b2930786ba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553720-jgjh5" Mar 11 10:00:00 crc kubenswrapper[4840]: I0311 10:00:00.539751 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2e41004e-87d6-4023-9792-55b2930786ba-secret-volume\") pod \"collect-profiles-29553720-jgjh5\" (UID: \"2e41004e-87d6-4023-9792-55b2930786ba\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29553720-jgjh5" Mar 11 10:00:00 crc kubenswrapper[4840]: I0311 10:00:00.556300 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjlnq\" (UniqueName: \"kubernetes.io/projected/2e41004e-87d6-4023-9792-55b2930786ba-kube-api-access-fjlnq\") pod \"collect-profiles-29553720-jgjh5\" (UID: \"2e41004e-87d6-4023-9792-55b2930786ba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553720-jgjh5" Mar 11 10:00:00 crc kubenswrapper[4840]: I0311 10:00:00.558685 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krvx4\" (UniqueName: \"kubernetes.io/projected/b693f7cf-8e48-48dd-af7e-156913aad645-kube-api-access-krvx4\") pod \"auto-csr-approver-29553720-knnwh\" (UID: \"b693f7cf-8e48-48dd-af7e-156913aad645\") " pod="openshift-infra/auto-csr-approver-29553720-knnwh" Mar 11 10:00:00 crc kubenswrapper[4840]: I0311 10:00:00.650687 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553720-knnwh" Mar 11 10:00:00 crc kubenswrapper[4840]: I0311 10:00:00.678568 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29553720-jgjh5" Mar 11 10:00:00 crc kubenswrapper[4840]: I0311 10:00:00.879570 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553720-knnwh"] Mar 11 10:00:01 crc kubenswrapper[4840]: I0311 10:00:01.142813 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29553720-jgjh5"] Mar 11 10:00:01 crc kubenswrapper[4840]: W0311 10:00:01.144784 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2e41004e_87d6_4023_9792_55b2930786ba.slice/crio-47439621beb45bffb01614a5fd99762670184c977bc6f7f30204b0b880bab3a9 WatchSource:0}: Error finding container 47439621beb45bffb01614a5fd99762670184c977bc6f7f30204b0b880bab3a9: Status 404 returned error can't find the container with id 47439621beb45bffb01614a5fd99762670184c977bc6f7f30204b0b880bab3a9 Mar 11 10:00:01 crc kubenswrapper[4840]: I0311 10:00:01.834249 4840 generic.go:334] "Generic (PLEG): container finished" podID="2e41004e-87d6-4023-9792-55b2930786ba" containerID="73119154aca4d25902e267de2bf6322ac8fb45f466719e06567b17b9524e87e5" exitCode=0 Mar 11 10:00:01 crc kubenswrapper[4840]: I0311 10:00:01.834335 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29553720-jgjh5" event={"ID":"2e41004e-87d6-4023-9792-55b2930786ba","Type":"ContainerDied","Data":"73119154aca4d25902e267de2bf6322ac8fb45f466719e06567b17b9524e87e5"} Mar 11 10:00:01 crc kubenswrapper[4840]: I0311 10:00:01.834369 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29553720-jgjh5" event={"ID":"2e41004e-87d6-4023-9792-55b2930786ba","Type":"ContainerStarted","Data":"47439621beb45bffb01614a5fd99762670184c977bc6f7f30204b0b880bab3a9"} Mar 11 10:00:01 crc kubenswrapper[4840]: I0311 
10:00:01.835749 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553720-knnwh" event={"ID":"b693f7cf-8e48-48dd-af7e-156913aad645","Type":"ContainerStarted","Data":"80e445129c199e3cec44b93c605d3027257fb808795c10ff26043c7ebd209673"} Mar 11 10:00:02 crc kubenswrapper[4840]: I0311 10:00:02.066210 4840 scope.go:117] "RemoveContainer" containerID="4efd0bd0ab9fb4f04fed07de60d73ee6a34850ff14fa42aed3595aad9fe76e03" Mar 11 10:00:02 crc kubenswrapper[4840]: E0311 10:00:02.066856 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 10:00:03 crc kubenswrapper[4840]: I0311 10:00:03.125347 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29553720-jgjh5" Mar 11 10:00:03 crc kubenswrapper[4840]: I0311 10:00:03.200921 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2e41004e-87d6-4023-9792-55b2930786ba-config-volume\") pod \"2e41004e-87d6-4023-9792-55b2930786ba\" (UID: \"2e41004e-87d6-4023-9792-55b2930786ba\") " Mar 11 10:00:03 crc kubenswrapper[4840]: I0311 10:00:03.201019 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2e41004e-87d6-4023-9792-55b2930786ba-secret-volume\") pod \"2e41004e-87d6-4023-9792-55b2930786ba\" (UID: \"2e41004e-87d6-4023-9792-55b2930786ba\") " Mar 11 10:00:03 crc kubenswrapper[4840]: I0311 10:00:03.201123 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fjlnq\" (UniqueName: \"kubernetes.io/projected/2e41004e-87d6-4023-9792-55b2930786ba-kube-api-access-fjlnq\") pod \"2e41004e-87d6-4023-9792-55b2930786ba\" (UID: \"2e41004e-87d6-4023-9792-55b2930786ba\") " Mar 11 10:00:03 crc kubenswrapper[4840]: I0311 10:00:03.201897 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e41004e-87d6-4023-9792-55b2930786ba-config-volume" (OuterVolumeSpecName: "config-volume") pod "2e41004e-87d6-4023-9792-55b2930786ba" (UID: "2e41004e-87d6-4023-9792-55b2930786ba"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 10:00:03 crc kubenswrapper[4840]: I0311 10:00:03.208280 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e41004e-87d6-4023-9792-55b2930786ba-kube-api-access-fjlnq" (OuterVolumeSpecName: "kube-api-access-fjlnq") pod "2e41004e-87d6-4023-9792-55b2930786ba" (UID: "2e41004e-87d6-4023-9792-55b2930786ba"). 
InnerVolumeSpecName "kube-api-access-fjlnq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:00:03 crc kubenswrapper[4840]: I0311 10:00:03.208404 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e41004e-87d6-4023-9792-55b2930786ba-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "2e41004e-87d6-4023-9792-55b2930786ba" (UID: "2e41004e-87d6-4023-9792-55b2930786ba"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 10:00:03 crc kubenswrapper[4840]: I0311 10:00:03.303022 4840 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2e41004e-87d6-4023-9792-55b2930786ba-config-volume\") on node \"crc\" DevicePath \"\"" Mar 11 10:00:03 crc kubenswrapper[4840]: I0311 10:00:03.303062 4840 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2e41004e-87d6-4023-9792-55b2930786ba-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 11 10:00:03 crc kubenswrapper[4840]: I0311 10:00:03.303077 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fjlnq\" (UniqueName: \"kubernetes.io/projected/2e41004e-87d6-4023-9792-55b2930786ba-kube-api-access-fjlnq\") on node \"crc\" DevicePath \"\"" Mar 11 10:00:03 crc kubenswrapper[4840]: I0311 10:00:03.854442 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29553720-jgjh5" event={"ID":"2e41004e-87d6-4023-9792-55b2930786ba","Type":"ContainerDied","Data":"47439621beb45bffb01614a5fd99762670184c977bc6f7f30204b0b880bab3a9"} Mar 11 10:00:03 crc kubenswrapper[4840]: I0311 10:00:03.855145 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="47439621beb45bffb01614a5fd99762670184c977bc6f7f30204b0b880bab3a9" Mar 11 10:00:03 crc kubenswrapper[4840]: I0311 10:00:03.855231 4840 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29553720-jgjh5" Mar 11 10:00:04 crc kubenswrapper[4840]: I0311 10:00:04.195729 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29553675-5pxkp"] Mar 11 10:00:04 crc kubenswrapper[4840]: I0311 10:00:04.200816 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29553675-5pxkp"] Mar 11 10:00:06 crc kubenswrapper[4840]: I0311 10:00:06.070725 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb4c8bbf-d375-4db3-8a45-e14a7c2400e2" path="/var/lib/kubelet/pods/fb4c8bbf-d375-4db3-8a45-e14a7c2400e2/volumes" Mar 11 10:00:10 crc kubenswrapper[4840]: I0311 10:00:10.564677 4840 scope.go:117] "RemoveContainer" containerID="6c0b1be5126e8d0ee313de963ae2c8496da65a2d4dd020b53c2bdafbb2244d71" Mar 11 10:00:14 crc kubenswrapper[4840]: I0311 10:00:14.061065 4840 scope.go:117] "RemoveContainer" containerID="4efd0bd0ab9fb4f04fed07de60d73ee6a34850ff14fa42aed3595aad9fe76e03" Mar 11 10:00:14 crc kubenswrapper[4840]: E0311 10:00:14.061705 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 10:00:16 crc kubenswrapper[4840]: I0311 10:00:15.945934 4840 generic.go:334] "Generic (PLEG): container finished" podID="b693f7cf-8e48-48dd-af7e-156913aad645" containerID="9a7fc39e25b43c64eb94d4fa93e2e17da8a3e73779b426baf77065d2e1b1837c" exitCode=0 Mar 11 10:00:16 crc kubenswrapper[4840]: I0311 10:00:15.946117 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-infra/auto-csr-approver-29553720-knnwh" event={"ID":"b693f7cf-8e48-48dd-af7e-156913aad645","Type":"ContainerDied","Data":"9a7fc39e25b43c64eb94d4fa93e2e17da8a3e73779b426baf77065d2e1b1837c"} Mar 11 10:00:18 crc kubenswrapper[4840]: I0311 10:00:18.074381 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553720-knnwh" event={"ID":"b693f7cf-8e48-48dd-af7e-156913aad645","Type":"ContainerDied","Data":"80e445129c199e3cec44b93c605d3027257fb808795c10ff26043c7ebd209673"} Mar 11 10:00:18 crc kubenswrapper[4840]: I0311 10:00:18.075038 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="80e445129c199e3cec44b93c605d3027257fb808795c10ff26043c7ebd209673" Mar 11 10:00:18 crc kubenswrapper[4840]: I0311 10:00:18.120746 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553720-knnwh" Mar 11 10:00:18 crc kubenswrapper[4840]: I0311 10:00:18.271755 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-krvx4\" (UniqueName: \"kubernetes.io/projected/b693f7cf-8e48-48dd-af7e-156913aad645-kube-api-access-krvx4\") pod \"b693f7cf-8e48-48dd-af7e-156913aad645\" (UID: \"b693f7cf-8e48-48dd-af7e-156913aad645\") " Mar 11 10:00:18 crc kubenswrapper[4840]: I0311 10:00:18.278326 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b693f7cf-8e48-48dd-af7e-156913aad645-kube-api-access-krvx4" (OuterVolumeSpecName: "kube-api-access-krvx4") pod "b693f7cf-8e48-48dd-af7e-156913aad645" (UID: "b693f7cf-8e48-48dd-af7e-156913aad645"). InnerVolumeSpecName "kube-api-access-krvx4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:00:18 crc kubenswrapper[4840]: I0311 10:00:18.374222 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-krvx4\" (UniqueName: \"kubernetes.io/projected/b693f7cf-8e48-48dd-af7e-156913aad645-kube-api-access-krvx4\") on node \"crc\" DevicePath \"\"" Mar 11 10:00:19 crc kubenswrapper[4840]: I0311 10:00:19.080421 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553720-knnwh" Mar 11 10:00:19 crc kubenswrapper[4840]: I0311 10:00:19.180861 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29553714-4x95t"] Mar 11 10:00:19 crc kubenswrapper[4840]: I0311 10:00:19.185966 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29553714-4x95t"] Mar 11 10:00:20 crc kubenswrapper[4840]: I0311 10:00:20.071569 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d69b973-15ad-4a6a-a6a7-0896ccd3466f" path="/var/lib/kubelet/pods/1d69b973-15ad-4a6a-a6a7-0896ccd3466f/volumes" Mar 11 10:00:28 crc kubenswrapper[4840]: I0311 10:00:28.060017 4840 scope.go:117] "RemoveContainer" containerID="4efd0bd0ab9fb4f04fed07de60d73ee6a34850ff14fa42aed3595aad9fe76e03" Mar 11 10:00:28 crc kubenswrapper[4840]: E0311 10:00:28.060703 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 10:00:40 crc kubenswrapper[4840]: I0311 10:00:40.060456 4840 scope.go:117] "RemoveContainer" containerID="4efd0bd0ab9fb4f04fed07de60d73ee6a34850ff14fa42aed3595aad9fe76e03" Mar 11 
10:00:40 crc kubenswrapper[4840]: E0311 10:00:40.061284 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 10:00:54 crc kubenswrapper[4840]: I0311 10:00:54.060676 4840 scope.go:117] "RemoveContainer" containerID="4efd0bd0ab9fb4f04fed07de60d73ee6a34850ff14fa42aed3595aad9fe76e03" Mar 11 10:00:54 crc kubenswrapper[4840]: E0311 10:00:54.061571 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 10:01:08 crc kubenswrapper[4840]: I0311 10:01:08.060634 4840 scope.go:117] "RemoveContainer" containerID="4efd0bd0ab9fb4f04fed07de60d73ee6a34850ff14fa42aed3595aad9fe76e03" Mar 11 10:01:08 crc kubenswrapper[4840]: E0311 10:01:08.061408 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 10:01:10 crc kubenswrapper[4840]: I0311 10:01:10.624735 4840 scope.go:117] "RemoveContainer" 
containerID="9e50b8b810dfd572487cc1bc42d2149c95897fd308da0164b04493fef056d6c4" Mar 11 10:01:19 crc kubenswrapper[4840]: I0311 10:01:19.060089 4840 scope.go:117] "RemoveContainer" containerID="4efd0bd0ab9fb4f04fed07de60d73ee6a34850ff14fa42aed3595aad9fe76e03" Mar 11 10:01:19 crc kubenswrapper[4840]: E0311 10:01:19.061010 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 10:01:30 crc kubenswrapper[4840]: I0311 10:01:30.059830 4840 scope.go:117] "RemoveContainer" containerID="4efd0bd0ab9fb4f04fed07de60d73ee6a34850ff14fa42aed3595aad9fe76e03" Mar 11 10:01:30 crc kubenswrapper[4840]: E0311 10:01:30.060673 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 10:01:45 crc kubenswrapper[4840]: I0311 10:01:45.059912 4840 scope.go:117] "RemoveContainer" containerID="4efd0bd0ab9fb4f04fed07de60d73ee6a34850ff14fa42aed3595aad9fe76e03" Mar 11 10:01:45 crc kubenswrapper[4840]: E0311 10:01:45.060669 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 10:01:58 crc kubenswrapper[4840]: I0311 10:01:58.060367 4840 scope.go:117] "RemoveContainer" containerID="4efd0bd0ab9fb4f04fed07de60d73ee6a34850ff14fa42aed3595aad9fe76e03" Mar 11 10:01:58 crc kubenswrapper[4840]: E0311 10:01:58.061748 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 10:02:00 crc kubenswrapper[4840]: I0311 10:02:00.174476 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29553722-bxzhv"] Mar 11 10:02:00 crc kubenswrapper[4840]: E0311 10:02:00.175781 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b693f7cf-8e48-48dd-af7e-156913aad645" containerName="oc" Mar 11 10:02:00 crc kubenswrapper[4840]: I0311 10:02:00.175806 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="b693f7cf-8e48-48dd-af7e-156913aad645" containerName="oc" Mar 11 10:02:00 crc kubenswrapper[4840]: E0311 10:02:00.175841 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e41004e-87d6-4023-9792-55b2930786ba" containerName="collect-profiles" Mar 11 10:02:00 crc kubenswrapper[4840]: I0311 10:02:00.175849 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e41004e-87d6-4023-9792-55b2930786ba" containerName="collect-profiles" Mar 11 10:02:00 crc kubenswrapper[4840]: I0311 10:02:00.175980 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e41004e-87d6-4023-9792-55b2930786ba" containerName="collect-profiles" Mar 11 10:02:00 crc kubenswrapper[4840]: I0311 10:02:00.175998 4840 
memory_manager.go:354] "RemoveStaleState removing state" podUID="b693f7cf-8e48-48dd-af7e-156913aad645" containerName="oc" Mar 11 10:02:00 crc kubenswrapper[4840]: I0311 10:02:00.176533 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553722-bxzhv" Mar 11 10:02:00 crc kubenswrapper[4840]: I0311 10:02:00.179309 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 11 10:02:00 crc kubenswrapper[4840]: I0311 10:02:00.179663 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 11 10:02:00 crc kubenswrapper[4840]: I0311 10:02:00.181353 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-q6lwc" Mar 11 10:02:00 crc kubenswrapper[4840]: I0311 10:02:00.196413 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553722-bxzhv"] Mar 11 10:02:00 crc kubenswrapper[4840]: I0311 10:02:00.261172 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ct9m\" (UniqueName: \"kubernetes.io/projected/a1c81852-0f6c-449c-b7db-6ac3fc213ff9-kube-api-access-7ct9m\") pod \"auto-csr-approver-29553722-bxzhv\" (UID: \"a1c81852-0f6c-449c-b7db-6ac3fc213ff9\") " pod="openshift-infra/auto-csr-approver-29553722-bxzhv" Mar 11 10:02:00 crc kubenswrapper[4840]: I0311 10:02:00.362959 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ct9m\" (UniqueName: \"kubernetes.io/projected/a1c81852-0f6c-449c-b7db-6ac3fc213ff9-kube-api-access-7ct9m\") pod \"auto-csr-approver-29553722-bxzhv\" (UID: \"a1c81852-0f6c-449c-b7db-6ac3fc213ff9\") " pod="openshift-infra/auto-csr-approver-29553722-bxzhv" Mar 11 10:02:00 crc kubenswrapper[4840]: I0311 10:02:00.400499 4840 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-7ct9m\" (UniqueName: \"kubernetes.io/projected/a1c81852-0f6c-449c-b7db-6ac3fc213ff9-kube-api-access-7ct9m\") pod \"auto-csr-approver-29553722-bxzhv\" (UID: \"a1c81852-0f6c-449c-b7db-6ac3fc213ff9\") " pod="openshift-infra/auto-csr-approver-29553722-bxzhv" Mar 11 10:02:00 crc kubenswrapper[4840]: I0311 10:02:00.499501 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553722-bxzhv" Mar 11 10:02:01 crc kubenswrapper[4840]: I0311 10:02:01.725994 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553722-bxzhv"] Mar 11 10:02:02 crc kubenswrapper[4840]: I0311 10:02:02.340160 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553722-bxzhv" event={"ID":"a1c81852-0f6c-449c-b7db-6ac3fc213ff9","Type":"ContainerStarted","Data":"c7981ebdf51fa052f6d818d90a814ea266b6053160a8ab085cb2c3c65e27e146"} Mar 11 10:02:03 crc kubenswrapper[4840]: I0311 10:02:03.350325 4840 generic.go:334] "Generic (PLEG): container finished" podID="a1c81852-0f6c-449c-b7db-6ac3fc213ff9" containerID="0dcd3448ab05048a42225f4770e30852c20abed4c48f88ebd755597ee68d9450" exitCode=0 Mar 11 10:02:03 crc kubenswrapper[4840]: I0311 10:02:03.350424 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553722-bxzhv" event={"ID":"a1c81852-0f6c-449c-b7db-6ac3fc213ff9","Type":"ContainerDied","Data":"0dcd3448ab05048a42225f4770e30852c20abed4c48f88ebd755597ee68d9450"} Mar 11 10:02:04 crc kubenswrapper[4840]: I0311 10:02:04.595890 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553722-bxzhv" Mar 11 10:02:04 crc kubenswrapper[4840]: I0311 10:02:04.721039 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7ct9m\" (UniqueName: \"kubernetes.io/projected/a1c81852-0f6c-449c-b7db-6ac3fc213ff9-kube-api-access-7ct9m\") pod \"a1c81852-0f6c-449c-b7db-6ac3fc213ff9\" (UID: \"a1c81852-0f6c-449c-b7db-6ac3fc213ff9\") " Mar 11 10:02:04 crc kubenswrapper[4840]: I0311 10:02:04.727317 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1c81852-0f6c-449c-b7db-6ac3fc213ff9-kube-api-access-7ct9m" (OuterVolumeSpecName: "kube-api-access-7ct9m") pod "a1c81852-0f6c-449c-b7db-6ac3fc213ff9" (UID: "a1c81852-0f6c-449c-b7db-6ac3fc213ff9"). InnerVolumeSpecName "kube-api-access-7ct9m". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:02:04 crc kubenswrapper[4840]: I0311 10:02:04.823153 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7ct9m\" (UniqueName: \"kubernetes.io/projected/a1c81852-0f6c-449c-b7db-6ac3fc213ff9-kube-api-access-7ct9m\") on node \"crc\" DevicePath \"\"" Mar 11 10:02:05 crc kubenswrapper[4840]: I0311 10:02:05.363548 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553722-bxzhv" event={"ID":"a1c81852-0f6c-449c-b7db-6ac3fc213ff9","Type":"ContainerDied","Data":"c7981ebdf51fa052f6d818d90a814ea266b6053160a8ab085cb2c3c65e27e146"} Mar 11 10:02:05 crc kubenswrapper[4840]: I0311 10:02:05.363614 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c7981ebdf51fa052f6d818d90a814ea266b6053160a8ab085cb2c3c65e27e146" Mar 11 10:02:05 crc kubenswrapper[4840]: I0311 10:02:05.363616 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553722-bxzhv" Mar 11 10:02:05 crc kubenswrapper[4840]: I0311 10:02:05.657396 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29553716-tt5hc"] Mar 11 10:02:05 crc kubenswrapper[4840]: I0311 10:02:05.662878 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29553716-tt5hc"] Mar 11 10:02:06 crc kubenswrapper[4840]: I0311 10:02:06.068902 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="733473e0-4889-47a6-8344-22c23936157e" path="/var/lib/kubelet/pods/733473e0-4889-47a6-8344-22c23936157e/volumes" Mar 11 10:02:09 crc kubenswrapper[4840]: I0311 10:02:09.060643 4840 scope.go:117] "RemoveContainer" containerID="4efd0bd0ab9fb4f04fed07de60d73ee6a34850ff14fa42aed3595aad9fe76e03" Mar 11 10:02:09 crc kubenswrapper[4840]: E0311 10:02:09.061503 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 10:02:10 crc kubenswrapper[4840]: I0311 10:02:10.693550 4840 scope.go:117] "RemoveContainer" containerID="db74873c0746fc08efd730afccc0641320802aec66ea0f88f3cffb07a4e3a408" Mar 11 10:02:22 crc kubenswrapper[4840]: I0311 10:02:22.065518 4840 scope.go:117] "RemoveContainer" containerID="4efd0bd0ab9fb4f04fed07de60d73ee6a34850ff14fa42aed3595aad9fe76e03" Mar 11 10:02:22 crc kubenswrapper[4840]: E0311 10:02:22.066852 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 10:02:35 crc kubenswrapper[4840]: I0311 10:02:35.059755 4840 scope.go:117] "RemoveContainer" containerID="4efd0bd0ab9fb4f04fed07de60d73ee6a34850ff14fa42aed3595aad9fe76e03" Mar 11 10:02:35 crc kubenswrapper[4840]: I0311 10:02:35.590345 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-brtht" event={"ID":"8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d","Type":"ContainerStarted","Data":"f2e23917570701ed55189d8a885e5297e6976e0276abf2611140252cd299be18"} Mar 11 10:03:10 crc kubenswrapper[4840]: I0311 10:03:10.752303 4840 scope.go:117] "RemoveContainer" containerID="0773aa7f59a8429904ddd8c8eccbd5cd15eeff760b833e8eaa47c605eb7bf6cd" Mar 11 10:03:10 crc kubenswrapper[4840]: I0311 10:03:10.798260 4840 scope.go:117] "RemoveContainer" containerID="24e1e4fc7e9ce3a74f7ecf4c056423ed2c7af7a325157e19a40bf1f2241fee3a" Mar 11 10:04:00 crc kubenswrapper[4840]: I0311 10:04:00.152638 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29553724-sxqq5"] Mar 11 10:04:00 crc kubenswrapper[4840]: E0311 10:04:00.154100 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1c81852-0f6c-449c-b7db-6ac3fc213ff9" containerName="oc" Mar 11 10:04:00 crc kubenswrapper[4840]: I0311 10:04:00.154123 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1c81852-0f6c-449c-b7db-6ac3fc213ff9" containerName="oc" Mar 11 10:04:00 crc kubenswrapper[4840]: I0311 10:04:00.154363 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1c81852-0f6c-449c-b7db-6ac3fc213ff9" containerName="oc" Mar 11 10:04:00 crc kubenswrapper[4840]: I0311 10:04:00.155237 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553724-sxqq5" Mar 11 10:04:00 crc kubenswrapper[4840]: I0311 10:04:00.162000 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553724-sxqq5"] Mar 11 10:04:00 crc kubenswrapper[4840]: I0311 10:04:00.162283 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 11 10:04:00 crc kubenswrapper[4840]: I0311 10:04:00.162616 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 11 10:04:00 crc kubenswrapper[4840]: I0311 10:04:00.162676 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-q6lwc" Mar 11 10:04:00 crc kubenswrapper[4840]: I0311 10:04:00.249505 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vl7d\" (UniqueName: \"kubernetes.io/projected/6145b333-1eac-413a-9a6b-36e9118266d4-kube-api-access-4vl7d\") pod \"auto-csr-approver-29553724-sxqq5\" (UID: \"6145b333-1eac-413a-9a6b-36e9118266d4\") " pod="openshift-infra/auto-csr-approver-29553724-sxqq5" Mar 11 10:04:00 crc kubenswrapper[4840]: I0311 10:04:00.350847 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vl7d\" (UniqueName: \"kubernetes.io/projected/6145b333-1eac-413a-9a6b-36e9118266d4-kube-api-access-4vl7d\") pod \"auto-csr-approver-29553724-sxqq5\" (UID: \"6145b333-1eac-413a-9a6b-36e9118266d4\") " pod="openshift-infra/auto-csr-approver-29553724-sxqq5" Mar 11 10:04:00 crc kubenswrapper[4840]: I0311 10:04:00.372859 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vl7d\" (UniqueName: \"kubernetes.io/projected/6145b333-1eac-413a-9a6b-36e9118266d4-kube-api-access-4vl7d\") pod \"auto-csr-approver-29553724-sxqq5\" (UID: \"6145b333-1eac-413a-9a6b-36e9118266d4\") " 
pod="openshift-infra/auto-csr-approver-29553724-sxqq5" Mar 11 10:04:00 crc kubenswrapper[4840]: I0311 10:04:00.480840 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553724-sxqq5" Mar 11 10:04:00 crc kubenswrapper[4840]: I0311 10:04:00.921709 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553724-sxqq5"] Mar 11 10:04:01 crc kubenswrapper[4840]: I0311 10:04:01.194275 4840 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 11 10:04:01 crc kubenswrapper[4840]: I0311 10:04:01.456008 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553724-sxqq5" event={"ID":"6145b333-1eac-413a-9a6b-36e9118266d4","Type":"ContainerStarted","Data":"0ee67b0d156b941eb5bf2652d437082b015370581b40f65c59caf0a607b59c00"} Mar 11 10:04:03 crc kubenswrapper[4840]: I0311 10:04:03.469595 4840 generic.go:334] "Generic (PLEG): container finished" podID="6145b333-1eac-413a-9a6b-36e9118266d4" containerID="fdf443aee2249fab43e720003aa4998d1596a7ea78a0abe8db74fe2dbc09f07d" exitCode=0 Mar 11 10:04:03 crc kubenswrapper[4840]: I0311 10:04:03.469686 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553724-sxqq5" event={"ID":"6145b333-1eac-413a-9a6b-36e9118266d4","Type":"ContainerDied","Data":"fdf443aee2249fab43e720003aa4998d1596a7ea78a0abe8db74fe2dbc09f07d"} Mar 11 10:04:04 crc kubenswrapper[4840]: I0311 10:04:04.805145 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553724-sxqq5" Mar 11 10:04:04 crc kubenswrapper[4840]: I0311 10:04:04.948343 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4vl7d\" (UniqueName: \"kubernetes.io/projected/6145b333-1eac-413a-9a6b-36e9118266d4-kube-api-access-4vl7d\") pod \"6145b333-1eac-413a-9a6b-36e9118266d4\" (UID: \"6145b333-1eac-413a-9a6b-36e9118266d4\") " Mar 11 10:04:04 crc kubenswrapper[4840]: I0311 10:04:04.955179 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6145b333-1eac-413a-9a6b-36e9118266d4-kube-api-access-4vl7d" (OuterVolumeSpecName: "kube-api-access-4vl7d") pod "6145b333-1eac-413a-9a6b-36e9118266d4" (UID: "6145b333-1eac-413a-9a6b-36e9118266d4"). InnerVolumeSpecName "kube-api-access-4vl7d". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:04:05 crc kubenswrapper[4840]: I0311 10:04:05.050150 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4vl7d\" (UniqueName: \"kubernetes.io/projected/6145b333-1eac-413a-9a6b-36e9118266d4-kube-api-access-4vl7d\") on node \"crc\" DevicePath \"\"" Mar 11 10:04:05 crc kubenswrapper[4840]: I0311 10:04:05.485037 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553724-sxqq5" event={"ID":"6145b333-1eac-413a-9a6b-36e9118266d4","Type":"ContainerDied","Data":"0ee67b0d156b941eb5bf2652d437082b015370581b40f65c59caf0a607b59c00"} Mar 11 10:04:05 crc kubenswrapper[4840]: I0311 10:04:05.485085 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ee67b0d156b941eb5bf2652d437082b015370581b40f65c59caf0a607b59c00" Mar 11 10:04:05 crc kubenswrapper[4840]: I0311 10:04:05.485098 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553724-sxqq5" Mar 11 10:04:05 crc kubenswrapper[4840]: I0311 10:04:05.868987 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29553718-kzsk9"] Mar 11 10:04:05 crc kubenswrapper[4840]: I0311 10:04:05.873975 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29553718-kzsk9"] Mar 11 10:04:06 crc kubenswrapper[4840]: I0311 10:04:06.070629 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13edae4a-b5de-44f5-9006-74864ab07ffc" path="/var/lib/kubelet/pods/13edae4a-b5de-44f5-9006-74864ab07ffc/volumes" Mar 11 10:04:10 crc kubenswrapper[4840]: I0311 10:04:10.844319 4840 scope.go:117] "RemoveContainer" containerID="c6152f3795da79f31f7386a28449ecc3cc8e64b951801272b9c3a79eeb68d483" Mar 11 10:04:10 crc kubenswrapper[4840]: I0311 10:04:10.869648 4840 scope.go:117] "RemoveContainer" containerID="690355669a789fdc82640a462fc791f10e35fe384577216fb3176547ebcaf9fe" Mar 11 10:04:57 crc kubenswrapper[4840]: I0311 10:04:57.445616 4840 patch_prober.go:28] interesting pod/machine-config-daemon-brtht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 11 10:04:57 crc kubenswrapper[4840]: I0311 10:04:57.447307 4840 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 11 10:05:27 crc kubenswrapper[4840]: I0311 10:05:27.445842 4840 patch_prober.go:28] interesting pod/machine-config-daemon-brtht container/machine-config-daemon namespace/openshift-machine-config-operator: 
Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 11 10:05:27 crc kubenswrapper[4840]: I0311 10:05:27.446486 4840 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 11 10:05:57 crc kubenswrapper[4840]: I0311 10:05:57.447529 4840 patch_prober.go:28] interesting pod/machine-config-daemon-brtht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 11 10:05:57 crc kubenswrapper[4840]: I0311 10:05:57.448632 4840 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 11 10:05:57 crc kubenswrapper[4840]: I0311 10:05:57.448724 4840 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-brtht" Mar 11 10:05:57 crc kubenswrapper[4840]: I0311 10:05:57.449863 4840 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f2e23917570701ed55189d8a885e5297e6976e0276abf2611140252cd299be18"} pod="openshift-machine-config-operator/machine-config-daemon-brtht" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 11 10:05:57 crc kubenswrapper[4840]: I0311 10:05:57.449952 4840 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" containerName="machine-config-daemon" containerID="cri-o://f2e23917570701ed55189d8a885e5297e6976e0276abf2611140252cd299be18" gracePeriod=600 Mar 11 10:05:58 crc kubenswrapper[4840]: I0311 10:05:58.280101 4840 generic.go:334] "Generic (PLEG): container finished" podID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" containerID="f2e23917570701ed55189d8a885e5297e6976e0276abf2611140252cd299be18" exitCode=0 Mar 11 10:05:58 crc kubenswrapper[4840]: I0311 10:05:58.280177 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-brtht" event={"ID":"8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d","Type":"ContainerDied","Data":"f2e23917570701ed55189d8a885e5297e6976e0276abf2611140252cd299be18"} Mar 11 10:05:58 crc kubenswrapper[4840]: I0311 10:05:58.280543 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-brtht" event={"ID":"8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d","Type":"ContainerStarted","Data":"e5dd56f8fedc37dde5b61acd8e1e840f8e8968f30b4b179b8fa6c41015c74607"} Mar 11 10:05:58 crc kubenswrapper[4840]: I0311 10:05:58.280580 4840 scope.go:117] "RemoveContainer" containerID="4efd0bd0ab9fb4f04fed07de60d73ee6a34850ff14fa42aed3595aad9fe76e03" Mar 11 10:06:00 crc kubenswrapper[4840]: I0311 10:06:00.169850 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29553726-z5vml"] Mar 11 10:06:00 crc kubenswrapper[4840]: E0311 10:06:00.170458 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6145b333-1eac-413a-9a6b-36e9118266d4" containerName="oc" Mar 11 10:06:00 crc kubenswrapper[4840]: I0311 10:06:00.170484 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="6145b333-1eac-413a-9a6b-36e9118266d4" containerName="oc" Mar 11 10:06:00 crc 
kubenswrapper[4840]: I0311 10:06:00.170616 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="6145b333-1eac-413a-9a6b-36e9118266d4" containerName="oc" Mar 11 10:06:00 crc kubenswrapper[4840]: I0311 10:06:00.171120 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553726-z5vml" Mar 11 10:06:00 crc kubenswrapper[4840]: I0311 10:06:00.174256 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 11 10:06:00 crc kubenswrapper[4840]: I0311 10:06:00.174389 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 11 10:06:00 crc kubenswrapper[4840]: I0311 10:06:00.175146 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-q6lwc" Mar 11 10:06:00 crc kubenswrapper[4840]: I0311 10:06:00.194026 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553726-z5vml"] Mar 11 10:06:00 crc kubenswrapper[4840]: I0311 10:06:00.266038 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cl2pr\" (UniqueName: \"kubernetes.io/projected/fa936ca8-e59f-46ed-be26-b6924bfe65f7-kube-api-access-cl2pr\") pod \"auto-csr-approver-29553726-z5vml\" (UID: \"fa936ca8-e59f-46ed-be26-b6924bfe65f7\") " pod="openshift-infra/auto-csr-approver-29553726-z5vml" Mar 11 10:06:00 crc kubenswrapper[4840]: I0311 10:06:00.367540 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cl2pr\" (UniqueName: \"kubernetes.io/projected/fa936ca8-e59f-46ed-be26-b6924bfe65f7-kube-api-access-cl2pr\") pod \"auto-csr-approver-29553726-z5vml\" (UID: \"fa936ca8-e59f-46ed-be26-b6924bfe65f7\") " pod="openshift-infra/auto-csr-approver-29553726-z5vml" Mar 11 10:06:00 crc kubenswrapper[4840]: I0311 10:06:00.387949 4840 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cl2pr\" (UniqueName: \"kubernetes.io/projected/fa936ca8-e59f-46ed-be26-b6924bfe65f7-kube-api-access-cl2pr\") pod \"auto-csr-approver-29553726-z5vml\" (UID: \"fa936ca8-e59f-46ed-be26-b6924bfe65f7\") " pod="openshift-infra/auto-csr-approver-29553726-z5vml" Mar 11 10:06:00 crc kubenswrapper[4840]: I0311 10:06:00.487150 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553726-z5vml" Mar 11 10:06:00 crc kubenswrapper[4840]: I0311 10:06:00.902791 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553726-z5vml"] Mar 11 10:06:01 crc kubenswrapper[4840]: I0311 10:06:01.304341 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553726-z5vml" event={"ID":"fa936ca8-e59f-46ed-be26-b6924bfe65f7","Type":"ContainerStarted","Data":"f4b98384e26c992f6f3bdf08219a4dd724c09efe4257f5d940ab9b0c7006c1ef"} Mar 11 10:06:03 crc kubenswrapper[4840]: I0311 10:06:03.321384 4840 generic.go:334] "Generic (PLEG): container finished" podID="fa936ca8-e59f-46ed-be26-b6924bfe65f7" containerID="126a24073c250442a4f9fef6bd693ca371696efcc7b4a6e6bf75367a0244121e" exitCode=0 Mar 11 10:06:03 crc kubenswrapper[4840]: I0311 10:06:03.321532 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553726-z5vml" event={"ID":"fa936ca8-e59f-46ed-be26-b6924bfe65f7","Type":"ContainerDied","Data":"126a24073c250442a4f9fef6bd693ca371696efcc7b4a6e6bf75367a0244121e"} Mar 11 10:06:04 crc kubenswrapper[4840]: I0311 10:06:04.627635 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553726-z5vml" Mar 11 10:06:04 crc kubenswrapper[4840]: I0311 10:06:04.746236 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cl2pr\" (UniqueName: \"kubernetes.io/projected/fa936ca8-e59f-46ed-be26-b6924bfe65f7-kube-api-access-cl2pr\") pod \"fa936ca8-e59f-46ed-be26-b6924bfe65f7\" (UID: \"fa936ca8-e59f-46ed-be26-b6924bfe65f7\") " Mar 11 10:06:04 crc kubenswrapper[4840]: I0311 10:06:04.751587 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa936ca8-e59f-46ed-be26-b6924bfe65f7-kube-api-access-cl2pr" (OuterVolumeSpecName: "kube-api-access-cl2pr") pod "fa936ca8-e59f-46ed-be26-b6924bfe65f7" (UID: "fa936ca8-e59f-46ed-be26-b6924bfe65f7"). InnerVolumeSpecName "kube-api-access-cl2pr". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:06:04 crc kubenswrapper[4840]: I0311 10:06:04.847640 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cl2pr\" (UniqueName: \"kubernetes.io/projected/fa936ca8-e59f-46ed-be26-b6924bfe65f7-kube-api-access-cl2pr\") on node \"crc\" DevicePath \"\"" Mar 11 10:06:05 crc kubenswrapper[4840]: I0311 10:06:05.336045 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553726-z5vml" event={"ID":"fa936ca8-e59f-46ed-be26-b6924bfe65f7","Type":"ContainerDied","Data":"f4b98384e26c992f6f3bdf08219a4dd724c09efe4257f5d940ab9b0c7006c1ef"} Mar 11 10:06:05 crc kubenswrapper[4840]: I0311 10:06:05.336116 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f4b98384e26c992f6f3bdf08219a4dd724c09efe4257f5d940ab9b0c7006c1ef" Mar 11 10:06:05 crc kubenswrapper[4840]: I0311 10:06:05.336137 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553726-z5vml" Mar 11 10:06:05 crc kubenswrapper[4840]: I0311 10:06:05.695581 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29553720-knnwh"] Mar 11 10:06:05 crc kubenswrapper[4840]: I0311 10:06:05.700952 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29553720-knnwh"] Mar 11 10:06:06 crc kubenswrapper[4840]: I0311 10:06:06.068691 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b693f7cf-8e48-48dd-af7e-156913aad645" path="/var/lib/kubelet/pods/b693f7cf-8e48-48dd-af7e-156913aad645/volumes" Mar 11 10:07:11 crc kubenswrapper[4840]: I0311 10:07:11.269012 4840 scope.go:117] "RemoveContainer" containerID="9a7fc39e25b43c64eb94d4fa93e2e17da8a3e73779b426baf77065d2e1b1837c" Mar 11 10:07:29 crc kubenswrapper[4840]: I0311 10:07:29.048867 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2rplh"] Mar 11 10:07:29 crc kubenswrapper[4840]: E0311 10:07:29.050961 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa936ca8-e59f-46ed-be26-b6924bfe65f7" containerName="oc" Mar 11 10:07:29 crc kubenswrapper[4840]: I0311 10:07:29.051038 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa936ca8-e59f-46ed-be26-b6924bfe65f7" containerName="oc" Mar 11 10:07:29 crc kubenswrapper[4840]: I0311 10:07:29.051295 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa936ca8-e59f-46ed-be26-b6924bfe65f7" containerName="oc" Mar 11 10:07:29 crc kubenswrapper[4840]: I0311 10:07:29.052532 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2rplh" Mar 11 10:07:29 crc kubenswrapper[4840]: I0311 10:07:29.062186 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2rplh"] Mar 11 10:07:29 crc kubenswrapper[4840]: I0311 10:07:29.112901 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b93006ae-f193-4b04-890d-9dbaf7eb29da-utilities\") pod \"redhat-operators-2rplh\" (UID: \"b93006ae-f193-4b04-890d-9dbaf7eb29da\") " pod="openshift-marketplace/redhat-operators-2rplh" Mar 11 10:07:29 crc kubenswrapper[4840]: I0311 10:07:29.112965 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kj2n\" (UniqueName: \"kubernetes.io/projected/b93006ae-f193-4b04-890d-9dbaf7eb29da-kube-api-access-4kj2n\") pod \"redhat-operators-2rplh\" (UID: \"b93006ae-f193-4b04-890d-9dbaf7eb29da\") " pod="openshift-marketplace/redhat-operators-2rplh" Mar 11 10:07:29 crc kubenswrapper[4840]: I0311 10:07:29.113041 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b93006ae-f193-4b04-890d-9dbaf7eb29da-catalog-content\") pod \"redhat-operators-2rplh\" (UID: \"b93006ae-f193-4b04-890d-9dbaf7eb29da\") " pod="openshift-marketplace/redhat-operators-2rplh" Mar 11 10:07:29 crc kubenswrapper[4840]: I0311 10:07:29.214068 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b93006ae-f193-4b04-890d-9dbaf7eb29da-utilities\") pod \"redhat-operators-2rplh\" (UID: \"b93006ae-f193-4b04-890d-9dbaf7eb29da\") " pod="openshift-marketplace/redhat-operators-2rplh" Mar 11 10:07:29 crc kubenswrapper[4840]: I0311 10:07:29.214141 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-4kj2n\" (UniqueName: \"kubernetes.io/projected/b93006ae-f193-4b04-890d-9dbaf7eb29da-kube-api-access-4kj2n\") pod \"redhat-operators-2rplh\" (UID: \"b93006ae-f193-4b04-890d-9dbaf7eb29da\") " pod="openshift-marketplace/redhat-operators-2rplh" Mar 11 10:07:29 crc kubenswrapper[4840]: I0311 10:07:29.214188 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b93006ae-f193-4b04-890d-9dbaf7eb29da-catalog-content\") pod \"redhat-operators-2rplh\" (UID: \"b93006ae-f193-4b04-890d-9dbaf7eb29da\") " pod="openshift-marketplace/redhat-operators-2rplh" Mar 11 10:07:29 crc kubenswrapper[4840]: I0311 10:07:29.214634 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b93006ae-f193-4b04-890d-9dbaf7eb29da-utilities\") pod \"redhat-operators-2rplh\" (UID: \"b93006ae-f193-4b04-890d-9dbaf7eb29da\") " pod="openshift-marketplace/redhat-operators-2rplh" Mar 11 10:07:29 crc kubenswrapper[4840]: I0311 10:07:29.214665 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b93006ae-f193-4b04-890d-9dbaf7eb29da-catalog-content\") pod \"redhat-operators-2rplh\" (UID: \"b93006ae-f193-4b04-890d-9dbaf7eb29da\") " pod="openshift-marketplace/redhat-operators-2rplh" Mar 11 10:07:29 crc kubenswrapper[4840]: I0311 10:07:29.239750 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4kj2n\" (UniqueName: \"kubernetes.io/projected/b93006ae-f193-4b04-890d-9dbaf7eb29da-kube-api-access-4kj2n\") pod \"redhat-operators-2rplh\" (UID: \"b93006ae-f193-4b04-890d-9dbaf7eb29da\") " pod="openshift-marketplace/redhat-operators-2rplh" Mar 11 10:07:29 crc kubenswrapper[4840]: I0311 10:07:29.376249 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2rplh" Mar 11 10:07:29 crc kubenswrapper[4840]: I0311 10:07:29.628159 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2rplh"] Mar 11 10:07:29 crc kubenswrapper[4840]: I0311 10:07:29.838065 4840 generic.go:334] "Generic (PLEG): container finished" podID="b93006ae-f193-4b04-890d-9dbaf7eb29da" containerID="7473bf63095db4ce9bae0938e4fa5cacff16174c4556455a0572932bd6dd729f" exitCode=0 Mar 11 10:07:29 crc kubenswrapper[4840]: I0311 10:07:29.838366 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2rplh" event={"ID":"b93006ae-f193-4b04-890d-9dbaf7eb29da","Type":"ContainerDied","Data":"7473bf63095db4ce9bae0938e4fa5cacff16174c4556455a0572932bd6dd729f"} Mar 11 10:07:29 crc kubenswrapper[4840]: I0311 10:07:29.838500 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2rplh" event={"ID":"b93006ae-f193-4b04-890d-9dbaf7eb29da","Type":"ContainerStarted","Data":"a0b13ed37885df75a52160d1f88afa78e248a361ed4e98e608b9c0cd1a78af3c"} Mar 11 10:07:30 crc kubenswrapper[4840]: I0311 10:07:30.847521 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2rplh" event={"ID":"b93006ae-f193-4b04-890d-9dbaf7eb29da","Type":"ContainerStarted","Data":"fa6b6592afc212f6f1213e7506d613e732f9e85a3fa959fa8ed679f0fa57b75f"} Mar 11 10:07:31 crc kubenswrapper[4840]: I0311 10:07:31.856749 4840 generic.go:334] "Generic (PLEG): container finished" podID="b93006ae-f193-4b04-890d-9dbaf7eb29da" containerID="fa6b6592afc212f6f1213e7506d613e732f9e85a3fa959fa8ed679f0fa57b75f" exitCode=0 Mar 11 10:07:31 crc kubenswrapper[4840]: I0311 10:07:31.856792 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2rplh" 
event={"ID":"b93006ae-f193-4b04-890d-9dbaf7eb29da","Type":"ContainerDied","Data":"fa6b6592afc212f6f1213e7506d613e732f9e85a3fa959fa8ed679f0fa57b75f"} Mar 11 10:07:32 crc kubenswrapper[4840]: I0311 10:07:32.866782 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2rplh" event={"ID":"b93006ae-f193-4b04-890d-9dbaf7eb29da","Type":"ContainerStarted","Data":"44f39f1a557afdd9e7f2c4aab0c771e676daaa1f4d607a9de2933bef1b776bae"} Mar 11 10:07:32 crc kubenswrapper[4840]: I0311 10:07:32.890708 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2rplh" podStartSLOduration=1.358083504 podStartE2EDuration="3.890682011s" podCreationTimestamp="2026-03-11 10:07:29 +0000 UTC" firstStartedPulling="2026-03-11 10:07:29.8396324 +0000 UTC m=+4248.505302215" lastFinishedPulling="2026-03-11 10:07:32.372230907 +0000 UTC m=+4251.037900722" observedRunningTime="2026-03-11 10:07:32.888099096 +0000 UTC m=+4251.553768931" watchObservedRunningTime="2026-03-11 10:07:32.890682011 +0000 UTC m=+4251.556351816" Mar 11 10:07:39 crc kubenswrapper[4840]: I0311 10:07:39.376895 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2rplh" Mar 11 10:07:39 crc kubenswrapper[4840]: I0311 10:07:39.377639 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2rplh" Mar 11 10:07:39 crc kubenswrapper[4840]: I0311 10:07:39.416660 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2rplh" Mar 11 10:07:40 crc kubenswrapper[4840]: I0311 10:07:40.222880 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2rplh" Mar 11 10:07:40 crc kubenswrapper[4840]: I0311 10:07:40.272029 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-operators-2rplh"] Mar 11 10:07:41 crc kubenswrapper[4840]: I0311 10:07:41.923608 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2rplh" podUID="b93006ae-f193-4b04-890d-9dbaf7eb29da" containerName="registry-server" containerID="cri-o://44f39f1a557afdd9e7f2c4aab0c771e676daaa1f4d607a9de2933bef1b776bae" gracePeriod=2 Mar 11 10:07:43 crc kubenswrapper[4840]: I0311 10:07:43.421739 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2rplh" Mar 11 10:07:43 crc kubenswrapper[4840]: I0311 10:07:43.526145 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b93006ae-f193-4b04-890d-9dbaf7eb29da-catalog-content\") pod \"b93006ae-f193-4b04-890d-9dbaf7eb29da\" (UID: \"b93006ae-f193-4b04-890d-9dbaf7eb29da\") " Mar 11 10:07:43 crc kubenswrapper[4840]: I0311 10:07:43.526242 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b93006ae-f193-4b04-890d-9dbaf7eb29da-utilities\") pod \"b93006ae-f193-4b04-890d-9dbaf7eb29da\" (UID: \"b93006ae-f193-4b04-890d-9dbaf7eb29da\") " Mar 11 10:07:43 crc kubenswrapper[4840]: I0311 10:07:43.526312 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4kj2n\" (UniqueName: \"kubernetes.io/projected/b93006ae-f193-4b04-890d-9dbaf7eb29da-kube-api-access-4kj2n\") pod \"b93006ae-f193-4b04-890d-9dbaf7eb29da\" (UID: \"b93006ae-f193-4b04-890d-9dbaf7eb29da\") " Mar 11 10:07:43 crc kubenswrapper[4840]: I0311 10:07:43.527530 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b93006ae-f193-4b04-890d-9dbaf7eb29da-utilities" (OuterVolumeSpecName: "utilities") pod "b93006ae-f193-4b04-890d-9dbaf7eb29da" (UID: 
"b93006ae-f193-4b04-890d-9dbaf7eb29da"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 10:07:43 crc kubenswrapper[4840]: I0311 10:07:43.531580 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b93006ae-f193-4b04-890d-9dbaf7eb29da-kube-api-access-4kj2n" (OuterVolumeSpecName: "kube-api-access-4kj2n") pod "b93006ae-f193-4b04-890d-9dbaf7eb29da" (UID: "b93006ae-f193-4b04-890d-9dbaf7eb29da"). InnerVolumeSpecName "kube-api-access-4kj2n". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:07:43 crc kubenswrapper[4840]: I0311 10:07:43.627703 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4kj2n\" (UniqueName: \"kubernetes.io/projected/b93006ae-f193-4b04-890d-9dbaf7eb29da-kube-api-access-4kj2n\") on node \"crc\" DevicePath \"\"" Mar 11 10:07:43 crc kubenswrapper[4840]: I0311 10:07:43.627744 4840 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b93006ae-f193-4b04-890d-9dbaf7eb29da-utilities\") on node \"crc\" DevicePath \"\"" Mar 11 10:07:43 crc kubenswrapper[4840]: I0311 10:07:43.668197 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b93006ae-f193-4b04-890d-9dbaf7eb29da-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b93006ae-f193-4b04-890d-9dbaf7eb29da" (UID: "b93006ae-f193-4b04-890d-9dbaf7eb29da"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 10:07:43 crc kubenswrapper[4840]: I0311 10:07:43.729826 4840 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b93006ae-f193-4b04-890d-9dbaf7eb29da-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 11 10:07:43 crc kubenswrapper[4840]: I0311 10:07:43.943001 4840 generic.go:334] "Generic (PLEG): container finished" podID="b93006ae-f193-4b04-890d-9dbaf7eb29da" containerID="44f39f1a557afdd9e7f2c4aab0c771e676daaa1f4d607a9de2933bef1b776bae" exitCode=0 Mar 11 10:07:43 crc kubenswrapper[4840]: I0311 10:07:43.943108 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2rplh" Mar 11 10:07:43 crc kubenswrapper[4840]: I0311 10:07:43.943084 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2rplh" event={"ID":"b93006ae-f193-4b04-890d-9dbaf7eb29da","Type":"ContainerDied","Data":"44f39f1a557afdd9e7f2c4aab0c771e676daaa1f4d607a9de2933bef1b776bae"} Mar 11 10:07:43 crc kubenswrapper[4840]: I0311 10:07:43.943302 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2rplh" event={"ID":"b93006ae-f193-4b04-890d-9dbaf7eb29da","Type":"ContainerDied","Data":"a0b13ed37885df75a52160d1f88afa78e248a361ed4e98e608b9c0cd1a78af3c"} Mar 11 10:07:43 crc kubenswrapper[4840]: I0311 10:07:43.943331 4840 scope.go:117] "RemoveContainer" containerID="44f39f1a557afdd9e7f2c4aab0c771e676daaa1f4d607a9de2933bef1b776bae" Mar 11 10:07:43 crc kubenswrapper[4840]: I0311 10:07:43.975544 4840 scope.go:117] "RemoveContainer" containerID="fa6b6592afc212f6f1213e7506d613e732f9e85a3fa959fa8ed679f0fa57b75f" Mar 11 10:07:43 crc kubenswrapper[4840]: I0311 10:07:43.986987 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2rplh"] Mar 11 10:07:43 crc kubenswrapper[4840]: I0311 
10:07:43.993061 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2rplh"] Mar 11 10:07:43 crc kubenswrapper[4840]: I0311 10:07:43.999777 4840 scope.go:117] "RemoveContainer" containerID="7473bf63095db4ce9bae0938e4fa5cacff16174c4556455a0572932bd6dd729f" Mar 11 10:07:44 crc kubenswrapper[4840]: I0311 10:07:44.026529 4840 scope.go:117] "RemoveContainer" containerID="44f39f1a557afdd9e7f2c4aab0c771e676daaa1f4d607a9de2933bef1b776bae" Mar 11 10:07:44 crc kubenswrapper[4840]: E0311 10:07:44.026993 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44f39f1a557afdd9e7f2c4aab0c771e676daaa1f4d607a9de2933bef1b776bae\": container with ID starting with 44f39f1a557afdd9e7f2c4aab0c771e676daaa1f4d607a9de2933bef1b776bae not found: ID does not exist" containerID="44f39f1a557afdd9e7f2c4aab0c771e676daaa1f4d607a9de2933bef1b776bae" Mar 11 10:07:44 crc kubenswrapper[4840]: I0311 10:07:44.027035 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44f39f1a557afdd9e7f2c4aab0c771e676daaa1f4d607a9de2933bef1b776bae"} err="failed to get container status \"44f39f1a557afdd9e7f2c4aab0c771e676daaa1f4d607a9de2933bef1b776bae\": rpc error: code = NotFound desc = could not find container \"44f39f1a557afdd9e7f2c4aab0c771e676daaa1f4d607a9de2933bef1b776bae\": container with ID starting with 44f39f1a557afdd9e7f2c4aab0c771e676daaa1f4d607a9de2933bef1b776bae not found: ID does not exist" Mar 11 10:07:44 crc kubenswrapper[4840]: I0311 10:07:44.027062 4840 scope.go:117] "RemoveContainer" containerID="fa6b6592afc212f6f1213e7506d613e732f9e85a3fa959fa8ed679f0fa57b75f" Mar 11 10:07:44 crc kubenswrapper[4840]: E0311 10:07:44.027432 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa6b6592afc212f6f1213e7506d613e732f9e85a3fa959fa8ed679f0fa57b75f\": container with ID 
starting with fa6b6592afc212f6f1213e7506d613e732f9e85a3fa959fa8ed679f0fa57b75f not found: ID does not exist" containerID="fa6b6592afc212f6f1213e7506d613e732f9e85a3fa959fa8ed679f0fa57b75f" Mar 11 10:07:44 crc kubenswrapper[4840]: I0311 10:07:44.027517 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa6b6592afc212f6f1213e7506d613e732f9e85a3fa959fa8ed679f0fa57b75f"} err="failed to get container status \"fa6b6592afc212f6f1213e7506d613e732f9e85a3fa959fa8ed679f0fa57b75f\": rpc error: code = NotFound desc = could not find container \"fa6b6592afc212f6f1213e7506d613e732f9e85a3fa959fa8ed679f0fa57b75f\": container with ID starting with fa6b6592afc212f6f1213e7506d613e732f9e85a3fa959fa8ed679f0fa57b75f not found: ID does not exist" Mar 11 10:07:44 crc kubenswrapper[4840]: I0311 10:07:44.027577 4840 scope.go:117] "RemoveContainer" containerID="7473bf63095db4ce9bae0938e4fa5cacff16174c4556455a0572932bd6dd729f" Mar 11 10:07:44 crc kubenswrapper[4840]: E0311 10:07:44.027946 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7473bf63095db4ce9bae0938e4fa5cacff16174c4556455a0572932bd6dd729f\": container with ID starting with 7473bf63095db4ce9bae0938e4fa5cacff16174c4556455a0572932bd6dd729f not found: ID does not exist" containerID="7473bf63095db4ce9bae0938e4fa5cacff16174c4556455a0572932bd6dd729f" Mar 11 10:07:44 crc kubenswrapper[4840]: I0311 10:07:44.027978 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7473bf63095db4ce9bae0938e4fa5cacff16174c4556455a0572932bd6dd729f"} err="failed to get container status \"7473bf63095db4ce9bae0938e4fa5cacff16174c4556455a0572932bd6dd729f\": rpc error: code = NotFound desc = could not find container \"7473bf63095db4ce9bae0938e4fa5cacff16174c4556455a0572932bd6dd729f\": container with ID starting with 7473bf63095db4ce9bae0938e4fa5cacff16174c4556455a0572932bd6dd729f not found: 
ID does not exist" Mar 11 10:07:44 crc kubenswrapper[4840]: I0311 10:07:44.068540 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b93006ae-f193-4b04-890d-9dbaf7eb29da" path="/var/lib/kubelet/pods/b93006ae-f193-4b04-890d-9dbaf7eb29da/volumes" Mar 11 10:07:52 crc kubenswrapper[4840]: I0311 10:07:52.173776 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bhfrp"] Mar 11 10:07:52 crc kubenswrapper[4840]: E0311 10:07:52.174830 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b93006ae-f193-4b04-890d-9dbaf7eb29da" containerName="extract-utilities" Mar 11 10:07:52 crc kubenswrapper[4840]: I0311 10:07:52.174851 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="b93006ae-f193-4b04-890d-9dbaf7eb29da" containerName="extract-utilities" Mar 11 10:07:52 crc kubenswrapper[4840]: E0311 10:07:52.174871 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b93006ae-f193-4b04-890d-9dbaf7eb29da" containerName="registry-server" Mar 11 10:07:52 crc kubenswrapper[4840]: I0311 10:07:52.174879 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="b93006ae-f193-4b04-890d-9dbaf7eb29da" containerName="registry-server" Mar 11 10:07:52 crc kubenswrapper[4840]: E0311 10:07:52.174900 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b93006ae-f193-4b04-890d-9dbaf7eb29da" containerName="extract-content" Mar 11 10:07:52 crc kubenswrapper[4840]: I0311 10:07:52.174910 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="b93006ae-f193-4b04-890d-9dbaf7eb29da" containerName="extract-content" Mar 11 10:07:52 crc kubenswrapper[4840]: I0311 10:07:52.175072 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="b93006ae-f193-4b04-890d-9dbaf7eb29da" containerName="registry-server" Mar 11 10:07:52 crc kubenswrapper[4840]: I0311 10:07:52.176415 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bhfrp" Mar 11 10:07:52 crc kubenswrapper[4840]: I0311 10:07:52.188925 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bhfrp"] Mar 11 10:07:52 crc kubenswrapper[4840]: I0311 10:07:52.259607 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8j226\" (UniqueName: \"kubernetes.io/projected/6be23909-d6ef-4c24-917b-92559c10e15d-kube-api-access-8j226\") pod \"certified-operators-bhfrp\" (UID: \"6be23909-d6ef-4c24-917b-92559c10e15d\") " pod="openshift-marketplace/certified-operators-bhfrp" Mar 11 10:07:52 crc kubenswrapper[4840]: I0311 10:07:52.259678 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6be23909-d6ef-4c24-917b-92559c10e15d-utilities\") pod \"certified-operators-bhfrp\" (UID: \"6be23909-d6ef-4c24-917b-92559c10e15d\") " pod="openshift-marketplace/certified-operators-bhfrp" Mar 11 10:07:52 crc kubenswrapper[4840]: I0311 10:07:52.259749 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6be23909-d6ef-4c24-917b-92559c10e15d-catalog-content\") pod \"certified-operators-bhfrp\" (UID: \"6be23909-d6ef-4c24-917b-92559c10e15d\") " pod="openshift-marketplace/certified-operators-bhfrp" Mar 11 10:07:52 crc kubenswrapper[4840]: I0311 10:07:52.361501 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8j226\" (UniqueName: \"kubernetes.io/projected/6be23909-d6ef-4c24-917b-92559c10e15d-kube-api-access-8j226\") pod \"certified-operators-bhfrp\" (UID: \"6be23909-d6ef-4c24-917b-92559c10e15d\") " pod="openshift-marketplace/certified-operators-bhfrp" Mar 11 10:07:52 crc kubenswrapper[4840]: I0311 10:07:52.361580 4840 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6be23909-d6ef-4c24-917b-92559c10e15d-utilities\") pod \"certified-operators-bhfrp\" (UID: \"6be23909-d6ef-4c24-917b-92559c10e15d\") " pod="openshift-marketplace/certified-operators-bhfrp" Mar 11 10:07:52 crc kubenswrapper[4840]: I0311 10:07:52.361641 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6be23909-d6ef-4c24-917b-92559c10e15d-catalog-content\") pod \"certified-operators-bhfrp\" (UID: \"6be23909-d6ef-4c24-917b-92559c10e15d\") " pod="openshift-marketplace/certified-operators-bhfrp" Mar 11 10:07:52 crc kubenswrapper[4840]: I0311 10:07:52.362333 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6be23909-d6ef-4c24-917b-92559c10e15d-catalog-content\") pod \"certified-operators-bhfrp\" (UID: \"6be23909-d6ef-4c24-917b-92559c10e15d\") " pod="openshift-marketplace/certified-operators-bhfrp" Mar 11 10:07:52 crc kubenswrapper[4840]: I0311 10:07:52.362388 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6be23909-d6ef-4c24-917b-92559c10e15d-utilities\") pod \"certified-operators-bhfrp\" (UID: \"6be23909-d6ef-4c24-917b-92559c10e15d\") " pod="openshift-marketplace/certified-operators-bhfrp" Mar 11 10:07:52 crc kubenswrapper[4840]: I0311 10:07:52.379900 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8j226\" (UniqueName: \"kubernetes.io/projected/6be23909-d6ef-4c24-917b-92559c10e15d-kube-api-access-8j226\") pod \"certified-operators-bhfrp\" (UID: \"6be23909-d6ef-4c24-917b-92559c10e15d\") " pod="openshift-marketplace/certified-operators-bhfrp" Mar 11 10:07:52 crc kubenswrapper[4840]: I0311 10:07:52.511597 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bhfrp" Mar 11 10:07:53 crc kubenswrapper[4840]: I0311 10:07:53.064792 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bhfrp"] Mar 11 10:07:54 crc kubenswrapper[4840]: I0311 10:07:54.029507 4840 generic.go:334] "Generic (PLEG): container finished" podID="6be23909-d6ef-4c24-917b-92559c10e15d" containerID="5a376ed1ede4c3e10fb286c9a5c87a7d38427b988ef799bef1fab8edc5d9b7c6" exitCode=0 Mar 11 10:07:54 crc kubenswrapper[4840]: I0311 10:07:54.029553 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bhfrp" event={"ID":"6be23909-d6ef-4c24-917b-92559c10e15d","Type":"ContainerDied","Data":"5a376ed1ede4c3e10fb286c9a5c87a7d38427b988ef799bef1fab8edc5d9b7c6"} Mar 11 10:07:54 crc kubenswrapper[4840]: I0311 10:07:54.029583 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bhfrp" event={"ID":"6be23909-d6ef-4c24-917b-92559c10e15d","Type":"ContainerStarted","Data":"a9183b5814767aba9b9e6a3ac23c42eb5419ffa1aa2d1c0394335c369ca1bb47"} Mar 11 10:07:55 crc kubenswrapper[4840]: I0311 10:07:55.039494 4840 generic.go:334] "Generic (PLEG): container finished" podID="6be23909-d6ef-4c24-917b-92559c10e15d" containerID="4044deb31051f5fa3ca5acba40234d32f4181ece626153c665a0219035b517b0" exitCode=0 Mar 11 10:07:55 crc kubenswrapper[4840]: I0311 10:07:55.039585 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bhfrp" event={"ID":"6be23909-d6ef-4c24-917b-92559c10e15d","Type":"ContainerDied","Data":"4044deb31051f5fa3ca5acba40234d32f4181ece626153c665a0219035b517b0"} Mar 11 10:07:56 crc kubenswrapper[4840]: I0311 10:07:56.049702 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bhfrp" 
event={"ID":"6be23909-d6ef-4c24-917b-92559c10e15d","Type":"ContainerStarted","Data":"5800b1f053f98a95f0c5ae6787f1905a6dec9694dde80ff21f29e03b2535cfc2"} Mar 11 10:07:56 crc kubenswrapper[4840]: I0311 10:07:56.075652 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-bhfrp" podStartSLOduration=2.683183566 podStartE2EDuration="4.075635119s" podCreationTimestamp="2026-03-11 10:07:52 +0000 UTC" firstStartedPulling="2026-03-11 10:07:54.031133971 +0000 UTC m=+4272.696803786" lastFinishedPulling="2026-03-11 10:07:55.423585524 +0000 UTC m=+4274.089255339" observedRunningTime="2026-03-11 10:07:56.073018723 +0000 UTC m=+4274.738688538" watchObservedRunningTime="2026-03-11 10:07:56.075635119 +0000 UTC m=+4274.741304934" Mar 11 10:07:57 crc kubenswrapper[4840]: I0311 10:07:57.446536 4840 patch_prober.go:28] interesting pod/machine-config-daemon-brtht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 11 10:07:57 crc kubenswrapper[4840]: I0311 10:07:57.446625 4840 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 11 10:08:00 crc kubenswrapper[4840]: I0311 10:08:00.165792 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29553728-bvbsk"] Mar 11 10:08:00 crc kubenswrapper[4840]: I0311 10:08:00.170040 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553728-bvbsk" Mar 11 10:08:00 crc kubenswrapper[4840]: I0311 10:08:00.172590 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-q6lwc" Mar 11 10:08:00 crc kubenswrapper[4840]: I0311 10:08:00.172811 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 11 10:08:00 crc kubenswrapper[4840]: I0311 10:08:00.172923 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 11 10:08:00 crc kubenswrapper[4840]: I0311 10:08:00.177143 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553728-bvbsk"] Mar 11 10:08:00 crc kubenswrapper[4840]: I0311 10:08:00.277496 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9md2x\" (UniqueName: \"kubernetes.io/projected/02e3c919-5404-4cb9-b9ed-abc028140136-kube-api-access-9md2x\") pod \"auto-csr-approver-29553728-bvbsk\" (UID: \"02e3c919-5404-4cb9-b9ed-abc028140136\") " pod="openshift-infra/auto-csr-approver-29553728-bvbsk" Mar 11 10:08:00 crc kubenswrapper[4840]: I0311 10:08:00.378990 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9md2x\" (UniqueName: \"kubernetes.io/projected/02e3c919-5404-4cb9-b9ed-abc028140136-kube-api-access-9md2x\") pod \"auto-csr-approver-29553728-bvbsk\" (UID: \"02e3c919-5404-4cb9-b9ed-abc028140136\") " pod="openshift-infra/auto-csr-approver-29553728-bvbsk" Mar 11 10:08:00 crc kubenswrapper[4840]: I0311 10:08:00.399716 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9md2x\" (UniqueName: \"kubernetes.io/projected/02e3c919-5404-4cb9-b9ed-abc028140136-kube-api-access-9md2x\") pod \"auto-csr-approver-29553728-bvbsk\" (UID: \"02e3c919-5404-4cb9-b9ed-abc028140136\") " 
pod="openshift-infra/auto-csr-approver-29553728-bvbsk" Mar 11 10:08:00 crc kubenswrapper[4840]: I0311 10:08:00.517267 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553728-bvbsk" Mar 11 10:08:01 crc kubenswrapper[4840]: I0311 10:08:01.209700 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553728-bvbsk"] Mar 11 10:08:02 crc kubenswrapper[4840]: I0311 10:08:02.100605 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553728-bvbsk" event={"ID":"02e3c919-5404-4cb9-b9ed-abc028140136","Type":"ContainerStarted","Data":"44f6d697f87e16a09b49984249789f7c833ed5bb5b166c08145816c5793b5ca4"} Mar 11 10:08:02 crc kubenswrapper[4840]: I0311 10:08:02.511800 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-bhfrp" Mar 11 10:08:02 crc kubenswrapper[4840]: I0311 10:08:02.512140 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bhfrp" Mar 11 10:08:02 crc kubenswrapper[4840]: I0311 10:08:02.555023 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bhfrp" Mar 11 10:08:03 crc kubenswrapper[4840]: I0311 10:08:03.109357 4840 generic.go:334] "Generic (PLEG): container finished" podID="02e3c919-5404-4cb9-b9ed-abc028140136" containerID="ea3a9ed30c9a1e109fd9faed77b886fa13419888dedd8925db722967556a3231" exitCode=0 Mar 11 10:08:03 crc kubenswrapper[4840]: I0311 10:08:03.109440 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553728-bvbsk" event={"ID":"02e3c919-5404-4cb9-b9ed-abc028140136","Type":"ContainerDied","Data":"ea3a9ed30c9a1e109fd9faed77b886fa13419888dedd8925db722967556a3231"} Mar 11 10:08:03 crc kubenswrapper[4840]: I0311 10:08:03.159107 4840 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bhfrp" Mar 11 10:08:03 crc kubenswrapper[4840]: I0311 10:08:03.205938 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bhfrp"] Mar 11 10:08:04 crc kubenswrapper[4840]: I0311 10:08:04.386133 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553728-bvbsk" Mar 11 10:08:04 crc kubenswrapper[4840]: I0311 10:08:04.538835 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9md2x\" (UniqueName: \"kubernetes.io/projected/02e3c919-5404-4cb9-b9ed-abc028140136-kube-api-access-9md2x\") pod \"02e3c919-5404-4cb9-b9ed-abc028140136\" (UID: \"02e3c919-5404-4cb9-b9ed-abc028140136\") " Mar 11 10:08:04 crc kubenswrapper[4840]: I0311 10:08:04.545039 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02e3c919-5404-4cb9-b9ed-abc028140136-kube-api-access-9md2x" (OuterVolumeSpecName: "kube-api-access-9md2x") pod "02e3c919-5404-4cb9-b9ed-abc028140136" (UID: "02e3c919-5404-4cb9-b9ed-abc028140136"). InnerVolumeSpecName "kube-api-access-9md2x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:08:04 crc kubenswrapper[4840]: I0311 10:08:04.641290 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9md2x\" (UniqueName: \"kubernetes.io/projected/02e3c919-5404-4cb9-b9ed-abc028140136-kube-api-access-9md2x\") on node \"crc\" DevicePath \"\"" Mar 11 10:08:05 crc kubenswrapper[4840]: I0311 10:08:05.126513 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553728-bvbsk" event={"ID":"02e3c919-5404-4cb9-b9ed-abc028140136","Type":"ContainerDied","Data":"44f6d697f87e16a09b49984249789f7c833ed5bb5b166c08145816c5793b5ca4"} Mar 11 10:08:05 crc kubenswrapper[4840]: I0311 10:08:05.127037 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="44f6d697f87e16a09b49984249789f7c833ed5bb5b166c08145816c5793b5ca4" Mar 11 10:08:05 crc kubenswrapper[4840]: I0311 10:08:05.126721 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-bhfrp" podUID="6be23909-d6ef-4c24-917b-92559c10e15d" containerName="registry-server" containerID="cri-o://5800b1f053f98a95f0c5ae6787f1905a6dec9694dde80ff21f29e03b2535cfc2" gracePeriod=2 Mar 11 10:08:05 crc kubenswrapper[4840]: I0311 10:08:05.126605 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553728-bvbsk" Mar 11 10:08:05 crc kubenswrapper[4840]: I0311 10:08:05.462270 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29553722-bxzhv"] Mar 11 10:08:05 crc kubenswrapper[4840]: I0311 10:08:05.468806 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29553722-bxzhv"] Mar 11 10:08:05 crc kubenswrapper[4840]: I0311 10:08:05.496826 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bhfrp" Mar 11 10:08:05 crc kubenswrapper[4840]: I0311 10:08:05.558267 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6be23909-d6ef-4c24-917b-92559c10e15d-catalog-content\") pod \"6be23909-d6ef-4c24-917b-92559c10e15d\" (UID: \"6be23909-d6ef-4c24-917b-92559c10e15d\") " Mar 11 10:08:05 crc kubenswrapper[4840]: I0311 10:08:05.558358 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6be23909-d6ef-4c24-917b-92559c10e15d-utilities\") pod \"6be23909-d6ef-4c24-917b-92559c10e15d\" (UID: \"6be23909-d6ef-4c24-917b-92559c10e15d\") " Mar 11 10:08:05 crc kubenswrapper[4840]: I0311 10:08:05.558414 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8j226\" (UniqueName: \"kubernetes.io/projected/6be23909-d6ef-4c24-917b-92559c10e15d-kube-api-access-8j226\") pod \"6be23909-d6ef-4c24-917b-92559c10e15d\" (UID: \"6be23909-d6ef-4c24-917b-92559c10e15d\") " Mar 11 10:08:05 crc kubenswrapper[4840]: I0311 10:08:05.566843 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6be23909-d6ef-4c24-917b-92559c10e15d-utilities" (OuterVolumeSpecName: "utilities") pod "6be23909-d6ef-4c24-917b-92559c10e15d" (UID: "6be23909-d6ef-4c24-917b-92559c10e15d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 10:08:05 crc kubenswrapper[4840]: I0311 10:08:05.570730 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6be23909-d6ef-4c24-917b-92559c10e15d-kube-api-access-8j226" (OuterVolumeSpecName: "kube-api-access-8j226") pod "6be23909-d6ef-4c24-917b-92559c10e15d" (UID: "6be23909-d6ef-4c24-917b-92559c10e15d"). InnerVolumeSpecName "kube-api-access-8j226". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:08:05 crc kubenswrapper[4840]: I0311 10:08:05.624443 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6be23909-d6ef-4c24-917b-92559c10e15d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6be23909-d6ef-4c24-917b-92559c10e15d" (UID: "6be23909-d6ef-4c24-917b-92559c10e15d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 10:08:05 crc kubenswrapper[4840]: I0311 10:08:05.660956 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8j226\" (UniqueName: \"kubernetes.io/projected/6be23909-d6ef-4c24-917b-92559c10e15d-kube-api-access-8j226\") on node \"crc\" DevicePath \"\"" Mar 11 10:08:05 crc kubenswrapper[4840]: I0311 10:08:05.661028 4840 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6be23909-d6ef-4c24-917b-92559c10e15d-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 11 10:08:05 crc kubenswrapper[4840]: I0311 10:08:05.661053 4840 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6be23909-d6ef-4c24-917b-92559c10e15d-utilities\") on node \"crc\" DevicePath \"\"" Mar 11 10:08:06 crc kubenswrapper[4840]: I0311 10:08:06.074389 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1c81852-0f6c-449c-b7db-6ac3fc213ff9" path="/var/lib/kubelet/pods/a1c81852-0f6c-449c-b7db-6ac3fc213ff9/volumes" Mar 11 10:08:06 crc kubenswrapper[4840]: I0311 10:08:06.135812 4840 generic.go:334] "Generic (PLEG): container finished" podID="6be23909-d6ef-4c24-917b-92559c10e15d" containerID="5800b1f053f98a95f0c5ae6787f1905a6dec9694dde80ff21f29e03b2535cfc2" exitCode=0 Mar 11 10:08:06 crc kubenswrapper[4840]: I0311 10:08:06.135873 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bhfrp" 
event={"ID":"6be23909-d6ef-4c24-917b-92559c10e15d","Type":"ContainerDied","Data":"5800b1f053f98a95f0c5ae6787f1905a6dec9694dde80ff21f29e03b2535cfc2"} Mar 11 10:08:06 crc kubenswrapper[4840]: I0311 10:08:06.135894 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bhfrp" Mar 11 10:08:06 crc kubenswrapper[4840]: I0311 10:08:06.135915 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bhfrp" event={"ID":"6be23909-d6ef-4c24-917b-92559c10e15d","Type":"ContainerDied","Data":"a9183b5814767aba9b9e6a3ac23c42eb5419ffa1aa2d1c0394335c369ca1bb47"} Mar 11 10:08:06 crc kubenswrapper[4840]: I0311 10:08:06.135943 4840 scope.go:117] "RemoveContainer" containerID="5800b1f053f98a95f0c5ae6787f1905a6dec9694dde80ff21f29e03b2535cfc2" Mar 11 10:08:06 crc kubenswrapper[4840]: I0311 10:08:06.161481 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bhfrp"] Mar 11 10:08:06 crc kubenswrapper[4840]: I0311 10:08:06.163712 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-bhfrp"] Mar 11 10:08:06 crc kubenswrapper[4840]: I0311 10:08:06.169573 4840 scope.go:117] "RemoveContainer" containerID="4044deb31051f5fa3ca5acba40234d32f4181ece626153c665a0219035b517b0" Mar 11 10:08:06 crc kubenswrapper[4840]: I0311 10:08:06.196974 4840 scope.go:117] "RemoveContainer" containerID="5a376ed1ede4c3e10fb286c9a5c87a7d38427b988ef799bef1fab8edc5d9b7c6" Mar 11 10:08:06 crc kubenswrapper[4840]: I0311 10:08:06.223685 4840 scope.go:117] "RemoveContainer" containerID="5800b1f053f98a95f0c5ae6787f1905a6dec9694dde80ff21f29e03b2535cfc2" Mar 11 10:08:06 crc kubenswrapper[4840]: E0311 10:08:06.224388 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5800b1f053f98a95f0c5ae6787f1905a6dec9694dde80ff21f29e03b2535cfc2\": container 
with ID starting with 5800b1f053f98a95f0c5ae6787f1905a6dec9694dde80ff21f29e03b2535cfc2 not found: ID does not exist" containerID="5800b1f053f98a95f0c5ae6787f1905a6dec9694dde80ff21f29e03b2535cfc2" Mar 11 10:08:06 crc kubenswrapper[4840]: I0311 10:08:06.224427 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5800b1f053f98a95f0c5ae6787f1905a6dec9694dde80ff21f29e03b2535cfc2"} err="failed to get container status \"5800b1f053f98a95f0c5ae6787f1905a6dec9694dde80ff21f29e03b2535cfc2\": rpc error: code = NotFound desc = could not find container \"5800b1f053f98a95f0c5ae6787f1905a6dec9694dde80ff21f29e03b2535cfc2\": container with ID starting with 5800b1f053f98a95f0c5ae6787f1905a6dec9694dde80ff21f29e03b2535cfc2 not found: ID does not exist" Mar 11 10:08:06 crc kubenswrapper[4840]: I0311 10:08:06.224476 4840 scope.go:117] "RemoveContainer" containerID="4044deb31051f5fa3ca5acba40234d32f4181ece626153c665a0219035b517b0" Mar 11 10:08:06 crc kubenswrapper[4840]: E0311 10:08:06.224895 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4044deb31051f5fa3ca5acba40234d32f4181ece626153c665a0219035b517b0\": container with ID starting with 4044deb31051f5fa3ca5acba40234d32f4181ece626153c665a0219035b517b0 not found: ID does not exist" containerID="4044deb31051f5fa3ca5acba40234d32f4181ece626153c665a0219035b517b0" Mar 11 10:08:06 crc kubenswrapper[4840]: I0311 10:08:06.224925 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4044deb31051f5fa3ca5acba40234d32f4181ece626153c665a0219035b517b0"} err="failed to get container status \"4044deb31051f5fa3ca5acba40234d32f4181ece626153c665a0219035b517b0\": rpc error: code = NotFound desc = could not find container \"4044deb31051f5fa3ca5acba40234d32f4181ece626153c665a0219035b517b0\": container with ID starting with 4044deb31051f5fa3ca5acba40234d32f4181ece626153c665a0219035b517b0 not 
found: ID does not exist" Mar 11 10:08:06 crc kubenswrapper[4840]: I0311 10:08:06.224943 4840 scope.go:117] "RemoveContainer" containerID="5a376ed1ede4c3e10fb286c9a5c87a7d38427b988ef799bef1fab8edc5d9b7c6" Mar 11 10:08:06 crc kubenswrapper[4840]: E0311 10:08:06.227793 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a376ed1ede4c3e10fb286c9a5c87a7d38427b988ef799bef1fab8edc5d9b7c6\": container with ID starting with 5a376ed1ede4c3e10fb286c9a5c87a7d38427b988ef799bef1fab8edc5d9b7c6 not found: ID does not exist" containerID="5a376ed1ede4c3e10fb286c9a5c87a7d38427b988ef799bef1fab8edc5d9b7c6" Mar 11 10:08:06 crc kubenswrapper[4840]: I0311 10:08:06.227829 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a376ed1ede4c3e10fb286c9a5c87a7d38427b988ef799bef1fab8edc5d9b7c6"} err="failed to get container status \"5a376ed1ede4c3e10fb286c9a5c87a7d38427b988ef799bef1fab8edc5d9b7c6\": rpc error: code = NotFound desc = could not find container \"5a376ed1ede4c3e10fb286c9a5c87a7d38427b988ef799bef1fab8edc5d9b7c6\": container with ID starting with 5a376ed1ede4c3e10fb286c9a5c87a7d38427b988ef799bef1fab8edc5d9b7c6 not found: ID does not exist" Mar 11 10:08:08 crc kubenswrapper[4840]: I0311 10:08:08.071182 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6be23909-d6ef-4c24-917b-92559c10e15d" path="/var/lib/kubelet/pods/6be23909-d6ef-4c24-917b-92559c10e15d/volumes" Mar 11 10:08:11 crc kubenswrapper[4840]: I0311 10:08:11.333372 4840 scope.go:117] "RemoveContainer" containerID="0dcd3448ab05048a42225f4770e30852c20abed4c48f88ebd755597ee68d9450" Mar 11 10:08:27 crc kubenswrapper[4840]: I0311 10:08:27.446300 4840 patch_prober.go:28] interesting pod/machine-config-daemon-brtht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Mar 11 10:08:27 crc kubenswrapper[4840]: I0311 10:08:27.446881 4840 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 11 10:08:57 crc kubenswrapper[4840]: I0311 10:08:57.446192 4840 patch_prober.go:28] interesting pod/machine-config-daemon-brtht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 11 10:08:57 crc kubenswrapper[4840]: I0311 10:08:57.447194 4840 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 11 10:08:57 crc kubenswrapper[4840]: I0311 10:08:57.447251 4840 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-brtht" Mar 11 10:08:57 crc kubenswrapper[4840]: I0311 10:08:57.447881 4840 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e5dd56f8fedc37dde5b61acd8e1e840f8e8968f30b4b179b8fa6c41015c74607"} pod="openshift-machine-config-operator/machine-config-daemon-brtht" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 11 10:08:57 crc kubenswrapper[4840]: I0311 10:08:57.447942 4840 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" containerName="machine-config-daemon" containerID="cri-o://e5dd56f8fedc37dde5b61acd8e1e840f8e8968f30b4b179b8fa6c41015c74607" gracePeriod=600 Mar 11 10:08:57 crc kubenswrapper[4840]: E0311 10:08:57.571727 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 10:08:58 crc kubenswrapper[4840]: I0311 10:08:58.498326 4840 generic.go:334] "Generic (PLEG): container finished" podID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" containerID="e5dd56f8fedc37dde5b61acd8e1e840f8e8968f30b4b179b8fa6c41015c74607" exitCode=0 Mar 11 10:08:58 crc kubenswrapper[4840]: I0311 10:08:58.498414 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-brtht" event={"ID":"8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d","Type":"ContainerDied","Data":"e5dd56f8fedc37dde5b61acd8e1e840f8e8968f30b4b179b8fa6c41015c74607"} Mar 11 10:08:58 crc kubenswrapper[4840]: I0311 10:08:58.498711 4840 scope.go:117] "RemoveContainer" containerID="f2e23917570701ed55189d8a885e5297e6976e0276abf2611140252cd299be18" Mar 11 10:08:58 crc kubenswrapper[4840]: I0311 10:08:58.501556 4840 scope.go:117] "RemoveContainer" containerID="e5dd56f8fedc37dde5b61acd8e1e840f8e8968f30b4b179b8fa6c41015c74607" Mar 11 10:08:58 crc kubenswrapper[4840]: E0311 10:08:58.501941 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 10:09:11 crc kubenswrapper[4840]: I0311 10:09:11.060396 4840 scope.go:117] "RemoveContainer" containerID="e5dd56f8fedc37dde5b61acd8e1e840f8e8968f30b4b179b8fa6c41015c74607" Mar 11 10:09:11 crc kubenswrapper[4840]: E0311 10:09:11.061253 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 10:09:26 crc kubenswrapper[4840]: I0311 10:09:26.060010 4840 scope.go:117] "RemoveContainer" containerID="e5dd56f8fedc37dde5b61acd8e1e840f8e8968f30b4b179b8fa6c41015c74607" Mar 11 10:09:26 crc kubenswrapper[4840]: E0311 10:09:26.060743 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 10:09:40 crc kubenswrapper[4840]: I0311 10:09:40.061573 4840 scope.go:117] "RemoveContainer" containerID="e5dd56f8fedc37dde5b61acd8e1e840f8e8968f30b4b179b8fa6c41015c74607" Mar 11 10:09:40 crc kubenswrapper[4840]: E0311 10:09:40.062450 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 10:09:55 crc kubenswrapper[4840]: I0311 10:09:55.061325 4840 scope.go:117] "RemoveContainer" containerID="e5dd56f8fedc37dde5b61acd8e1e840f8e8968f30b4b179b8fa6c41015c74607" Mar 11 10:09:55 crc kubenswrapper[4840]: E0311 10:09:55.062219 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 10:10:00 crc kubenswrapper[4840]: I0311 10:10:00.139449 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29553730-tcxxk"] Mar 11 10:10:00 crc kubenswrapper[4840]: E0311 10:10:00.140169 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02e3c919-5404-4cb9-b9ed-abc028140136" containerName="oc" Mar 11 10:10:00 crc kubenswrapper[4840]: I0311 10:10:00.140187 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="02e3c919-5404-4cb9-b9ed-abc028140136" containerName="oc" Mar 11 10:10:00 crc kubenswrapper[4840]: E0311 10:10:00.140208 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6be23909-d6ef-4c24-917b-92559c10e15d" containerName="registry-server" Mar 11 10:10:00 crc kubenswrapper[4840]: I0311 10:10:00.140216 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="6be23909-d6ef-4c24-917b-92559c10e15d" containerName="registry-server" Mar 11 10:10:00 crc kubenswrapper[4840]: E0311 10:10:00.140234 4840 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="6be23909-d6ef-4c24-917b-92559c10e15d" containerName="extract-content" Mar 11 10:10:00 crc kubenswrapper[4840]: I0311 10:10:00.140241 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="6be23909-d6ef-4c24-917b-92559c10e15d" containerName="extract-content" Mar 11 10:10:00 crc kubenswrapper[4840]: E0311 10:10:00.140256 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6be23909-d6ef-4c24-917b-92559c10e15d" containerName="extract-utilities" Mar 11 10:10:00 crc kubenswrapper[4840]: I0311 10:10:00.140264 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="6be23909-d6ef-4c24-917b-92559c10e15d" containerName="extract-utilities" Mar 11 10:10:00 crc kubenswrapper[4840]: I0311 10:10:00.140433 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="6be23909-d6ef-4c24-917b-92559c10e15d" containerName="registry-server" Mar 11 10:10:00 crc kubenswrapper[4840]: I0311 10:10:00.140459 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="02e3c919-5404-4cb9-b9ed-abc028140136" containerName="oc" Mar 11 10:10:00 crc kubenswrapper[4840]: I0311 10:10:00.141115 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553730-tcxxk" Mar 11 10:10:00 crc kubenswrapper[4840]: I0311 10:10:00.148079 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-q6lwc" Mar 11 10:10:00 crc kubenswrapper[4840]: I0311 10:10:00.148324 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 11 10:10:00 crc kubenswrapper[4840]: I0311 10:10:00.148401 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 11 10:10:00 crc kubenswrapper[4840]: I0311 10:10:00.151126 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553730-tcxxk"] Mar 11 10:10:00 crc kubenswrapper[4840]: I0311 10:10:00.312194 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dhj7\" (UniqueName: \"kubernetes.io/projected/2e259382-393b-46c3-bc59-5be19c0c9fc4-kube-api-access-5dhj7\") pod \"auto-csr-approver-29553730-tcxxk\" (UID: \"2e259382-393b-46c3-bc59-5be19c0c9fc4\") " pod="openshift-infra/auto-csr-approver-29553730-tcxxk" Mar 11 10:10:00 crc kubenswrapper[4840]: I0311 10:10:00.416743 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dhj7\" (UniqueName: \"kubernetes.io/projected/2e259382-393b-46c3-bc59-5be19c0c9fc4-kube-api-access-5dhj7\") pod \"auto-csr-approver-29553730-tcxxk\" (UID: \"2e259382-393b-46c3-bc59-5be19c0c9fc4\") " pod="openshift-infra/auto-csr-approver-29553730-tcxxk" Mar 11 10:10:00 crc kubenswrapper[4840]: I0311 10:10:00.449873 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dhj7\" (UniqueName: \"kubernetes.io/projected/2e259382-393b-46c3-bc59-5be19c0c9fc4-kube-api-access-5dhj7\") pod \"auto-csr-approver-29553730-tcxxk\" (UID: \"2e259382-393b-46c3-bc59-5be19c0c9fc4\") " 
pod="openshift-infra/auto-csr-approver-29553730-tcxxk" Mar 11 10:10:00 crc kubenswrapper[4840]: I0311 10:10:00.465075 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553730-tcxxk" Mar 11 10:10:01 crc kubenswrapper[4840]: I0311 10:10:01.145361 4840 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 11 10:10:01 crc kubenswrapper[4840]: I0311 10:10:01.149993 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553730-tcxxk"] Mar 11 10:10:01 crc kubenswrapper[4840]: I0311 10:10:01.986717 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553730-tcxxk" event={"ID":"2e259382-393b-46c3-bc59-5be19c0c9fc4","Type":"ContainerStarted","Data":"771e152dd3ad2f9e673f6235bf0ebc0f37787825c0592414faed4887c049d687"} Mar 11 10:10:02 crc kubenswrapper[4840]: I0311 10:10:02.997723 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553730-tcxxk" event={"ID":"2e259382-393b-46c3-bc59-5be19c0c9fc4","Type":"ContainerStarted","Data":"70aa32a809d369b51115f4ae50b895173716b2e4000431a17327f9dabf7e0b90"} Mar 11 10:10:03 crc kubenswrapper[4840]: I0311 10:10:03.019017 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29553730-tcxxk" podStartSLOduration=1.514157144 podStartE2EDuration="3.018995522s" podCreationTimestamp="2026-03-11 10:10:00 +0000 UTC" firstStartedPulling="2026-03-11 10:10:01.145114888 +0000 UTC m=+4399.810784703" lastFinishedPulling="2026-03-11 10:10:02.649953266 +0000 UTC m=+4401.315623081" observedRunningTime="2026-03-11 10:10:03.01064404 +0000 UTC m=+4401.676313875" watchObservedRunningTime="2026-03-11 10:10:03.018995522 +0000 UTC m=+4401.684665337" Mar 11 10:10:04 crc kubenswrapper[4840]: I0311 10:10:04.009093 4840 generic.go:334] "Generic (PLEG): container finished" 
podID="2e259382-393b-46c3-bc59-5be19c0c9fc4" containerID="70aa32a809d369b51115f4ae50b895173716b2e4000431a17327f9dabf7e0b90" exitCode=0 Mar 11 10:10:04 crc kubenswrapper[4840]: I0311 10:10:04.009225 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553730-tcxxk" event={"ID":"2e259382-393b-46c3-bc59-5be19c0c9fc4","Type":"ContainerDied","Data":"70aa32a809d369b51115f4ae50b895173716b2e4000431a17327f9dabf7e0b90"} Mar 11 10:10:05 crc kubenswrapper[4840]: I0311 10:10:05.283177 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553730-tcxxk" Mar 11 10:10:05 crc kubenswrapper[4840]: I0311 10:10:05.385148 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5dhj7\" (UniqueName: \"kubernetes.io/projected/2e259382-393b-46c3-bc59-5be19c0c9fc4-kube-api-access-5dhj7\") pod \"2e259382-393b-46c3-bc59-5be19c0c9fc4\" (UID: \"2e259382-393b-46c3-bc59-5be19c0c9fc4\") " Mar 11 10:10:05 crc kubenswrapper[4840]: I0311 10:10:05.390879 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e259382-393b-46c3-bc59-5be19c0c9fc4-kube-api-access-5dhj7" (OuterVolumeSpecName: "kube-api-access-5dhj7") pod "2e259382-393b-46c3-bc59-5be19c0c9fc4" (UID: "2e259382-393b-46c3-bc59-5be19c0c9fc4"). InnerVolumeSpecName "kube-api-access-5dhj7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:10:05 crc kubenswrapper[4840]: I0311 10:10:05.488272 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5dhj7\" (UniqueName: \"kubernetes.io/projected/2e259382-393b-46c3-bc59-5be19c0c9fc4-kube-api-access-5dhj7\") on node \"crc\" DevicePath \"\"" Mar 11 10:10:06 crc kubenswrapper[4840]: I0311 10:10:06.023853 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553730-tcxxk" event={"ID":"2e259382-393b-46c3-bc59-5be19c0c9fc4","Type":"ContainerDied","Data":"771e152dd3ad2f9e673f6235bf0ebc0f37787825c0592414faed4887c049d687"} Mar 11 10:10:06 crc kubenswrapper[4840]: I0311 10:10:06.023935 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553730-tcxxk" Mar 11 10:10:06 crc kubenswrapper[4840]: I0311 10:10:06.023948 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="771e152dd3ad2f9e673f6235bf0ebc0f37787825c0592414faed4887c049d687" Mar 11 10:10:06 crc kubenswrapper[4840]: I0311 10:10:06.084752 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29553724-sxqq5"] Mar 11 10:10:06 crc kubenswrapper[4840]: I0311 10:10:06.090074 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29553724-sxqq5"] Mar 11 10:10:08 crc kubenswrapper[4840]: I0311 10:10:08.068129 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6145b333-1eac-413a-9a6b-36e9118266d4" path="/var/lib/kubelet/pods/6145b333-1eac-413a-9a6b-36e9118266d4/volumes" Mar 11 10:10:09 crc kubenswrapper[4840]: I0311 10:10:09.060489 4840 scope.go:117] "RemoveContainer" containerID="e5dd56f8fedc37dde5b61acd8e1e840f8e8968f30b4b179b8fa6c41015c74607" Mar 11 10:10:09 crc kubenswrapper[4840]: E0311 10:10:09.060758 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 10:10:11 crc kubenswrapper[4840]: I0311 10:10:11.436918 4840 scope.go:117] "RemoveContainer" containerID="fdf443aee2249fab43e720003aa4998d1596a7ea78a0abe8db74fe2dbc09f07d" Mar 11 10:10:22 crc kubenswrapper[4840]: I0311 10:10:22.075576 4840 scope.go:117] "RemoveContainer" containerID="e5dd56f8fedc37dde5b61acd8e1e840f8e8968f30b4b179b8fa6c41015c74607" Mar 11 10:10:22 crc kubenswrapper[4840]: E0311 10:10:22.076385 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 10:10:35 crc kubenswrapper[4840]: I0311 10:10:35.060770 4840 scope.go:117] "RemoveContainer" containerID="e5dd56f8fedc37dde5b61acd8e1e840f8e8968f30b4b179b8fa6c41015c74607" Mar 11 10:10:35 crc kubenswrapper[4840]: E0311 10:10:35.061446 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 10:10:36 crc kubenswrapper[4840]: I0311 10:10:36.177656 4840 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-marketplace-7jkg4"] Mar 11 10:10:36 crc kubenswrapper[4840]: E0311 10:10:36.178396 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e259382-393b-46c3-bc59-5be19c0c9fc4" containerName="oc" Mar 11 10:10:36 crc kubenswrapper[4840]: I0311 10:10:36.178413 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e259382-393b-46c3-bc59-5be19c0c9fc4" containerName="oc" Mar 11 10:10:36 crc kubenswrapper[4840]: I0311 10:10:36.178635 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e259382-393b-46c3-bc59-5be19c0c9fc4" containerName="oc" Mar 11 10:10:36 crc kubenswrapper[4840]: I0311 10:10:36.180385 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7jkg4" Mar 11 10:10:36 crc kubenswrapper[4840]: I0311 10:10:36.195422 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7jkg4"] Mar 11 10:10:36 crc kubenswrapper[4840]: I0311 10:10:36.347382 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1c37326-3a09-4c06-a110-ec6382c90ebd-utilities\") pod \"redhat-marketplace-7jkg4\" (UID: \"b1c37326-3a09-4c06-a110-ec6382c90ebd\") " pod="openshift-marketplace/redhat-marketplace-7jkg4" Mar 11 10:10:36 crc kubenswrapper[4840]: I0311 10:10:36.347493 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1c37326-3a09-4c06-a110-ec6382c90ebd-catalog-content\") pod \"redhat-marketplace-7jkg4\" (UID: \"b1c37326-3a09-4c06-a110-ec6382c90ebd\") " pod="openshift-marketplace/redhat-marketplace-7jkg4" Mar 11 10:10:36 crc kubenswrapper[4840]: I0311 10:10:36.347564 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2t9q\" (UniqueName: 
\"kubernetes.io/projected/b1c37326-3a09-4c06-a110-ec6382c90ebd-kube-api-access-g2t9q\") pod \"redhat-marketplace-7jkg4\" (UID: \"b1c37326-3a09-4c06-a110-ec6382c90ebd\") " pod="openshift-marketplace/redhat-marketplace-7jkg4" Mar 11 10:10:36 crc kubenswrapper[4840]: I0311 10:10:36.448876 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1c37326-3a09-4c06-a110-ec6382c90ebd-catalog-content\") pod \"redhat-marketplace-7jkg4\" (UID: \"b1c37326-3a09-4c06-a110-ec6382c90ebd\") " pod="openshift-marketplace/redhat-marketplace-7jkg4" Mar 11 10:10:36 crc kubenswrapper[4840]: I0311 10:10:36.448954 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2t9q\" (UniqueName: \"kubernetes.io/projected/b1c37326-3a09-4c06-a110-ec6382c90ebd-kube-api-access-g2t9q\") pod \"redhat-marketplace-7jkg4\" (UID: \"b1c37326-3a09-4c06-a110-ec6382c90ebd\") " pod="openshift-marketplace/redhat-marketplace-7jkg4" Mar 11 10:10:36 crc kubenswrapper[4840]: I0311 10:10:36.449049 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1c37326-3a09-4c06-a110-ec6382c90ebd-utilities\") pod \"redhat-marketplace-7jkg4\" (UID: \"b1c37326-3a09-4c06-a110-ec6382c90ebd\") " pod="openshift-marketplace/redhat-marketplace-7jkg4" Mar 11 10:10:36 crc kubenswrapper[4840]: I0311 10:10:36.449680 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1c37326-3a09-4c06-a110-ec6382c90ebd-utilities\") pod \"redhat-marketplace-7jkg4\" (UID: \"b1c37326-3a09-4c06-a110-ec6382c90ebd\") " pod="openshift-marketplace/redhat-marketplace-7jkg4" Mar 11 10:10:36 crc kubenswrapper[4840]: I0311 10:10:36.450183 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/b1c37326-3a09-4c06-a110-ec6382c90ebd-catalog-content\") pod \"redhat-marketplace-7jkg4\" (UID: \"b1c37326-3a09-4c06-a110-ec6382c90ebd\") " pod="openshift-marketplace/redhat-marketplace-7jkg4" Mar 11 10:10:36 crc kubenswrapper[4840]: I0311 10:10:36.473637 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2t9q\" (UniqueName: \"kubernetes.io/projected/b1c37326-3a09-4c06-a110-ec6382c90ebd-kube-api-access-g2t9q\") pod \"redhat-marketplace-7jkg4\" (UID: \"b1c37326-3a09-4c06-a110-ec6382c90ebd\") " pod="openshift-marketplace/redhat-marketplace-7jkg4" Mar 11 10:10:36 crc kubenswrapper[4840]: I0311 10:10:36.556480 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7jkg4" Mar 11 10:10:36 crc kubenswrapper[4840]: I0311 10:10:36.812745 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7jkg4"] Mar 11 10:10:37 crc kubenswrapper[4840]: I0311 10:10:37.245794 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7jkg4" event={"ID":"b1c37326-3a09-4c06-a110-ec6382c90ebd","Type":"ContainerStarted","Data":"1a0d38affb63058bb270c210d2118e53f87d16b54d8116f35b89e30eab37d7e0"} Mar 11 10:10:38 crc kubenswrapper[4840]: I0311 10:10:38.255535 4840 generic.go:334] "Generic (PLEG): container finished" podID="b1c37326-3a09-4c06-a110-ec6382c90ebd" containerID="9244907ce3d8703e1cc3b8f8d4439d01e9cab80c38ebc415908bde88fae371d6" exitCode=0 Mar 11 10:10:38 crc kubenswrapper[4840]: I0311 10:10:38.255606 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7jkg4" event={"ID":"b1c37326-3a09-4c06-a110-ec6382c90ebd","Type":"ContainerDied","Data":"9244907ce3d8703e1cc3b8f8d4439d01e9cab80c38ebc415908bde88fae371d6"} Mar 11 10:10:38 crc kubenswrapper[4840]: I0311 10:10:38.575440 4840 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/community-operators-wg7kx"] Mar 11 10:10:38 crc kubenswrapper[4840]: I0311 10:10:38.576960 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wg7kx" Mar 11 10:10:38 crc kubenswrapper[4840]: I0311 10:10:38.582601 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/310928c9-4184-4a96-99e4-87f09d867b85-utilities\") pod \"community-operators-wg7kx\" (UID: \"310928c9-4184-4a96-99e4-87f09d867b85\") " pod="openshift-marketplace/community-operators-wg7kx" Mar 11 10:10:38 crc kubenswrapper[4840]: I0311 10:10:38.582899 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/310928c9-4184-4a96-99e4-87f09d867b85-catalog-content\") pod \"community-operators-wg7kx\" (UID: \"310928c9-4184-4a96-99e4-87f09d867b85\") " pod="openshift-marketplace/community-operators-wg7kx" Mar 11 10:10:38 crc kubenswrapper[4840]: I0311 10:10:38.582986 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f57rr\" (UniqueName: \"kubernetes.io/projected/310928c9-4184-4a96-99e4-87f09d867b85-kube-api-access-f57rr\") pod \"community-operators-wg7kx\" (UID: \"310928c9-4184-4a96-99e4-87f09d867b85\") " pod="openshift-marketplace/community-operators-wg7kx" Mar 11 10:10:38 crc kubenswrapper[4840]: I0311 10:10:38.592811 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wg7kx"] Mar 11 10:10:38 crc kubenswrapper[4840]: I0311 10:10:38.686022 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/310928c9-4184-4a96-99e4-87f09d867b85-catalog-content\") pod \"community-operators-wg7kx\" (UID: 
\"310928c9-4184-4a96-99e4-87f09d867b85\") " pod="openshift-marketplace/community-operators-wg7kx" Mar 11 10:10:38 crc kubenswrapper[4840]: I0311 10:10:38.686099 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f57rr\" (UniqueName: \"kubernetes.io/projected/310928c9-4184-4a96-99e4-87f09d867b85-kube-api-access-f57rr\") pod \"community-operators-wg7kx\" (UID: \"310928c9-4184-4a96-99e4-87f09d867b85\") " pod="openshift-marketplace/community-operators-wg7kx" Mar 11 10:10:38 crc kubenswrapper[4840]: I0311 10:10:38.686156 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/310928c9-4184-4a96-99e4-87f09d867b85-utilities\") pod \"community-operators-wg7kx\" (UID: \"310928c9-4184-4a96-99e4-87f09d867b85\") " pod="openshift-marketplace/community-operators-wg7kx" Mar 11 10:10:38 crc kubenswrapper[4840]: I0311 10:10:38.686690 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/310928c9-4184-4a96-99e4-87f09d867b85-utilities\") pod \"community-operators-wg7kx\" (UID: \"310928c9-4184-4a96-99e4-87f09d867b85\") " pod="openshift-marketplace/community-operators-wg7kx" Mar 11 10:10:38 crc kubenswrapper[4840]: I0311 10:10:38.687052 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/310928c9-4184-4a96-99e4-87f09d867b85-catalog-content\") pod \"community-operators-wg7kx\" (UID: \"310928c9-4184-4a96-99e4-87f09d867b85\") " pod="openshift-marketplace/community-operators-wg7kx" Mar 11 10:10:38 crc kubenswrapper[4840]: I0311 10:10:38.719785 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f57rr\" (UniqueName: \"kubernetes.io/projected/310928c9-4184-4a96-99e4-87f09d867b85-kube-api-access-f57rr\") pod \"community-operators-wg7kx\" (UID: 
\"310928c9-4184-4a96-99e4-87f09d867b85\") " pod="openshift-marketplace/community-operators-wg7kx" Mar 11 10:10:38 crc kubenswrapper[4840]: I0311 10:10:38.954655 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wg7kx" Mar 11 10:10:39 crc kubenswrapper[4840]: I0311 10:10:39.231082 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wg7kx"] Mar 11 10:10:39 crc kubenswrapper[4840]: W0311 10:10:39.235389 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod310928c9_4184_4a96_99e4_87f09d867b85.slice/crio-ad9e7153ae5679979415950e4f7e9d70a1aeb019ba34c7d0b61767da35a30ccd WatchSource:0}: Error finding container ad9e7153ae5679979415950e4f7e9d70a1aeb019ba34c7d0b61767da35a30ccd: Status 404 returned error can't find the container with id ad9e7153ae5679979415950e4f7e9d70a1aeb019ba34c7d0b61767da35a30ccd Mar 11 10:10:39 crc kubenswrapper[4840]: I0311 10:10:39.268098 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wg7kx" event={"ID":"310928c9-4184-4a96-99e4-87f09d867b85","Type":"ContainerStarted","Data":"ad9e7153ae5679979415950e4f7e9d70a1aeb019ba34c7d0b61767da35a30ccd"} Mar 11 10:10:40 crc kubenswrapper[4840]: I0311 10:10:40.277363 4840 generic.go:334] "Generic (PLEG): container finished" podID="310928c9-4184-4a96-99e4-87f09d867b85" containerID="e8d6097e01891ca510d8afa3da734c3271c75f2bbca5c621624e1e439236624a" exitCode=0 Mar 11 10:10:40 crc kubenswrapper[4840]: I0311 10:10:40.277444 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wg7kx" event={"ID":"310928c9-4184-4a96-99e4-87f09d867b85","Type":"ContainerDied","Data":"e8d6097e01891ca510d8afa3da734c3271c75f2bbca5c621624e1e439236624a"} Mar 11 10:10:40 crc kubenswrapper[4840]: I0311 10:10:40.280449 4840 generic.go:334] "Generic 
(PLEG): container finished" podID="b1c37326-3a09-4c06-a110-ec6382c90ebd" containerID="7b5d5d9f361cc3ffd8353393eb720e146984169d7e4e3c73a823951c4b84fd31" exitCode=0 Mar 11 10:10:40 crc kubenswrapper[4840]: I0311 10:10:40.280491 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7jkg4" event={"ID":"b1c37326-3a09-4c06-a110-ec6382c90ebd","Type":"ContainerDied","Data":"7b5d5d9f361cc3ffd8353393eb720e146984169d7e4e3c73a823951c4b84fd31"} Mar 11 10:10:41 crc kubenswrapper[4840]: I0311 10:10:41.289199 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7jkg4" event={"ID":"b1c37326-3a09-4c06-a110-ec6382c90ebd","Type":"ContainerStarted","Data":"7db7e9e44661f4a246aa2ea9c97560ae2ad49fd9ddfe5bedee8f19c2458ed6fb"} Mar 11 10:10:41 crc kubenswrapper[4840]: I0311 10:10:41.291337 4840 generic.go:334] "Generic (PLEG): container finished" podID="310928c9-4184-4a96-99e4-87f09d867b85" containerID="db5c63a45433cf6a05c571aa1ec5c36ea300bd65b3ef4a7e2636d68152b46b6f" exitCode=0 Mar 11 10:10:41 crc kubenswrapper[4840]: I0311 10:10:41.291378 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wg7kx" event={"ID":"310928c9-4184-4a96-99e4-87f09d867b85","Type":"ContainerDied","Data":"db5c63a45433cf6a05c571aa1ec5c36ea300bd65b3ef4a7e2636d68152b46b6f"} Mar 11 10:10:41 crc kubenswrapper[4840]: I0311 10:10:41.312328 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-7jkg4" podStartSLOduration=2.806122683 podStartE2EDuration="5.312311163s" podCreationTimestamp="2026-03-11 10:10:36 +0000 UTC" firstStartedPulling="2026-03-11 10:10:38.258064338 +0000 UTC m=+4436.923734153" lastFinishedPulling="2026-03-11 10:10:40.764252818 +0000 UTC m=+4439.429922633" observedRunningTime="2026-03-11 10:10:41.308202129 +0000 UTC m=+4439.973871944" watchObservedRunningTime="2026-03-11 10:10:41.312311163 +0000 UTC 
m=+4439.977980978" Mar 11 10:10:42 crc kubenswrapper[4840]: I0311 10:10:42.303148 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wg7kx" event={"ID":"310928c9-4184-4a96-99e4-87f09d867b85","Type":"ContainerStarted","Data":"796f2951d38da348e608cebc4e47645e66e4b5b38389cdea1832499851a0e820"} Mar 11 10:10:42 crc kubenswrapper[4840]: I0311 10:10:42.325121 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wg7kx" podStartSLOduration=2.798525115 podStartE2EDuration="4.325102984s" podCreationTimestamp="2026-03-11 10:10:38 +0000 UTC" firstStartedPulling="2026-03-11 10:10:40.279810073 +0000 UTC m=+4438.945479888" lastFinishedPulling="2026-03-11 10:10:41.806387922 +0000 UTC m=+4440.472057757" observedRunningTime="2026-03-11 10:10:42.322869197 +0000 UTC m=+4440.988539022" watchObservedRunningTime="2026-03-11 10:10:42.325102984 +0000 UTC m=+4440.990772799" Mar 11 10:10:46 crc kubenswrapper[4840]: I0311 10:10:46.556879 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-7jkg4" Mar 11 10:10:46 crc kubenswrapper[4840]: I0311 10:10:46.557208 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-7jkg4" Mar 11 10:10:46 crc kubenswrapper[4840]: I0311 10:10:46.606237 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-7jkg4" Mar 11 10:10:47 crc kubenswrapper[4840]: I0311 10:10:47.391722 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-7jkg4" Mar 11 10:10:47 crc kubenswrapper[4840]: I0311 10:10:47.454051 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7jkg4"] Mar 11 10:10:48 crc kubenswrapper[4840]: I0311 10:10:48.955676 4840 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wg7kx" Mar 11 10:10:48 crc kubenswrapper[4840]: I0311 10:10:48.955906 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wg7kx" Mar 11 10:10:48 crc kubenswrapper[4840]: I0311 10:10:48.996837 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wg7kx" Mar 11 10:10:49 crc kubenswrapper[4840]: I0311 10:10:49.356748 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-7jkg4" podUID="b1c37326-3a09-4c06-a110-ec6382c90ebd" containerName="registry-server" containerID="cri-o://7db7e9e44661f4a246aa2ea9c97560ae2ad49fd9ddfe5bedee8f19c2458ed6fb" gracePeriod=2 Mar 11 10:10:49 crc kubenswrapper[4840]: I0311 10:10:49.399266 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wg7kx" Mar 11 10:10:49 crc kubenswrapper[4840]: I0311 10:10:49.780557 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7jkg4" Mar 11 10:10:49 crc kubenswrapper[4840]: I0311 10:10:49.959887 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g2t9q\" (UniqueName: \"kubernetes.io/projected/b1c37326-3a09-4c06-a110-ec6382c90ebd-kube-api-access-g2t9q\") pod \"b1c37326-3a09-4c06-a110-ec6382c90ebd\" (UID: \"b1c37326-3a09-4c06-a110-ec6382c90ebd\") " Mar 11 10:10:49 crc kubenswrapper[4840]: I0311 10:10:49.960009 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1c37326-3a09-4c06-a110-ec6382c90ebd-catalog-content\") pod \"b1c37326-3a09-4c06-a110-ec6382c90ebd\" (UID: \"b1c37326-3a09-4c06-a110-ec6382c90ebd\") " Mar 11 10:10:49 crc kubenswrapper[4840]: I0311 10:10:49.960905 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1c37326-3a09-4c06-a110-ec6382c90ebd-utilities" (OuterVolumeSpecName: "utilities") pod "b1c37326-3a09-4c06-a110-ec6382c90ebd" (UID: "b1c37326-3a09-4c06-a110-ec6382c90ebd"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 10:10:49 crc kubenswrapper[4840]: I0311 10:10:49.960047 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1c37326-3a09-4c06-a110-ec6382c90ebd-utilities\") pod \"b1c37326-3a09-4c06-a110-ec6382c90ebd\" (UID: \"b1c37326-3a09-4c06-a110-ec6382c90ebd\") " Mar 11 10:10:49 crc kubenswrapper[4840]: I0311 10:10:49.962397 4840 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1c37326-3a09-4c06-a110-ec6382c90ebd-utilities\") on node \"crc\" DevicePath \"\"" Mar 11 10:10:49 crc kubenswrapper[4840]: I0311 10:10:49.965193 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1c37326-3a09-4c06-a110-ec6382c90ebd-kube-api-access-g2t9q" (OuterVolumeSpecName: "kube-api-access-g2t9q") pod "b1c37326-3a09-4c06-a110-ec6382c90ebd" (UID: "b1c37326-3a09-4c06-a110-ec6382c90ebd"). InnerVolumeSpecName "kube-api-access-g2t9q". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:10:49 crc kubenswrapper[4840]: I0311 10:10:49.995824 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1c37326-3a09-4c06-a110-ec6382c90ebd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b1c37326-3a09-4c06-a110-ec6382c90ebd" (UID: "b1c37326-3a09-4c06-a110-ec6382c90ebd"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 10:10:50 crc kubenswrapper[4840]: I0311 10:10:50.062056 4840 scope.go:117] "RemoveContainer" containerID="e5dd56f8fedc37dde5b61acd8e1e840f8e8968f30b4b179b8fa6c41015c74607" Mar 11 10:10:50 crc kubenswrapper[4840]: E0311 10:10:50.062307 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 10:10:50 crc kubenswrapper[4840]: I0311 10:10:50.063236 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g2t9q\" (UniqueName: \"kubernetes.io/projected/b1c37326-3a09-4c06-a110-ec6382c90ebd-kube-api-access-g2t9q\") on node \"crc\" DevicePath \"\"" Mar 11 10:10:50 crc kubenswrapper[4840]: I0311 10:10:50.063272 4840 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1c37326-3a09-4c06-a110-ec6382c90ebd-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 11 10:10:50 crc kubenswrapper[4840]: I0311 10:10:50.365390 4840 generic.go:334] "Generic (PLEG): container finished" podID="b1c37326-3a09-4c06-a110-ec6382c90ebd" containerID="7db7e9e44661f4a246aa2ea9c97560ae2ad49fd9ddfe5bedee8f19c2458ed6fb" exitCode=0 Mar 11 10:10:50 crc kubenswrapper[4840]: I0311 10:10:50.365489 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7jkg4" Mar 11 10:10:50 crc kubenswrapper[4840]: I0311 10:10:50.365502 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7jkg4" event={"ID":"b1c37326-3a09-4c06-a110-ec6382c90ebd","Type":"ContainerDied","Data":"7db7e9e44661f4a246aa2ea9c97560ae2ad49fd9ddfe5bedee8f19c2458ed6fb"} Mar 11 10:10:50 crc kubenswrapper[4840]: I0311 10:10:50.365556 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7jkg4" event={"ID":"b1c37326-3a09-4c06-a110-ec6382c90ebd","Type":"ContainerDied","Data":"1a0d38affb63058bb270c210d2118e53f87d16b54d8116f35b89e30eab37d7e0"} Mar 11 10:10:50 crc kubenswrapper[4840]: I0311 10:10:50.365579 4840 scope.go:117] "RemoveContainer" containerID="7db7e9e44661f4a246aa2ea9c97560ae2ad49fd9ddfe5bedee8f19c2458ed6fb" Mar 11 10:10:50 crc kubenswrapper[4840]: I0311 10:10:50.391942 4840 scope.go:117] "RemoveContainer" containerID="7b5d5d9f361cc3ffd8353393eb720e146984169d7e4e3c73a823951c4b84fd31" Mar 11 10:10:50 crc kubenswrapper[4840]: I0311 10:10:50.392304 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7jkg4"] Mar 11 10:10:50 crc kubenswrapper[4840]: I0311 10:10:50.400131 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-7jkg4"] Mar 11 10:10:50 crc kubenswrapper[4840]: I0311 10:10:50.413546 4840 scope.go:117] "RemoveContainer" containerID="9244907ce3d8703e1cc3b8f8d4439d01e9cab80c38ebc415908bde88fae371d6" Mar 11 10:10:50 crc kubenswrapper[4840]: I0311 10:10:50.441794 4840 scope.go:117] "RemoveContainer" containerID="7db7e9e44661f4a246aa2ea9c97560ae2ad49fd9ddfe5bedee8f19c2458ed6fb" Mar 11 10:10:50 crc kubenswrapper[4840]: E0311 10:10:50.442552 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"7db7e9e44661f4a246aa2ea9c97560ae2ad49fd9ddfe5bedee8f19c2458ed6fb\": container with ID starting with 7db7e9e44661f4a246aa2ea9c97560ae2ad49fd9ddfe5bedee8f19c2458ed6fb not found: ID does not exist" containerID="7db7e9e44661f4a246aa2ea9c97560ae2ad49fd9ddfe5bedee8f19c2458ed6fb" Mar 11 10:10:50 crc kubenswrapper[4840]: I0311 10:10:50.442635 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7db7e9e44661f4a246aa2ea9c97560ae2ad49fd9ddfe5bedee8f19c2458ed6fb"} err="failed to get container status \"7db7e9e44661f4a246aa2ea9c97560ae2ad49fd9ddfe5bedee8f19c2458ed6fb\": rpc error: code = NotFound desc = could not find container \"7db7e9e44661f4a246aa2ea9c97560ae2ad49fd9ddfe5bedee8f19c2458ed6fb\": container with ID starting with 7db7e9e44661f4a246aa2ea9c97560ae2ad49fd9ddfe5bedee8f19c2458ed6fb not found: ID does not exist" Mar 11 10:10:50 crc kubenswrapper[4840]: I0311 10:10:50.442811 4840 scope.go:117] "RemoveContainer" containerID="7b5d5d9f361cc3ffd8353393eb720e146984169d7e4e3c73a823951c4b84fd31" Mar 11 10:10:50 crc kubenswrapper[4840]: E0311 10:10:50.443676 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b5d5d9f361cc3ffd8353393eb720e146984169d7e4e3c73a823951c4b84fd31\": container with ID starting with 7b5d5d9f361cc3ffd8353393eb720e146984169d7e4e3c73a823951c4b84fd31 not found: ID does not exist" containerID="7b5d5d9f361cc3ffd8353393eb720e146984169d7e4e3c73a823951c4b84fd31" Mar 11 10:10:50 crc kubenswrapper[4840]: I0311 10:10:50.443729 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b5d5d9f361cc3ffd8353393eb720e146984169d7e4e3c73a823951c4b84fd31"} err="failed to get container status \"7b5d5d9f361cc3ffd8353393eb720e146984169d7e4e3c73a823951c4b84fd31\": rpc error: code = NotFound desc = could not find container \"7b5d5d9f361cc3ffd8353393eb720e146984169d7e4e3c73a823951c4b84fd31\": container with ID 
starting with 7b5d5d9f361cc3ffd8353393eb720e146984169d7e4e3c73a823951c4b84fd31 not found: ID does not exist" Mar 11 10:10:50 crc kubenswrapper[4840]: I0311 10:10:50.443759 4840 scope.go:117] "RemoveContainer" containerID="9244907ce3d8703e1cc3b8f8d4439d01e9cab80c38ebc415908bde88fae371d6" Mar 11 10:10:50 crc kubenswrapper[4840]: E0311 10:10:50.444138 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9244907ce3d8703e1cc3b8f8d4439d01e9cab80c38ebc415908bde88fae371d6\": container with ID starting with 9244907ce3d8703e1cc3b8f8d4439d01e9cab80c38ebc415908bde88fae371d6 not found: ID does not exist" containerID="9244907ce3d8703e1cc3b8f8d4439d01e9cab80c38ebc415908bde88fae371d6" Mar 11 10:10:50 crc kubenswrapper[4840]: I0311 10:10:50.444178 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9244907ce3d8703e1cc3b8f8d4439d01e9cab80c38ebc415908bde88fae371d6"} err="failed to get container status \"9244907ce3d8703e1cc3b8f8d4439d01e9cab80c38ebc415908bde88fae371d6\": rpc error: code = NotFound desc = could not find container \"9244907ce3d8703e1cc3b8f8d4439d01e9cab80c38ebc415908bde88fae371d6\": container with ID starting with 9244907ce3d8703e1cc3b8f8d4439d01e9cab80c38ebc415908bde88fae371d6 not found: ID does not exist" Mar 11 10:10:50 crc kubenswrapper[4840]: I0311 10:10:50.643979 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wg7kx"] Mar 11 10:10:51 crc kubenswrapper[4840]: I0311 10:10:51.374140 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wg7kx" podUID="310928c9-4184-4a96-99e4-87f09d867b85" containerName="registry-server" containerID="cri-o://796f2951d38da348e608cebc4e47645e66e4b5b38389cdea1832499851a0e820" gracePeriod=2 Mar 11 10:10:51 crc kubenswrapper[4840]: I0311 10:10:51.757151 4840 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/community-operators-wg7kx" Mar 11 10:10:51 crc kubenswrapper[4840]: I0311 10:10:51.894884 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/310928c9-4184-4a96-99e4-87f09d867b85-utilities\") pod \"310928c9-4184-4a96-99e4-87f09d867b85\" (UID: \"310928c9-4184-4a96-99e4-87f09d867b85\") " Mar 11 10:10:51 crc kubenswrapper[4840]: I0311 10:10:51.895587 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f57rr\" (UniqueName: \"kubernetes.io/projected/310928c9-4184-4a96-99e4-87f09d867b85-kube-api-access-f57rr\") pod \"310928c9-4184-4a96-99e4-87f09d867b85\" (UID: \"310928c9-4184-4a96-99e4-87f09d867b85\") " Mar 11 10:10:51 crc kubenswrapper[4840]: I0311 10:10:51.895632 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/310928c9-4184-4a96-99e4-87f09d867b85-catalog-content\") pod \"310928c9-4184-4a96-99e4-87f09d867b85\" (UID: \"310928c9-4184-4a96-99e4-87f09d867b85\") " Mar 11 10:10:51 crc kubenswrapper[4840]: I0311 10:10:51.896083 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/310928c9-4184-4a96-99e4-87f09d867b85-utilities" (OuterVolumeSpecName: "utilities") pod "310928c9-4184-4a96-99e4-87f09d867b85" (UID: "310928c9-4184-4a96-99e4-87f09d867b85"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 10:10:51 crc kubenswrapper[4840]: I0311 10:10:51.902190 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/310928c9-4184-4a96-99e4-87f09d867b85-kube-api-access-f57rr" (OuterVolumeSpecName: "kube-api-access-f57rr") pod "310928c9-4184-4a96-99e4-87f09d867b85" (UID: "310928c9-4184-4a96-99e4-87f09d867b85"). InnerVolumeSpecName "kube-api-access-f57rr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:10:51 crc kubenswrapper[4840]: I0311 10:10:51.954120 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/310928c9-4184-4a96-99e4-87f09d867b85-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "310928c9-4184-4a96-99e4-87f09d867b85" (UID: "310928c9-4184-4a96-99e4-87f09d867b85"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 10:10:51 crc kubenswrapper[4840]: I0311 10:10:51.997411 4840 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/310928c9-4184-4a96-99e4-87f09d867b85-utilities\") on node \"crc\" DevicePath \"\"" Mar 11 10:10:51 crc kubenswrapper[4840]: I0311 10:10:51.997448 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f57rr\" (UniqueName: \"kubernetes.io/projected/310928c9-4184-4a96-99e4-87f09d867b85-kube-api-access-f57rr\") on node \"crc\" DevicePath \"\"" Mar 11 10:10:51 crc kubenswrapper[4840]: I0311 10:10:51.997460 4840 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/310928c9-4184-4a96-99e4-87f09d867b85-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 11 10:10:52 crc kubenswrapper[4840]: I0311 10:10:52.070802 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1c37326-3a09-4c06-a110-ec6382c90ebd" path="/var/lib/kubelet/pods/b1c37326-3a09-4c06-a110-ec6382c90ebd/volumes" Mar 11 10:10:52 crc kubenswrapper[4840]: I0311 10:10:52.383240 4840 generic.go:334] "Generic (PLEG): container finished" podID="310928c9-4184-4a96-99e4-87f09d867b85" containerID="796f2951d38da348e608cebc4e47645e66e4b5b38389cdea1832499851a0e820" exitCode=0 Mar 11 10:10:52 crc kubenswrapper[4840]: I0311 10:10:52.383286 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wg7kx" 
event={"ID":"310928c9-4184-4a96-99e4-87f09d867b85","Type":"ContainerDied","Data":"796f2951d38da348e608cebc4e47645e66e4b5b38389cdea1832499851a0e820"} Mar 11 10:10:52 crc kubenswrapper[4840]: I0311 10:10:52.383327 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wg7kx" Mar 11 10:10:52 crc kubenswrapper[4840]: I0311 10:10:52.383346 4840 scope.go:117] "RemoveContainer" containerID="796f2951d38da348e608cebc4e47645e66e4b5b38389cdea1832499851a0e820" Mar 11 10:10:52 crc kubenswrapper[4840]: I0311 10:10:52.383331 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wg7kx" event={"ID":"310928c9-4184-4a96-99e4-87f09d867b85","Type":"ContainerDied","Data":"ad9e7153ae5679979415950e4f7e9d70a1aeb019ba34c7d0b61767da35a30ccd"} Mar 11 10:10:52 crc kubenswrapper[4840]: I0311 10:10:52.402021 4840 scope.go:117] "RemoveContainer" containerID="db5c63a45433cf6a05c571aa1ec5c36ea300bd65b3ef4a7e2636d68152b46b6f" Mar 11 10:10:52 crc kubenswrapper[4840]: I0311 10:10:52.410325 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wg7kx"] Mar 11 10:10:52 crc kubenswrapper[4840]: I0311 10:10:52.415964 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wg7kx"] Mar 11 10:10:52 crc kubenswrapper[4840]: I0311 10:10:52.423879 4840 scope.go:117] "RemoveContainer" containerID="e8d6097e01891ca510d8afa3da734c3271c75f2bbca5c621624e1e439236624a" Mar 11 10:10:52 crc kubenswrapper[4840]: I0311 10:10:52.448150 4840 scope.go:117] "RemoveContainer" containerID="796f2951d38da348e608cebc4e47645e66e4b5b38389cdea1832499851a0e820" Mar 11 10:10:52 crc kubenswrapper[4840]: E0311 10:10:52.448662 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"796f2951d38da348e608cebc4e47645e66e4b5b38389cdea1832499851a0e820\": container 
with ID starting with 796f2951d38da348e608cebc4e47645e66e4b5b38389cdea1832499851a0e820 not found: ID does not exist" containerID="796f2951d38da348e608cebc4e47645e66e4b5b38389cdea1832499851a0e820" Mar 11 10:10:52 crc kubenswrapper[4840]: I0311 10:10:52.448696 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"796f2951d38da348e608cebc4e47645e66e4b5b38389cdea1832499851a0e820"} err="failed to get container status \"796f2951d38da348e608cebc4e47645e66e4b5b38389cdea1832499851a0e820\": rpc error: code = NotFound desc = could not find container \"796f2951d38da348e608cebc4e47645e66e4b5b38389cdea1832499851a0e820\": container with ID starting with 796f2951d38da348e608cebc4e47645e66e4b5b38389cdea1832499851a0e820 not found: ID does not exist" Mar 11 10:10:52 crc kubenswrapper[4840]: I0311 10:10:52.448719 4840 scope.go:117] "RemoveContainer" containerID="db5c63a45433cf6a05c571aa1ec5c36ea300bd65b3ef4a7e2636d68152b46b6f" Mar 11 10:10:52 crc kubenswrapper[4840]: E0311 10:10:52.449015 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db5c63a45433cf6a05c571aa1ec5c36ea300bd65b3ef4a7e2636d68152b46b6f\": container with ID starting with db5c63a45433cf6a05c571aa1ec5c36ea300bd65b3ef4a7e2636d68152b46b6f not found: ID does not exist" containerID="db5c63a45433cf6a05c571aa1ec5c36ea300bd65b3ef4a7e2636d68152b46b6f" Mar 11 10:10:52 crc kubenswrapper[4840]: I0311 10:10:52.449076 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db5c63a45433cf6a05c571aa1ec5c36ea300bd65b3ef4a7e2636d68152b46b6f"} err="failed to get container status \"db5c63a45433cf6a05c571aa1ec5c36ea300bd65b3ef4a7e2636d68152b46b6f\": rpc error: code = NotFound desc = could not find container \"db5c63a45433cf6a05c571aa1ec5c36ea300bd65b3ef4a7e2636d68152b46b6f\": container with ID starting with db5c63a45433cf6a05c571aa1ec5c36ea300bd65b3ef4a7e2636d68152b46b6f not 
found: ID does not exist" Mar 11 10:10:52 crc kubenswrapper[4840]: I0311 10:10:52.449090 4840 scope.go:117] "RemoveContainer" containerID="e8d6097e01891ca510d8afa3da734c3271c75f2bbca5c621624e1e439236624a" Mar 11 10:10:52 crc kubenswrapper[4840]: E0311 10:10:52.449399 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e8d6097e01891ca510d8afa3da734c3271c75f2bbca5c621624e1e439236624a\": container with ID starting with e8d6097e01891ca510d8afa3da734c3271c75f2bbca5c621624e1e439236624a not found: ID does not exist" containerID="e8d6097e01891ca510d8afa3da734c3271c75f2bbca5c621624e1e439236624a" Mar 11 10:10:52 crc kubenswrapper[4840]: I0311 10:10:52.449419 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8d6097e01891ca510d8afa3da734c3271c75f2bbca5c621624e1e439236624a"} err="failed to get container status \"e8d6097e01891ca510d8afa3da734c3271c75f2bbca5c621624e1e439236624a\": rpc error: code = NotFound desc = could not find container \"e8d6097e01891ca510d8afa3da734c3271c75f2bbca5c621624e1e439236624a\": container with ID starting with e8d6097e01891ca510d8afa3da734c3271c75f2bbca5c621624e1e439236624a not found: ID does not exist" Mar 11 10:10:54 crc kubenswrapper[4840]: I0311 10:10:54.069063 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="310928c9-4184-4a96-99e4-87f09d867b85" path="/var/lib/kubelet/pods/310928c9-4184-4a96-99e4-87f09d867b85/volumes" Mar 11 10:11:05 crc kubenswrapper[4840]: I0311 10:11:05.060176 4840 scope.go:117] "RemoveContainer" containerID="e5dd56f8fedc37dde5b61acd8e1e840f8e8968f30b4b179b8fa6c41015c74607" Mar 11 10:11:05 crc kubenswrapper[4840]: E0311 10:11:05.060971 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 10:11:20 crc kubenswrapper[4840]: I0311 10:11:20.060229 4840 scope.go:117] "RemoveContainer" containerID="e5dd56f8fedc37dde5b61acd8e1e840f8e8968f30b4b179b8fa6c41015c74607" Mar 11 10:11:20 crc kubenswrapper[4840]: E0311 10:11:20.061247 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 10:11:31 crc kubenswrapper[4840]: I0311 10:11:31.059970 4840 scope.go:117] "RemoveContainer" containerID="e5dd56f8fedc37dde5b61acd8e1e840f8e8968f30b4b179b8fa6c41015c74607" Mar 11 10:11:31 crc kubenswrapper[4840]: E0311 10:11:31.062585 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 10:11:46 crc kubenswrapper[4840]: I0311 10:11:46.060829 4840 scope.go:117] "RemoveContainer" containerID="e5dd56f8fedc37dde5b61acd8e1e840f8e8968f30b4b179b8fa6c41015c74607" Mar 11 10:11:46 crc kubenswrapper[4840]: E0311 10:11:46.061668 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 10:11:58 crc kubenswrapper[4840]: I0311 10:11:58.060668 4840 scope.go:117] "RemoveContainer" containerID="e5dd56f8fedc37dde5b61acd8e1e840f8e8968f30b4b179b8fa6c41015c74607" Mar 11 10:11:58 crc kubenswrapper[4840]: E0311 10:11:58.061561 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 10:12:00 crc kubenswrapper[4840]: I0311 10:12:00.147652 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29553732-z9df8"] Mar 11 10:12:00 crc kubenswrapper[4840]: E0311 10:12:00.148374 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1c37326-3a09-4c06-a110-ec6382c90ebd" containerName="registry-server" Mar 11 10:12:00 crc kubenswrapper[4840]: I0311 10:12:00.148395 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1c37326-3a09-4c06-a110-ec6382c90ebd" containerName="registry-server" Mar 11 10:12:00 crc kubenswrapper[4840]: E0311 10:12:00.148412 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1c37326-3a09-4c06-a110-ec6382c90ebd" containerName="extract-content" Mar 11 10:12:00 crc kubenswrapper[4840]: I0311 10:12:00.148421 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1c37326-3a09-4c06-a110-ec6382c90ebd" containerName="extract-content" Mar 11 10:12:00 crc kubenswrapper[4840]: E0311 10:12:00.148444 4840 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="310928c9-4184-4a96-99e4-87f09d867b85" containerName="extract-utilities" Mar 11 10:12:00 crc kubenswrapper[4840]: I0311 10:12:00.148452 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="310928c9-4184-4a96-99e4-87f09d867b85" containerName="extract-utilities" Mar 11 10:12:00 crc kubenswrapper[4840]: E0311 10:12:00.148530 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1c37326-3a09-4c06-a110-ec6382c90ebd" containerName="extract-utilities" Mar 11 10:12:00 crc kubenswrapper[4840]: I0311 10:12:00.148539 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1c37326-3a09-4c06-a110-ec6382c90ebd" containerName="extract-utilities" Mar 11 10:12:00 crc kubenswrapper[4840]: E0311 10:12:00.148554 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="310928c9-4184-4a96-99e4-87f09d867b85" containerName="registry-server" Mar 11 10:12:00 crc kubenswrapper[4840]: I0311 10:12:00.148562 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="310928c9-4184-4a96-99e4-87f09d867b85" containerName="registry-server" Mar 11 10:12:00 crc kubenswrapper[4840]: E0311 10:12:00.148574 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="310928c9-4184-4a96-99e4-87f09d867b85" containerName="extract-content" Mar 11 10:12:00 crc kubenswrapper[4840]: I0311 10:12:00.148582 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="310928c9-4184-4a96-99e4-87f09d867b85" containerName="extract-content" Mar 11 10:12:00 crc kubenswrapper[4840]: I0311 10:12:00.148800 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="310928c9-4184-4a96-99e4-87f09d867b85" containerName="registry-server" Mar 11 10:12:00 crc kubenswrapper[4840]: I0311 10:12:00.148823 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1c37326-3a09-4c06-a110-ec6382c90ebd" containerName="registry-server" Mar 11 10:12:00 crc kubenswrapper[4840]: I0311 10:12:00.149658 4840 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553732-z9df8" Mar 11 10:12:00 crc kubenswrapper[4840]: I0311 10:12:00.156632 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553732-z9df8"] Mar 11 10:12:00 crc kubenswrapper[4840]: I0311 10:12:00.161816 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 11 10:12:00 crc kubenswrapper[4840]: I0311 10:12:00.162071 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-q6lwc" Mar 11 10:12:00 crc kubenswrapper[4840]: I0311 10:12:00.162095 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 11 10:12:00 crc kubenswrapper[4840]: I0311 10:12:00.322153 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsdr7\" (UniqueName: \"kubernetes.io/projected/963a8f23-824f-4e68-ab6e-f2443920087d-kube-api-access-bsdr7\") pod \"auto-csr-approver-29553732-z9df8\" (UID: \"963a8f23-824f-4e68-ab6e-f2443920087d\") " pod="openshift-infra/auto-csr-approver-29553732-z9df8" Mar 11 10:12:00 crc kubenswrapper[4840]: I0311 10:12:00.423507 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bsdr7\" (UniqueName: \"kubernetes.io/projected/963a8f23-824f-4e68-ab6e-f2443920087d-kube-api-access-bsdr7\") pod \"auto-csr-approver-29553732-z9df8\" (UID: \"963a8f23-824f-4e68-ab6e-f2443920087d\") " pod="openshift-infra/auto-csr-approver-29553732-z9df8" Mar 11 10:12:00 crc kubenswrapper[4840]: I0311 10:12:00.447893 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bsdr7\" (UniqueName: \"kubernetes.io/projected/963a8f23-824f-4e68-ab6e-f2443920087d-kube-api-access-bsdr7\") pod \"auto-csr-approver-29553732-z9df8\" (UID: \"963a8f23-824f-4e68-ab6e-f2443920087d\") 
" pod="openshift-infra/auto-csr-approver-29553732-z9df8" Mar 11 10:12:00 crc kubenswrapper[4840]: I0311 10:12:00.472363 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553732-z9df8" Mar 11 10:12:01 crc kubenswrapper[4840]: I0311 10:12:01.602634 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553732-z9df8"] Mar 11 10:12:01 crc kubenswrapper[4840]: I0311 10:12:01.893865 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553732-z9df8" event={"ID":"963a8f23-824f-4e68-ab6e-f2443920087d","Type":"ContainerStarted","Data":"dcd75e1f9eee8ff5e7edaf73085f6d1414be86e860e0b2aadf218000cc287358"} Mar 11 10:12:03 crc kubenswrapper[4840]: I0311 10:12:03.908400 4840 generic.go:334] "Generic (PLEG): container finished" podID="963a8f23-824f-4e68-ab6e-f2443920087d" containerID="8f8d95f3a3654ccfea14a0647a35721582c995687605c5b40df2315641d0fe12" exitCode=0 Mar 11 10:12:03 crc kubenswrapper[4840]: I0311 10:12:03.908554 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553732-z9df8" event={"ID":"963a8f23-824f-4e68-ab6e-f2443920087d","Type":"ContainerDied","Data":"8f8d95f3a3654ccfea14a0647a35721582c995687605c5b40df2315641d0fe12"} Mar 11 10:12:05 crc kubenswrapper[4840]: I0311 10:12:05.164646 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553732-z9df8" Mar 11 10:12:05 crc kubenswrapper[4840]: I0311 10:12:05.296976 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bsdr7\" (UniqueName: \"kubernetes.io/projected/963a8f23-824f-4e68-ab6e-f2443920087d-kube-api-access-bsdr7\") pod \"963a8f23-824f-4e68-ab6e-f2443920087d\" (UID: \"963a8f23-824f-4e68-ab6e-f2443920087d\") " Mar 11 10:12:05 crc kubenswrapper[4840]: I0311 10:12:05.304545 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/963a8f23-824f-4e68-ab6e-f2443920087d-kube-api-access-bsdr7" (OuterVolumeSpecName: "kube-api-access-bsdr7") pod "963a8f23-824f-4e68-ab6e-f2443920087d" (UID: "963a8f23-824f-4e68-ab6e-f2443920087d"). InnerVolumeSpecName "kube-api-access-bsdr7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:12:05 crc kubenswrapper[4840]: I0311 10:12:05.399218 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bsdr7\" (UniqueName: \"kubernetes.io/projected/963a8f23-824f-4e68-ab6e-f2443920087d-kube-api-access-bsdr7\") on node \"crc\" DevicePath \"\"" Mar 11 10:12:05 crc kubenswrapper[4840]: I0311 10:12:05.925674 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553732-z9df8" event={"ID":"963a8f23-824f-4e68-ab6e-f2443920087d","Type":"ContainerDied","Data":"dcd75e1f9eee8ff5e7edaf73085f6d1414be86e860e0b2aadf218000cc287358"} Mar 11 10:12:05 crc kubenswrapper[4840]: I0311 10:12:05.925984 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dcd75e1f9eee8ff5e7edaf73085f6d1414be86e860e0b2aadf218000cc287358" Mar 11 10:12:05 crc kubenswrapper[4840]: I0311 10:12:05.925735 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553732-z9df8" Mar 11 10:12:06 crc kubenswrapper[4840]: I0311 10:12:06.233317 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29553726-z5vml"] Mar 11 10:12:06 crc kubenswrapper[4840]: I0311 10:12:06.241998 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29553726-z5vml"] Mar 11 10:12:08 crc kubenswrapper[4840]: I0311 10:12:08.069231 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa936ca8-e59f-46ed-be26-b6924bfe65f7" path="/var/lib/kubelet/pods/fa936ca8-e59f-46ed-be26-b6924bfe65f7/volumes" Mar 11 10:12:10 crc kubenswrapper[4840]: I0311 10:12:10.060931 4840 scope.go:117] "RemoveContainer" containerID="e5dd56f8fedc37dde5b61acd8e1e840f8e8968f30b4b179b8fa6c41015c74607" Mar 11 10:12:10 crc kubenswrapper[4840]: E0311 10:12:10.061672 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 10:12:11 crc kubenswrapper[4840]: I0311 10:12:11.530503 4840 scope.go:117] "RemoveContainer" containerID="126a24073c250442a4f9fef6bd693ca371696efcc7b4a6e6bf75367a0244121e" Mar 11 10:12:25 crc kubenswrapper[4840]: I0311 10:12:25.060896 4840 scope.go:117] "RemoveContainer" containerID="e5dd56f8fedc37dde5b61acd8e1e840f8e8968f30b4b179b8fa6c41015c74607" Mar 11 10:12:25 crc kubenswrapper[4840]: E0311 10:12:25.061848 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 10:12:37 crc kubenswrapper[4840]: I0311 10:12:37.060891 4840 scope.go:117] "RemoveContainer" containerID="e5dd56f8fedc37dde5b61acd8e1e840f8e8968f30b4b179b8fa6c41015c74607" Mar 11 10:12:37 crc kubenswrapper[4840]: E0311 10:12:37.061810 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 10:12:51 crc kubenswrapper[4840]: I0311 10:12:51.061205 4840 scope.go:117] "RemoveContainer" containerID="e5dd56f8fedc37dde5b61acd8e1e840f8e8968f30b4b179b8fa6c41015c74607" Mar 11 10:12:51 crc kubenswrapper[4840]: E0311 10:12:51.063054 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 10:13:03 crc kubenswrapper[4840]: I0311 10:13:03.061080 4840 scope.go:117] "RemoveContainer" containerID="e5dd56f8fedc37dde5b61acd8e1e840f8e8968f30b4b179b8fa6c41015c74607" Mar 11 10:13:03 crc kubenswrapper[4840]: E0311 10:13:03.062222 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 10:13:17 crc kubenswrapper[4840]: I0311 10:13:17.061164 4840 scope.go:117] "RemoveContainer" containerID="e5dd56f8fedc37dde5b61acd8e1e840f8e8968f30b4b179b8fa6c41015c74607" Mar 11 10:13:17 crc kubenswrapper[4840]: E0311 10:13:17.061998 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 10:13:28 crc kubenswrapper[4840]: I0311 10:13:28.060297 4840 scope.go:117] "RemoveContainer" containerID="e5dd56f8fedc37dde5b61acd8e1e840f8e8968f30b4b179b8fa6c41015c74607" Mar 11 10:13:28 crc kubenswrapper[4840]: E0311 10:13:28.061125 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 10:13:41 crc kubenswrapper[4840]: I0311 10:13:41.060631 4840 scope.go:117] "RemoveContainer" containerID="e5dd56f8fedc37dde5b61acd8e1e840f8e8968f30b4b179b8fa6c41015c74607" Mar 11 10:13:41 crc kubenswrapper[4840]: E0311 10:13:41.061522 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 10:13:55 crc kubenswrapper[4840]: I0311 10:13:55.061157 4840 scope.go:117] "RemoveContainer" containerID="e5dd56f8fedc37dde5b61acd8e1e840f8e8968f30b4b179b8fa6c41015c74607" Mar 11 10:13:55 crc kubenswrapper[4840]: E0311 10:13:55.062285 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 10:14:00 crc kubenswrapper[4840]: I0311 10:14:00.155163 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29553734-94kp7"] Mar 11 10:14:00 crc kubenswrapper[4840]: E0311 10:14:00.156348 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="963a8f23-824f-4e68-ab6e-f2443920087d" containerName="oc" Mar 11 10:14:00 crc kubenswrapper[4840]: I0311 10:14:00.156372 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="963a8f23-824f-4e68-ab6e-f2443920087d" containerName="oc" Mar 11 10:14:00 crc kubenswrapper[4840]: I0311 10:14:00.156635 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="963a8f23-824f-4e68-ab6e-f2443920087d" containerName="oc" Mar 11 10:14:00 crc kubenswrapper[4840]: I0311 10:14:00.157393 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553734-94kp7" Mar 11 10:14:00 crc kubenswrapper[4840]: I0311 10:14:00.162567 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-q6lwc" Mar 11 10:14:00 crc kubenswrapper[4840]: I0311 10:14:00.162851 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 11 10:14:00 crc kubenswrapper[4840]: I0311 10:14:00.162898 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 11 10:14:00 crc kubenswrapper[4840]: I0311 10:14:00.166378 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553734-94kp7"] Mar 11 10:14:00 crc kubenswrapper[4840]: I0311 10:14:00.295406 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wq45l\" (UniqueName: \"kubernetes.io/projected/50348c92-87ca-4e4d-b358-8815efeb49b6-kube-api-access-wq45l\") pod \"auto-csr-approver-29553734-94kp7\" (UID: \"50348c92-87ca-4e4d-b358-8815efeb49b6\") " pod="openshift-infra/auto-csr-approver-29553734-94kp7" Mar 11 10:14:00 crc kubenswrapper[4840]: I0311 10:14:00.396732 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wq45l\" (UniqueName: \"kubernetes.io/projected/50348c92-87ca-4e4d-b358-8815efeb49b6-kube-api-access-wq45l\") pod \"auto-csr-approver-29553734-94kp7\" (UID: \"50348c92-87ca-4e4d-b358-8815efeb49b6\") " pod="openshift-infra/auto-csr-approver-29553734-94kp7" Mar 11 10:14:00 crc kubenswrapper[4840]: I0311 10:14:00.426249 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wq45l\" (UniqueName: \"kubernetes.io/projected/50348c92-87ca-4e4d-b358-8815efeb49b6-kube-api-access-wq45l\") pod \"auto-csr-approver-29553734-94kp7\" (UID: \"50348c92-87ca-4e4d-b358-8815efeb49b6\") " 
pod="openshift-infra/auto-csr-approver-29553734-94kp7" Mar 11 10:14:00 crc kubenswrapper[4840]: I0311 10:14:00.480422 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553734-94kp7" Mar 11 10:14:00 crc kubenswrapper[4840]: I0311 10:14:00.891570 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553734-94kp7"] Mar 11 10:14:01 crc kubenswrapper[4840]: I0311 10:14:01.734354 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553734-94kp7" event={"ID":"50348c92-87ca-4e4d-b358-8815efeb49b6","Type":"ContainerStarted","Data":"333f1c4ec5d85a4494c9d573cd8e9d390ba409a9016bdd157c8038362b1e1999"} Mar 11 10:14:02 crc kubenswrapper[4840]: I0311 10:14:02.743959 4840 generic.go:334] "Generic (PLEG): container finished" podID="50348c92-87ca-4e4d-b358-8815efeb49b6" containerID="22c7fb8009726a3b1329676f436a0b905fb219715c632fb98f2ad1a9f9e4ee6f" exitCode=0 Mar 11 10:14:02 crc kubenswrapper[4840]: I0311 10:14:02.744012 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553734-94kp7" event={"ID":"50348c92-87ca-4e4d-b358-8815efeb49b6","Type":"ContainerDied","Data":"22c7fb8009726a3b1329676f436a0b905fb219715c632fb98f2ad1a9f9e4ee6f"} Mar 11 10:14:04 crc kubenswrapper[4840]: I0311 10:14:04.022858 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553734-94kp7" Mar 11 10:14:04 crc kubenswrapper[4840]: I0311 10:14:04.155337 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wq45l\" (UniqueName: \"kubernetes.io/projected/50348c92-87ca-4e4d-b358-8815efeb49b6-kube-api-access-wq45l\") pod \"50348c92-87ca-4e4d-b358-8815efeb49b6\" (UID: \"50348c92-87ca-4e4d-b358-8815efeb49b6\") " Mar 11 10:14:04 crc kubenswrapper[4840]: I0311 10:14:04.161048 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50348c92-87ca-4e4d-b358-8815efeb49b6-kube-api-access-wq45l" (OuterVolumeSpecName: "kube-api-access-wq45l") pod "50348c92-87ca-4e4d-b358-8815efeb49b6" (UID: "50348c92-87ca-4e4d-b358-8815efeb49b6"). InnerVolumeSpecName "kube-api-access-wq45l". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:14:04 crc kubenswrapper[4840]: I0311 10:14:04.258841 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wq45l\" (UniqueName: \"kubernetes.io/projected/50348c92-87ca-4e4d-b358-8815efeb49b6-kube-api-access-wq45l\") on node \"crc\" DevicePath \"\"" Mar 11 10:14:04 crc kubenswrapper[4840]: I0311 10:14:04.761440 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553734-94kp7" event={"ID":"50348c92-87ca-4e4d-b358-8815efeb49b6","Type":"ContainerDied","Data":"333f1c4ec5d85a4494c9d573cd8e9d390ba409a9016bdd157c8038362b1e1999"} Mar 11 10:14:04 crc kubenswrapper[4840]: I0311 10:14:04.761892 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="333f1c4ec5d85a4494c9d573cd8e9d390ba409a9016bdd157c8038362b1e1999" Mar 11 10:14:04 crc kubenswrapper[4840]: I0311 10:14:04.761538 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553734-94kp7" Mar 11 10:14:05 crc kubenswrapper[4840]: I0311 10:14:05.105527 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29553728-bvbsk"] Mar 11 10:14:05 crc kubenswrapper[4840]: I0311 10:14:05.110813 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29553728-bvbsk"] Mar 11 10:14:06 crc kubenswrapper[4840]: I0311 10:14:06.069741 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02e3c919-5404-4cb9-b9ed-abc028140136" path="/var/lib/kubelet/pods/02e3c919-5404-4cb9-b9ed-abc028140136/volumes" Mar 11 10:14:10 crc kubenswrapper[4840]: I0311 10:14:10.061196 4840 scope.go:117] "RemoveContainer" containerID="e5dd56f8fedc37dde5b61acd8e1e840f8e8968f30b4b179b8fa6c41015c74607" Mar 11 10:14:10 crc kubenswrapper[4840]: I0311 10:14:10.821693 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-brtht" event={"ID":"8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d","Type":"ContainerStarted","Data":"32974af3682f2327823dccca8b0498904823622932f646c199fcd97fe88fc627"} Mar 11 10:14:11 crc kubenswrapper[4840]: I0311 10:14:11.615077 4840 scope.go:117] "RemoveContainer" containerID="ea3a9ed30c9a1e109fd9faed77b886fa13419888dedd8925db722967556a3231" Mar 11 10:15:00 crc kubenswrapper[4840]: I0311 10:15:00.148891 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29553735-vphjn"] Mar 11 10:15:00 crc kubenswrapper[4840]: E0311 10:15:00.149937 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50348c92-87ca-4e4d-b358-8815efeb49b6" containerName="oc" Mar 11 10:15:00 crc kubenswrapper[4840]: I0311 10:15:00.149956 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="50348c92-87ca-4e4d-b358-8815efeb49b6" containerName="oc" Mar 11 10:15:00 crc kubenswrapper[4840]: I0311 10:15:00.150136 
4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="50348c92-87ca-4e4d-b358-8815efeb49b6" containerName="oc" Mar 11 10:15:00 crc kubenswrapper[4840]: I0311 10:15:00.150805 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29553735-vphjn" Mar 11 10:15:00 crc kubenswrapper[4840]: I0311 10:15:00.156005 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29553735-vphjn"] Mar 11 10:15:00 crc kubenswrapper[4840]: I0311 10:15:00.166321 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 11 10:15:00 crc kubenswrapper[4840]: I0311 10:15:00.167347 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 11 10:15:00 crc kubenswrapper[4840]: I0311 10:15:00.172727 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e8e45e1c-ddf2-47db-93e4-15e76ac05aad-secret-volume\") pod \"collect-profiles-29553735-vphjn\" (UID: \"e8e45e1c-ddf2-47db-93e4-15e76ac05aad\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553735-vphjn" Mar 11 10:15:00 crc kubenswrapper[4840]: I0311 10:15:00.172834 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5dg7\" (UniqueName: \"kubernetes.io/projected/e8e45e1c-ddf2-47db-93e4-15e76ac05aad-kube-api-access-j5dg7\") pod \"collect-profiles-29553735-vphjn\" (UID: \"e8e45e1c-ddf2-47db-93e4-15e76ac05aad\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553735-vphjn" Mar 11 10:15:00 crc kubenswrapper[4840]: I0311 10:15:00.172869 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-volume\" (UniqueName: \"kubernetes.io/configmap/e8e45e1c-ddf2-47db-93e4-15e76ac05aad-config-volume\") pod \"collect-profiles-29553735-vphjn\" (UID: \"e8e45e1c-ddf2-47db-93e4-15e76ac05aad\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553735-vphjn" Mar 11 10:15:00 crc kubenswrapper[4840]: I0311 10:15:00.274416 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e8e45e1c-ddf2-47db-93e4-15e76ac05aad-secret-volume\") pod \"collect-profiles-29553735-vphjn\" (UID: \"e8e45e1c-ddf2-47db-93e4-15e76ac05aad\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553735-vphjn" Mar 11 10:15:00 crc kubenswrapper[4840]: I0311 10:15:00.274932 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5dg7\" (UniqueName: \"kubernetes.io/projected/e8e45e1c-ddf2-47db-93e4-15e76ac05aad-kube-api-access-j5dg7\") pod \"collect-profiles-29553735-vphjn\" (UID: \"e8e45e1c-ddf2-47db-93e4-15e76ac05aad\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553735-vphjn" Mar 11 10:15:00 crc kubenswrapper[4840]: I0311 10:15:00.275065 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e8e45e1c-ddf2-47db-93e4-15e76ac05aad-config-volume\") pod \"collect-profiles-29553735-vphjn\" (UID: \"e8e45e1c-ddf2-47db-93e4-15e76ac05aad\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553735-vphjn" Mar 11 10:15:00 crc kubenswrapper[4840]: I0311 10:15:00.275988 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e8e45e1c-ddf2-47db-93e4-15e76ac05aad-config-volume\") pod \"collect-profiles-29553735-vphjn\" (UID: \"e8e45e1c-ddf2-47db-93e4-15e76ac05aad\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553735-vphjn" Mar 11 10:15:00 crc 
kubenswrapper[4840]: I0311 10:15:00.281242 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e8e45e1c-ddf2-47db-93e4-15e76ac05aad-secret-volume\") pod \"collect-profiles-29553735-vphjn\" (UID: \"e8e45e1c-ddf2-47db-93e4-15e76ac05aad\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553735-vphjn" Mar 11 10:15:00 crc kubenswrapper[4840]: I0311 10:15:00.295510 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5dg7\" (UniqueName: \"kubernetes.io/projected/e8e45e1c-ddf2-47db-93e4-15e76ac05aad-kube-api-access-j5dg7\") pod \"collect-profiles-29553735-vphjn\" (UID: \"e8e45e1c-ddf2-47db-93e4-15e76ac05aad\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553735-vphjn" Mar 11 10:15:00 crc kubenswrapper[4840]: I0311 10:15:00.488095 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29553735-vphjn" Mar 11 10:15:00 crc kubenswrapper[4840]: I0311 10:15:00.922642 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29553735-vphjn"] Mar 11 10:15:01 crc kubenswrapper[4840]: I0311 10:15:01.195941 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29553735-vphjn" event={"ID":"e8e45e1c-ddf2-47db-93e4-15e76ac05aad","Type":"ContainerStarted","Data":"1ad51f591b84fd32bf0a60fe922ffa45cda868e1b2f43981902434ae159c3801"} Mar 11 10:15:01 crc kubenswrapper[4840]: I0311 10:15:01.195991 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29553735-vphjn" event={"ID":"e8e45e1c-ddf2-47db-93e4-15e76ac05aad","Type":"ContainerStarted","Data":"aa110d11db02582f082814ae00112837fe61a994a0d8cf85de360b7e07004ec6"} Mar 11 10:15:01 crc kubenswrapper[4840]: I0311 10:15:01.221359 4840 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29553735-vphjn" podStartSLOduration=1.221332152 podStartE2EDuration="1.221332152s" podCreationTimestamp="2026-03-11 10:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 10:15:01.220519221 +0000 UTC m=+4699.886189046" watchObservedRunningTime="2026-03-11 10:15:01.221332152 +0000 UTC m=+4699.887001967" Mar 11 10:15:02 crc kubenswrapper[4840]: I0311 10:15:02.204437 4840 generic.go:334] "Generic (PLEG): container finished" podID="e8e45e1c-ddf2-47db-93e4-15e76ac05aad" containerID="1ad51f591b84fd32bf0a60fe922ffa45cda868e1b2f43981902434ae159c3801" exitCode=0 Mar 11 10:15:02 crc kubenswrapper[4840]: I0311 10:15:02.204673 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29553735-vphjn" event={"ID":"e8e45e1c-ddf2-47db-93e4-15e76ac05aad","Type":"ContainerDied","Data":"1ad51f591b84fd32bf0a60fe922ffa45cda868e1b2f43981902434ae159c3801"} Mar 11 10:15:03 crc kubenswrapper[4840]: I0311 10:15:03.479665 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29553735-vphjn" Mar 11 10:15:03 crc kubenswrapper[4840]: I0311 10:15:03.625606 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j5dg7\" (UniqueName: \"kubernetes.io/projected/e8e45e1c-ddf2-47db-93e4-15e76ac05aad-kube-api-access-j5dg7\") pod \"e8e45e1c-ddf2-47db-93e4-15e76ac05aad\" (UID: \"e8e45e1c-ddf2-47db-93e4-15e76ac05aad\") " Mar 11 10:15:03 crc kubenswrapper[4840]: I0311 10:15:03.625665 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e8e45e1c-ddf2-47db-93e4-15e76ac05aad-secret-volume\") pod \"e8e45e1c-ddf2-47db-93e4-15e76ac05aad\" (UID: \"e8e45e1c-ddf2-47db-93e4-15e76ac05aad\") " Mar 11 10:15:03 crc kubenswrapper[4840]: I0311 10:15:03.625696 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e8e45e1c-ddf2-47db-93e4-15e76ac05aad-config-volume\") pod \"e8e45e1c-ddf2-47db-93e4-15e76ac05aad\" (UID: \"e8e45e1c-ddf2-47db-93e4-15e76ac05aad\") " Mar 11 10:15:03 crc kubenswrapper[4840]: I0311 10:15:03.626640 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8e45e1c-ddf2-47db-93e4-15e76ac05aad-config-volume" (OuterVolumeSpecName: "config-volume") pod "e8e45e1c-ddf2-47db-93e4-15e76ac05aad" (UID: "e8e45e1c-ddf2-47db-93e4-15e76ac05aad"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 10:15:03 crc kubenswrapper[4840]: I0311 10:15:03.635667 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8e45e1c-ddf2-47db-93e4-15e76ac05aad-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e8e45e1c-ddf2-47db-93e4-15e76ac05aad" (UID: "e8e45e1c-ddf2-47db-93e4-15e76ac05aad"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 10:15:03 crc kubenswrapper[4840]: I0311 10:15:03.635718 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8e45e1c-ddf2-47db-93e4-15e76ac05aad-kube-api-access-j5dg7" (OuterVolumeSpecName: "kube-api-access-j5dg7") pod "e8e45e1c-ddf2-47db-93e4-15e76ac05aad" (UID: "e8e45e1c-ddf2-47db-93e4-15e76ac05aad"). InnerVolumeSpecName "kube-api-access-j5dg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:15:03 crc kubenswrapper[4840]: I0311 10:15:03.727300 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j5dg7\" (UniqueName: \"kubernetes.io/projected/e8e45e1c-ddf2-47db-93e4-15e76ac05aad-kube-api-access-j5dg7\") on node \"crc\" DevicePath \"\"" Mar 11 10:15:03 crc kubenswrapper[4840]: I0311 10:15:03.727345 4840 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e8e45e1c-ddf2-47db-93e4-15e76ac05aad-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 11 10:15:03 crc kubenswrapper[4840]: I0311 10:15:03.727360 4840 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e8e45e1c-ddf2-47db-93e4-15e76ac05aad-config-volume\") on node \"crc\" DevicePath \"\"" Mar 11 10:15:04 crc kubenswrapper[4840]: I0311 10:15:04.219149 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29553735-vphjn" event={"ID":"e8e45e1c-ddf2-47db-93e4-15e76ac05aad","Type":"ContainerDied","Data":"aa110d11db02582f082814ae00112837fe61a994a0d8cf85de360b7e07004ec6"} Mar 11 10:15:04 crc kubenswrapper[4840]: I0311 10:15:04.219192 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aa110d11db02582f082814ae00112837fe61a994a0d8cf85de360b7e07004ec6" Mar 11 10:15:04 crc kubenswrapper[4840]: I0311 10:15:04.219224 4840 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29553735-vphjn" Mar 11 10:15:04 crc kubenswrapper[4840]: I0311 10:15:04.307675 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29553690-xj2xk"] Mar 11 10:15:04 crc kubenswrapper[4840]: I0311 10:15:04.313619 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29553690-xj2xk"] Mar 11 10:15:06 crc kubenswrapper[4840]: I0311 10:15:06.069747 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c55cdb33-3e8f-4e53-94e4-e6355c63f05f" path="/var/lib/kubelet/pods/c55cdb33-3e8f-4e53-94e4-e6355c63f05f/volumes" Mar 11 10:15:11 crc kubenswrapper[4840]: I0311 10:15:11.705878 4840 scope.go:117] "RemoveContainer" containerID="1d4a46590805a5d818600a0dc78f39d1f2dbb075c4638ec1fdc3f4967fb8f548" Mar 11 10:16:00 crc kubenswrapper[4840]: I0311 10:16:00.145794 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29553736-lwkg5"] Mar 11 10:16:00 crc kubenswrapper[4840]: E0311 10:16:00.146839 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8e45e1c-ddf2-47db-93e4-15e76ac05aad" containerName="collect-profiles" Mar 11 10:16:00 crc kubenswrapper[4840]: I0311 10:16:00.146859 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8e45e1c-ddf2-47db-93e4-15e76ac05aad" containerName="collect-profiles" Mar 11 10:16:00 crc kubenswrapper[4840]: I0311 10:16:00.147052 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8e45e1c-ddf2-47db-93e4-15e76ac05aad" containerName="collect-profiles" Mar 11 10:16:00 crc kubenswrapper[4840]: I0311 10:16:00.147677 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553736-lwkg5" Mar 11 10:16:00 crc kubenswrapper[4840]: I0311 10:16:00.150774 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 11 10:16:00 crc kubenswrapper[4840]: I0311 10:16:00.151355 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 11 10:16:00 crc kubenswrapper[4840]: I0311 10:16:00.151524 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-q6lwc" Mar 11 10:16:00 crc kubenswrapper[4840]: I0311 10:16:00.155914 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553736-lwkg5"] Mar 11 10:16:00 crc kubenswrapper[4840]: I0311 10:16:00.252656 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xvqf\" (UniqueName: \"kubernetes.io/projected/6590ca4e-9b8d-474d-8aef-d6b3a5bf54b7-kube-api-access-9xvqf\") pod \"auto-csr-approver-29553736-lwkg5\" (UID: \"6590ca4e-9b8d-474d-8aef-d6b3a5bf54b7\") " pod="openshift-infra/auto-csr-approver-29553736-lwkg5" Mar 11 10:16:00 crc kubenswrapper[4840]: I0311 10:16:00.353668 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xvqf\" (UniqueName: \"kubernetes.io/projected/6590ca4e-9b8d-474d-8aef-d6b3a5bf54b7-kube-api-access-9xvqf\") pod \"auto-csr-approver-29553736-lwkg5\" (UID: \"6590ca4e-9b8d-474d-8aef-d6b3a5bf54b7\") " pod="openshift-infra/auto-csr-approver-29553736-lwkg5" Mar 11 10:16:00 crc kubenswrapper[4840]: I0311 10:16:00.682099 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xvqf\" (UniqueName: \"kubernetes.io/projected/6590ca4e-9b8d-474d-8aef-d6b3a5bf54b7-kube-api-access-9xvqf\") pod \"auto-csr-approver-29553736-lwkg5\" (UID: \"6590ca4e-9b8d-474d-8aef-d6b3a5bf54b7\") " 
pod="openshift-infra/auto-csr-approver-29553736-lwkg5" Mar 11 10:16:00 crc kubenswrapper[4840]: I0311 10:16:00.867145 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553736-lwkg5" Mar 11 10:16:01 crc kubenswrapper[4840]: I0311 10:16:01.364137 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553736-lwkg5"] Mar 11 10:16:01 crc kubenswrapper[4840]: W0311 10:16:01.376813 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6590ca4e_9b8d_474d_8aef_d6b3a5bf54b7.slice/crio-ada159f5dc9d9be5afe28ff02d7be5c6ddf7be45f715dbbb702d567ef2c7b3fc WatchSource:0}: Error finding container ada159f5dc9d9be5afe28ff02d7be5c6ddf7be45f715dbbb702d567ef2c7b3fc: Status 404 returned error can't find the container with id ada159f5dc9d9be5afe28ff02d7be5c6ddf7be45f715dbbb702d567ef2c7b3fc Mar 11 10:16:01 crc kubenswrapper[4840]: I0311 10:16:01.379847 4840 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 11 10:16:01 crc kubenswrapper[4840]: I0311 10:16:01.683487 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553736-lwkg5" event={"ID":"6590ca4e-9b8d-474d-8aef-d6b3a5bf54b7","Type":"ContainerStarted","Data":"ada159f5dc9d9be5afe28ff02d7be5c6ddf7be45f715dbbb702d567ef2c7b3fc"} Mar 11 10:16:02 crc kubenswrapper[4840]: I0311 10:16:02.692567 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553736-lwkg5" event={"ID":"6590ca4e-9b8d-474d-8aef-d6b3a5bf54b7","Type":"ContainerStarted","Data":"d642ec9544691ec500cebd88a599552d6407c816c56c9252a4ff6edcbe768fe4"} Mar 11 10:16:02 crc kubenswrapper[4840]: I0311 10:16:02.723322 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29553736-lwkg5" podStartSLOduration=1.87596998 
podStartE2EDuration="2.723296325s" podCreationTimestamp="2026-03-11 10:16:00 +0000 UTC" firstStartedPulling="2026-03-11 10:16:01.379374878 +0000 UTC m=+4760.045044703" lastFinishedPulling="2026-03-11 10:16:02.226701233 +0000 UTC m=+4760.892371048" observedRunningTime="2026-03-11 10:16:02.716983526 +0000 UTC m=+4761.382653361" watchObservedRunningTime="2026-03-11 10:16:02.723296325 +0000 UTC m=+4761.388966150" Mar 11 10:16:03 crc kubenswrapper[4840]: I0311 10:16:03.706575 4840 generic.go:334] "Generic (PLEG): container finished" podID="6590ca4e-9b8d-474d-8aef-d6b3a5bf54b7" containerID="d642ec9544691ec500cebd88a599552d6407c816c56c9252a4ff6edcbe768fe4" exitCode=0 Mar 11 10:16:03 crc kubenswrapper[4840]: I0311 10:16:03.706684 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553736-lwkg5" event={"ID":"6590ca4e-9b8d-474d-8aef-d6b3a5bf54b7","Type":"ContainerDied","Data":"d642ec9544691ec500cebd88a599552d6407c816c56c9252a4ff6edcbe768fe4"} Mar 11 10:16:05 crc kubenswrapper[4840]: I0311 10:16:05.011778 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553736-lwkg5" Mar 11 10:16:05 crc kubenswrapper[4840]: I0311 10:16:05.124552 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xvqf\" (UniqueName: \"kubernetes.io/projected/6590ca4e-9b8d-474d-8aef-d6b3a5bf54b7-kube-api-access-9xvqf\") pod \"6590ca4e-9b8d-474d-8aef-d6b3a5bf54b7\" (UID: \"6590ca4e-9b8d-474d-8aef-d6b3a5bf54b7\") " Mar 11 10:16:05 crc kubenswrapper[4840]: I0311 10:16:05.132365 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6590ca4e-9b8d-474d-8aef-d6b3a5bf54b7-kube-api-access-9xvqf" (OuterVolumeSpecName: "kube-api-access-9xvqf") pod "6590ca4e-9b8d-474d-8aef-d6b3a5bf54b7" (UID: "6590ca4e-9b8d-474d-8aef-d6b3a5bf54b7"). InnerVolumeSpecName "kube-api-access-9xvqf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:16:05 crc kubenswrapper[4840]: I0311 10:16:05.227715 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xvqf\" (UniqueName: \"kubernetes.io/projected/6590ca4e-9b8d-474d-8aef-d6b3a5bf54b7-kube-api-access-9xvqf\") on node \"crc\" DevicePath \"\"" Mar 11 10:16:05 crc kubenswrapper[4840]: I0311 10:16:05.742642 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553736-lwkg5" event={"ID":"6590ca4e-9b8d-474d-8aef-d6b3a5bf54b7","Type":"ContainerDied","Data":"ada159f5dc9d9be5afe28ff02d7be5c6ddf7be45f715dbbb702d567ef2c7b3fc"} Mar 11 10:16:05 crc kubenswrapper[4840]: I0311 10:16:05.742710 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ada159f5dc9d9be5afe28ff02d7be5c6ddf7be45f715dbbb702d567ef2c7b3fc" Mar 11 10:16:05 crc kubenswrapper[4840]: I0311 10:16:05.742777 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553736-lwkg5" Mar 11 10:16:05 crc kubenswrapper[4840]: I0311 10:16:05.792593 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29553730-tcxxk"] Mar 11 10:16:05 crc kubenswrapper[4840]: I0311 10:16:05.800459 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29553730-tcxxk"] Mar 11 10:16:06 crc kubenswrapper[4840]: I0311 10:16:06.073303 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e259382-393b-46c3-bc59-5be19c0c9fc4" path="/var/lib/kubelet/pods/2e259382-393b-46c3-bc59-5be19c0c9fc4/volumes" Mar 11 10:16:11 crc kubenswrapper[4840]: I0311 10:16:11.757065 4840 scope.go:117] "RemoveContainer" containerID="70aa32a809d369b51115f4ae50b895173716b2e4000431a17327f9dabf7e0b90" Mar 11 10:16:27 crc kubenswrapper[4840]: I0311 10:16:27.446449 4840 patch_prober.go:28] interesting pod/machine-config-daemon-brtht 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 11 10:16:27 crc kubenswrapper[4840]: I0311 10:16:27.447397 4840 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 11 10:16:57 crc kubenswrapper[4840]: I0311 10:16:57.446099 4840 patch_prober.go:28] interesting pod/machine-config-daemon-brtht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 11 10:16:57 crc kubenswrapper[4840]: I0311 10:16:57.446663 4840 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 11 10:17:14 crc kubenswrapper[4840]: I0311 10:17:14.193701 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["crc-storage/crc-storage-crc-nxfhp"] Mar 11 10:17:14 crc kubenswrapper[4840]: I0311 10:17:14.199421 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["crc-storage/crc-storage-crc-nxfhp"] Mar 11 10:17:14 crc kubenswrapper[4840]: I0311 10:17:14.302187 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-6xpjj"] Mar 11 10:17:14 crc kubenswrapper[4840]: E0311 10:17:14.302507 4840 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="6590ca4e-9b8d-474d-8aef-d6b3a5bf54b7" containerName="oc" Mar 11 10:17:14 crc kubenswrapper[4840]: I0311 10:17:14.302521 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="6590ca4e-9b8d-474d-8aef-d6b3a5bf54b7" containerName="oc" Mar 11 10:17:14 crc kubenswrapper[4840]: I0311 10:17:14.302671 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="6590ca4e-9b8d-474d-8aef-d6b3a5bf54b7" containerName="oc" Mar 11 10:17:14 crc kubenswrapper[4840]: I0311 10:17:14.303148 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-6xpjj" Mar 11 10:17:14 crc kubenswrapper[4840]: I0311 10:17:14.306724 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Mar 11 10:17:14 crc kubenswrapper[4840]: I0311 10:17:14.308698 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Mar 11 10:17:14 crc kubenswrapper[4840]: I0311 10:17:14.308798 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Mar 11 10:17:14 crc kubenswrapper[4840]: I0311 10:17:14.309014 4840 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-lsg87" Mar 11 10:17:14 crc kubenswrapper[4840]: I0311 10:17:14.316371 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-6xpjj"] Mar 11 10:17:14 crc kubenswrapper[4840]: I0311 10:17:14.448675 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/622e66d4-16cd-41ce-a857-2e74b64c493a-node-mnt\") pod \"crc-storage-crc-6xpjj\" (UID: \"622e66d4-16cd-41ce-a857-2e74b64c493a\") " pod="crc-storage/crc-storage-crc-6xpjj" Mar 11 10:17:14 crc kubenswrapper[4840]: I0311 10:17:14.448729 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"crc-storage\" (UniqueName: \"kubernetes.io/configmap/622e66d4-16cd-41ce-a857-2e74b64c493a-crc-storage\") pod \"crc-storage-crc-6xpjj\" (UID: \"622e66d4-16cd-41ce-a857-2e74b64c493a\") " pod="crc-storage/crc-storage-crc-6xpjj" Mar 11 10:17:14 crc kubenswrapper[4840]: I0311 10:17:14.448779 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swn8s\" (UniqueName: \"kubernetes.io/projected/622e66d4-16cd-41ce-a857-2e74b64c493a-kube-api-access-swn8s\") pod \"crc-storage-crc-6xpjj\" (UID: \"622e66d4-16cd-41ce-a857-2e74b64c493a\") " pod="crc-storage/crc-storage-crc-6xpjj" Mar 11 10:17:14 crc kubenswrapper[4840]: I0311 10:17:14.550220 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/622e66d4-16cd-41ce-a857-2e74b64c493a-node-mnt\") pod \"crc-storage-crc-6xpjj\" (UID: \"622e66d4-16cd-41ce-a857-2e74b64c493a\") " pod="crc-storage/crc-storage-crc-6xpjj" Mar 11 10:17:14 crc kubenswrapper[4840]: I0311 10:17:14.550298 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/622e66d4-16cd-41ce-a857-2e74b64c493a-crc-storage\") pod \"crc-storage-crc-6xpjj\" (UID: \"622e66d4-16cd-41ce-a857-2e74b64c493a\") " pod="crc-storage/crc-storage-crc-6xpjj" Mar 11 10:17:14 crc kubenswrapper[4840]: I0311 10:17:14.550490 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swn8s\" (UniqueName: \"kubernetes.io/projected/622e66d4-16cd-41ce-a857-2e74b64c493a-kube-api-access-swn8s\") pod \"crc-storage-crc-6xpjj\" (UID: \"622e66d4-16cd-41ce-a857-2e74b64c493a\") " pod="crc-storage/crc-storage-crc-6xpjj" Mar 11 10:17:14 crc kubenswrapper[4840]: I0311 10:17:14.551331 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: 
\"kubernetes.io/host-path/622e66d4-16cd-41ce-a857-2e74b64c493a-node-mnt\") pod \"crc-storage-crc-6xpjj\" (UID: \"622e66d4-16cd-41ce-a857-2e74b64c493a\") " pod="crc-storage/crc-storage-crc-6xpjj" Mar 11 10:17:14 crc kubenswrapper[4840]: I0311 10:17:14.551845 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/622e66d4-16cd-41ce-a857-2e74b64c493a-crc-storage\") pod \"crc-storage-crc-6xpjj\" (UID: \"622e66d4-16cd-41ce-a857-2e74b64c493a\") " pod="crc-storage/crc-storage-crc-6xpjj" Mar 11 10:17:14 crc kubenswrapper[4840]: I0311 10:17:14.572816 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swn8s\" (UniqueName: \"kubernetes.io/projected/622e66d4-16cd-41ce-a857-2e74b64c493a-kube-api-access-swn8s\") pod \"crc-storage-crc-6xpjj\" (UID: \"622e66d4-16cd-41ce-a857-2e74b64c493a\") " pod="crc-storage/crc-storage-crc-6xpjj" Mar 11 10:17:14 crc kubenswrapper[4840]: I0311 10:17:14.661901 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-6xpjj" Mar 11 10:17:15 crc kubenswrapper[4840]: I0311 10:17:15.078087 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-6xpjj"] Mar 11 10:17:15 crc kubenswrapper[4840]: I0311 10:17:15.398220 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-6xpjj" event={"ID":"622e66d4-16cd-41ce-a857-2e74b64c493a","Type":"ContainerStarted","Data":"ddaa02b1b250af4a3be2ed2099b15d7a0bba7f855f4145dfb7f007d70106e34f"} Mar 11 10:17:16 crc kubenswrapper[4840]: I0311 10:17:16.074946 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="135fe828-cf04-41c5-9fa6-4e7cbc011252" path="/var/lib/kubelet/pods/135fe828-cf04-41c5-9fa6-4e7cbc011252/volumes" Mar 11 10:17:16 crc kubenswrapper[4840]: I0311 10:17:16.408318 4840 generic.go:334] "Generic (PLEG): container finished" podID="622e66d4-16cd-41ce-a857-2e74b64c493a" containerID="5bdf42074f3582c3af387a3dcd78ec21ade2cd471791b224242b4bfadbaa9446" exitCode=0 Mar 11 10:17:16 crc kubenswrapper[4840]: I0311 10:17:16.408362 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-6xpjj" event={"ID":"622e66d4-16cd-41ce-a857-2e74b64c493a","Type":"ContainerDied","Data":"5bdf42074f3582c3af387a3dcd78ec21ade2cd471791b224242b4bfadbaa9446"} Mar 11 10:17:17 crc kubenswrapper[4840]: I0311 10:17:17.728785 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-6xpjj" Mar 11 10:17:17 crc kubenswrapper[4840]: I0311 10:17:17.893033 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-swn8s\" (UniqueName: \"kubernetes.io/projected/622e66d4-16cd-41ce-a857-2e74b64c493a-kube-api-access-swn8s\") pod \"622e66d4-16cd-41ce-a857-2e74b64c493a\" (UID: \"622e66d4-16cd-41ce-a857-2e74b64c493a\") " Mar 11 10:17:17 crc kubenswrapper[4840]: I0311 10:17:17.893149 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/622e66d4-16cd-41ce-a857-2e74b64c493a-crc-storage\") pod \"622e66d4-16cd-41ce-a857-2e74b64c493a\" (UID: \"622e66d4-16cd-41ce-a857-2e74b64c493a\") " Mar 11 10:17:17 crc kubenswrapper[4840]: I0311 10:17:17.893322 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/622e66d4-16cd-41ce-a857-2e74b64c493a-node-mnt\") pod \"622e66d4-16cd-41ce-a857-2e74b64c493a\" (UID: \"622e66d4-16cd-41ce-a857-2e74b64c493a\") " Mar 11 10:17:17 crc kubenswrapper[4840]: I0311 10:17:17.893418 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/622e66d4-16cd-41ce-a857-2e74b64c493a-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "622e66d4-16cd-41ce-a857-2e74b64c493a" (UID: "622e66d4-16cd-41ce-a857-2e74b64c493a"). InnerVolumeSpecName "node-mnt". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 11 10:17:17 crc kubenswrapper[4840]: I0311 10:17:17.893634 4840 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/622e66d4-16cd-41ce-a857-2e74b64c493a-node-mnt\") on node \"crc\" DevicePath \"\"" Mar 11 10:17:17 crc kubenswrapper[4840]: I0311 10:17:17.898602 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/622e66d4-16cd-41ce-a857-2e74b64c493a-kube-api-access-swn8s" (OuterVolumeSpecName: "kube-api-access-swn8s") pod "622e66d4-16cd-41ce-a857-2e74b64c493a" (UID: "622e66d4-16cd-41ce-a857-2e74b64c493a"). InnerVolumeSpecName "kube-api-access-swn8s". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:17:17 crc kubenswrapper[4840]: I0311 10:17:17.919346 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/622e66d4-16cd-41ce-a857-2e74b64c493a-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "622e66d4-16cd-41ce-a857-2e74b64c493a" (UID: "622e66d4-16cd-41ce-a857-2e74b64c493a"). InnerVolumeSpecName "crc-storage". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 10:17:17 crc kubenswrapper[4840]: I0311 10:17:17.995291 4840 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/622e66d4-16cd-41ce-a857-2e74b64c493a-crc-storage\") on node \"crc\" DevicePath \"\"" Mar 11 10:17:17 crc kubenswrapper[4840]: I0311 10:17:17.995335 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-swn8s\" (UniqueName: \"kubernetes.io/projected/622e66d4-16cd-41ce-a857-2e74b64c493a-kube-api-access-swn8s\") on node \"crc\" DevicePath \"\"" Mar 11 10:17:18 crc kubenswrapper[4840]: I0311 10:17:18.423501 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-6xpjj" event={"ID":"622e66d4-16cd-41ce-a857-2e74b64c493a","Type":"ContainerDied","Data":"ddaa02b1b250af4a3be2ed2099b15d7a0bba7f855f4145dfb7f007d70106e34f"} Mar 11 10:17:18 crc kubenswrapper[4840]: I0311 10:17:18.423547 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ddaa02b1b250af4a3be2ed2099b15d7a0bba7f855f4145dfb7f007d70106e34f" Mar 11 10:17:18 crc kubenswrapper[4840]: I0311 10:17:18.423658 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-6xpjj" Mar 11 10:17:20 crc kubenswrapper[4840]: I0311 10:17:20.101918 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["crc-storage/crc-storage-crc-6xpjj"] Mar 11 10:17:20 crc kubenswrapper[4840]: I0311 10:17:20.106776 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["crc-storage/crc-storage-crc-6xpjj"] Mar 11 10:17:20 crc kubenswrapper[4840]: I0311 10:17:20.210008 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-hjddz"] Mar 11 10:17:20 crc kubenswrapper[4840]: E0311 10:17:20.210434 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="622e66d4-16cd-41ce-a857-2e74b64c493a" containerName="storage" Mar 11 10:17:20 crc kubenswrapper[4840]: I0311 10:17:20.210462 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="622e66d4-16cd-41ce-a857-2e74b64c493a" containerName="storage" Mar 11 10:17:20 crc kubenswrapper[4840]: I0311 10:17:20.210748 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="622e66d4-16cd-41ce-a857-2e74b64c493a" containerName="storage" Mar 11 10:17:20 crc kubenswrapper[4840]: I0311 10:17:20.211381 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-hjddz" Mar 11 10:17:20 crc kubenswrapper[4840]: I0311 10:17:20.215177 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Mar 11 10:17:20 crc kubenswrapper[4840]: I0311 10:17:20.215847 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Mar 11 10:17:20 crc kubenswrapper[4840]: I0311 10:17:20.216133 4840 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-lsg87" Mar 11 10:17:20 crc kubenswrapper[4840]: I0311 10:17:20.216363 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Mar 11 10:17:20 crc kubenswrapper[4840]: I0311 10:17:20.233342 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-hjddz"] Mar 11 10:17:20 crc kubenswrapper[4840]: I0311 10:17:20.332517 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qd2rf\" (UniqueName: \"kubernetes.io/projected/2ad7057e-c777-4b97-a893-9dc8ead6f0ca-kube-api-access-qd2rf\") pod \"crc-storage-crc-hjddz\" (UID: \"2ad7057e-c777-4b97-a893-9dc8ead6f0ca\") " pod="crc-storage/crc-storage-crc-hjddz" Mar 11 10:17:20 crc kubenswrapper[4840]: I0311 10:17:20.332660 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/2ad7057e-c777-4b97-a893-9dc8ead6f0ca-node-mnt\") pod \"crc-storage-crc-hjddz\" (UID: \"2ad7057e-c777-4b97-a893-9dc8ead6f0ca\") " pod="crc-storage/crc-storage-crc-hjddz" Mar 11 10:17:20 crc kubenswrapper[4840]: I0311 10:17:20.332704 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/2ad7057e-c777-4b97-a893-9dc8ead6f0ca-crc-storage\") pod \"crc-storage-crc-hjddz\" (UID: 
\"2ad7057e-c777-4b97-a893-9dc8ead6f0ca\") " pod="crc-storage/crc-storage-crc-hjddz" Mar 11 10:17:20 crc kubenswrapper[4840]: I0311 10:17:20.433616 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qd2rf\" (UniqueName: \"kubernetes.io/projected/2ad7057e-c777-4b97-a893-9dc8ead6f0ca-kube-api-access-qd2rf\") pod \"crc-storage-crc-hjddz\" (UID: \"2ad7057e-c777-4b97-a893-9dc8ead6f0ca\") " pod="crc-storage/crc-storage-crc-hjddz" Mar 11 10:17:20 crc kubenswrapper[4840]: I0311 10:17:20.433791 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/2ad7057e-c777-4b97-a893-9dc8ead6f0ca-node-mnt\") pod \"crc-storage-crc-hjddz\" (UID: \"2ad7057e-c777-4b97-a893-9dc8ead6f0ca\") " pod="crc-storage/crc-storage-crc-hjddz" Mar 11 10:17:20 crc kubenswrapper[4840]: I0311 10:17:20.433840 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/2ad7057e-c777-4b97-a893-9dc8ead6f0ca-crc-storage\") pod \"crc-storage-crc-hjddz\" (UID: \"2ad7057e-c777-4b97-a893-9dc8ead6f0ca\") " pod="crc-storage/crc-storage-crc-hjddz" Mar 11 10:17:20 crc kubenswrapper[4840]: I0311 10:17:20.434302 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/2ad7057e-c777-4b97-a893-9dc8ead6f0ca-node-mnt\") pod \"crc-storage-crc-hjddz\" (UID: \"2ad7057e-c777-4b97-a893-9dc8ead6f0ca\") " pod="crc-storage/crc-storage-crc-hjddz" Mar 11 10:17:20 crc kubenswrapper[4840]: I0311 10:17:20.435107 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/2ad7057e-c777-4b97-a893-9dc8ead6f0ca-crc-storage\") pod \"crc-storage-crc-hjddz\" (UID: \"2ad7057e-c777-4b97-a893-9dc8ead6f0ca\") " pod="crc-storage/crc-storage-crc-hjddz" Mar 11 10:17:20 crc kubenswrapper[4840]: I0311 10:17:20.459244 4840 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qd2rf\" (UniqueName: \"kubernetes.io/projected/2ad7057e-c777-4b97-a893-9dc8ead6f0ca-kube-api-access-qd2rf\") pod \"crc-storage-crc-hjddz\" (UID: \"2ad7057e-c777-4b97-a893-9dc8ead6f0ca\") " pod="crc-storage/crc-storage-crc-hjddz" Mar 11 10:17:20 crc kubenswrapper[4840]: I0311 10:17:20.534834 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-hjddz" Mar 11 10:17:21 crc kubenswrapper[4840]: I0311 10:17:21.006954 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-hjddz"] Mar 11 10:17:21 crc kubenswrapper[4840]: I0311 10:17:21.452105 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-hjddz" event={"ID":"2ad7057e-c777-4b97-a893-9dc8ead6f0ca","Type":"ContainerStarted","Data":"a7f6ccdd068b0d9d6e249cb06ebcf169e8991c62340a0cb2c3433d88d9eb47ab"} Mar 11 10:17:22 crc kubenswrapper[4840]: I0311 10:17:22.081124 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="622e66d4-16cd-41ce-a857-2e74b64c493a" path="/var/lib/kubelet/pods/622e66d4-16cd-41ce-a857-2e74b64c493a/volumes" Mar 11 10:17:22 crc kubenswrapper[4840]: I0311 10:17:22.462428 4840 generic.go:334] "Generic (PLEG): container finished" podID="2ad7057e-c777-4b97-a893-9dc8ead6f0ca" containerID="6f14322d7766d4fd1c920c4fc39f78725ba62f67a6f665eff272a46937d6fb69" exitCode=0 Mar 11 10:17:22 crc kubenswrapper[4840]: I0311 10:17:22.462490 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-hjddz" event={"ID":"2ad7057e-c777-4b97-a893-9dc8ead6f0ca","Type":"ContainerDied","Data":"6f14322d7766d4fd1c920c4fc39f78725ba62f67a6f665eff272a46937d6fb69"} Mar 11 10:17:23 crc kubenswrapper[4840]: I0311 10:17:23.789634 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-hjddz" Mar 11 10:17:23 crc kubenswrapper[4840]: I0311 10:17:23.888854 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qd2rf\" (UniqueName: \"kubernetes.io/projected/2ad7057e-c777-4b97-a893-9dc8ead6f0ca-kube-api-access-qd2rf\") pod \"2ad7057e-c777-4b97-a893-9dc8ead6f0ca\" (UID: \"2ad7057e-c777-4b97-a893-9dc8ead6f0ca\") " Mar 11 10:17:23 crc kubenswrapper[4840]: I0311 10:17:23.888978 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/2ad7057e-c777-4b97-a893-9dc8ead6f0ca-crc-storage\") pod \"2ad7057e-c777-4b97-a893-9dc8ead6f0ca\" (UID: \"2ad7057e-c777-4b97-a893-9dc8ead6f0ca\") " Mar 11 10:17:23 crc kubenswrapper[4840]: I0311 10:17:23.889011 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/2ad7057e-c777-4b97-a893-9dc8ead6f0ca-node-mnt\") pod \"2ad7057e-c777-4b97-a893-9dc8ead6f0ca\" (UID: \"2ad7057e-c777-4b97-a893-9dc8ead6f0ca\") " Mar 11 10:17:23 crc kubenswrapper[4840]: I0311 10:17:23.889248 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2ad7057e-c777-4b97-a893-9dc8ead6f0ca-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "2ad7057e-c777-4b97-a893-9dc8ead6f0ca" (UID: "2ad7057e-c777-4b97-a893-9dc8ead6f0ca"). InnerVolumeSpecName "node-mnt". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 11 10:17:23 crc kubenswrapper[4840]: I0311 10:17:23.894512 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ad7057e-c777-4b97-a893-9dc8ead6f0ca-kube-api-access-qd2rf" (OuterVolumeSpecName: "kube-api-access-qd2rf") pod "2ad7057e-c777-4b97-a893-9dc8ead6f0ca" (UID: "2ad7057e-c777-4b97-a893-9dc8ead6f0ca"). InnerVolumeSpecName "kube-api-access-qd2rf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:17:23 crc kubenswrapper[4840]: I0311 10:17:23.908956 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ad7057e-c777-4b97-a893-9dc8ead6f0ca-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "2ad7057e-c777-4b97-a893-9dc8ead6f0ca" (UID: "2ad7057e-c777-4b97-a893-9dc8ead6f0ca"). InnerVolumeSpecName "crc-storage". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 10:17:23 crc kubenswrapper[4840]: I0311 10:17:23.991071 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qd2rf\" (UniqueName: \"kubernetes.io/projected/2ad7057e-c777-4b97-a893-9dc8ead6f0ca-kube-api-access-qd2rf\") on node \"crc\" DevicePath \"\"" Mar 11 10:17:23 crc kubenswrapper[4840]: I0311 10:17:23.991109 4840 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/2ad7057e-c777-4b97-a893-9dc8ead6f0ca-crc-storage\") on node \"crc\" DevicePath \"\"" Mar 11 10:17:23 crc kubenswrapper[4840]: I0311 10:17:23.991118 4840 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/2ad7057e-c777-4b97-a893-9dc8ead6f0ca-node-mnt\") on node \"crc\" DevicePath \"\"" Mar 11 10:17:24 crc kubenswrapper[4840]: I0311 10:17:24.479365 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-hjddz" event={"ID":"2ad7057e-c777-4b97-a893-9dc8ead6f0ca","Type":"ContainerDied","Data":"a7f6ccdd068b0d9d6e249cb06ebcf169e8991c62340a0cb2c3433d88d9eb47ab"} Mar 11 10:17:24 crc kubenswrapper[4840]: I0311 10:17:24.479412 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a7f6ccdd068b0d9d6e249cb06ebcf169e8991c62340a0cb2c3433d88d9eb47ab" Mar 11 10:17:24 crc kubenswrapper[4840]: I0311 10:17:24.479415 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-hjddz" Mar 11 10:17:27 crc kubenswrapper[4840]: I0311 10:17:27.446436 4840 patch_prober.go:28] interesting pod/machine-config-daemon-brtht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 11 10:17:27 crc kubenswrapper[4840]: I0311 10:17:27.446853 4840 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 11 10:17:27 crc kubenswrapper[4840]: I0311 10:17:27.446905 4840 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-brtht" Mar 11 10:17:27 crc kubenswrapper[4840]: I0311 10:17:27.447594 4840 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"32974af3682f2327823dccca8b0498904823622932f646c199fcd97fe88fc627"} pod="openshift-machine-config-operator/machine-config-daemon-brtht" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 11 10:17:27 crc kubenswrapper[4840]: I0311 10:17:27.448004 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" containerName="machine-config-daemon" containerID="cri-o://32974af3682f2327823dccca8b0498904823622932f646c199fcd97fe88fc627" gracePeriod=600 Mar 11 10:17:28 crc kubenswrapper[4840]: I0311 10:17:28.509439 4840 generic.go:334] "Generic (PLEG): container finished" 
podID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" containerID="32974af3682f2327823dccca8b0498904823622932f646c199fcd97fe88fc627" exitCode=0 Mar 11 10:17:28 crc kubenswrapper[4840]: I0311 10:17:28.509499 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-brtht" event={"ID":"8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d","Type":"ContainerDied","Data":"32974af3682f2327823dccca8b0498904823622932f646c199fcd97fe88fc627"} Mar 11 10:17:28 crc kubenswrapper[4840]: I0311 10:17:28.509849 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-brtht" event={"ID":"8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d","Type":"ContainerStarted","Data":"27425edf605a65072c6ef9c1e0b8cea49f613bc8e8eb94b258e15f23bc4a814b"} Mar 11 10:17:28 crc kubenswrapper[4840]: I0311 10:17:28.509874 4840 scope.go:117] "RemoveContainer" containerID="e5dd56f8fedc37dde5b61acd8e1e840f8e8968f30b4b179b8fa6c41015c74607" Mar 11 10:17:38 crc kubenswrapper[4840]: I0311 10:17:38.595598 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9hb6w"] Mar 11 10:17:38 crc kubenswrapper[4840]: E0311 10:17:38.596864 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ad7057e-c777-4b97-a893-9dc8ead6f0ca" containerName="storage" Mar 11 10:17:38 crc kubenswrapper[4840]: I0311 10:17:38.596889 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ad7057e-c777-4b97-a893-9dc8ead6f0ca" containerName="storage" Mar 11 10:17:38 crc kubenswrapper[4840]: I0311 10:17:38.597210 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ad7057e-c777-4b97-a893-9dc8ead6f0ca" containerName="storage" Mar 11 10:17:38 crc kubenswrapper[4840]: I0311 10:17:38.598668 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9hb6w" Mar 11 10:17:38 crc kubenswrapper[4840]: I0311 10:17:38.617250 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9hb6w"] Mar 11 10:17:38 crc kubenswrapper[4840]: I0311 10:17:38.701242 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f05082ed-9790-4766-839b-1a9015b8715f-catalog-content\") pod \"redhat-operators-9hb6w\" (UID: \"f05082ed-9790-4766-839b-1a9015b8715f\") " pod="openshift-marketplace/redhat-operators-9hb6w" Mar 11 10:17:38 crc kubenswrapper[4840]: I0311 10:17:38.701312 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrwqz\" (UniqueName: \"kubernetes.io/projected/f05082ed-9790-4766-839b-1a9015b8715f-kube-api-access-rrwqz\") pod \"redhat-operators-9hb6w\" (UID: \"f05082ed-9790-4766-839b-1a9015b8715f\") " pod="openshift-marketplace/redhat-operators-9hb6w" Mar 11 10:17:38 crc kubenswrapper[4840]: I0311 10:17:38.701342 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f05082ed-9790-4766-839b-1a9015b8715f-utilities\") pod \"redhat-operators-9hb6w\" (UID: \"f05082ed-9790-4766-839b-1a9015b8715f\") " pod="openshift-marketplace/redhat-operators-9hb6w" Mar 11 10:17:38 crc kubenswrapper[4840]: I0311 10:17:38.803238 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f05082ed-9790-4766-839b-1a9015b8715f-catalog-content\") pod \"redhat-operators-9hb6w\" (UID: \"f05082ed-9790-4766-839b-1a9015b8715f\") " pod="openshift-marketplace/redhat-operators-9hb6w" Mar 11 10:17:38 crc kubenswrapper[4840]: I0311 10:17:38.803313 4840 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-rrwqz\" (UniqueName: \"kubernetes.io/projected/f05082ed-9790-4766-839b-1a9015b8715f-kube-api-access-rrwqz\") pod \"redhat-operators-9hb6w\" (UID: \"f05082ed-9790-4766-839b-1a9015b8715f\") " pod="openshift-marketplace/redhat-operators-9hb6w" Mar 11 10:17:38 crc kubenswrapper[4840]: I0311 10:17:38.803343 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f05082ed-9790-4766-839b-1a9015b8715f-utilities\") pod \"redhat-operators-9hb6w\" (UID: \"f05082ed-9790-4766-839b-1a9015b8715f\") " pod="openshift-marketplace/redhat-operators-9hb6w" Mar 11 10:17:38 crc kubenswrapper[4840]: I0311 10:17:38.803889 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f05082ed-9790-4766-839b-1a9015b8715f-catalog-content\") pod \"redhat-operators-9hb6w\" (UID: \"f05082ed-9790-4766-839b-1a9015b8715f\") " pod="openshift-marketplace/redhat-operators-9hb6w" Mar 11 10:17:38 crc kubenswrapper[4840]: I0311 10:17:38.803936 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f05082ed-9790-4766-839b-1a9015b8715f-utilities\") pod \"redhat-operators-9hb6w\" (UID: \"f05082ed-9790-4766-839b-1a9015b8715f\") " pod="openshift-marketplace/redhat-operators-9hb6w" Mar 11 10:17:38 crc kubenswrapper[4840]: I0311 10:17:38.828109 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrwqz\" (UniqueName: \"kubernetes.io/projected/f05082ed-9790-4766-839b-1a9015b8715f-kube-api-access-rrwqz\") pod \"redhat-operators-9hb6w\" (UID: \"f05082ed-9790-4766-839b-1a9015b8715f\") " pod="openshift-marketplace/redhat-operators-9hb6w" Mar 11 10:17:38 crc kubenswrapper[4840]: I0311 10:17:38.918373 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9hb6w" Mar 11 10:17:39 crc kubenswrapper[4840]: I0311 10:17:39.145485 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9hb6w"] Mar 11 10:17:39 crc kubenswrapper[4840]: I0311 10:17:39.603400 4840 generic.go:334] "Generic (PLEG): container finished" podID="f05082ed-9790-4766-839b-1a9015b8715f" containerID="496732aba3265d36bf401f2ca36bd69fb951510ec7b78a7fc437cd13ad5850c2" exitCode=0 Mar 11 10:17:39 crc kubenswrapper[4840]: I0311 10:17:39.603503 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9hb6w" event={"ID":"f05082ed-9790-4766-839b-1a9015b8715f","Type":"ContainerDied","Data":"496732aba3265d36bf401f2ca36bd69fb951510ec7b78a7fc437cd13ad5850c2"} Mar 11 10:17:39 crc kubenswrapper[4840]: I0311 10:17:39.603712 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9hb6w" event={"ID":"f05082ed-9790-4766-839b-1a9015b8715f","Type":"ContainerStarted","Data":"a1efc9a62f6a286a71154933980d1866d0a817f80f4307bcbe824fc740a2e3e4"} Mar 11 10:17:40 crc kubenswrapper[4840]: I0311 10:17:40.612644 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9hb6w" event={"ID":"f05082ed-9790-4766-839b-1a9015b8715f","Type":"ContainerStarted","Data":"720b11ee200f5daacc5c547852ee33a8875aab054d0d799de7744a829699ffbd"} Mar 11 10:17:41 crc kubenswrapper[4840]: I0311 10:17:41.631734 4840 generic.go:334] "Generic (PLEG): container finished" podID="f05082ed-9790-4766-839b-1a9015b8715f" containerID="720b11ee200f5daacc5c547852ee33a8875aab054d0d799de7744a829699ffbd" exitCode=0 Mar 11 10:17:41 crc kubenswrapper[4840]: I0311 10:17:41.631787 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9hb6w" 
event={"ID":"f05082ed-9790-4766-839b-1a9015b8715f","Type":"ContainerDied","Data":"720b11ee200f5daacc5c547852ee33a8875aab054d0d799de7744a829699ffbd"} Mar 11 10:17:42 crc kubenswrapper[4840]: I0311 10:17:42.641410 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9hb6w" event={"ID":"f05082ed-9790-4766-839b-1a9015b8715f","Type":"ContainerStarted","Data":"40b59a999a140111e10eb651319b8cb57db7641ab9f4cc468e2860e6a6036662"} Mar 11 10:17:42 crc kubenswrapper[4840]: I0311 10:17:42.662668 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9hb6w" podStartSLOduration=2.046017492 podStartE2EDuration="4.662649966s" podCreationTimestamp="2026-03-11 10:17:38 +0000 UTC" firstStartedPulling="2026-03-11 10:17:39.605458987 +0000 UTC m=+4858.271128812" lastFinishedPulling="2026-03-11 10:17:42.222091451 +0000 UTC m=+4860.887761286" observedRunningTime="2026-03-11 10:17:42.658280685 +0000 UTC m=+4861.323950500" watchObservedRunningTime="2026-03-11 10:17:42.662649966 +0000 UTC m=+4861.328319781" Mar 11 10:17:48 crc kubenswrapper[4840]: I0311 10:17:48.918820 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9hb6w" Mar 11 10:17:48 crc kubenswrapper[4840]: I0311 10:17:48.919422 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9hb6w" Mar 11 10:17:49 crc kubenswrapper[4840]: I0311 10:17:49.956805 4840 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9hb6w" podUID="f05082ed-9790-4766-839b-1a9015b8715f" containerName="registry-server" probeResult="failure" output=< Mar 11 10:17:49 crc kubenswrapper[4840]: timeout: failed to connect service ":50051" within 1s Mar 11 10:17:49 crc kubenswrapper[4840]: > Mar 11 10:17:58 crc kubenswrapper[4840]: I0311 10:17:58.968103 4840 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9hb6w" Mar 11 10:17:59 crc kubenswrapper[4840]: I0311 10:17:59.012130 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9hb6w" Mar 11 10:17:59 crc kubenswrapper[4840]: I0311 10:17:59.209664 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9hb6w"] Mar 11 10:18:00 crc kubenswrapper[4840]: I0311 10:18:00.159207 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29553738-d5xp4"] Mar 11 10:18:00 crc kubenswrapper[4840]: I0311 10:18:00.161156 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553738-d5xp4" Mar 11 10:18:00 crc kubenswrapper[4840]: I0311 10:18:00.164065 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 11 10:18:00 crc kubenswrapper[4840]: I0311 10:18:00.165756 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-q6lwc" Mar 11 10:18:00 crc kubenswrapper[4840]: I0311 10:18:00.167894 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 11 10:18:00 crc kubenswrapper[4840]: I0311 10:18:00.170295 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553738-d5xp4"] Mar 11 10:18:00 crc kubenswrapper[4840]: I0311 10:18:00.319111 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k94sf\" (UniqueName: \"kubernetes.io/projected/970dbd8c-cbf5-4050-a910-97ea944101b8-kube-api-access-k94sf\") pod \"auto-csr-approver-29553738-d5xp4\" (UID: \"970dbd8c-cbf5-4050-a910-97ea944101b8\") " pod="openshift-infra/auto-csr-approver-29553738-d5xp4" Mar 11 10:18:00 crc kubenswrapper[4840]: I0311 
10:18:00.420529 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k94sf\" (UniqueName: \"kubernetes.io/projected/970dbd8c-cbf5-4050-a910-97ea944101b8-kube-api-access-k94sf\") pod \"auto-csr-approver-29553738-d5xp4\" (UID: \"970dbd8c-cbf5-4050-a910-97ea944101b8\") " pod="openshift-infra/auto-csr-approver-29553738-d5xp4" Mar 11 10:18:00 crc kubenswrapper[4840]: I0311 10:18:00.447957 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k94sf\" (UniqueName: \"kubernetes.io/projected/970dbd8c-cbf5-4050-a910-97ea944101b8-kube-api-access-k94sf\") pod \"auto-csr-approver-29553738-d5xp4\" (UID: \"970dbd8c-cbf5-4050-a910-97ea944101b8\") " pod="openshift-infra/auto-csr-approver-29553738-d5xp4" Mar 11 10:18:00 crc kubenswrapper[4840]: I0311 10:18:00.496164 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553738-d5xp4" Mar 11 10:18:00 crc kubenswrapper[4840]: I0311 10:18:00.782358 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9hb6w" podUID="f05082ed-9790-4766-839b-1a9015b8715f" containerName="registry-server" containerID="cri-o://40b59a999a140111e10eb651319b8cb57db7641ab9f4cc468e2860e6a6036662" gracePeriod=2 Mar 11 10:18:00 crc kubenswrapper[4840]: I0311 10:18:00.928069 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553738-d5xp4"] Mar 11 10:18:01 crc kubenswrapper[4840]: I0311 10:18:01.514962 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9hb6w" Mar 11 10:18:01 crc kubenswrapper[4840]: I0311 10:18:01.662047 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f05082ed-9790-4766-839b-1a9015b8715f-catalog-content\") pod \"f05082ed-9790-4766-839b-1a9015b8715f\" (UID: \"f05082ed-9790-4766-839b-1a9015b8715f\") " Mar 11 10:18:01 crc kubenswrapper[4840]: I0311 10:18:01.662116 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f05082ed-9790-4766-839b-1a9015b8715f-utilities\") pod \"f05082ed-9790-4766-839b-1a9015b8715f\" (UID: \"f05082ed-9790-4766-839b-1a9015b8715f\") " Mar 11 10:18:01 crc kubenswrapper[4840]: I0311 10:18:01.662272 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rrwqz\" (UniqueName: \"kubernetes.io/projected/f05082ed-9790-4766-839b-1a9015b8715f-kube-api-access-rrwqz\") pod \"f05082ed-9790-4766-839b-1a9015b8715f\" (UID: \"f05082ed-9790-4766-839b-1a9015b8715f\") " Mar 11 10:18:01 crc kubenswrapper[4840]: I0311 10:18:01.663120 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f05082ed-9790-4766-839b-1a9015b8715f-utilities" (OuterVolumeSpecName: "utilities") pod "f05082ed-9790-4766-839b-1a9015b8715f" (UID: "f05082ed-9790-4766-839b-1a9015b8715f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 10:18:01 crc kubenswrapper[4840]: I0311 10:18:01.667844 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f05082ed-9790-4766-839b-1a9015b8715f-kube-api-access-rrwqz" (OuterVolumeSpecName: "kube-api-access-rrwqz") pod "f05082ed-9790-4766-839b-1a9015b8715f" (UID: "f05082ed-9790-4766-839b-1a9015b8715f"). InnerVolumeSpecName "kube-api-access-rrwqz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:18:01 crc kubenswrapper[4840]: I0311 10:18:01.764695 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rrwqz\" (UniqueName: \"kubernetes.io/projected/f05082ed-9790-4766-839b-1a9015b8715f-kube-api-access-rrwqz\") on node \"crc\" DevicePath \"\"" Mar 11 10:18:01 crc kubenswrapper[4840]: I0311 10:18:01.764777 4840 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f05082ed-9790-4766-839b-1a9015b8715f-utilities\") on node \"crc\" DevicePath \"\"" Mar 11 10:18:01 crc kubenswrapper[4840]: I0311 10:18:01.792704 4840 generic.go:334] "Generic (PLEG): container finished" podID="f05082ed-9790-4766-839b-1a9015b8715f" containerID="40b59a999a140111e10eb651319b8cb57db7641ab9f4cc468e2860e6a6036662" exitCode=0 Mar 11 10:18:01 crc kubenswrapper[4840]: I0311 10:18:01.792767 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9hb6w" event={"ID":"f05082ed-9790-4766-839b-1a9015b8715f","Type":"ContainerDied","Data":"40b59a999a140111e10eb651319b8cb57db7641ab9f4cc468e2860e6a6036662"} Mar 11 10:18:01 crc kubenswrapper[4840]: I0311 10:18:01.792840 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9hb6w" event={"ID":"f05082ed-9790-4766-839b-1a9015b8715f","Type":"ContainerDied","Data":"a1efc9a62f6a286a71154933980d1866d0a817f80f4307bcbe824fc740a2e3e4"} Mar 11 10:18:01 crc kubenswrapper[4840]: I0311 10:18:01.792844 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9hb6w" Mar 11 10:18:01 crc kubenswrapper[4840]: I0311 10:18:01.792873 4840 scope.go:117] "RemoveContainer" containerID="40b59a999a140111e10eb651319b8cb57db7641ab9f4cc468e2860e6a6036662" Mar 11 10:18:01 crc kubenswrapper[4840]: I0311 10:18:01.794226 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553738-d5xp4" event={"ID":"970dbd8c-cbf5-4050-a910-97ea944101b8","Type":"ContainerStarted","Data":"373d7ce249b54cee665923a4f8e762698c2c1ab685fb5279cf637e32f61399c2"} Mar 11 10:18:01 crc kubenswrapper[4840]: I0311 10:18:01.816577 4840 scope.go:117] "RemoveContainer" containerID="720b11ee200f5daacc5c547852ee33a8875aab054d0d799de7744a829699ffbd" Mar 11 10:18:01 crc kubenswrapper[4840]: I0311 10:18:01.831739 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f05082ed-9790-4766-839b-1a9015b8715f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f05082ed-9790-4766-839b-1a9015b8715f" (UID: "f05082ed-9790-4766-839b-1a9015b8715f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 10:18:01 crc kubenswrapper[4840]: I0311 10:18:01.839303 4840 scope.go:117] "RemoveContainer" containerID="496732aba3265d36bf401f2ca36bd69fb951510ec7b78a7fc437cd13ad5850c2" Mar 11 10:18:01 crc kubenswrapper[4840]: I0311 10:18:01.866384 4840 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f05082ed-9790-4766-839b-1a9015b8715f-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 11 10:18:01 crc kubenswrapper[4840]: I0311 10:18:01.868510 4840 scope.go:117] "RemoveContainer" containerID="40b59a999a140111e10eb651319b8cb57db7641ab9f4cc468e2860e6a6036662" Mar 11 10:18:01 crc kubenswrapper[4840]: E0311 10:18:01.872914 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40b59a999a140111e10eb651319b8cb57db7641ab9f4cc468e2860e6a6036662\": container with ID starting with 40b59a999a140111e10eb651319b8cb57db7641ab9f4cc468e2860e6a6036662 not found: ID does not exist" containerID="40b59a999a140111e10eb651319b8cb57db7641ab9f4cc468e2860e6a6036662" Mar 11 10:18:01 crc kubenswrapper[4840]: I0311 10:18:01.872974 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40b59a999a140111e10eb651319b8cb57db7641ab9f4cc468e2860e6a6036662"} err="failed to get container status \"40b59a999a140111e10eb651319b8cb57db7641ab9f4cc468e2860e6a6036662\": rpc error: code = NotFound desc = could not find container \"40b59a999a140111e10eb651319b8cb57db7641ab9f4cc468e2860e6a6036662\": container with ID starting with 40b59a999a140111e10eb651319b8cb57db7641ab9f4cc468e2860e6a6036662 not found: ID does not exist" Mar 11 10:18:01 crc kubenswrapper[4840]: I0311 10:18:01.873014 4840 scope.go:117] "RemoveContainer" containerID="720b11ee200f5daacc5c547852ee33a8875aab054d0d799de7744a829699ffbd" Mar 11 10:18:01 crc kubenswrapper[4840]: E0311 10:18:01.873927 4840 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"720b11ee200f5daacc5c547852ee33a8875aab054d0d799de7744a829699ffbd\": container with ID starting with 720b11ee200f5daacc5c547852ee33a8875aab054d0d799de7744a829699ffbd not found: ID does not exist" containerID="720b11ee200f5daacc5c547852ee33a8875aab054d0d799de7744a829699ffbd" Mar 11 10:18:01 crc kubenswrapper[4840]: I0311 10:18:01.873999 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"720b11ee200f5daacc5c547852ee33a8875aab054d0d799de7744a829699ffbd"} err="failed to get container status \"720b11ee200f5daacc5c547852ee33a8875aab054d0d799de7744a829699ffbd\": rpc error: code = NotFound desc = could not find container \"720b11ee200f5daacc5c547852ee33a8875aab054d0d799de7744a829699ffbd\": container with ID starting with 720b11ee200f5daacc5c547852ee33a8875aab054d0d799de7744a829699ffbd not found: ID does not exist" Mar 11 10:18:01 crc kubenswrapper[4840]: I0311 10:18:01.874042 4840 scope.go:117] "RemoveContainer" containerID="496732aba3265d36bf401f2ca36bd69fb951510ec7b78a7fc437cd13ad5850c2" Mar 11 10:18:01 crc kubenswrapper[4840]: E0311 10:18:01.874448 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"496732aba3265d36bf401f2ca36bd69fb951510ec7b78a7fc437cd13ad5850c2\": container with ID starting with 496732aba3265d36bf401f2ca36bd69fb951510ec7b78a7fc437cd13ad5850c2 not found: ID does not exist" containerID="496732aba3265d36bf401f2ca36bd69fb951510ec7b78a7fc437cd13ad5850c2" Mar 11 10:18:01 crc kubenswrapper[4840]: I0311 10:18:01.874545 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"496732aba3265d36bf401f2ca36bd69fb951510ec7b78a7fc437cd13ad5850c2"} err="failed to get container status \"496732aba3265d36bf401f2ca36bd69fb951510ec7b78a7fc437cd13ad5850c2\": rpc error: code = NotFound desc = could 
not find container \"496732aba3265d36bf401f2ca36bd69fb951510ec7b78a7fc437cd13ad5850c2\": container with ID starting with 496732aba3265d36bf401f2ca36bd69fb951510ec7b78a7fc437cd13ad5850c2 not found: ID does not exist" Mar 11 10:18:02 crc kubenswrapper[4840]: I0311 10:18:02.114881 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9hb6w"] Mar 11 10:18:02 crc kubenswrapper[4840]: I0311 10:18:02.121017 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-9hb6w"] Mar 11 10:18:02 crc kubenswrapper[4840]: I0311 10:18:02.805759 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553738-d5xp4" event={"ID":"970dbd8c-cbf5-4050-a910-97ea944101b8","Type":"ContainerStarted","Data":"3474ec326266b4f62ee3c7845d45d33b39e7789d0992660c14c9ae8b8b4744f1"} Mar 11 10:18:02 crc kubenswrapper[4840]: I0311 10:18:02.824719 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29553738-d5xp4" podStartSLOduration=1.3699625260000001 podStartE2EDuration="2.824699341s" podCreationTimestamp="2026-03-11 10:18:00 +0000 UTC" firstStartedPulling="2026-03-11 10:18:00.932309605 +0000 UTC m=+4879.597979420" lastFinishedPulling="2026-03-11 10:18:02.38704642 +0000 UTC m=+4881.052716235" observedRunningTime="2026-03-11 10:18:02.818337409 +0000 UTC m=+4881.484007224" watchObservedRunningTime="2026-03-11 10:18:02.824699341 +0000 UTC m=+4881.490369166" Mar 11 10:18:03 crc kubenswrapper[4840]: I0311 10:18:03.816610 4840 generic.go:334] "Generic (PLEG): container finished" podID="970dbd8c-cbf5-4050-a910-97ea944101b8" containerID="3474ec326266b4f62ee3c7845d45d33b39e7789d0992660c14c9ae8b8b4744f1" exitCode=0 Mar 11 10:18:03 crc kubenswrapper[4840]: I0311 10:18:03.816664 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553738-d5xp4" 
event={"ID":"970dbd8c-cbf5-4050-a910-97ea944101b8","Type":"ContainerDied","Data":"3474ec326266b4f62ee3c7845d45d33b39e7789d0992660c14c9ae8b8b4744f1"} Mar 11 10:18:04 crc kubenswrapper[4840]: I0311 10:18:04.075245 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f05082ed-9790-4766-839b-1a9015b8715f" path="/var/lib/kubelet/pods/f05082ed-9790-4766-839b-1a9015b8715f/volumes" Mar 11 10:18:05 crc kubenswrapper[4840]: I0311 10:18:05.068670 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553738-d5xp4" Mar 11 10:18:05 crc kubenswrapper[4840]: I0311 10:18:05.143579 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29553732-z9df8"] Mar 11 10:18:05 crc kubenswrapper[4840]: I0311 10:18:05.150169 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29553732-z9df8"] Mar 11 10:18:05 crc kubenswrapper[4840]: I0311 10:18:05.216373 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k94sf\" (UniqueName: \"kubernetes.io/projected/970dbd8c-cbf5-4050-a910-97ea944101b8-kube-api-access-k94sf\") pod \"970dbd8c-cbf5-4050-a910-97ea944101b8\" (UID: \"970dbd8c-cbf5-4050-a910-97ea944101b8\") " Mar 11 10:18:05 crc kubenswrapper[4840]: I0311 10:18:05.227385 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/970dbd8c-cbf5-4050-a910-97ea944101b8-kube-api-access-k94sf" (OuterVolumeSpecName: "kube-api-access-k94sf") pod "970dbd8c-cbf5-4050-a910-97ea944101b8" (UID: "970dbd8c-cbf5-4050-a910-97ea944101b8"). InnerVolumeSpecName "kube-api-access-k94sf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:18:05 crc kubenswrapper[4840]: I0311 10:18:05.318072 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k94sf\" (UniqueName: \"kubernetes.io/projected/970dbd8c-cbf5-4050-a910-97ea944101b8-kube-api-access-k94sf\") on node \"crc\" DevicePath \"\"" Mar 11 10:18:05 crc kubenswrapper[4840]: I0311 10:18:05.833317 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553738-d5xp4" event={"ID":"970dbd8c-cbf5-4050-a910-97ea944101b8","Type":"ContainerDied","Data":"373d7ce249b54cee665923a4f8e762698c2c1ab685fb5279cf637e32f61399c2"} Mar 11 10:18:05 crc kubenswrapper[4840]: I0311 10:18:05.833741 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="373d7ce249b54cee665923a4f8e762698c2c1ab685fb5279cf637e32f61399c2" Mar 11 10:18:05 crc kubenswrapper[4840]: I0311 10:18:05.833384 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553738-d5xp4" Mar 11 10:18:06 crc kubenswrapper[4840]: I0311 10:18:06.069806 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="963a8f23-824f-4e68-ab6e-f2443920087d" path="/var/lib/kubelet/pods/963a8f23-824f-4e68-ab6e-f2443920087d/volumes" Mar 11 10:18:11 crc kubenswrapper[4840]: I0311 10:18:11.842400 4840 scope.go:117] "RemoveContainer" containerID="c8fe2e5935d84e08a7e8d303c01bbcb02d7affcfd61968899a43fcaa601a0490" Mar 11 10:18:11 crc kubenswrapper[4840]: I0311 10:18:11.872534 4840 scope.go:117] "RemoveContainer" containerID="8f8d95f3a3654ccfea14a0647a35721582c995687605c5b40df2315641d0fe12" Mar 11 10:19:08 crc kubenswrapper[4840]: I0311 10:19:08.345864 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-l62xq"] Mar 11 10:19:08 crc kubenswrapper[4840]: E0311 10:19:08.346922 4840 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="970dbd8c-cbf5-4050-a910-97ea944101b8" containerName="oc" Mar 11 10:19:08 crc kubenswrapper[4840]: I0311 10:19:08.346943 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="970dbd8c-cbf5-4050-a910-97ea944101b8" containerName="oc" Mar 11 10:19:08 crc kubenswrapper[4840]: E0311 10:19:08.346970 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f05082ed-9790-4766-839b-1a9015b8715f" containerName="extract-content" Mar 11 10:19:08 crc kubenswrapper[4840]: I0311 10:19:08.346981 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="f05082ed-9790-4766-839b-1a9015b8715f" containerName="extract-content" Mar 11 10:19:08 crc kubenswrapper[4840]: E0311 10:19:08.347027 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f05082ed-9790-4766-839b-1a9015b8715f" containerName="registry-server" Mar 11 10:19:08 crc kubenswrapper[4840]: I0311 10:19:08.347040 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="f05082ed-9790-4766-839b-1a9015b8715f" containerName="registry-server" Mar 11 10:19:08 crc kubenswrapper[4840]: E0311 10:19:08.347075 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f05082ed-9790-4766-839b-1a9015b8715f" containerName="extract-utilities" Mar 11 10:19:08 crc kubenswrapper[4840]: I0311 10:19:08.347086 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="f05082ed-9790-4766-839b-1a9015b8715f" containerName="extract-utilities" Mar 11 10:19:08 crc kubenswrapper[4840]: I0311 10:19:08.347290 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="970dbd8c-cbf5-4050-a910-97ea944101b8" containerName="oc" Mar 11 10:19:08 crc kubenswrapper[4840]: I0311 10:19:08.347315 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="f05082ed-9790-4766-839b-1a9015b8715f" containerName="registry-server" Mar 11 10:19:08 crc kubenswrapper[4840]: I0311 10:19:08.348550 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-l62xq" Mar 11 10:19:08 crc kubenswrapper[4840]: I0311 10:19:08.364388 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-l62xq"] Mar 11 10:19:08 crc kubenswrapper[4840]: I0311 10:19:08.450595 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00d6c78d-537c-4d23-8dc9-e8d0f0ebe0e7-catalog-content\") pod \"certified-operators-l62xq\" (UID: \"00d6c78d-537c-4d23-8dc9-e8d0f0ebe0e7\") " pod="openshift-marketplace/certified-operators-l62xq" Mar 11 10:19:08 crc kubenswrapper[4840]: I0311 10:19:08.450756 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsb7m\" (UniqueName: \"kubernetes.io/projected/00d6c78d-537c-4d23-8dc9-e8d0f0ebe0e7-kube-api-access-rsb7m\") pod \"certified-operators-l62xq\" (UID: \"00d6c78d-537c-4d23-8dc9-e8d0f0ebe0e7\") " pod="openshift-marketplace/certified-operators-l62xq" Mar 11 10:19:08 crc kubenswrapper[4840]: I0311 10:19:08.450795 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00d6c78d-537c-4d23-8dc9-e8d0f0ebe0e7-utilities\") pod \"certified-operators-l62xq\" (UID: \"00d6c78d-537c-4d23-8dc9-e8d0f0ebe0e7\") " pod="openshift-marketplace/certified-operators-l62xq" Mar 11 10:19:08 crc kubenswrapper[4840]: I0311 10:19:08.551855 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rsb7m\" (UniqueName: \"kubernetes.io/projected/00d6c78d-537c-4d23-8dc9-e8d0f0ebe0e7-kube-api-access-rsb7m\") pod \"certified-operators-l62xq\" (UID: \"00d6c78d-537c-4d23-8dc9-e8d0f0ebe0e7\") " pod="openshift-marketplace/certified-operators-l62xq" Mar 11 10:19:08 crc kubenswrapper[4840]: I0311 10:19:08.551899 4840 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00d6c78d-537c-4d23-8dc9-e8d0f0ebe0e7-utilities\") pod \"certified-operators-l62xq\" (UID: \"00d6c78d-537c-4d23-8dc9-e8d0f0ebe0e7\") " pod="openshift-marketplace/certified-operators-l62xq" Mar 11 10:19:08 crc kubenswrapper[4840]: I0311 10:19:08.551955 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00d6c78d-537c-4d23-8dc9-e8d0f0ebe0e7-catalog-content\") pod \"certified-operators-l62xq\" (UID: \"00d6c78d-537c-4d23-8dc9-e8d0f0ebe0e7\") " pod="openshift-marketplace/certified-operators-l62xq" Mar 11 10:19:08 crc kubenswrapper[4840]: I0311 10:19:08.552487 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00d6c78d-537c-4d23-8dc9-e8d0f0ebe0e7-catalog-content\") pod \"certified-operators-l62xq\" (UID: \"00d6c78d-537c-4d23-8dc9-e8d0f0ebe0e7\") " pod="openshift-marketplace/certified-operators-l62xq" Mar 11 10:19:08 crc kubenswrapper[4840]: I0311 10:19:08.552504 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00d6c78d-537c-4d23-8dc9-e8d0f0ebe0e7-utilities\") pod \"certified-operators-l62xq\" (UID: \"00d6c78d-537c-4d23-8dc9-e8d0f0ebe0e7\") " pod="openshift-marketplace/certified-operators-l62xq" Mar 11 10:19:08 crc kubenswrapper[4840]: I0311 10:19:08.573253 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rsb7m\" (UniqueName: \"kubernetes.io/projected/00d6c78d-537c-4d23-8dc9-e8d0f0ebe0e7-kube-api-access-rsb7m\") pod \"certified-operators-l62xq\" (UID: \"00d6c78d-537c-4d23-8dc9-e8d0f0ebe0e7\") " pod="openshift-marketplace/certified-operators-l62xq" Mar 11 10:19:08 crc kubenswrapper[4840]: I0311 10:19:08.664601 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-l62xq" Mar 11 10:19:08 crc kubenswrapper[4840]: I0311 10:19:08.894642 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-l62xq"] Mar 11 10:19:09 crc kubenswrapper[4840]: I0311 10:19:09.443783 4840 generic.go:334] "Generic (PLEG): container finished" podID="00d6c78d-537c-4d23-8dc9-e8d0f0ebe0e7" containerID="4dcf18413f5fbc5d327fe54b44cbe9a3b6588428b4e3ee9ae6e10afcd74c4749" exitCode=0 Mar 11 10:19:09 crc kubenswrapper[4840]: I0311 10:19:09.443826 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l62xq" event={"ID":"00d6c78d-537c-4d23-8dc9-e8d0f0ebe0e7","Type":"ContainerDied","Data":"4dcf18413f5fbc5d327fe54b44cbe9a3b6588428b4e3ee9ae6e10afcd74c4749"} Mar 11 10:19:09 crc kubenswrapper[4840]: I0311 10:19:09.443872 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l62xq" event={"ID":"00d6c78d-537c-4d23-8dc9-e8d0f0ebe0e7","Type":"ContainerStarted","Data":"67ea385a28d9125abc825e19109a48efa5d640ab5a2f43f651e5ffe38e2cd993"} Mar 11 10:19:10 crc kubenswrapper[4840]: I0311 10:19:10.454855 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l62xq" event={"ID":"00d6c78d-537c-4d23-8dc9-e8d0f0ebe0e7","Type":"ContainerStarted","Data":"4a0e4d7828e8b2bb8a4e634fcccc61b4a0815a750132c8a9727f6e357fa1b513"} Mar 11 10:19:11 crc kubenswrapper[4840]: I0311 10:19:11.465106 4840 generic.go:334] "Generic (PLEG): container finished" podID="00d6c78d-537c-4d23-8dc9-e8d0f0ebe0e7" containerID="4a0e4d7828e8b2bb8a4e634fcccc61b4a0815a750132c8a9727f6e357fa1b513" exitCode=0 Mar 11 10:19:11 crc kubenswrapper[4840]: I0311 10:19:11.465132 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l62xq" 
event={"ID":"00d6c78d-537c-4d23-8dc9-e8d0f0ebe0e7","Type":"ContainerDied","Data":"4a0e4d7828e8b2bb8a4e634fcccc61b4a0815a750132c8a9727f6e357fa1b513"} Mar 11 10:19:12 crc kubenswrapper[4840]: I0311 10:19:12.474707 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l62xq" event={"ID":"00d6c78d-537c-4d23-8dc9-e8d0f0ebe0e7","Type":"ContainerStarted","Data":"1e2af9f3decb2fe43b10bdbaf415cc54fb647a6dfae3770ab587ffd8296686a1"} Mar 11 10:19:12 crc kubenswrapper[4840]: I0311 10:19:12.495683 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-l62xq" podStartSLOduration=1.927486196 podStartE2EDuration="4.495662909s" podCreationTimestamp="2026-03-11 10:19:08 +0000 UTC" firstStartedPulling="2026-03-11 10:19:09.445814456 +0000 UTC m=+4948.111484271" lastFinishedPulling="2026-03-11 10:19:12.013991179 +0000 UTC m=+4950.679660984" observedRunningTime="2026-03-11 10:19:12.494081338 +0000 UTC m=+4951.159751193" watchObservedRunningTime="2026-03-11 10:19:12.495662909 +0000 UTC m=+4951.161332744" Mar 11 10:19:18 crc kubenswrapper[4840]: I0311 10:19:18.665028 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-l62xq" Mar 11 10:19:18 crc kubenswrapper[4840]: I0311 10:19:18.665581 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-l62xq" Mar 11 10:19:18 crc kubenswrapper[4840]: I0311 10:19:18.703374 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-l62xq" Mar 11 10:19:19 crc kubenswrapper[4840]: I0311 10:19:19.562901 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-l62xq" Mar 11 10:19:19 crc kubenswrapper[4840]: I0311 10:19:19.626763 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-l62xq"] Mar 11 10:19:21 crc kubenswrapper[4840]: I0311 10:19:21.528961 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-l62xq" podUID="00d6c78d-537c-4d23-8dc9-e8d0f0ebe0e7" containerName="registry-server" containerID="cri-o://1e2af9f3decb2fe43b10bdbaf415cc54fb647a6dfae3770ab587ffd8296686a1" gracePeriod=2 Mar 11 10:19:21 crc kubenswrapper[4840]: I0311 10:19:21.953143 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-l62xq" Mar 11 10:19:22 crc kubenswrapper[4840]: I0311 10:19:22.049801 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00d6c78d-537c-4d23-8dc9-e8d0f0ebe0e7-utilities\") pod \"00d6c78d-537c-4d23-8dc9-e8d0f0ebe0e7\" (UID: \"00d6c78d-537c-4d23-8dc9-e8d0f0ebe0e7\") " Mar 11 10:19:22 crc kubenswrapper[4840]: I0311 10:19:22.049877 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00d6c78d-537c-4d23-8dc9-e8d0f0ebe0e7-catalog-content\") pod \"00d6c78d-537c-4d23-8dc9-e8d0f0ebe0e7\" (UID: \"00d6c78d-537c-4d23-8dc9-e8d0f0ebe0e7\") " Mar 11 10:19:22 crc kubenswrapper[4840]: I0311 10:19:22.049929 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rsb7m\" (UniqueName: \"kubernetes.io/projected/00d6c78d-537c-4d23-8dc9-e8d0f0ebe0e7-kube-api-access-rsb7m\") pod \"00d6c78d-537c-4d23-8dc9-e8d0f0ebe0e7\" (UID: \"00d6c78d-537c-4d23-8dc9-e8d0f0ebe0e7\") " Mar 11 10:19:22 crc kubenswrapper[4840]: I0311 10:19:22.050915 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/00d6c78d-537c-4d23-8dc9-e8d0f0ebe0e7-utilities" (OuterVolumeSpecName: "utilities") pod "00d6c78d-537c-4d23-8dc9-e8d0f0ebe0e7" (UID: 
"00d6c78d-537c-4d23-8dc9-e8d0f0ebe0e7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 10:19:22 crc kubenswrapper[4840]: I0311 10:19:22.056369 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00d6c78d-537c-4d23-8dc9-e8d0f0ebe0e7-kube-api-access-rsb7m" (OuterVolumeSpecName: "kube-api-access-rsb7m") pod "00d6c78d-537c-4d23-8dc9-e8d0f0ebe0e7" (UID: "00d6c78d-537c-4d23-8dc9-e8d0f0ebe0e7"). InnerVolumeSpecName "kube-api-access-rsb7m". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:19:22 crc kubenswrapper[4840]: I0311 10:19:22.114602 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/00d6c78d-537c-4d23-8dc9-e8d0f0ebe0e7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "00d6c78d-537c-4d23-8dc9-e8d0f0ebe0e7" (UID: "00d6c78d-537c-4d23-8dc9-e8d0f0ebe0e7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 10:19:22 crc kubenswrapper[4840]: I0311 10:19:22.151453 4840 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00d6c78d-537c-4d23-8dc9-e8d0f0ebe0e7-utilities\") on node \"crc\" DevicePath \"\"" Mar 11 10:19:22 crc kubenswrapper[4840]: I0311 10:19:22.151502 4840 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00d6c78d-537c-4d23-8dc9-e8d0f0ebe0e7-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 11 10:19:22 crc kubenswrapper[4840]: I0311 10:19:22.151518 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rsb7m\" (UniqueName: \"kubernetes.io/projected/00d6c78d-537c-4d23-8dc9-e8d0f0ebe0e7-kube-api-access-rsb7m\") on node \"crc\" DevicePath \"\"" Mar 11 10:19:22 crc kubenswrapper[4840]: I0311 10:19:22.540189 4840 generic.go:334] "Generic (PLEG): container finished" 
podID="00d6c78d-537c-4d23-8dc9-e8d0f0ebe0e7" containerID="1e2af9f3decb2fe43b10bdbaf415cc54fb647a6dfae3770ab587ffd8296686a1" exitCode=0 Mar 11 10:19:22 crc kubenswrapper[4840]: I0311 10:19:22.540224 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-l62xq" Mar 11 10:19:22 crc kubenswrapper[4840]: I0311 10:19:22.540235 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l62xq" event={"ID":"00d6c78d-537c-4d23-8dc9-e8d0f0ebe0e7","Type":"ContainerDied","Data":"1e2af9f3decb2fe43b10bdbaf415cc54fb647a6dfae3770ab587ffd8296686a1"} Mar 11 10:19:22 crc kubenswrapper[4840]: I0311 10:19:22.540270 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l62xq" event={"ID":"00d6c78d-537c-4d23-8dc9-e8d0f0ebe0e7","Type":"ContainerDied","Data":"67ea385a28d9125abc825e19109a48efa5d640ab5a2f43f651e5ffe38e2cd993"} Mar 11 10:19:22 crc kubenswrapper[4840]: I0311 10:19:22.540292 4840 scope.go:117] "RemoveContainer" containerID="1e2af9f3decb2fe43b10bdbaf415cc54fb647a6dfae3770ab587ffd8296686a1" Mar 11 10:19:22 crc kubenswrapper[4840]: I0311 10:19:22.560254 4840 scope.go:117] "RemoveContainer" containerID="4a0e4d7828e8b2bb8a4e634fcccc61b4a0815a750132c8a9727f6e357fa1b513" Mar 11 10:19:22 crc kubenswrapper[4840]: I0311 10:19:22.579746 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-l62xq"] Mar 11 10:19:22 crc kubenswrapper[4840]: I0311 10:19:22.584279 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-l62xq"] Mar 11 10:19:22 crc kubenswrapper[4840]: I0311 10:19:22.601666 4840 scope.go:117] "RemoveContainer" containerID="4dcf18413f5fbc5d327fe54b44cbe9a3b6588428b4e3ee9ae6e10afcd74c4749" Mar 11 10:19:22 crc kubenswrapper[4840]: I0311 10:19:22.618314 4840 scope.go:117] "RemoveContainer" 
containerID="1e2af9f3decb2fe43b10bdbaf415cc54fb647a6dfae3770ab587ffd8296686a1" Mar 11 10:19:22 crc kubenswrapper[4840]: E0311 10:19:22.618812 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e2af9f3decb2fe43b10bdbaf415cc54fb647a6dfae3770ab587ffd8296686a1\": container with ID starting with 1e2af9f3decb2fe43b10bdbaf415cc54fb647a6dfae3770ab587ffd8296686a1 not found: ID does not exist" containerID="1e2af9f3decb2fe43b10bdbaf415cc54fb647a6dfae3770ab587ffd8296686a1" Mar 11 10:19:22 crc kubenswrapper[4840]: I0311 10:19:22.618860 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e2af9f3decb2fe43b10bdbaf415cc54fb647a6dfae3770ab587ffd8296686a1"} err="failed to get container status \"1e2af9f3decb2fe43b10bdbaf415cc54fb647a6dfae3770ab587ffd8296686a1\": rpc error: code = NotFound desc = could not find container \"1e2af9f3decb2fe43b10bdbaf415cc54fb647a6dfae3770ab587ffd8296686a1\": container with ID starting with 1e2af9f3decb2fe43b10bdbaf415cc54fb647a6dfae3770ab587ffd8296686a1 not found: ID does not exist" Mar 11 10:19:22 crc kubenswrapper[4840]: I0311 10:19:22.618886 4840 scope.go:117] "RemoveContainer" containerID="4a0e4d7828e8b2bb8a4e634fcccc61b4a0815a750132c8a9727f6e357fa1b513" Mar 11 10:19:22 crc kubenswrapper[4840]: E0311 10:19:22.619413 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a0e4d7828e8b2bb8a4e634fcccc61b4a0815a750132c8a9727f6e357fa1b513\": container with ID starting with 4a0e4d7828e8b2bb8a4e634fcccc61b4a0815a750132c8a9727f6e357fa1b513 not found: ID does not exist" containerID="4a0e4d7828e8b2bb8a4e634fcccc61b4a0815a750132c8a9727f6e357fa1b513" Mar 11 10:19:22 crc kubenswrapper[4840]: I0311 10:19:22.619432 4840 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"4a0e4d7828e8b2bb8a4e634fcccc61b4a0815a750132c8a9727f6e357fa1b513"} err="failed to get container status \"4a0e4d7828e8b2bb8a4e634fcccc61b4a0815a750132c8a9727f6e357fa1b513\": rpc error: code = NotFound desc = could not find container \"4a0e4d7828e8b2bb8a4e634fcccc61b4a0815a750132c8a9727f6e357fa1b513\": container with ID starting with 4a0e4d7828e8b2bb8a4e634fcccc61b4a0815a750132c8a9727f6e357fa1b513 not found: ID does not exist" Mar 11 10:19:22 crc kubenswrapper[4840]: I0311 10:19:22.619446 4840 scope.go:117] "RemoveContainer" containerID="4dcf18413f5fbc5d327fe54b44cbe9a3b6588428b4e3ee9ae6e10afcd74c4749" Mar 11 10:19:22 crc kubenswrapper[4840]: E0311 10:19:22.620004 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4dcf18413f5fbc5d327fe54b44cbe9a3b6588428b4e3ee9ae6e10afcd74c4749\": container with ID starting with 4dcf18413f5fbc5d327fe54b44cbe9a3b6588428b4e3ee9ae6e10afcd74c4749 not found: ID does not exist" containerID="4dcf18413f5fbc5d327fe54b44cbe9a3b6588428b4e3ee9ae6e10afcd74c4749" Mar 11 10:19:22 crc kubenswrapper[4840]: I0311 10:19:22.620042 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4dcf18413f5fbc5d327fe54b44cbe9a3b6588428b4e3ee9ae6e10afcd74c4749"} err="failed to get container status \"4dcf18413f5fbc5d327fe54b44cbe9a3b6588428b4e3ee9ae6e10afcd74c4749\": rpc error: code = NotFound desc = could not find container \"4dcf18413f5fbc5d327fe54b44cbe9a3b6588428b4e3ee9ae6e10afcd74c4749\": container with ID starting with 4dcf18413f5fbc5d327fe54b44cbe9a3b6588428b4e3ee9ae6e10afcd74c4749 not found: ID does not exist" Mar 11 10:19:24 crc kubenswrapper[4840]: I0311 10:19:24.067650 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00d6c78d-537c-4d23-8dc9-e8d0f0ebe0e7" path="/var/lib/kubelet/pods/00d6c78d-537c-4d23-8dc9-e8d0f0ebe0e7/volumes" Mar 11 10:19:24 crc kubenswrapper[4840]: I0311 
10:19:24.084564 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-c44667757-bkvgc"] Mar 11 10:19:24 crc kubenswrapper[4840]: E0311 10:19:24.085105 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00d6c78d-537c-4d23-8dc9-e8d0f0ebe0e7" containerName="registry-server" Mar 11 10:19:24 crc kubenswrapper[4840]: I0311 10:19:24.085168 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="00d6c78d-537c-4d23-8dc9-e8d0f0ebe0e7" containerName="registry-server" Mar 11 10:19:24 crc kubenswrapper[4840]: E0311 10:19:24.085241 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00d6c78d-537c-4d23-8dc9-e8d0f0ebe0e7" containerName="extract-utilities" Mar 11 10:19:24 crc kubenswrapper[4840]: I0311 10:19:24.085300 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="00d6c78d-537c-4d23-8dc9-e8d0f0ebe0e7" containerName="extract-utilities" Mar 11 10:19:24 crc kubenswrapper[4840]: E0311 10:19:24.085359 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00d6c78d-537c-4d23-8dc9-e8d0f0ebe0e7" containerName="extract-content" Mar 11 10:19:24 crc kubenswrapper[4840]: I0311 10:19:24.085406 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="00d6c78d-537c-4d23-8dc9-e8d0f0ebe0e7" containerName="extract-content" Mar 11 10:19:24 crc kubenswrapper[4840]: I0311 10:19:24.085592 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="00d6c78d-537c-4d23-8dc9-e8d0f0ebe0e7" containerName="registry-server" Mar 11 10:19:24 crc kubenswrapper[4840]: I0311 10:19:24.086314 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-c44667757-bkvgc" Mar 11 10:19:24 crc kubenswrapper[4840]: I0311 10:19:24.088139 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Mar 11 10:19:24 crc kubenswrapper[4840]: I0311 10:19:24.088511 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Mar 11 10:19:24 crc kubenswrapper[4840]: I0311 10:19:24.088921 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-jdjf6" Mar 11 10:19:24 crc kubenswrapper[4840]: I0311 10:19:24.089087 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Mar 11 10:19:24 crc kubenswrapper[4840]: I0311 10:19:24.152796 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55c76fd6b7-2r87m"] Mar 11 10:19:24 crc kubenswrapper[4840]: I0311 10:19:24.155901 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55c76fd6b7-2r87m" Mar 11 10:19:24 crc kubenswrapper[4840]: I0311 10:19:24.159724 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Mar 11 10:19:24 crc kubenswrapper[4840]: I0311 10:19:24.164544 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-c44667757-bkvgc"] Mar 11 10:19:24 crc kubenswrapper[4840]: I0311 10:19:24.177240 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55c76fd6b7-2r87m"] Mar 11 10:19:24 crc kubenswrapper[4840]: I0311 10:19:24.182323 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34fc7f66-e5e7-4750-bb88-2eb9545326d1-config\") pod \"dnsmasq-dns-c44667757-bkvgc\" (UID: \"34fc7f66-e5e7-4750-bb88-2eb9545326d1\") " pod="openstack/dnsmasq-dns-c44667757-bkvgc" Mar 11 10:19:24 crc kubenswrapper[4840]: I0311 10:19:24.182495 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glzdw\" (UniqueName: \"kubernetes.io/projected/34fc7f66-e5e7-4750-bb88-2eb9545326d1-kube-api-access-glzdw\") pod \"dnsmasq-dns-c44667757-bkvgc\" (UID: \"34fc7f66-e5e7-4750-bb88-2eb9545326d1\") " pod="openstack/dnsmasq-dns-c44667757-bkvgc" Mar 11 10:19:24 crc kubenswrapper[4840]: I0311 10:19:24.283486 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8x5zl\" (UniqueName: \"kubernetes.io/projected/950d829a-6daf-4e24-b446-618ef56dda95-kube-api-access-8x5zl\") pod \"dnsmasq-dns-55c76fd6b7-2r87m\" (UID: \"950d829a-6daf-4e24-b446-618ef56dda95\") " pod="openstack/dnsmasq-dns-55c76fd6b7-2r87m" Mar 11 10:19:24 crc kubenswrapper[4840]: I0311 10:19:24.284141 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/34fc7f66-e5e7-4750-bb88-2eb9545326d1-config\") pod \"dnsmasq-dns-c44667757-bkvgc\" (UID: \"34fc7f66-e5e7-4750-bb88-2eb9545326d1\") " pod="openstack/dnsmasq-dns-c44667757-bkvgc" Mar 11 10:19:24 crc kubenswrapper[4840]: I0311 10:19:24.284254 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/950d829a-6daf-4e24-b446-618ef56dda95-dns-svc\") pod \"dnsmasq-dns-55c76fd6b7-2r87m\" (UID: \"950d829a-6daf-4e24-b446-618ef56dda95\") " pod="openstack/dnsmasq-dns-55c76fd6b7-2r87m" Mar 11 10:19:24 crc kubenswrapper[4840]: I0311 10:19:24.284371 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/950d829a-6daf-4e24-b446-618ef56dda95-config\") pod \"dnsmasq-dns-55c76fd6b7-2r87m\" (UID: \"950d829a-6daf-4e24-b446-618ef56dda95\") " pod="openstack/dnsmasq-dns-55c76fd6b7-2r87m" Mar 11 10:19:24 crc kubenswrapper[4840]: I0311 10:19:24.284486 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-glzdw\" (UniqueName: \"kubernetes.io/projected/34fc7f66-e5e7-4750-bb88-2eb9545326d1-kube-api-access-glzdw\") pod \"dnsmasq-dns-c44667757-bkvgc\" (UID: \"34fc7f66-e5e7-4750-bb88-2eb9545326d1\") " pod="openstack/dnsmasq-dns-c44667757-bkvgc" Mar 11 10:19:24 crc kubenswrapper[4840]: I0311 10:19:24.285504 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34fc7f66-e5e7-4750-bb88-2eb9545326d1-config\") pod \"dnsmasq-dns-c44667757-bkvgc\" (UID: \"34fc7f66-e5e7-4750-bb88-2eb9545326d1\") " pod="openstack/dnsmasq-dns-c44667757-bkvgc" Mar 11 10:19:24 crc kubenswrapper[4840]: I0311 10:19:24.306552 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-glzdw\" (UniqueName: 
\"kubernetes.io/projected/34fc7f66-e5e7-4750-bb88-2eb9545326d1-kube-api-access-glzdw\") pod \"dnsmasq-dns-c44667757-bkvgc\" (UID: \"34fc7f66-e5e7-4750-bb88-2eb9545326d1\") " pod="openstack/dnsmasq-dns-c44667757-bkvgc" Mar 11 10:19:24 crc kubenswrapper[4840]: I0311 10:19:24.386217 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/950d829a-6daf-4e24-b446-618ef56dda95-dns-svc\") pod \"dnsmasq-dns-55c76fd6b7-2r87m\" (UID: \"950d829a-6daf-4e24-b446-618ef56dda95\") " pod="openstack/dnsmasq-dns-55c76fd6b7-2r87m" Mar 11 10:19:24 crc kubenswrapper[4840]: I0311 10:19:24.386276 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/950d829a-6daf-4e24-b446-618ef56dda95-config\") pod \"dnsmasq-dns-55c76fd6b7-2r87m\" (UID: \"950d829a-6daf-4e24-b446-618ef56dda95\") " pod="openstack/dnsmasq-dns-55c76fd6b7-2r87m" Mar 11 10:19:24 crc kubenswrapper[4840]: I0311 10:19:24.386336 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8x5zl\" (UniqueName: \"kubernetes.io/projected/950d829a-6daf-4e24-b446-618ef56dda95-kube-api-access-8x5zl\") pod \"dnsmasq-dns-55c76fd6b7-2r87m\" (UID: \"950d829a-6daf-4e24-b446-618ef56dda95\") " pod="openstack/dnsmasq-dns-55c76fd6b7-2r87m" Mar 11 10:19:24 crc kubenswrapper[4840]: I0311 10:19:24.387063 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/950d829a-6daf-4e24-b446-618ef56dda95-dns-svc\") pod \"dnsmasq-dns-55c76fd6b7-2r87m\" (UID: \"950d829a-6daf-4e24-b446-618ef56dda95\") " pod="openstack/dnsmasq-dns-55c76fd6b7-2r87m" Mar 11 10:19:24 crc kubenswrapper[4840]: I0311 10:19:24.387090 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/950d829a-6daf-4e24-b446-618ef56dda95-config\") pod 
\"dnsmasq-dns-55c76fd6b7-2r87m\" (UID: \"950d829a-6daf-4e24-b446-618ef56dda95\") " pod="openstack/dnsmasq-dns-55c76fd6b7-2r87m" Mar 11 10:19:24 crc kubenswrapper[4840]: I0311 10:19:24.410166 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8x5zl\" (UniqueName: \"kubernetes.io/projected/950d829a-6daf-4e24-b446-618ef56dda95-kube-api-access-8x5zl\") pod \"dnsmasq-dns-55c76fd6b7-2r87m\" (UID: \"950d829a-6daf-4e24-b446-618ef56dda95\") " pod="openstack/dnsmasq-dns-55c76fd6b7-2r87m" Mar 11 10:19:24 crc kubenswrapper[4840]: I0311 10:19:24.454797 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-c44667757-bkvgc" Mar 11 10:19:24 crc kubenswrapper[4840]: I0311 10:19:24.483642 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55c76fd6b7-2r87m" Mar 11 10:19:24 crc kubenswrapper[4840]: I0311 10:19:24.738443 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-c44667757-bkvgc"] Mar 11 10:19:24 crc kubenswrapper[4840]: I0311 10:19:24.844928 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55c76fd6b7-2r87m"] Mar 11 10:19:24 crc kubenswrapper[4840]: I0311 10:19:24.887501 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5fb77f9685-xvhqc"] Mar 11 10:19:24 crc kubenswrapper[4840]: I0311 10:19:24.890204 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5fb77f9685-xvhqc" Mar 11 10:19:24 crc kubenswrapper[4840]: I0311 10:19:24.938454 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5fb77f9685-xvhqc"] Mar 11 10:19:24 crc kubenswrapper[4840]: I0311 10:19:24.998757 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bd436b03-5a93-4f6d-a946-8d3c8060f1c2-dns-svc\") pod \"dnsmasq-dns-5fb77f9685-xvhqc\" (UID: \"bd436b03-5a93-4f6d-a946-8d3c8060f1c2\") " pod="openstack/dnsmasq-dns-5fb77f9685-xvhqc" Mar 11 10:19:24 crc kubenswrapper[4840]: I0311 10:19:24.999211 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd436b03-5a93-4f6d-a946-8d3c8060f1c2-config\") pod \"dnsmasq-dns-5fb77f9685-xvhqc\" (UID: \"bd436b03-5a93-4f6d-a946-8d3c8060f1c2\") " pod="openstack/dnsmasq-dns-5fb77f9685-xvhqc" Mar 11 10:19:24 crc kubenswrapper[4840]: I0311 10:19:24.999259 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spg9v\" (UniqueName: \"kubernetes.io/projected/bd436b03-5a93-4f6d-a946-8d3c8060f1c2-kube-api-access-spg9v\") pod \"dnsmasq-dns-5fb77f9685-xvhqc\" (UID: \"bd436b03-5a93-4f6d-a946-8d3c8060f1c2\") " pod="openstack/dnsmasq-dns-5fb77f9685-xvhqc" Mar 11 10:19:25 crc kubenswrapper[4840]: I0311 10:19:25.073092 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55c76fd6b7-2r87m"] Mar 11 10:19:25 crc kubenswrapper[4840]: W0311 10:19:25.075579 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod950d829a_6daf_4e24_b446_618ef56dda95.slice/crio-b52596e11442915305e27c4b4e82137e8543915b7f0b4d7c5c3025ad89154246 WatchSource:0}: Error finding container 
b52596e11442915305e27c4b4e82137e8543915b7f0b4d7c5c3025ad89154246: Status 404 returned error can't find the container with id b52596e11442915305e27c4b4e82137e8543915b7f0b4d7c5c3025ad89154246 Mar 11 10:19:25 crc kubenswrapper[4840]: I0311 10:19:25.100131 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bd436b03-5a93-4f6d-a946-8d3c8060f1c2-dns-svc\") pod \"dnsmasq-dns-5fb77f9685-xvhqc\" (UID: \"bd436b03-5a93-4f6d-a946-8d3c8060f1c2\") " pod="openstack/dnsmasq-dns-5fb77f9685-xvhqc" Mar 11 10:19:25 crc kubenswrapper[4840]: I0311 10:19:25.100190 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd436b03-5a93-4f6d-a946-8d3c8060f1c2-config\") pod \"dnsmasq-dns-5fb77f9685-xvhqc\" (UID: \"bd436b03-5a93-4f6d-a946-8d3c8060f1c2\") " pod="openstack/dnsmasq-dns-5fb77f9685-xvhqc" Mar 11 10:19:25 crc kubenswrapper[4840]: I0311 10:19:25.100235 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-spg9v\" (UniqueName: \"kubernetes.io/projected/bd436b03-5a93-4f6d-a946-8d3c8060f1c2-kube-api-access-spg9v\") pod \"dnsmasq-dns-5fb77f9685-xvhqc\" (UID: \"bd436b03-5a93-4f6d-a946-8d3c8060f1c2\") " pod="openstack/dnsmasq-dns-5fb77f9685-xvhqc" Mar 11 10:19:25 crc kubenswrapper[4840]: I0311 10:19:25.101263 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd436b03-5a93-4f6d-a946-8d3c8060f1c2-config\") pod \"dnsmasq-dns-5fb77f9685-xvhqc\" (UID: \"bd436b03-5a93-4f6d-a946-8d3c8060f1c2\") " pod="openstack/dnsmasq-dns-5fb77f9685-xvhqc" Mar 11 10:19:25 crc kubenswrapper[4840]: I0311 10:19:25.101411 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bd436b03-5a93-4f6d-a946-8d3c8060f1c2-dns-svc\") pod \"dnsmasq-dns-5fb77f9685-xvhqc\" (UID: 
\"bd436b03-5a93-4f6d-a946-8d3c8060f1c2\") " pod="openstack/dnsmasq-dns-5fb77f9685-xvhqc" Mar 11 10:19:25 crc kubenswrapper[4840]: I0311 10:19:25.118811 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-spg9v\" (UniqueName: \"kubernetes.io/projected/bd436b03-5a93-4f6d-a946-8d3c8060f1c2-kube-api-access-spg9v\") pod \"dnsmasq-dns-5fb77f9685-xvhqc\" (UID: \"bd436b03-5a93-4f6d-a946-8d3c8060f1c2\") " pod="openstack/dnsmasq-dns-5fb77f9685-xvhqc" Mar 11 10:19:25 crc kubenswrapper[4840]: I0311 10:19:25.260910 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fb77f9685-xvhqc" Mar 11 10:19:25 crc kubenswrapper[4840]: I0311 10:19:25.511355 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5fb77f9685-xvhqc"] Mar 11 10:19:25 crc kubenswrapper[4840]: I0311 10:19:25.544686 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-ff89b6977-nx62v"] Mar 11 10:19:25 crc kubenswrapper[4840]: I0311 10:19:25.545893 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-ff89b6977-nx62v" Mar 11 10:19:25 crc kubenswrapper[4840]: I0311 10:19:25.568548 4840 generic.go:334] "Generic (PLEG): container finished" podID="950d829a-6daf-4e24-b446-618ef56dda95" containerID="1268b41694125111928ff697c8af187e888ffd7b616bd8ca5e5af94d6d371a2b" exitCode=0 Mar 11 10:19:25 crc kubenswrapper[4840]: I0311 10:19:25.568978 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55c76fd6b7-2r87m" event={"ID":"950d829a-6daf-4e24-b446-618ef56dda95","Type":"ContainerDied","Data":"1268b41694125111928ff697c8af187e888ffd7b616bd8ca5e5af94d6d371a2b"} Mar 11 10:19:25 crc kubenswrapper[4840]: I0311 10:19:25.569109 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55c76fd6b7-2r87m" event={"ID":"950d829a-6daf-4e24-b446-618ef56dda95","Type":"ContainerStarted","Data":"b52596e11442915305e27c4b4e82137e8543915b7f0b4d7c5c3025ad89154246"} Mar 11 10:19:25 crc kubenswrapper[4840]: I0311 10:19:25.573150 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-ff89b6977-nx62v"] Mar 11 10:19:25 crc kubenswrapper[4840]: I0311 10:19:25.576329 4840 generic.go:334] "Generic (PLEG): container finished" podID="34fc7f66-e5e7-4750-bb88-2eb9545326d1" containerID="5c418a6f2352363da657d363ec937a537329d0c80c790ee70333693a60535a6c" exitCode=0 Mar 11 10:19:25 crc kubenswrapper[4840]: I0311 10:19:25.576367 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c44667757-bkvgc" event={"ID":"34fc7f66-e5e7-4750-bb88-2eb9545326d1","Type":"ContainerDied","Data":"5c418a6f2352363da657d363ec937a537329d0c80c790ee70333693a60535a6c"} Mar 11 10:19:25 crc kubenswrapper[4840]: I0311 10:19:25.576388 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c44667757-bkvgc" event={"ID":"34fc7f66-e5e7-4750-bb88-2eb9545326d1","Type":"ContainerStarted","Data":"41e3ae2db06b439ad4707cadee3836f6011c4d8ddda69fb121394a896bd08f3e"} Mar 
11 10:19:25 crc kubenswrapper[4840]: I0311 10:19:25.610370 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjh87\" (UniqueName: \"kubernetes.io/projected/04092458-98a3-458a-acca-d10f849df4dc-kube-api-access-hjh87\") pod \"dnsmasq-dns-ff89b6977-nx62v\" (UID: \"04092458-98a3-458a-acca-d10f849df4dc\") " pod="openstack/dnsmasq-dns-ff89b6977-nx62v" Mar 11 10:19:25 crc kubenswrapper[4840]: I0311 10:19:25.610473 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04092458-98a3-458a-acca-d10f849df4dc-config\") pod \"dnsmasq-dns-ff89b6977-nx62v\" (UID: \"04092458-98a3-458a-acca-d10f849df4dc\") " pod="openstack/dnsmasq-dns-ff89b6977-nx62v" Mar 11 10:19:25 crc kubenswrapper[4840]: I0311 10:19:25.610512 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/04092458-98a3-458a-acca-d10f849df4dc-dns-svc\") pod \"dnsmasq-dns-ff89b6977-nx62v\" (UID: \"04092458-98a3-458a-acca-d10f849df4dc\") " pod="openstack/dnsmasq-dns-ff89b6977-nx62v" Mar 11 10:19:25 crc kubenswrapper[4840]: I0311 10:19:25.714243 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/04092458-98a3-458a-acca-d10f849df4dc-dns-svc\") pod \"dnsmasq-dns-ff89b6977-nx62v\" (UID: \"04092458-98a3-458a-acca-d10f849df4dc\") " pod="openstack/dnsmasq-dns-ff89b6977-nx62v" Mar 11 10:19:25 crc kubenswrapper[4840]: I0311 10:19:25.714306 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjh87\" (UniqueName: \"kubernetes.io/projected/04092458-98a3-458a-acca-d10f849df4dc-kube-api-access-hjh87\") pod \"dnsmasq-dns-ff89b6977-nx62v\" (UID: \"04092458-98a3-458a-acca-d10f849df4dc\") " pod="openstack/dnsmasq-dns-ff89b6977-nx62v" Mar 11 10:19:25 crc 
kubenswrapper[4840]: I0311 10:19:25.714379 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04092458-98a3-458a-acca-d10f849df4dc-config\") pod \"dnsmasq-dns-ff89b6977-nx62v\" (UID: \"04092458-98a3-458a-acca-d10f849df4dc\") " pod="openstack/dnsmasq-dns-ff89b6977-nx62v" Mar 11 10:19:25 crc kubenswrapper[4840]: I0311 10:19:25.715211 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04092458-98a3-458a-acca-d10f849df4dc-config\") pod \"dnsmasq-dns-ff89b6977-nx62v\" (UID: \"04092458-98a3-458a-acca-d10f849df4dc\") " pod="openstack/dnsmasq-dns-ff89b6977-nx62v" Mar 11 10:19:25 crc kubenswrapper[4840]: I0311 10:19:25.716571 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/04092458-98a3-458a-acca-d10f849df4dc-dns-svc\") pod \"dnsmasq-dns-ff89b6977-nx62v\" (UID: \"04092458-98a3-458a-acca-d10f849df4dc\") " pod="openstack/dnsmasq-dns-ff89b6977-nx62v" Mar 11 10:19:25 crc kubenswrapper[4840]: I0311 10:19:25.758561 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjh87\" (UniqueName: \"kubernetes.io/projected/04092458-98a3-458a-acca-d10f849df4dc-kube-api-access-hjh87\") pod \"dnsmasq-dns-ff89b6977-nx62v\" (UID: \"04092458-98a3-458a-acca-d10f849df4dc\") " pod="openstack/dnsmasq-dns-ff89b6977-nx62v" Mar 11 10:19:25 crc kubenswrapper[4840]: I0311 10:19:25.811961 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5fb77f9685-xvhqc"] Mar 11 10:19:25 crc kubenswrapper[4840]: I0311 10:19:25.870781 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-ff89b6977-nx62v" Mar 11 10:19:26 crc kubenswrapper[4840]: I0311 10:19:26.026022 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55c76fd6b7-2r87m" Mar 11 10:19:26 crc kubenswrapper[4840]: I0311 10:19:26.030333 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 11 10:19:26 crc kubenswrapper[4840]: E0311 10:19:26.030640 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="950d829a-6daf-4e24-b446-618ef56dda95" containerName="init" Mar 11 10:19:26 crc kubenswrapper[4840]: I0311 10:19:26.030658 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="950d829a-6daf-4e24-b446-618ef56dda95" containerName="init" Mar 11 10:19:26 crc kubenswrapper[4840]: I0311 10:19:26.030784 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="950d829a-6daf-4e24-b446-618ef56dda95" containerName="init" Mar 11 10:19:26 crc kubenswrapper[4840]: I0311 10:19:26.031458 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:19:26 crc kubenswrapper[4840]: I0311 10:19:26.034372 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Mar 11 10:19:26 crc kubenswrapper[4840]: I0311 10:19:26.034636 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Mar 11 10:19:26 crc kubenswrapper[4840]: I0311 10:19:26.034749 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Mar 11 10:19:26 crc kubenswrapper[4840]: I0311 10:19:26.034913 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Mar 11 10:19:26 crc kubenswrapper[4840]: I0311 10:19:26.035025 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Mar 11 10:19:26 crc kubenswrapper[4840]: I0311 10:19:26.035226 4840 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"cert-rabbitmq-cell1-svc" Mar 11 10:19:26 crc kubenswrapper[4840]: I0311 10:19:26.035426 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-fgrqh" Mar 11 10:19:26 crc kubenswrapper[4840]: I0311 10:19:26.048401 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 11 10:19:26 crc kubenswrapper[4840]: I0311 10:19:26.125053 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/950d829a-6daf-4e24-b446-618ef56dda95-config\") pod \"950d829a-6daf-4e24-b446-618ef56dda95\" (UID: \"950d829a-6daf-4e24-b446-618ef56dda95\") " Mar 11 10:19:26 crc kubenswrapper[4840]: I0311 10:19:26.125200 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8x5zl\" (UniqueName: \"kubernetes.io/projected/950d829a-6daf-4e24-b446-618ef56dda95-kube-api-access-8x5zl\") pod \"950d829a-6daf-4e24-b446-618ef56dda95\" (UID: \"950d829a-6daf-4e24-b446-618ef56dda95\") " Mar 11 10:19:26 crc kubenswrapper[4840]: I0311 10:19:26.125219 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/950d829a-6daf-4e24-b446-618ef56dda95-dns-svc\") pod \"950d829a-6daf-4e24-b446-618ef56dda95\" (UID: \"950d829a-6daf-4e24-b446-618ef56dda95\") " Mar 11 10:19:26 crc kubenswrapper[4840]: I0311 10:19:26.125449 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0edf1389-2132-4752-8ba0-7aebb76e173f-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"0edf1389-2132-4752-8ba0-7aebb76e173f\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:19:26 crc kubenswrapper[4840]: I0311 10:19:26.125502 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" 
(UniqueName: \"kubernetes.io/downward-api/0edf1389-2132-4752-8ba0-7aebb76e173f-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"0edf1389-2132-4752-8ba0-7aebb76e173f\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:19:26 crc kubenswrapper[4840]: I0311 10:19:26.125548 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0edf1389-2132-4752-8ba0-7aebb76e173f-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"0edf1389-2132-4752-8ba0-7aebb76e173f\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:19:26 crc kubenswrapper[4840]: I0311 10:19:26.125597 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0edf1389-2132-4752-8ba0-7aebb76e173f-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"0edf1389-2132-4752-8ba0-7aebb76e173f\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:19:26 crc kubenswrapper[4840]: I0311 10:19:26.125741 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0edf1389-2132-4752-8ba0-7aebb76e173f-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"0edf1389-2132-4752-8ba0-7aebb76e173f\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:19:26 crc kubenswrapper[4840]: I0311 10:19:26.125820 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdq8k\" (UniqueName: \"kubernetes.io/projected/0edf1389-2132-4752-8ba0-7aebb76e173f-kube-api-access-qdq8k\") pod \"rabbitmq-cell1-server-0\" (UID: \"0edf1389-2132-4752-8ba0-7aebb76e173f\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:19:26 crc kubenswrapper[4840]: I0311 10:19:26.125854 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0edf1389-2132-4752-8ba0-7aebb76e173f-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0edf1389-2132-4752-8ba0-7aebb76e173f\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:19:26 crc kubenswrapper[4840]: I0311 10:19:26.125920 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0edf1389-2132-4752-8ba0-7aebb76e173f-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"0edf1389-2132-4752-8ba0-7aebb76e173f\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:19:26 crc kubenswrapper[4840]: I0311 10:19:26.125954 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b1db2454-5981-4987-9557-8ca778576b93\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b1db2454-5981-4987-9557-8ca778576b93\") pod \"rabbitmq-cell1-server-0\" (UID: \"0edf1389-2132-4752-8ba0-7aebb76e173f\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:19:26 crc kubenswrapper[4840]: I0311 10:19:26.125984 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0edf1389-2132-4752-8ba0-7aebb76e173f-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0edf1389-2132-4752-8ba0-7aebb76e173f\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:19:26 crc kubenswrapper[4840]: I0311 10:19:26.126072 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0edf1389-2132-4752-8ba0-7aebb76e173f-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"0edf1389-2132-4752-8ba0-7aebb76e173f\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:19:26 crc kubenswrapper[4840]: I0311 10:19:26.153349 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/projected/950d829a-6daf-4e24-b446-618ef56dda95-kube-api-access-8x5zl" (OuterVolumeSpecName: "kube-api-access-8x5zl") pod "950d829a-6daf-4e24-b446-618ef56dda95" (UID: "950d829a-6daf-4e24-b446-618ef56dda95"). InnerVolumeSpecName "kube-api-access-8x5zl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:19:26 crc kubenswrapper[4840]: I0311 10:19:26.171799 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/950d829a-6daf-4e24-b446-618ef56dda95-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "950d829a-6daf-4e24-b446-618ef56dda95" (UID: "950d829a-6daf-4e24-b446-618ef56dda95"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 10:19:26 crc kubenswrapper[4840]: I0311 10:19:26.172396 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/950d829a-6daf-4e24-b446-618ef56dda95-config" (OuterVolumeSpecName: "config") pod "950d829a-6daf-4e24-b446-618ef56dda95" (UID: "950d829a-6daf-4e24-b446-618ef56dda95"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 10:19:26 crc kubenswrapper[4840]: I0311 10:19:26.227037 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0edf1389-2132-4752-8ba0-7aebb76e173f-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"0edf1389-2132-4752-8ba0-7aebb76e173f\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:19:26 crc kubenswrapper[4840]: I0311 10:19:26.227115 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0edf1389-2132-4752-8ba0-7aebb76e173f-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"0edf1389-2132-4752-8ba0-7aebb76e173f\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:19:26 crc kubenswrapper[4840]: I0311 10:19:26.227139 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0edf1389-2132-4752-8ba0-7aebb76e173f-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"0edf1389-2132-4752-8ba0-7aebb76e173f\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:19:26 crc kubenswrapper[4840]: I0311 10:19:26.227175 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0edf1389-2132-4752-8ba0-7aebb76e173f-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"0edf1389-2132-4752-8ba0-7aebb76e173f\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:19:26 crc kubenswrapper[4840]: I0311 10:19:26.227213 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0edf1389-2132-4752-8ba0-7aebb76e173f-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"0edf1389-2132-4752-8ba0-7aebb76e173f\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:19:26 crc kubenswrapper[4840]: I0311 
10:19:26.227242 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0edf1389-2132-4752-8ba0-7aebb76e173f-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"0edf1389-2132-4752-8ba0-7aebb76e173f\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:19:26 crc kubenswrapper[4840]: I0311 10:19:26.227272 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdq8k\" (UniqueName: \"kubernetes.io/projected/0edf1389-2132-4752-8ba0-7aebb76e173f-kube-api-access-qdq8k\") pod \"rabbitmq-cell1-server-0\" (UID: \"0edf1389-2132-4752-8ba0-7aebb76e173f\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:19:26 crc kubenswrapper[4840]: I0311 10:19:26.227294 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0edf1389-2132-4752-8ba0-7aebb76e173f-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0edf1389-2132-4752-8ba0-7aebb76e173f\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:19:26 crc kubenswrapper[4840]: I0311 10:19:26.227322 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0edf1389-2132-4752-8ba0-7aebb76e173f-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"0edf1389-2132-4752-8ba0-7aebb76e173f\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:19:26 crc kubenswrapper[4840]: I0311 10:19:26.227350 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b1db2454-5981-4987-9557-8ca778576b93\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b1db2454-5981-4987-9557-8ca778576b93\") pod \"rabbitmq-cell1-server-0\" (UID: \"0edf1389-2132-4752-8ba0-7aebb76e173f\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:19:26 crc kubenswrapper[4840]: I0311 10:19:26.227378 4840 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0edf1389-2132-4752-8ba0-7aebb76e173f-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0edf1389-2132-4752-8ba0-7aebb76e173f\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:19:26 crc kubenswrapper[4840]: I0311 10:19:26.227441 4840 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/950d829a-6daf-4e24-b446-618ef56dda95-config\") on node \"crc\" DevicePath \"\"" Mar 11 10:19:26 crc kubenswrapper[4840]: I0311 10:19:26.227456 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8x5zl\" (UniqueName: \"kubernetes.io/projected/950d829a-6daf-4e24-b446-618ef56dda95-kube-api-access-8x5zl\") on node \"crc\" DevicePath \"\"" Mar 11 10:19:26 crc kubenswrapper[4840]: I0311 10:19:26.227493 4840 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/950d829a-6daf-4e24-b446-618ef56dda95-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 11 10:19:26 crc kubenswrapper[4840]: I0311 10:19:26.228322 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0edf1389-2132-4752-8ba0-7aebb76e173f-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0edf1389-2132-4752-8ba0-7aebb76e173f\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:19:26 crc kubenswrapper[4840]: I0311 10:19:26.229060 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0edf1389-2132-4752-8ba0-7aebb76e173f-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"0edf1389-2132-4752-8ba0-7aebb76e173f\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:19:26 crc kubenswrapper[4840]: I0311 10:19:26.229641 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/configmap/0edf1389-2132-4752-8ba0-7aebb76e173f-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"0edf1389-2132-4752-8ba0-7aebb76e173f\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:19:26 crc kubenswrapper[4840]: I0311 10:19:26.231022 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0edf1389-2132-4752-8ba0-7aebb76e173f-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"0edf1389-2132-4752-8ba0-7aebb76e173f\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:19:26 crc kubenswrapper[4840]: I0311 10:19:26.232028 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0edf1389-2132-4752-8ba0-7aebb76e173f-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0edf1389-2132-4752-8ba0-7aebb76e173f\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:19:26 crc kubenswrapper[4840]: I0311 10:19:26.234049 4840 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 11 10:19:26 crc kubenswrapper[4840]: I0311 10:19:26.234092 4840 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b1db2454-5981-4987-9557-8ca778576b93\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b1db2454-5981-4987-9557-8ca778576b93\") pod \"rabbitmq-cell1-server-0\" (UID: \"0edf1389-2132-4752-8ba0-7aebb76e173f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e62da913419b14067559a50b70b10a79b4726f4c0c525a999b8e94171cadb4bf/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:19:26 crc kubenswrapper[4840]: I0311 10:19:26.234685 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0edf1389-2132-4752-8ba0-7aebb76e173f-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"0edf1389-2132-4752-8ba0-7aebb76e173f\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:19:26 crc kubenswrapper[4840]: I0311 10:19:26.235289 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0edf1389-2132-4752-8ba0-7aebb76e173f-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"0edf1389-2132-4752-8ba0-7aebb76e173f\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:19:26 crc kubenswrapper[4840]: I0311 10:19:26.239692 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0edf1389-2132-4752-8ba0-7aebb76e173f-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"0edf1389-2132-4752-8ba0-7aebb76e173f\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:19:26 crc kubenswrapper[4840]: I0311 10:19:26.243744 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0edf1389-2132-4752-8ba0-7aebb76e173f-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"0edf1389-2132-4752-8ba0-7aebb76e173f\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:19:26 crc kubenswrapper[4840]: I0311 10:19:26.259263 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdq8k\" (UniqueName: \"kubernetes.io/projected/0edf1389-2132-4752-8ba0-7aebb76e173f-kube-api-access-qdq8k\") pod \"rabbitmq-cell1-server-0\" (UID: \"0edf1389-2132-4752-8ba0-7aebb76e173f\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:19:26 crc kubenswrapper[4840]: I0311 10:19:26.298573 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b1db2454-5981-4987-9557-8ca778576b93\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b1db2454-5981-4987-9557-8ca778576b93\") pod \"rabbitmq-cell1-server-0\" (UID: \"0edf1389-2132-4752-8ba0-7aebb76e173f\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:19:26 crc kubenswrapper[4840]: I0311 10:19:26.385888 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:19:26 crc kubenswrapper[4840]: I0311 10:19:26.494145 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-ff89b6977-nx62v"] Mar 11 10:19:26 crc kubenswrapper[4840]: I0311 10:19:26.586697 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c44667757-bkvgc" event={"ID":"34fc7f66-e5e7-4750-bb88-2eb9545326d1","Type":"ContainerStarted","Data":"2b45036fb2a19aa5b6917a8cbc42fddccefe04666e973ff17a331b719720299e"} Mar 11 10:19:26 crc kubenswrapper[4840]: I0311 10:19:26.587570 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-c44667757-bkvgc" Mar 11 10:19:26 crc kubenswrapper[4840]: I0311 10:19:26.589256 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-ff89b6977-nx62v" 
event={"ID":"04092458-98a3-458a-acca-d10f849df4dc","Type":"ContainerStarted","Data":"5bd8790237b778114eebc6325ea065fda4e36ebd1fe70bcfeb37ea924e6abeb6"} Mar 11 10:19:26 crc kubenswrapper[4840]: I0311 10:19:26.590291 4840 generic.go:334] "Generic (PLEG): container finished" podID="bd436b03-5a93-4f6d-a946-8d3c8060f1c2" containerID="e0595562541fc985909bf3ca1d202467aaa588364e5b205bc4f5647d9c1648fd" exitCode=0 Mar 11 10:19:26 crc kubenswrapper[4840]: I0311 10:19:26.590339 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fb77f9685-xvhqc" event={"ID":"bd436b03-5a93-4f6d-a946-8d3c8060f1c2","Type":"ContainerDied","Data":"e0595562541fc985909bf3ca1d202467aaa588364e5b205bc4f5647d9c1648fd"} Mar 11 10:19:26 crc kubenswrapper[4840]: I0311 10:19:26.590357 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fb77f9685-xvhqc" event={"ID":"bd436b03-5a93-4f6d-a946-8d3c8060f1c2","Type":"ContainerStarted","Data":"028ceabd3f11fdea607fd0eb05483b3874a101bdff394d474551e9f84b77819f"} Mar 11 10:19:26 crc kubenswrapper[4840]: I0311 10:19:26.592901 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55c76fd6b7-2r87m" event={"ID":"950d829a-6daf-4e24-b446-618ef56dda95","Type":"ContainerDied","Data":"b52596e11442915305e27c4b4e82137e8543915b7f0b4d7c5c3025ad89154246"} Mar 11 10:19:26 crc kubenswrapper[4840]: I0311 10:19:26.592933 4840 scope.go:117] "RemoveContainer" containerID="1268b41694125111928ff697c8af187e888ffd7b616bd8ca5e5af94d6d371a2b" Mar 11 10:19:26 crc kubenswrapper[4840]: I0311 10:19:26.593068 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55c76fd6b7-2r87m" Mar 11 10:19:26 crc kubenswrapper[4840]: I0311 10:19:26.608091 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-c44667757-bkvgc" podStartSLOduration=2.608066549 podStartE2EDuration="2.608066549s" podCreationTimestamp="2026-03-11 10:19:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 10:19:26.607105764 +0000 UTC m=+4965.272775569" watchObservedRunningTime="2026-03-11 10:19:26.608066549 +0000 UTC m=+4965.273736364" Mar 11 10:19:26 crc kubenswrapper[4840]: I0311 10:19:26.688828 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55c76fd6b7-2r87m"] Mar 11 10:19:26 crc kubenswrapper[4840]: I0311 10:19:26.689747 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55c76fd6b7-2r87m"] Mar 11 10:19:26 crc kubenswrapper[4840]: I0311 10:19:26.699041 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Mar 11 10:19:26 crc kubenswrapper[4840]: I0311 10:19:26.700286 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 11 10:19:26 crc kubenswrapper[4840]: I0311 10:19:26.706275 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Mar 11 10:19:26 crc kubenswrapper[4840]: I0311 10:19:26.706492 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Mar 11 10:19:26 crc kubenswrapper[4840]: I0311 10:19:26.706604 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-cnlqc" Mar 11 10:19:26 crc kubenswrapper[4840]: I0311 10:19:26.706741 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Mar 11 10:19:26 crc kubenswrapper[4840]: I0311 10:19:26.706866 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Mar 11 10:19:26 crc kubenswrapper[4840]: I0311 10:19:26.706931 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Mar 11 10:19:26 crc kubenswrapper[4840]: I0311 10:19:26.707042 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Mar 11 10:19:26 crc kubenswrapper[4840]: I0311 10:19:26.728325 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 11 10:19:26 crc kubenswrapper[4840]: I0311 10:19:26.846221 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c43faf74-3eed-41b3-b43a-1f88b3e61044-pod-info\") pod \"rabbitmq-server-0\" (UID: \"c43faf74-3eed-41b3-b43a-1f88b3e61044\") " pod="openstack/rabbitmq-server-0" Mar 11 10:19:26 crc kubenswrapper[4840]: I0311 10:19:26.846269 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6xcn\" (UniqueName: 
\"kubernetes.io/projected/c43faf74-3eed-41b3-b43a-1f88b3e61044-kube-api-access-x6xcn\") pod \"rabbitmq-server-0\" (UID: \"c43faf74-3eed-41b3-b43a-1f88b3e61044\") " pod="openstack/rabbitmq-server-0" Mar 11 10:19:26 crc kubenswrapper[4840]: I0311 10:19:26.846356 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c43faf74-3eed-41b3-b43a-1f88b3e61044-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"c43faf74-3eed-41b3-b43a-1f88b3e61044\") " pod="openstack/rabbitmq-server-0" Mar 11 10:19:26 crc kubenswrapper[4840]: I0311 10:19:26.846375 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c43faf74-3eed-41b3-b43a-1f88b3e61044-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"c43faf74-3eed-41b3-b43a-1f88b3e61044\") " pod="openstack/rabbitmq-server-0" Mar 11 10:19:26 crc kubenswrapper[4840]: I0311 10:19:26.846401 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c43faf74-3eed-41b3-b43a-1f88b3e61044-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"c43faf74-3eed-41b3-b43a-1f88b3e61044\") " pod="openstack/rabbitmq-server-0" Mar 11 10:19:26 crc kubenswrapper[4840]: I0311 10:19:26.846417 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c43faf74-3eed-41b3-b43a-1f88b3e61044-config-data\") pod \"rabbitmq-server-0\" (UID: \"c43faf74-3eed-41b3-b43a-1f88b3e61044\") " pod="openstack/rabbitmq-server-0" Mar 11 10:19:26 crc kubenswrapper[4840]: I0311 10:19:26.846431 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/c43faf74-3eed-41b3-b43a-1f88b3e61044-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"c43faf74-3eed-41b3-b43a-1f88b3e61044\") " pod="openstack/rabbitmq-server-0" Mar 11 10:19:26 crc kubenswrapper[4840]: I0311 10:19:26.846448 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c43faf74-3eed-41b3-b43a-1f88b3e61044-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"c43faf74-3eed-41b3-b43a-1f88b3e61044\") " pod="openstack/rabbitmq-server-0" Mar 11 10:19:26 crc kubenswrapper[4840]: I0311 10:19:26.846497 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c43faf74-3eed-41b3-b43a-1f88b3e61044-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"c43faf74-3eed-41b3-b43a-1f88b3e61044\") " pod="openstack/rabbitmq-server-0" Mar 11 10:19:26 crc kubenswrapper[4840]: I0311 10:19:26.846524 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c43faf74-3eed-41b3-b43a-1f88b3e61044-server-conf\") pod \"rabbitmq-server-0\" (UID: \"c43faf74-3eed-41b3-b43a-1f88b3e61044\") " pod="openstack/rabbitmq-server-0" Mar 11 10:19:26 crc kubenswrapper[4840]: I0311 10:19:26.846551 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-51a03c95-e0c9-425a-99a1-ac20e58b2984\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-51a03c95-e0c9-425a-99a1-ac20e58b2984\") pod \"rabbitmq-server-0\" (UID: \"c43faf74-3eed-41b3-b43a-1f88b3e61044\") " pod="openstack/rabbitmq-server-0" Mar 11 10:19:26 crc kubenswrapper[4840]: I0311 10:19:26.896150 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 11 10:19:26 crc kubenswrapper[4840]: 
W0311 10:19:26.897363 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0edf1389_2132_4752_8ba0_7aebb76e173f.slice/crio-446dd7e5ac291728918bf09b546f03eca8b104f0fe99e4174102df0ea27c5caa WatchSource:0}: Error finding container 446dd7e5ac291728918bf09b546f03eca8b104f0fe99e4174102df0ea27c5caa: Status 404 returned error can't find the container with id 446dd7e5ac291728918bf09b546f03eca8b104f0fe99e4174102df0ea27c5caa Mar 11 10:19:26 crc kubenswrapper[4840]: I0311 10:19:26.911286 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fb77f9685-xvhqc" Mar 11 10:19:26 crc kubenswrapper[4840]: I0311 10:19:26.947996 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c43faf74-3eed-41b3-b43a-1f88b3e61044-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"c43faf74-3eed-41b3-b43a-1f88b3e61044\") " pod="openstack/rabbitmq-server-0" Mar 11 10:19:26 crc kubenswrapper[4840]: I0311 10:19:26.948043 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c43faf74-3eed-41b3-b43a-1f88b3e61044-config-data\") pod \"rabbitmq-server-0\" (UID: \"c43faf74-3eed-41b3-b43a-1f88b3e61044\") " pod="openstack/rabbitmq-server-0" Mar 11 10:19:26 crc kubenswrapper[4840]: I0311 10:19:26.948062 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c43faf74-3eed-41b3-b43a-1f88b3e61044-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"c43faf74-3eed-41b3-b43a-1f88b3e61044\") " pod="openstack/rabbitmq-server-0" Mar 11 10:19:26 crc kubenswrapper[4840]: I0311 10:19:26.948084 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/c43faf74-3eed-41b3-b43a-1f88b3e61044-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"c43faf74-3eed-41b3-b43a-1f88b3e61044\") " pod="openstack/rabbitmq-server-0" Mar 11 10:19:26 crc kubenswrapper[4840]: I0311 10:19:26.948119 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c43faf74-3eed-41b3-b43a-1f88b3e61044-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"c43faf74-3eed-41b3-b43a-1f88b3e61044\") " pod="openstack/rabbitmq-server-0" Mar 11 10:19:26 crc kubenswrapper[4840]: I0311 10:19:26.948145 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c43faf74-3eed-41b3-b43a-1f88b3e61044-server-conf\") pod \"rabbitmq-server-0\" (UID: \"c43faf74-3eed-41b3-b43a-1f88b3e61044\") " pod="openstack/rabbitmq-server-0" Mar 11 10:19:26 crc kubenswrapper[4840]: I0311 10:19:26.948177 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-51a03c95-e0c9-425a-99a1-ac20e58b2984\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-51a03c95-e0c9-425a-99a1-ac20e58b2984\") pod \"rabbitmq-server-0\" (UID: \"c43faf74-3eed-41b3-b43a-1f88b3e61044\") " pod="openstack/rabbitmq-server-0" Mar 11 10:19:26 crc kubenswrapper[4840]: I0311 10:19:26.948229 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c43faf74-3eed-41b3-b43a-1f88b3e61044-pod-info\") pod \"rabbitmq-server-0\" (UID: \"c43faf74-3eed-41b3-b43a-1f88b3e61044\") " pod="openstack/rabbitmq-server-0" Mar 11 10:19:26 crc kubenswrapper[4840]: I0311 10:19:26.948249 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6xcn\" (UniqueName: \"kubernetes.io/projected/c43faf74-3eed-41b3-b43a-1f88b3e61044-kube-api-access-x6xcn\") pod \"rabbitmq-server-0\" 
(UID: \"c43faf74-3eed-41b3-b43a-1f88b3e61044\") " pod="openstack/rabbitmq-server-0" Mar 11 10:19:26 crc kubenswrapper[4840]: I0311 10:19:26.948287 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c43faf74-3eed-41b3-b43a-1f88b3e61044-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"c43faf74-3eed-41b3-b43a-1f88b3e61044\") " pod="openstack/rabbitmq-server-0" Mar 11 10:19:26 crc kubenswrapper[4840]: I0311 10:19:26.948304 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c43faf74-3eed-41b3-b43a-1f88b3e61044-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"c43faf74-3eed-41b3-b43a-1f88b3e61044\") " pod="openstack/rabbitmq-server-0" Mar 11 10:19:26 crc kubenswrapper[4840]: I0311 10:19:26.949762 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c43faf74-3eed-41b3-b43a-1f88b3e61044-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"c43faf74-3eed-41b3-b43a-1f88b3e61044\") " pod="openstack/rabbitmq-server-0" Mar 11 10:19:26 crc kubenswrapper[4840]: I0311 10:19:26.951256 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c43faf74-3eed-41b3-b43a-1f88b3e61044-server-conf\") pod \"rabbitmq-server-0\" (UID: \"c43faf74-3eed-41b3-b43a-1f88b3e61044\") " pod="openstack/rabbitmq-server-0" Mar 11 10:19:26 crc kubenswrapper[4840]: I0311 10:19:26.951924 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c43faf74-3eed-41b3-b43a-1f88b3e61044-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"c43faf74-3eed-41b3-b43a-1f88b3e61044\") " pod="openstack/rabbitmq-server-0" Mar 11 10:19:26 crc kubenswrapper[4840]: I0311 10:19:26.951986 4840 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c43faf74-3eed-41b3-b43a-1f88b3e61044-config-data\") pod \"rabbitmq-server-0\" (UID: \"c43faf74-3eed-41b3-b43a-1f88b3e61044\") " pod="openstack/rabbitmq-server-0" Mar 11 10:19:26 crc kubenswrapper[4840]: I0311 10:19:26.952359 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c43faf74-3eed-41b3-b43a-1f88b3e61044-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"c43faf74-3eed-41b3-b43a-1f88b3e61044\") " pod="openstack/rabbitmq-server-0" Mar 11 10:19:26 crc kubenswrapper[4840]: I0311 10:19:26.953262 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c43faf74-3eed-41b3-b43a-1f88b3e61044-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"c43faf74-3eed-41b3-b43a-1f88b3e61044\") " pod="openstack/rabbitmq-server-0" Mar 11 10:19:26 crc kubenswrapper[4840]: I0311 10:19:26.954958 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c43faf74-3eed-41b3-b43a-1f88b3e61044-pod-info\") pod \"rabbitmq-server-0\" (UID: \"c43faf74-3eed-41b3-b43a-1f88b3e61044\") " pod="openstack/rabbitmq-server-0" Mar 11 10:19:26 crc kubenswrapper[4840]: I0311 10:19:26.955504 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c43faf74-3eed-41b3-b43a-1f88b3e61044-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"c43faf74-3eed-41b3-b43a-1f88b3e61044\") " pod="openstack/rabbitmq-server-0" Mar 11 10:19:26 crc kubenswrapper[4840]: I0311 10:19:26.956414 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c43faf74-3eed-41b3-b43a-1f88b3e61044-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: 
\"c43faf74-3eed-41b3-b43a-1f88b3e61044\") " pod="openstack/rabbitmq-server-0" Mar 11 10:19:26 crc kubenswrapper[4840]: I0311 10:19:26.959638 4840 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 11 10:19:26 crc kubenswrapper[4840]: I0311 10:19:26.959997 4840 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-51a03c95-e0c9-425a-99a1-ac20e58b2984\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-51a03c95-e0c9-425a-99a1-ac20e58b2984\") pod \"rabbitmq-server-0\" (UID: \"c43faf74-3eed-41b3-b43a-1f88b3e61044\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e661e31f7f828e08a1e2de55395925095e4a489158f296ef6dc6779816098759/globalmount\"" pod="openstack/rabbitmq-server-0" Mar 11 10:19:26 crc kubenswrapper[4840]: I0311 10:19:26.982911 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6xcn\" (UniqueName: \"kubernetes.io/projected/c43faf74-3eed-41b3-b43a-1f88b3e61044-kube-api-access-x6xcn\") pod \"rabbitmq-server-0\" (UID: \"c43faf74-3eed-41b3-b43a-1f88b3e61044\") " pod="openstack/rabbitmq-server-0" Mar 11 10:19:27 crc kubenswrapper[4840]: I0311 10:19:27.002155 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-51a03c95-e0c9-425a-99a1-ac20e58b2984\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-51a03c95-e0c9-425a-99a1-ac20e58b2984\") pod \"rabbitmq-server-0\" (UID: \"c43faf74-3eed-41b3-b43a-1f88b3e61044\") " pod="openstack/rabbitmq-server-0" Mar 11 10:19:27 crc kubenswrapper[4840]: I0311 10:19:27.049635 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd436b03-5a93-4f6d-a946-8d3c8060f1c2-config\") pod \"bd436b03-5a93-4f6d-a946-8d3c8060f1c2\" (UID: \"bd436b03-5a93-4f6d-a946-8d3c8060f1c2\") " Mar 11 10:19:27 crc 
kubenswrapper[4840]: I0311 10:19:27.049726 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bd436b03-5a93-4f6d-a946-8d3c8060f1c2-dns-svc\") pod \"bd436b03-5a93-4f6d-a946-8d3c8060f1c2\" (UID: \"bd436b03-5a93-4f6d-a946-8d3c8060f1c2\") " Mar 11 10:19:27 crc kubenswrapper[4840]: I0311 10:19:27.049824 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-spg9v\" (UniqueName: \"kubernetes.io/projected/bd436b03-5a93-4f6d-a946-8d3c8060f1c2-kube-api-access-spg9v\") pod \"bd436b03-5a93-4f6d-a946-8d3c8060f1c2\" (UID: \"bd436b03-5a93-4f6d-a946-8d3c8060f1c2\") " Mar 11 10:19:27 crc kubenswrapper[4840]: I0311 10:19:27.053484 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd436b03-5a93-4f6d-a946-8d3c8060f1c2-kube-api-access-spg9v" (OuterVolumeSpecName: "kube-api-access-spg9v") pod "bd436b03-5a93-4f6d-a946-8d3c8060f1c2" (UID: "bd436b03-5a93-4f6d-a946-8d3c8060f1c2"). InnerVolumeSpecName "kube-api-access-spg9v". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:19:27 crc kubenswrapper[4840]: I0311 10:19:27.056301 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 11 10:19:27 crc kubenswrapper[4840]: I0311 10:19:27.067102 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd436b03-5a93-4f6d-a946-8d3c8060f1c2-config" (OuterVolumeSpecName: "config") pod "bd436b03-5a93-4f6d-a946-8d3c8060f1c2" (UID: "bd436b03-5a93-4f6d-a946-8d3c8060f1c2"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 10:19:27 crc kubenswrapper[4840]: I0311 10:19:27.070965 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd436b03-5a93-4f6d-a946-8d3c8060f1c2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bd436b03-5a93-4f6d-a946-8d3c8060f1c2" (UID: "bd436b03-5a93-4f6d-a946-8d3c8060f1c2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 10:19:27 crc kubenswrapper[4840]: I0311 10:19:27.151770 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-spg9v\" (UniqueName: \"kubernetes.io/projected/bd436b03-5a93-4f6d-a946-8d3c8060f1c2-kube-api-access-spg9v\") on node \"crc\" DevicePath \"\"" Mar 11 10:19:27 crc kubenswrapper[4840]: I0311 10:19:27.151809 4840 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd436b03-5a93-4f6d-a946-8d3c8060f1c2-config\") on node \"crc\" DevicePath \"\"" Mar 11 10:19:27 crc kubenswrapper[4840]: I0311 10:19:27.151823 4840 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bd436b03-5a93-4f6d-a946-8d3c8060f1c2-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 11 10:19:27 crc kubenswrapper[4840]: I0311 10:19:27.445867 4840 patch_prober.go:28] interesting pod/machine-config-daemon-brtht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 11 10:19:27 crc kubenswrapper[4840]: I0311 10:19:27.446280 4840 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" Mar 11 10:19:27 crc kubenswrapper[4840]: I0311 10:19:27.580188 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 11 10:19:27 crc kubenswrapper[4840]: W0311 10:19:27.586375 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc43faf74_3eed_41b3_b43a_1f88b3e61044.slice/crio-257a8fcb2aa0f460cb98db3c491dd4cd10206df78a539c66594ac721f279e13c WatchSource:0}: Error finding container 257a8fcb2aa0f460cb98db3c491dd4cd10206df78a539c66594ac721f279e13c: Status 404 returned error can't find the container with id 257a8fcb2aa0f460cb98db3c491dd4cd10206df78a539c66594ac721f279e13c Mar 11 10:19:27 crc kubenswrapper[4840]: I0311 10:19:27.616239 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fb77f9685-xvhqc" event={"ID":"bd436b03-5a93-4f6d-a946-8d3c8060f1c2","Type":"ContainerDied","Data":"028ceabd3f11fdea607fd0eb05483b3874a101bdff394d474551e9f84b77819f"} Mar 11 10:19:27 crc kubenswrapper[4840]: I0311 10:19:27.616321 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5fb77f9685-xvhqc" Mar 11 10:19:27 crc kubenswrapper[4840]: I0311 10:19:27.616842 4840 scope.go:117] "RemoveContainer" containerID="e0595562541fc985909bf3ca1d202467aaa588364e5b205bc4f5647d9c1648fd" Mar 11 10:19:27 crc kubenswrapper[4840]: I0311 10:19:27.630136 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"0edf1389-2132-4752-8ba0-7aebb76e173f","Type":"ContainerStarted","Data":"446dd7e5ac291728918bf09b546f03eca8b104f0fe99e4174102df0ea27c5caa"} Mar 11 10:19:27 crc kubenswrapper[4840]: I0311 10:19:27.632582 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"c43faf74-3eed-41b3-b43a-1f88b3e61044","Type":"ContainerStarted","Data":"257a8fcb2aa0f460cb98db3c491dd4cd10206df78a539c66594ac721f279e13c"} Mar 11 10:19:27 crc kubenswrapper[4840]: I0311 10:19:27.635702 4840 generic.go:334] "Generic (PLEG): container finished" podID="04092458-98a3-458a-acca-d10f849df4dc" containerID="59f78d146edc102920d95dbe4074e05e5856b0fba3a215181a547a27b6369101" exitCode=0 Mar 11 10:19:27 crc kubenswrapper[4840]: I0311 10:19:27.637844 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-ff89b6977-nx62v" event={"ID":"04092458-98a3-458a-acca-d10f849df4dc","Type":"ContainerDied","Data":"59f78d146edc102920d95dbe4074e05e5856b0fba3a215181a547a27b6369101"} Mar 11 10:19:27 crc kubenswrapper[4840]: I0311 10:19:27.918017 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5fb77f9685-xvhqc"] Mar 11 10:19:27 crc kubenswrapper[4840]: I0311 10:19:27.933580 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5fb77f9685-xvhqc"] Mar 11 10:19:28 crc kubenswrapper[4840]: I0311 10:19:28.011675 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Mar 11 10:19:28 crc kubenswrapper[4840]: E0311 10:19:28.012056 4840 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="bd436b03-5a93-4f6d-a946-8d3c8060f1c2" containerName="init" Mar 11 10:19:28 crc kubenswrapper[4840]: I0311 10:19:28.012077 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd436b03-5a93-4f6d-a946-8d3c8060f1c2" containerName="init" Mar 11 10:19:28 crc kubenswrapper[4840]: I0311 10:19:28.012283 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd436b03-5a93-4f6d-a946-8d3c8060f1c2" containerName="init" Mar 11 10:19:28 crc kubenswrapper[4840]: I0311 10:19:28.013982 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Mar 11 10:19:28 crc kubenswrapper[4840]: I0311 10:19:28.020122 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Mar 11 10:19:28 crc kubenswrapper[4840]: I0311 10:19:28.020376 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Mar 11 10:19:28 crc kubenswrapper[4840]: I0311 10:19:28.020422 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Mar 11 10:19:28 crc kubenswrapper[4840]: I0311 10:19:28.020665 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-b7bxf" Mar 11 10:19:28 crc kubenswrapper[4840]: I0311 10:19:28.027963 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Mar 11 10:19:28 crc kubenswrapper[4840]: I0311 10:19:28.072982 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2c0400ce-cf58-461a-946f-8f341c40fcce-kolla-config\") pod \"openstack-galera-0\" (UID: \"2c0400ce-cf58-461a-946f-8f341c40fcce\") " pod="openstack/openstack-galera-0" Mar 11 10:19:28 crc kubenswrapper[4840]: I0311 10:19:28.073041 4840 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c0400ce-cf58-461a-946f-8f341c40fcce-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"2c0400ce-cf58-461a-946f-8f341c40fcce\") " pod="openstack/openstack-galera-0" Mar 11 10:19:28 crc kubenswrapper[4840]: I0311 10:19:28.073064 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/2c0400ce-cf58-461a-946f-8f341c40fcce-config-data-generated\") pod \"openstack-galera-0\" (UID: \"2c0400ce-cf58-461a-946f-8f341c40fcce\") " pod="openstack/openstack-galera-0" Mar 11 10:19:28 crc kubenswrapper[4840]: I0311 10:19:28.073093 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/2c0400ce-cf58-461a-946f-8f341c40fcce-config-data-default\") pod \"openstack-galera-0\" (UID: \"2c0400ce-cf58-461a-946f-8f341c40fcce\") " pod="openstack/openstack-galera-0" Mar 11 10:19:28 crc kubenswrapper[4840]: I0311 10:19:28.073135 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98nv5\" (UniqueName: \"kubernetes.io/projected/2c0400ce-cf58-461a-946f-8f341c40fcce-kube-api-access-98nv5\") pod \"openstack-galera-0\" (UID: \"2c0400ce-cf58-461a-946f-8f341c40fcce\") " pod="openstack/openstack-galera-0" Mar 11 10:19:28 crc kubenswrapper[4840]: I0311 10:19:28.073165 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-92201c94-1128-482c-ac9a-b949b8b0fbc7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-92201c94-1128-482c-ac9a-b949b8b0fbc7\") pod \"openstack-galera-0\" (UID: \"2c0400ce-cf58-461a-946f-8f341c40fcce\") " pod="openstack/openstack-galera-0" Mar 11 10:19:28 crc kubenswrapper[4840]: I0311 10:19:28.073187 4840 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2c0400ce-cf58-461a-946f-8f341c40fcce-operator-scripts\") pod \"openstack-galera-0\" (UID: \"2c0400ce-cf58-461a-946f-8f341c40fcce\") " pod="openstack/openstack-galera-0" Mar 11 10:19:28 crc kubenswrapper[4840]: I0311 10:19:28.073218 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c0400ce-cf58-461a-946f-8f341c40fcce-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"2c0400ce-cf58-461a-946f-8f341c40fcce\") " pod="openstack/openstack-galera-0" Mar 11 10:19:28 crc kubenswrapper[4840]: I0311 10:19:28.082874 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="950d829a-6daf-4e24-b446-618ef56dda95" path="/var/lib/kubelet/pods/950d829a-6daf-4e24-b446-618ef56dda95/volumes" Mar 11 10:19:28 crc kubenswrapper[4840]: I0311 10:19:28.083814 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd436b03-5a93-4f6d-a946-8d3c8060f1c2" path="/var/lib/kubelet/pods/bd436b03-5a93-4f6d-a946-8d3c8060f1c2/volumes" Mar 11 10:19:28 crc kubenswrapper[4840]: I0311 10:19:28.084352 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Mar 11 10:19:28 crc kubenswrapper[4840]: I0311 10:19:28.174326 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2c0400ce-cf58-461a-946f-8f341c40fcce-kolla-config\") pod \"openstack-galera-0\" (UID: \"2c0400ce-cf58-461a-946f-8f341c40fcce\") " pod="openstack/openstack-galera-0" Mar 11 10:19:28 crc kubenswrapper[4840]: I0311 10:19:28.174367 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: 
\"kubernetes.io/empty-dir/2c0400ce-cf58-461a-946f-8f341c40fcce-config-data-generated\") pod \"openstack-galera-0\" (UID: \"2c0400ce-cf58-461a-946f-8f341c40fcce\") " pod="openstack/openstack-galera-0" Mar 11 10:19:28 crc kubenswrapper[4840]: I0311 10:19:28.174391 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c0400ce-cf58-461a-946f-8f341c40fcce-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"2c0400ce-cf58-461a-946f-8f341c40fcce\") " pod="openstack/openstack-galera-0" Mar 11 10:19:28 crc kubenswrapper[4840]: I0311 10:19:28.174412 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/2c0400ce-cf58-461a-946f-8f341c40fcce-config-data-default\") pod \"openstack-galera-0\" (UID: \"2c0400ce-cf58-461a-946f-8f341c40fcce\") " pod="openstack/openstack-galera-0" Mar 11 10:19:28 crc kubenswrapper[4840]: I0311 10:19:28.174446 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98nv5\" (UniqueName: \"kubernetes.io/projected/2c0400ce-cf58-461a-946f-8f341c40fcce-kube-api-access-98nv5\") pod \"openstack-galera-0\" (UID: \"2c0400ce-cf58-461a-946f-8f341c40fcce\") " pod="openstack/openstack-galera-0" Mar 11 10:19:28 crc kubenswrapper[4840]: I0311 10:19:28.174484 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-92201c94-1128-482c-ac9a-b949b8b0fbc7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-92201c94-1128-482c-ac9a-b949b8b0fbc7\") pod \"openstack-galera-0\" (UID: \"2c0400ce-cf58-461a-946f-8f341c40fcce\") " pod="openstack/openstack-galera-0" Mar 11 10:19:28 crc kubenswrapper[4840]: I0311 10:19:28.174501 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/2c0400ce-cf58-461a-946f-8f341c40fcce-operator-scripts\") pod \"openstack-galera-0\" (UID: \"2c0400ce-cf58-461a-946f-8f341c40fcce\") " pod="openstack/openstack-galera-0" Mar 11 10:19:28 crc kubenswrapper[4840]: I0311 10:19:28.174527 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c0400ce-cf58-461a-946f-8f341c40fcce-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"2c0400ce-cf58-461a-946f-8f341c40fcce\") " pod="openstack/openstack-galera-0" Mar 11 10:19:28 crc kubenswrapper[4840]: I0311 10:19:28.175114 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2c0400ce-cf58-461a-946f-8f341c40fcce-kolla-config\") pod \"openstack-galera-0\" (UID: \"2c0400ce-cf58-461a-946f-8f341c40fcce\") " pod="openstack/openstack-galera-0" Mar 11 10:19:28 crc kubenswrapper[4840]: I0311 10:19:28.176074 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/2c0400ce-cf58-461a-946f-8f341c40fcce-config-data-generated\") pod \"openstack-galera-0\" (UID: \"2c0400ce-cf58-461a-946f-8f341c40fcce\") " pod="openstack/openstack-galera-0" Mar 11 10:19:28 crc kubenswrapper[4840]: I0311 10:19:28.176262 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/2c0400ce-cf58-461a-946f-8f341c40fcce-config-data-default\") pod \"openstack-galera-0\" (UID: \"2c0400ce-cf58-461a-946f-8f341c40fcce\") " pod="openstack/openstack-galera-0" Mar 11 10:19:28 crc kubenswrapper[4840]: I0311 10:19:28.177025 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2c0400ce-cf58-461a-946f-8f341c40fcce-operator-scripts\") pod \"openstack-galera-0\" (UID: \"2c0400ce-cf58-461a-946f-8f341c40fcce\") 
" pod="openstack/openstack-galera-0" Mar 11 10:19:28 crc kubenswrapper[4840]: I0311 10:19:28.178692 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c0400ce-cf58-461a-946f-8f341c40fcce-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"2c0400ce-cf58-461a-946f-8f341c40fcce\") " pod="openstack/openstack-galera-0" Mar 11 10:19:28 crc kubenswrapper[4840]: I0311 10:19:28.178695 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/2c0400ce-cf58-461a-946f-8f341c40fcce-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"2c0400ce-cf58-461a-946f-8f341c40fcce\") " pod="openstack/openstack-galera-0" Mar 11 10:19:28 crc kubenswrapper[4840]: I0311 10:19:28.187509 4840 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 11 10:19:28 crc kubenswrapper[4840]: I0311 10:19:28.187554 4840 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-92201c94-1128-482c-ac9a-b949b8b0fbc7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-92201c94-1128-482c-ac9a-b949b8b0fbc7\") pod \"openstack-galera-0\" (UID: \"2c0400ce-cf58-461a-946f-8f341c40fcce\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/800d940a30fcef10f2d6c1e4f67762e37db6150703cfcdf8844bc7926e733b0d/globalmount\"" pod="openstack/openstack-galera-0" Mar 11 10:19:28 crc kubenswrapper[4840]: I0311 10:19:28.197957 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98nv5\" (UniqueName: \"kubernetes.io/projected/2c0400ce-cf58-461a-946f-8f341c40fcce-kube-api-access-98nv5\") pod \"openstack-galera-0\" (UID: \"2c0400ce-cf58-461a-946f-8f341c40fcce\") " pod="openstack/openstack-galera-0" Mar 11 10:19:28 crc kubenswrapper[4840]: I0311 10:19:28.220709 4840 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-92201c94-1128-482c-ac9a-b949b8b0fbc7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-92201c94-1128-482c-ac9a-b949b8b0fbc7\") pod \"openstack-galera-0\" (UID: \"2c0400ce-cf58-461a-946f-8f341c40fcce\") " pod="openstack/openstack-galera-0" Mar 11 10:19:28 crc kubenswrapper[4840]: I0311 10:19:28.332030 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Mar 11 10:19:28 crc kubenswrapper[4840]: I0311 10:19:28.645974 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"0edf1389-2132-4752-8ba0-7aebb76e173f","Type":"ContainerStarted","Data":"c2a0b5a84a6bfa65674c2de1a2587ea05b454eda5d2a70a4fe5d0ab1cadeea1f"} Mar 11 10:19:28 crc kubenswrapper[4840]: I0311 10:19:28.650226 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-ff89b6977-nx62v" event={"ID":"04092458-98a3-458a-acca-d10f849df4dc","Type":"ContainerStarted","Data":"d4bb505935223181b520529d4f9fd452f2d41b55c815d79e191432956598cd21"} Mar 11 10:19:28 crc kubenswrapper[4840]: I0311 10:19:28.650452 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-ff89b6977-nx62v" Mar 11 10:19:28 crc kubenswrapper[4840]: I0311 10:19:28.696960 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-ff89b6977-nx62v" podStartSLOduration=3.6969420729999998 podStartE2EDuration="3.696942073s" podCreationTimestamp="2026-03-11 10:19:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 10:19:28.689927195 +0000 UTC m=+4967.355597010" watchObservedRunningTime="2026-03-11 10:19:28.696942073 +0000 UTC m=+4967.362611888" Mar 11 10:19:28 crc kubenswrapper[4840]: I0311 10:19:28.842630 4840 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/openstack-galera-0"] Mar 11 10:19:29 crc kubenswrapper[4840]: I0311 10:19:29.504159 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Mar 11 10:19:29 crc kubenswrapper[4840]: I0311 10:19:29.506046 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Mar 11 10:19:29 crc kubenswrapper[4840]: I0311 10:19:29.507608 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-42vx6" Mar 11 10:19:29 crc kubenswrapper[4840]: I0311 10:19:29.507966 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Mar 11 10:19:29 crc kubenswrapper[4840]: I0311 10:19:29.508177 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Mar 11 10:19:29 crc kubenswrapper[4840]: I0311 10:19:29.514678 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Mar 11 10:19:29 crc kubenswrapper[4840]: I0311 10:19:29.516745 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Mar 11 10:19:29 crc kubenswrapper[4840]: I0311 10:19:29.677191 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"c43faf74-3eed-41b3-b43a-1f88b3e61044","Type":"ContainerStarted","Data":"75f42df2c621d765374bb4403a70a1345a3f6d43262771e1fe8435540cdd63df"} Mar 11 10:19:29 crc kubenswrapper[4840]: I0311 10:19:29.686524 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"2c0400ce-cf58-461a-946f-8f341c40fcce","Type":"ContainerStarted","Data":"eaba23c2c43278a3c2bb4a8f787d338f7beaee80090291e60ee6bb4104765ee0"} Mar 11 10:19:29 crc kubenswrapper[4840]: I0311 10:19:29.686589 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/openstack-galera-0" event={"ID":"2c0400ce-cf58-461a-946f-8f341c40fcce","Type":"ContainerStarted","Data":"77ce0c9050e0d225c86f5201c1d04202100a38956c8c41c7b73fccb7c0e6cadf"} Mar 11 10:19:29 crc kubenswrapper[4840]: I0311 10:19:29.701374 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a2dcec5-bf72-4b97-b2cf-0af92dae1cbe-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"6a2dcec5-bf72-4b97-b2cf-0af92dae1cbe\") " pod="openstack/openstack-cell1-galera-0" Mar 11 10:19:29 crc kubenswrapper[4840]: I0311 10:19:29.701427 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a2dcec5-bf72-4b97-b2cf-0af92dae1cbe-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"6a2dcec5-bf72-4b97-b2cf-0af92dae1cbe\") " pod="openstack/openstack-cell1-galera-0" Mar 11 10:19:29 crc kubenswrapper[4840]: I0311 10:19:29.701502 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/6a2dcec5-bf72-4b97-b2cf-0af92dae1cbe-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"6a2dcec5-bf72-4b97-b2cf-0af92dae1cbe\") " pod="openstack/openstack-cell1-galera-0" Mar 11 10:19:29 crc kubenswrapper[4840]: I0311 10:19:29.701551 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knwzn\" (UniqueName: \"kubernetes.io/projected/6a2dcec5-bf72-4b97-b2cf-0af92dae1cbe-kube-api-access-knwzn\") pod \"openstack-cell1-galera-0\" (UID: \"6a2dcec5-bf72-4b97-b2cf-0af92dae1cbe\") " pod="openstack/openstack-cell1-galera-0" Mar 11 10:19:29 crc kubenswrapper[4840]: I0311 10:19:29.701582 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a2dcec5-bf72-4b97-b2cf-0af92dae1cbe-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"6a2dcec5-bf72-4b97-b2cf-0af92dae1cbe\") " pod="openstack/openstack-cell1-galera-0" Mar 11 10:19:29 crc kubenswrapper[4840]: I0311 10:19:29.701641 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ed9fd228-4ceb-4328-ac41-f2141b050615\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ed9fd228-4ceb-4328-ac41-f2141b050615\") pod \"openstack-cell1-galera-0\" (UID: \"6a2dcec5-bf72-4b97-b2cf-0af92dae1cbe\") " pod="openstack/openstack-cell1-galera-0" Mar 11 10:19:29 crc kubenswrapper[4840]: I0311 10:19:29.701703 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/6a2dcec5-bf72-4b97-b2cf-0af92dae1cbe-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"6a2dcec5-bf72-4b97-b2cf-0af92dae1cbe\") " pod="openstack/openstack-cell1-galera-0" Mar 11 10:19:29 crc kubenswrapper[4840]: I0311 10:19:29.701740 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6a2dcec5-bf72-4b97-b2cf-0af92dae1cbe-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"6a2dcec5-bf72-4b97-b2cf-0af92dae1cbe\") " pod="openstack/openstack-cell1-galera-0" Mar 11 10:19:29 crc kubenswrapper[4840]: I0311 10:19:29.802757 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-ed9fd228-4ceb-4328-ac41-f2141b050615\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ed9fd228-4ceb-4328-ac41-f2141b050615\") pod \"openstack-cell1-galera-0\" (UID: \"6a2dcec5-bf72-4b97-b2cf-0af92dae1cbe\") " pod="openstack/openstack-cell1-galera-0" Mar 11 10:19:29 crc kubenswrapper[4840]: I0311 10:19:29.803056 4840 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/6a2dcec5-bf72-4b97-b2cf-0af92dae1cbe-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"6a2dcec5-bf72-4b97-b2cf-0af92dae1cbe\") " pod="openstack/openstack-cell1-galera-0" Mar 11 10:19:29 crc kubenswrapper[4840]: I0311 10:19:29.803117 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6a2dcec5-bf72-4b97-b2cf-0af92dae1cbe-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"6a2dcec5-bf72-4b97-b2cf-0af92dae1cbe\") " pod="openstack/openstack-cell1-galera-0" Mar 11 10:19:29 crc kubenswrapper[4840]: I0311 10:19:29.803311 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a2dcec5-bf72-4b97-b2cf-0af92dae1cbe-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"6a2dcec5-bf72-4b97-b2cf-0af92dae1cbe\") " pod="openstack/openstack-cell1-galera-0" Mar 11 10:19:29 crc kubenswrapper[4840]: I0311 10:19:29.804005 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a2dcec5-bf72-4b97-b2cf-0af92dae1cbe-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"6a2dcec5-bf72-4b97-b2cf-0af92dae1cbe\") " pod="openstack/openstack-cell1-galera-0" Mar 11 10:19:29 crc kubenswrapper[4840]: I0311 10:19:29.804983 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/6a2dcec5-bf72-4b97-b2cf-0af92dae1cbe-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"6a2dcec5-bf72-4b97-b2cf-0af92dae1cbe\") " pod="openstack/openstack-cell1-galera-0" Mar 11 10:19:29 crc kubenswrapper[4840]: I0311 10:19:29.805301 4840 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-knwzn\" (UniqueName: \"kubernetes.io/projected/6a2dcec5-bf72-4b97-b2cf-0af92dae1cbe-kube-api-access-knwzn\") pod \"openstack-cell1-galera-0\" (UID: \"6a2dcec5-bf72-4b97-b2cf-0af92dae1cbe\") " pod="openstack/openstack-cell1-galera-0" Mar 11 10:19:29 crc kubenswrapper[4840]: I0311 10:19:29.805380 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a2dcec5-bf72-4b97-b2cf-0af92dae1cbe-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"6a2dcec5-bf72-4b97-b2cf-0af92dae1cbe\") " pod="openstack/openstack-cell1-galera-0" Mar 11 10:19:29 crc kubenswrapper[4840]: I0311 10:19:29.805436 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/6a2dcec5-bf72-4b97-b2cf-0af92dae1cbe-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"6a2dcec5-bf72-4b97-b2cf-0af92dae1cbe\") " pod="openstack/openstack-cell1-galera-0" Mar 11 10:19:29 crc kubenswrapper[4840]: I0311 10:19:29.805615 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/6a2dcec5-bf72-4b97-b2cf-0af92dae1cbe-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"6a2dcec5-bf72-4b97-b2cf-0af92dae1cbe\") " pod="openstack/openstack-cell1-galera-0" Mar 11 10:19:29 crc kubenswrapper[4840]: I0311 10:19:29.805831 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6a2dcec5-bf72-4b97-b2cf-0af92dae1cbe-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"6a2dcec5-bf72-4b97-b2cf-0af92dae1cbe\") " pod="openstack/openstack-cell1-galera-0" Mar 11 10:19:29 crc kubenswrapper[4840]: I0311 10:19:29.806653 4840 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 11 10:19:29 crc kubenswrapper[4840]: I0311 10:19:29.806692 4840 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-ed9fd228-4ceb-4328-ac41-f2141b050615\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ed9fd228-4ceb-4328-ac41-f2141b050615\") pod \"openstack-cell1-galera-0\" (UID: \"6a2dcec5-bf72-4b97-b2cf-0af92dae1cbe\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/31739c965246a3c8584e6c60293de00c3babed01f58cc978be0de7f2bc49d09a/globalmount\"" pod="openstack/openstack-cell1-galera-0" Mar 11 10:19:29 crc kubenswrapper[4840]: I0311 10:19:29.806744 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a2dcec5-bf72-4b97-b2cf-0af92dae1cbe-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"6a2dcec5-bf72-4b97-b2cf-0af92dae1cbe\") " pod="openstack/openstack-cell1-galera-0" Mar 11 10:19:29 crc kubenswrapper[4840]: I0311 10:19:29.811564 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a2dcec5-bf72-4b97-b2cf-0af92dae1cbe-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"6a2dcec5-bf72-4b97-b2cf-0af92dae1cbe\") " pod="openstack/openstack-cell1-galera-0" Mar 11 10:19:29 crc kubenswrapper[4840]: I0311 10:19:29.815277 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a2dcec5-bf72-4b97-b2cf-0af92dae1cbe-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"6a2dcec5-bf72-4b97-b2cf-0af92dae1cbe\") " pod="openstack/openstack-cell1-galera-0" Mar 11 10:19:29 crc kubenswrapper[4840]: I0311 10:19:29.833097 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-knwzn\" (UniqueName: \"kubernetes.io/projected/6a2dcec5-bf72-4b97-b2cf-0af92dae1cbe-kube-api-access-knwzn\") pod 
\"openstack-cell1-galera-0\" (UID: \"6a2dcec5-bf72-4b97-b2cf-0af92dae1cbe\") " pod="openstack/openstack-cell1-galera-0" Mar 11 10:19:29 crc kubenswrapper[4840]: I0311 10:19:29.844111 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-ed9fd228-4ceb-4328-ac41-f2141b050615\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ed9fd228-4ceb-4328-ac41-f2141b050615\") pod \"openstack-cell1-galera-0\" (UID: \"6a2dcec5-bf72-4b97-b2cf-0af92dae1cbe\") " pod="openstack/openstack-cell1-galera-0" Mar 11 10:19:29 crc kubenswrapper[4840]: I0311 10:19:29.865030 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Mar 11 10:19:30 crc kubenswrapper[4840]: I0311 10:19:30.086744 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Mar 11 10:19:30 crc kubenswrapper[4840]: I0311 10:19:30.088215 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Mar 11 10:19:30 crc kubenswrapper[4840]: I0311 10:19:30.090775 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Mar 11 10:19:30 crc kubenswrapper[4840]: I0311 10:19:30.091001 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-xjxmm" Mar 11 10:19:30 crc kubenswrapper[4840]: I0311 10:19:30.091175 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Mar 11 10:19:30 crc kubenswrapper[4840]: I0311 10:19:30.112450 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Mar 11 10:19:30 crc kubenswrapper[4840]: I0311 10:19:30.215128 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9cd09283-efe1-471c-9723-3e2114c910f7-config-data\") pod \"memcached-0\" (UID: 
\"9cd09283-efe1-471c-9723-3e2114c910f7\") " pod="openstack/memcached-0" Mar 11 10:19:30 crc kubenswrapper[4840]: I0311 10:19:30.215311 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/9cd09283-efe1-471c-9723-3e2114c910f7-memcached-tls-certs\") pod \"memcached-0\" (UID: \"9cd09283-efe1-471c-9723-3e2114c910f7\") " pod="openstack/memcached-0" Mar 11 10:19:30 crc kubenswrapper[4840]: I0311 10:19:30.215359 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9cd09283-efe1-471c-9723-3e2114c910f7-combined-ca-bundle\") pod \"memcached-0\" (UID: \"9cd09283-efe1-471c-9723-3e2114c910f7\") " pod="openstack/memcached-0" Mar 11 10:19:30 crc kubenswrapper[4840]: I0311 10:19:30.215528 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8zt9\" (UniqueName: \"kubernetes.io/projected/9cd09283-efe1-471c-9723-3e2114c910f7-kube-api-access-v8zt9\") pod \"memcached-0\" (UID: \"9cd09283-efe1-471c-9723-3e2114c910f7\") " pod="openstack/memcached-0" Mar 11 10:19:30 crc kubenswrapper[4840]: I0311 10:19:30.215571 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9cd09283-efe1-471c-9723-3e2114c910f7-kolla-config\") pod \"memcached-0\" (UID: \"9cd09283-efe1-471c-9723-3e2114c910f7\") " pod="openstack/memcached-0" Mar 11 10:19:30 crc kubenswrapper[4840]: I0311 10:19:30.317212 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9cd09283-efe1-471c-9723-3e2114c910f7-config-data\") pod \"memcached-0\" (UID: \"9cd09283-efe1-471c-9723-3e2114c910f7\") " pod="openstack/memcached-0" Mar 11 10:19:30 crc kubenswrapper[4840]: I0311 
10:19:30.317303 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/9cd09283-efe1-471c-9723-3e2114c910f7-memcached-tls-certs\") pod \"memcached-0\" (UID: \"9cd09283-efe1-471c-9723-3e2114c910f7\") " pod="openstack/memcached-0" Mar 11 10:19:30 crc kubenswrapper[4840]: I0311 10:19:30.317327 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9cd09283-efe1-471c-9723-3e2114c910f7-combined-ca-bundle\") pod \"memcached-0\" (UID: \"9cd09283-efe1-471c-9723-3e2114c910f7\") " pod="openstack/memcached-0" Mar 11 10:19:30 crc kubenswrapper[4840]: I0311 10:19:30.317382 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8zt9\" (UniqueName: \"kubernetes.io/projected/9cd09283-efe1-471c-9723-3e2114c910f7-kube-api-access-v8zt9\") pod \"memcached-0\" (UID: \"9cd09283-efe1-471c-9723-3e2114c910f7\") " pod="openstack/memcached-0" Mar 11 10:19:30 crc kubenswrapper[4840]: I0311 10:19:30.317414 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9cd09283-efe1-471c-9723-3e2114c910f7-kolla-config\") pod \"memcached-0\" (UID: \"9cd09283-efe1-471c-9723-3e2114c910f7\") " pod="openstack/memcached-0" Mar 11 10:19:30 crc kubenswrapper[4840]: I0311 10:19:30.318325 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9cd09283-efe1-471c-9723-3e2114c910f7-kolla-config\") pod \"memcached-0\" (UID: \"9cd09283-efe1-471c-9723-3e2114c910f7\") " pod="openstack/memcached-0" Mar 11 10:19:30 crc kubenswrapper[4840]: I0311 10:19:30.318596 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9cd09283-efe1-471c-9723-3e2114c910f7-config-data\") pod 
\"memcached-0\" (UID: \"9cd09283-efe1-471c-9723-3e2114c910f7\") " pod="openstack/memcached-0" Mar 11 10:19:30 crc kubenswrapper[4840]: I0311 10:19:30.322207 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9cd09283-efe1-471c-9723-3e2114c910f7-combined-ca-bundle\") pod \"memcached-0\" (UID: \"9cd09283-efe1-471c-9723-3e2114c910f7\") " pod="openstack/memcached-0" Mar 11 10:19:30 crc kubenswrapper[4840]: I0311 10:19:30.322256 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/9cd09283-efe1-471c-9723-3e2114c910f7-memcached-tls-certs\") pod \"memcached-0\" (UID: \"9cd09283-efe1-471c-9723-3e2114c910f7\") " pod="openstack/memcached-0" Mar 11 10:19:30 crc kubenswrapper[4840]: I0311 10:19:30.342810 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8zt9\" (UniqueName: \"kubernetes.io/projected/9cd09283-efe1-471c-9723-3e2114c910f7-kube-api-access-v8zt9\") pod \"memcached-0\" (UID: \"9cd09283-efe1-471c-9723-3e2114c910f7\") " pod="openstack/memcached-0" Mar 11 10:19:30 crc kubenswrapper[4840]: W0311 10:19:30.385799 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6a2dcec5_bf72_4b97_b2cf_0af92dae1cbe.slice/crio-85f4d13e1503a354729ac317ff46695a7bf2f5a6517d204f307bf12602e6b76c WatchSource:0}: Error finding container 85f4d13e1503a354729ac317ff46695a7bf2f5a6517d204f307bf12602e6b76c: Status 404 returned error can't find the container with id 85f4d13e1503a354729ac317ff46695a7bf2f5a6517d204f307bf12602e6b76c Mar 11 10:19:30 crc kubenswrapper[4840]: I0311 10:19:30.387489 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Mar 11 10:19:30 crc kubenswrapper[4840]: I0311 10:19:30.423648 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Mar 11 10:19:30 crc kubenswrapper[4840]: I0311 10:19:30.696721 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"6a2dcec5-bf72-4b97-b2cf-0af92dae1cbe","Type":"ContainerStarted","Data":"fc92a1facf014891ad482635b4baced02997cdec679e0dd3fd8178314955e8fd"} Mar 11 10:19:30 crc kubenswrapper[4840]: I0311 10:19:30.697201 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"6a2dcec5-bf72-4b97-b2cf-0af92dae1cbe","Type":"ContainerStarted","Data":"85f4d13e1503a354729ac317ff46695a7bf2f5a6517d204f307bf12602e6b76c"} Mar 11 10:19:30 crc kubenswrapper[4840]: I0311 10:19:30.879192 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Mar 11 10:19:30 crc kubenswrapper[4840]: W0311 10:19:30.882942 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9cd09283_efe1_471c_9723_3e2114c910f7.slice/crio-315379d5067d943e67e52ae2f380425e1ffc1289be34a086b698ad7808b2d6b2 WatchSource:0}: Error finding container 315379d5067d943e67e52ae2f380425e1ffc1289be34a086b698ad7808b2d6b2: Status 404 returned error can't find the container with id 315379d5067d943e67e52ae2f380425e1ffc1289be34a086b698ad7808b2d6b2 Mar 11 10:19:31 crc kubenswrapper[4840]: I0311 10:19:31.704803 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"9cd09283-efe1-471c-9723-3e2114c910f7","Type":"ContainerStarted","Data":"87e8e03bf2b4e796652bf1f5d8cd8e0f56e19ad4ceef690903034b25c19ce65a"} Mar 11 10:19:31 crc kubenswrapper[4840]: I0311 10:19:31.705361 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"9cd09283-efe1-471c-9723-3e2114c910f7","Type":"ContainerStarted","Data":"315379d5067d943e67e52ae2f380425e1ffc1289be34a086b698ad7808b2d6b2"} Mar 11 10:19:31 crc kubenswrapper[4840]: 
I0311 10:19:31.705404 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Mar 11 10:19:31 crc kubenswrapper[4840]: I0311 10:19:31.724138 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=1.724116871 podStartE2EDuration="1.724116871s" podCreationTimestamp="2026-03-11 10:19:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 10:19:31.719849132 +0000 UTC m=+4970.385518947" watchObservedRunningTime="2026-03-11 10:19:31.724116871 +0000 UTC m=+4970.389786686" Mar 11 10:19:32 crc kubenswrapper[4840]: I0311 10:19:32.713881 4840 generic.go:334] "Generic (PLEG): container finished" podID="2c0400ce-cf58-461a-946f-8f341c40fcce" containerID="eaba23c2c43278a3c2bb4a8f787d338f7beaee80090291e60ee6bb4104765ee0" exitCode=0 Mar 11 10:19:32 crc kubenswrapper[4840]: I0311 10:19:32.713953 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"2c0400ce-cf58-461a-946f-8f341c40fcce","Type":"ContainerDied","Data":"eaba23c2c43278a3c2bb4a8f787d338f7beaee80090291e60ee6bb4104765ee0"} Mar 11 10:19:33 crc kubenswrapper[4840]: I0311 10:19:33.726743 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"2c0400ce-cf58-461a-946f-8f341c40fcce","Type":"ContainerStarted","Data":"6467cfbaf358de808b987aa6a9f0295711e3c4b68ee2761ba2edbe7f9f3f6d3e"} Mar 11 10:19:33 crc kubenswrapper[4840]: I0311 10:19:33.754530 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=7.75450756 podStartE2EDuration="7.75450756s" podCreationTimestamp="2026-03-11 10:19:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 10:19:33.752480028 +0000 UTC 
m=+4972.418149843" watchObservedRunningTime="2026-03-11 10:19:33.75450756 +0000 UTC m=+4972.420177395" Mar 11 10:19:34 crc kubenswrapper[4840]: I0311 10:19:34.456646 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-c44667757-bkvgc" Mar 11 10:19:34 crc kubenswrapper[4840]: I0311 10:19:34.736291 4840 generic.go:334] "Generic (PLEG): container finished" podID="6a2dcec5-bf72-4b97-b2cf-0af92dae1cbe" containerID="fc92a1facf014891ad482635b4baced02997cdec679e0dd3fd8178314955e8fd" exitCode=0 Mar 11 10:19:34 crc kubenswrapper[4840]: I0311 10:19:34.736334 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"6a2dcec5-bf72-4b97-b2cf-0af92dae1cbe","Type":"ContainerDied","Data":"fc92a1facf014891ad482635b4baced02997cdec679e0dd3fd8178314955e8fd"} Mar 11 10:19:35 crc kubenswrapper[4840]: I0311 10:19:35.746973 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"6a2dcec5-bf72-4b97-b2cf-0af92dae1cbe","Type":"ContainerStarted","Data":"d30af15c82a43d64b5bd875b2716dafd24ca8de1c39d70aad553c7676f993a1c"} Mar 11 10:19:35 crc kubenswrapper[4840]: I0311 10:19:35.779631 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=7.779610176 podStartE2EDuration="7.779610176s" podCreationTimestamp="2026-03-11 10:19:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 10:19:35.774425874 +0000 UTC m=+4974.440095709" watchObservedRunningTime="2026-03-11 10:19:35.779610176 +0000 UTC m=+4974.445279991" Mar 11 10:19:35 crc kubenswrapper[4840]: I0311 10:19:35.872640 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-ff89b6977-nx62v" Mar 11 10:19:35 crc kubenswrapper[4840]: I0311 10:19:35.921016 4840 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-c44667757-bkvgc"] Mar 11 10:19:35 crc kubenswrapper[4840]: I0311 10:19:35.921266 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-c44667757-bkvgc" podUID="34fc7f66-e5e7-4750-bb88-2eb9545326d1" containerName="dnsmasq-dns" containerID="cri-o://2b45036fb2a19aa5b6917a8cbc42fddccefe04666e973ff17a331b719720299e" gracePeriod=10 Mar 11 10:19:36 crc kubenswrapper[4840]: I0311 10:19:36.467036 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-c44667757-bkvgc" Mar 11 10:19:36 crc kubenswrapper[4840]: I0311 10:19:36.567633 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-glzdw\" (UniqueName: \"kubernetes.io/projected/34fc7f66-e5e7-4750-bb88-2eb9545326d1-kube-api-access-glzdw\") pod \"34fc7f66-e5e7-4750-bb88-2eb9545326d1\" (UID: \"34fc7f66-e5e7-4750-bb88-2eb9545326d1\") " Mar 11 10:19:36 crc kubenswrapper[4840]: I0311 10:19:36.567725 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34fc7f66-e5e7-4750-bb88-2eb9545326d1-config\") pod \"34fc7f66-e5e7-4750-bb88-2eb9545326d1\" (UID: \"34fc7f66-e5e7-4750-bb88-2eb9545326d1\") " Mar 11 10:19:36 crc kubenswrapper[4840]: I0311 10:19:36.583692 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34fc7f66-e5e7-4750-bb88-2eb9545326d1-kube-api-access-glzdw" (OuterVolumeSpecName: "kube-api-access-glzdw") pod "34fc7f66-e5e7-4750-bb88-2eb9545326d1" (UID: "34fc7f66-e5e7-4750-bb88-2eb9545326d1"). InnerVolumeSpecName "kube-api-access-glzdw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:19:36 crc kubenswrapper[4840]: I0311 10:19:36.609641 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34fc7f66-e5e7-4750-bb88-2eb9545326d1-config" (OuterVolumeSpecName: "config") pod "34fc7f66-e5e7-4750-bb88-2eb9545326d1" (UID: "34fc7f66-e5e7-4750-bb88-2eb9545326d1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 10:19:36 crc kubenswrapper[4840]: I0311 10:19:36.669176 4840 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34fc7f66-e5e7-4750-bb88-2eb9545326d1-config\") on node \"crc\" DevicePath \"\"" Mar 11 10:19:36 crc kubenswrapper[4840]: I0311 10:19:36.669214 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-glzdw\" (UniqueName: \"kubernetes.io/projected/34fc7f66-e5e7-4750-bb88-2eb9545326d1-kube-api-access-glzdw\") on node \"crc\" DevicePath \"\"" Mar 11 10:19:36 crc kubenswrapper[4840]: I0311 10:19:36.755343 4840 generic.go:334] "Generic (PLEG): container finished" podID="34fc7f66-e5e7-4750-bb88-2eb9545326d1" containerID="2b45036fb2a19aa5b6917a8cbc42fddccefe04666e973ff17a331b719720299e" exitCode=0 Mar 11 10:19:36 crc kubenswrapper[4840]: I0311 10:19:36.755396 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c44667757-bkvgc" event={"ID":"34fc7f66-e5e7-4750-bb88-2eb9545326d1","Type":"ContainerDied","Data":"2b45036fb2a19aa5b6917a8cbc42fddccefe04666e973ff17a331b719720299e"} Mar 11 10:19:36 crc kubenswrapper[4840]: I0311 10:19:36.755405 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-c44667757-bkvgc" Mar 11 10:19:36 crc kubenswrapper[4840]: I0311 10:19:36.755442 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c44667757-bkvgc" event={"ID":"34fc7f66-e5e7-4750-bb88-2eb9545326d1","Type":"ContainerDied","Data":"41e3ae2db06b439ad4707cadee3836f6011c4d8ddda69fb121394a896bd08f3e"} Mar 11 10:19:36 crc kubenswrapper[4840]: I0311 10:19:36.755488 4840 scope.go:117] "RemoveContainer" containerID="2b45036fb2a19aa5b6917a8cbc42fddccefe04666e973ff17a331b719720299e" Mar 11 10:19:36 crc kubenswrapper[4840]: I0311 10:19:36.782348 4840 scope.go:117] "RemoveContainer" containerID="5c418a6f2352363da657d363ec937a537329d0c80c790ee70333693a60535a6c" Mar 11 10:19:36 crc kubenswrapper[4840]: I0311 10:19:36.795205 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-c44667757-bkvgc"] Mar 11 10:19:36 crc kubenswrapper[4840]: I0311 10:19:36.803375 4840 scope.go:117] "RemoveContainer" containerID="2b45036fb2a19aa5b6917a8cbc42fddccefe04666e973ff17a331b719720299e" Mar 11 10:19:36 crc kubenswrapper[4840]: E0311 10:19:36.803959 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b45036fb2a19aa5b6917a8cbc42fddccefe04666e973ff17a331b719720299e\": container with ID starting with 2b45036fb2a19aa5b6917a8cbc42fddccefe04666e973ff17a331b719720299e not found: ID does not exist" containerID="2b45036fb2a19aa5b6917a8cbc42fddccefe04666e973ff17a331b719720299e" Mar 11 10:19:36 crc kubenswrapper[4840]: I0311 10:19:36.803997 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b45036fb2a19aa5b6917a8cbc42fddccefe04666e973ff17a331b719720299e"} err="failed to get container status \"2b45036fb2a19aa5b6917a8cbc42fddccefe04666e973ff17a331b719720299e\": rpc error: code = NotFound desc = could not find container 
\"2b45036fb2a19aa5b6917a8cbc42fddccefe04666e973ff17a331b719720299e\": container with ID starting with 2b45036fb2a19aa5b6917a8cbc42fddccefe04666e973ff17a331b719720299e not found: ID does not exist" Mar 11 10:19:36 crc kubenswrapper[4840]: I0311 10:19:36.804023 4840 scope.go:117] "RemoveContainer" containerID="5c418a6f2352363da657d363ec937a537329d0c80c790ee70333693a60535a6c" Mar 11 10:19:36 crc kubenswrapper[4840]: E0311 10:19:36.804293 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c418a6f2352363da657d363ec937a537329d0c80c790ee70333693a60535a6c\": container with ID starting with 5c418a6f2352363da657d363ec937a537329d0c80c790ee70333693a60535a6c not found: ID does not exist" containerID="5c418a6f2352363da657d363ec937a537329d0c80c790ee70333693a60535a6c" Mar 11 10:19:36 crc kubenswrapper[4840]: I0311 10:19:36.804327 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c418a6f2352363da657d363ec937a537329d0c80c790ee70333693a60535a6c"} err="failed to get container status \"5c418a6f2352363da657d363ec937a537329d0c80c790ee70333693a60535a6c\": rpc error: code = NotFound desc = could not find container \"5c418a6f2352363da657d363ec937a537329d0c80c790ee70333693a60535a6c\": container with ID starting with 5c418a6f2352363da657d363ec937a537329d0c80c790ee70333693a60535a6c not found: ID does not exist" Mar 11 10:19:36 crc kubenswrapper[4840]: I0311 10:19:36.808223 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-c44667757-bkvgc"] Mar 11 10:19:37 crc kubenswrapper[4840]: E0311 10:19:37.942153 4840 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.30:57122->38.102.83.30:45639: write tcp 38.102.83.30:57122->38.102.83.30:45639: write: broken pipe Mar 11 10:19:38 crc kubenswrapper[4840]: I0311 10:19:38.075238 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="34fc7f66-e5e7-4750-bb88-2eb9545326d1" path="/var/lib/kubelet/pods/34fc7f66-e5e7-4750-bb88-2eb9545326d1/volumes" Mar 11 10:19:38 crc kubenswrapper[4840]: I0311 10:19:38.332576 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Mar 11 10:19:38 crc kubenswrapper[4840]: I0311 10:19:38.332629 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Mar 11 10:19:39 crc kubenswrapper[4840]: I0311 10:19:39.865880 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Mar 11 10:19:39 crc kubenswrapper[4840]: I0311 10:19:39.866250 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Mar 11 10:19:40 crc kubenswrapper[4840]: I0311 10:19:40.424895 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Mar 11 10:19:40 crc kubenswrapper[4840]: I0311 10:19:40.737251 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Mar 11 10:19:40 crc kubenswrapper[4840]: I0311 10:19:40.809009 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Mar 11 10:19:42 crc kubenswrapper[4840]: I0311 10:19:42.194751 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Mar 11 10:19:42 crc kubenswrapper[4840]: I0311 10:19:42.298286 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Mar 11 10:19:46 crc kubenswrapper[4840]: I0311 10:19:46.974629 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-46h75"] Mar 11 10:19:46 crc kubenswrapper[4840]: E0311 10:19:46.975444 4840 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="34fc7f66-e5e7-4750-bb88-2eb9545326d1" containerName="dnsmasq-dns" Mar 11 10:19:46 crc kubenswrapper[4840]: I0311 10:19:46.975488 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="34fc7f66-e5e7-4750-bb88-2eb9545326d1" containerName="dnsmasq-dns" Mar 11 10:19:46 crc kubenswrapper[4840]: E0311 10:19:46.975517 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34fc7f66-e5e7-4750-bb88-2eb9545326d1" containerName="init" Mar 11 10:19:46 crc kubenswrapper[4840]: I0311 10:19:46.975527 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="34fc7f66-e5e7-4750-bb88-2eb9545326d1" containerName="init" Mar 11 10:19:46 crc kubenswrapper[4840]: I0311 10:19:46.975790 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="34fc7f66-e5e7-4750-bb88-2eb9545326d1" containerName="dnsmasq-dns" Mar 11 10:19:46 crc kubenswrapper[4840]: I0311 10:19:46.976562 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-46h75" Mar 11 10:19:46 crc kubenswrapper[4840]: I0311 10:19:46.978818 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Mar 11 10:19:46 crc kubenswrapper[4840]: I0311 10:19:46.995640 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-46h75"] Mar 11 10:19:47 crc kubenswrapper[4840]: I0311 10:19:47.146478 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-889ck\" (UniqueName: \"kubernetes.io/projected/4a4034d7-09b6-4c21-ba44-57336794fd4e-kube-api-access-889ck\") pod \"root-account-create-update-46h75\" (UID: \"4a4034d7-09b6-4c21-ba44-57336794fd4e\") " pod="openstack/root-account-create-update-46h75" Mar 11 10:19:47 crc kubenswrapper[4840]: I0311 10:19:47.146600 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/4a4034d7-09b6-4c21-ba44-57336794fd4e-operator-scripts\") pod \"root-account-create-update-46h75\" (UID: \"4a4034d7-09b6-4c21-ba44-57336794fd4e\") " pod="openstack/root-account-create-update-46h75" Mar 11 10:19:47 crc kubenswrapper[4840]: I0311 10:19:47.247866 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a4034d7-09b6-4c21-ba44-57336794fd4e-operator-scripts\") pod \"root-account-create-update-46h75\" (UID: \"4a4034d7-09b6-4c21-ba44-57336794fd4e\") " pod="openstack/root-account-create-update-46h75" Mar 11 10:19:47 crc kubenswrapper[4840]: I0311 10:19:47.247963 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-889ck\" (UniqueName: \"kubernetes.io/projected/4a4034d7-09b6-4c21-ba44-57336794fd4e-kube-api-access-889ck\") pod \"root-account-create-update-46h75\" (UID: \"4a4034d7-09b6-4c21-ba44-57336794fd4e\") " pod="openstack/root-account-create-update-46h75" Mar 11 10:19:47 crc kubenswrapper[4840]: I0311 10:19:47.248628 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a4034d7-09b6-4c21-ba44-57336794fd4e-operator-scripts\") pod \"root-account-create-update-46h75\" (UID: \"4a4034d7-09b6-4c21-ba44-57336794fd4e\") " pod="openstack/root-account-create-update-46h75" Mar 11 10:19:47 crc kubenswrapper[4840]: I0311 10:19:47.287169 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-889ck\" (UniqueName: \"kubernetes.io/projected/4a4034d7-09b6-4c21-ba44-57336794fd4e-kube-api-access-889ck\") pod \"root-account-create-update-46h75\" (UID: \"4a4034d7-09b6-4c21-ba44-57336794fd4e\") " pod="openstack/root-account-create-update-46h75" Mar 11 10:19:47 crc kubenswrapper[4840]: I0311 10:19:47.295533 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-46h75" Mar 11 10:19:47 crc kubenswrapper[4840]: I0311 10:19:47.799765 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-46h75"] Mar 11 10:19:47 crc kubenswrapper[4840]: W0311 10:19:47.803944 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4a4034d7_09b6_4c21_ba44_57336794fd4e.slice/crio-880b840e2669756c1e471146e7683fb2ada0aae6d161d0e97b62b07b3b41afe7 WatchSource:0}: Error finding container 880b840e2669756c1e471146e7683fb2ada0aae6d161d0e97b62b07b3b41afe7: Status 404 returned error can't find the container with id 880b840e2669756c1e471146e7683fb2ada0aae6d161d0e97b62b07b3b41afe7 Mar 11 10:19:47 crc kubenswrapper[4840]: I0311 10:19:47.858854 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-46h75" event={"ID":"4a4034d7-09b6-4c21-ba44-57336794fd4e","Type":"ContainerStarted","Data":"880b840e2669756c1e471146e7683fb2ada0aae6d161d0e97b62b07b3b41afe7"} Mar 11 10:19:48 crc kubenswrapper[4840]: I0311 10:19:48.869918 4840 generic.go:334] "Generic (PLEG): container finished" podID="4a4034d7-09b6-4c21-ba44-57336794fd4e" containerID="0193ec3b541f59d58245e25327d5673b7e89484bf5a497e35ea7e201c8ae588f" exitCode=0 Mar 11 10:19:48 crc kubenswrapper[4840]: I0311 10:19:48.870007 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-46h75" event={"ID":"4a4034d7-09b6-4c21-ba44-57336794fd4e","Type":"ContainerDied","Data":"0193ec3b541f59d58245e25327d5673b7e89484bf5a497e35ea7e201c8ae588f"} Mar 11 10:19:50 crc kubenswrapper[4840]: I0311 10:19:50.263101 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-46h75" Mar 11 10:19:50 crc kubenswrapper[4840]: I0311 10:19:50.403536 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-889ck\" (UniqueName: \"kubernetes.io/projected/4a4034d7-09b6-4c21-ba44-57336794fd4e-kube-api-access-889ck\") pod \"4a4034d7-09b6-4c21-ba44-57336794fd4e\" (UID: \"4a4034d7-09b6-4c21-ba44-57336794fd4e\") " Mar 11 10:19:50 crc kubenswrapper[4840]: I0311 10:19:50.403629 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a4034d7-09b6-4c21-ba44-57336794fd4e-operator-scripts\") pod \"4a4034d7-09b6-4c21-ba44-57336794fd4e\" (UID: \"4a4034d7-09b6-4c21-ba44-57336794fd4e\") " Mar 11 10:19:50 crc kubenswrapper[4840]: I0311 10:19:50.405119 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a4034d7-09b6-4c21-ba44-57336794fd4e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4a4034d7-09b6-4c21-ba44-57336794fd4e" (UID: "4a4034d7-09b6-4c21-ba44-57336794fd4e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 10:19:50 crc kubenswrapper[4840]: I0311 10:19:50.409575 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a4034d7-09b6-4c21-ba44-57336794fd4e-kube-api-access-889ck" (OuterVolumeSpecName: "kube-api-access-889ck") pod "4a4034d7-09b6-4c21-ba44-57336794fd4e" (UID: "4a4034d7-09b6-4c21-ba44-57336794fd4e"). InnerVolumeSpecName "kube-api-access-889ck". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:19:50 crc kubenswrapper[4840]: I0311 10:19:50.505332 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-889ck\" (UniqueName: \"kubernetes.io/projected/4a4034d7-09b6-4c21-ba44-57336794fd4e-kube-api-access-889ck\") on node \"crc\" DevicePath \"\"" Mar 11 10:19:50 crc kubenswrapper[4840]: I0311 10:19:50.505362 4840 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a4034d7-09b6-4c21-ba44-57336794fd4e-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 11 10:19:50 crc kubenswrapper[4840]: I0311 10:19:50.892792 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-46h75" event={"ID":"4a4034d7-09b6-4c21-ba44-57336794fd4e","Type":"ContainerDied","Data":"880b840e2669756c1e471146e7683fb2ada0aae6d161d0e97b62b07b3b41afe7"} Mar 11 10:19:50 crc kubenswrapper[4840]: I0311 10:19:50.892833 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="880b840e2669756c1e471146e7683fb2ada0aae6d161d0e97b62b07b3b41afe7" Mar 11 10:19:50 crc kubenswrapper[4840]: I0311 10:19:50.892884 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-46h75" Mar 11 10:19:51 crc kubenswrapper[4840]: E0311 10:19:51.023112 4840 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4a4034d7_09b6_4c21_ba44_57336794fd4e.slice\": RecentStats: unable to find data in memory cache]" Mar 11 10:19:53 crc kubenswrapper[4840]: I0311 10:19:53.501196 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-46h75"] Mar 11 10:19:53 crc kubenswrapper[4840]: I0311 10:19:53.508115 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-46h75"] Mar 11 10:19:54 crc kubenswrapper[4840]: I0311 10:19:54.071326 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a4034d7-09b6-4c21-ba44-57336794fd4e" path="/var/lib/kubelet/pods/4a4034d7-09b6-4c21-ba44-57336794fd4e/volumes" Mar 11 10:19:57 crc kubenswrapper[4840]: I0311 10:19:57.446223 4840 patch_prober.go:28] interesting pod/machine-config-daemon-brtht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 11 10:19:57 crc kubenswrapper[4840]: I0311 10:19:57.446789 4840 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 11 10:19:58 crc kubenswrapper[4840]: I0311 10:19:58.525705 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-6lfc2"] Mar 11 10:19:58 crc kubenswrapper[4840]: E0311 10:19:58.526075 4840 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="4a4034d7-09b6-4c21-ba44-57336794fd4e" containerName="mariadb-account-create-update" Mar 11 10:19:58 crc kubenswrapper[4840]: I0311 10:19:58.526090 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a4034d7-09b6-4c21-ba44-57336794fd4e" containerName="mariadb-account-create-update" Mar 11 10:19:58 crc kubenswrapper[4840]: I0311 10:19:58.526272 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a4034d7-09b6-4c21-ba44-57336794fd4e" containerName="mariadb-account-create-update" Mar 11 10:19:58 crc kubenswrapper[4840]: I0311 10:19:58.526961 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-6lfc2" Mar 11 10:19:58 crc kubenswrapper[4840]: I0311 10:19:58.544983 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Mar 11 10:19:58 crc kubenswrapper[4840]: I0311 10:19:58.549175 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2sbg\" (UniqueName: \"kubernetes.io/projected/e22efbed-3cf1-49ef-8e84-16d5e3b9b066-kube-api-access-s2sbg\") pod \"root-account-create-update-6lfc2\" (UID: \"e22efbed-3cf1-49ef-8e84-16d5e3b9b066\") " pod="openstack/root-account-create-update-6lfc2" Mar 11 10:19:58 crc kubenswrapper[4840]: I0311 10:19:58.549264 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e22efbed-3cf1-49ef-8e84-16d5e3b9b066-operator-scripts\") pod \"root-account-create-update-6lfc2\" (UID: \"e22efbed-3cf1-49ef-8e84-16d5e3b9b066\") " pod="openstack/root-account-create-update-6lfc2" Mar 11 10:19:58 crc kubenswrapper[4840]: I0311 10:19:58.555984 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-6lfc2"] Mar 11 10:19:58 crc kubenswrapper[4840]: I0311 
10:19:58.651555 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e22efbed-3cf1-49ef-8e84-16d5e3b9b066-operator-scripts\") pod \"root-account-create-update-6lfc2\" (UID: \"e22efbed-3cf1-49ef-8e84-16d5e3b9b066\") " pod="openstack/root-account-create-update-6lfc2" Mar 11 10:19:58 crc kubenswrapper[4840]: I0311 10:19:58.651717 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2sbg\" (UniqueName: \"kubernetes.io/projected/e22efbed-3cf1-49ef-8e84-16d5e3b9b066-kube-api-access-s2sbg\") pod \"root-account-create-update-6lfc2\" (UID: \"e22efbed-3cf1-49ef-8e84-16d5e3b9b066\") " pod="openstack/root-account-create-update-6lfc2" Mar 11 10:19:58 crc kubenswrapper[4840]: I0311 10:19:58.652281 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e22efbed-3cf1-49ef-8e84-16d5e3b9b066-operator-scripts\") pod \"root-account-create-update-6lfc2\" (UID: \"e22efbed-3cf1-49ef-8e84-16d5e3b9b066\") " pod="openstack/root-account-create-update-6lfc2" Mar 11 10:19:58 crc kubenswrapper[4840]: I0311 10:19:58.672781 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2sbg\" (UniqueName: \"kubernetes.io/projected/e22efbed-3cf1-49ef-8e84-16d5e3b9b066-kube-api-access-s2sbg\") pod \"root-account-create-update-6lfc2\" (UID: \"e22efbed-3cf1-49ef-8e84-16d5e3b9b066\") " pod="openstack/root-account-create-update-6lfc2" Mar 11 10:19:58 crc kubenswrapper[4840]: I0311 10:19:58.894651 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-6lfc2" Mar 11 10:19:59 crc kubenswrapper[4840]: I0311 10:19:59.408204 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-6lfc2"] Mar 11 10:19:59 crc kubenswrapper[4840]: I0311 10:19:59.987235 4840 generic.go:334] "Generic (PLEG): container finished" podID="e22efbed-3cf1-49ef-8e84-16d5e3b9b066" containerID="e8f9384659d429f1e7cdc8f5b27878c4d32e7138a700eb63bd36002b9815fcb0" exitCode=0 Mar 11 10:19:59 crc kubenswrapper[4840]: I0311 10:19:59.987303 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-6lfc2" event={"ID":"e22efbed-3cf1-49ef-8e84-16d5e3b9b066","Type":"ContainerDied","Data":"e8f9384659d429f1e7cdc8f5b27878c4d32e7138a700eb63bd36002b9815fcb0"} Mar 11 10:19:59 crc kubenswrapper[4840]: I0311 10:19:59.987716 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-6lfc2" event={"ID":"e22efbed-3cf1-49ef-8e84-16d5e3b9b066","Type":"ContainerStarted","Data":"dbf13a201b852643841418cd34b29d60bb10ed74204bd489d8d499cae7361fb2"} Mar 11 10:20:00 crc kubenswrapper[4840]: I0311 10:20:00.137653 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29553740-qdkls"] Mar 11 10:20:00 crc kubenswrapper[4840]: I0311 10:20:00.138992 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553740-qdkls" Mar 11 10:20:00 crc kubenswrapper[4840]: I0311 10:20:00.142410 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 11 10:20:00 crc kubenswrapper[4840]: I0311 10:20:00.142704 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-q6lwc" Mar 11 10:20:00 crc kubenswrapper[4840]: I0311 10:20:00.142844 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 11 10:20:00 crc kubenswrapper[4840]: I0311 10:20:00.152051 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553740-qdkls"] Mar 11 10:20:00 crc kubenswrapper[4840]: I0311 10:20:00.289881 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sk522\" (UniqueName: \"kubernetes.io/projected/e524ba7e-5fb4-4c6a-964b-6e01712007b5-kube-api-access-sk522\") pod \"auto-csr-approver-29553740-qdkls\" (UID: \"e524ba7e-5fb4-4c6a-964b-6e01712007b5\") " pod="openshift-infra/auto-csr-approver-29553740-qdkls" Mar 11 10:20:00 crc kubenswrapper[4840]: I0311 10:20:00.392823 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sk522\" (UniqueName: \"kubernetes.io/projected/e524ba7e-5fb4-4c6a-964b-6e01712007b5-kube-api-access-sk522\") pod \"auto-csr-approver-29553740-qdkls\" (UID: \"e524ba7e-5fb4-4c6a-964b-6e01712007b5\") " pod="openshift-infra/auto-csr-approver-29553740-qdkls" Mar 11 10:20:00 crc kubenswrapper[4840]: I0311 10:20:00.422184 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sk522\" (UniqueName: \"kubernetes.io/projected/e524ba7e-5fb4-4c6a-964b-6e01712007b5-kube-api-access-sk522\") pod \"auto-csr-approver-29553740-qdkls\" (UID: \"e524ba7e-5fb4-4c6a-964b-6e01712007b5\") " 
pod="openshift-infra/auto-csr-approver-29553740-qdkls" Mar 11 10:20:00 crc kubenswrapper[4840]: I0311 10:20:00.474456 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553740-qdkls" Mar 11 10:20:00 crc kubenswrapper[4840]: I0311 10:20:00.705666 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553740-qdkls"] Mar 11 10:20:01 crc kubenswrapper[4840]: I0311 10:20:01.001526 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553740-qdkls" event={"ID":"e524ba7e-5fb4-4c6a-964b-6e01712007b5","Type":"ContainerStarted","Data":"82a2d2015c17b8a69c1ff69fc70e9fa63cff39162f71f87a4b359f4f5bd043a9"} Mar 11 10:20:01 crc kubenswrapper[4840]: I0311 10:20:01.006726 4840 generic.go:334] "Generic (PLEG): container finished" podID="0edf1389-2132-4752-8ba0-7aebb76e173f" containerID="c2a0b5a84a6bfa65674c2de1a2587ea05b454eda5d2a70a4fe5d0ab1cadeea1f" exitCode=0 Mar 11 10:20:01 crc kubenswrapper[4840]: I0311 10:20:01.006837 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"0edf1389-2132-4752-8ba0-7aebb76e173f","Type":"ContainerDied","Data":"c2a0b5a84a6bfa65674c2de1a2587ea05b454eda5d2a70a4fe5d0ab1cadeea1f"} Mar 11 10:20:01 crc kubenswrapper[4840]: I0311 10:20:01.360228 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-6lfc2" Mar 11 10:20:01 crc kubenswrapper[4840]: I0311 10:20:01.510257 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e22efbed-3cf1-49ef-8e84-16d5e3b9b066-operator-scripts\") pod \"e22efbed-3cf1-49ef-8e84-16d5e3b9b066\" (UID: \"e22efbed-3cf1-49ef-8e84-16d5e3b9b066\") " Mar 11 10:20:01 crc kubenswrapper[4840]: I0311 10:20:01.510448 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s2sbg\" (UniqueName: \"kubernetes.io/projected/e22efbed-3cf1-49ef-8e84-16d5e3b9b066-kube-api-access-s2sbg\") pod \"e22efbed-3cf1-49ef-8e84-16d5e3b9b066\" (UID: \"e22efbed-3cf1-49ef-8e84-16d5e3b9b066\") " Mar 11 10:20:01 crc kubenswrapper[4840]: I0311 10:20:01.511445 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e22efbed-3cf1-49ef-8e84-16d5e3b9b066-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e22efbed-3cf1-49ef-8e84-16d5e3b9b066" (UID: "e22efbed-3cf1-49ef-8e84-16d5e3b9b066"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 10:20:01 crc kubenswrapper[4840]: I0311 10:20:01.515616 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e22efbed-3cf1-49ef-8e84-16d5e3b9b066-kube-api-access-s2sbg" (OuterVolumeSpecName: "kube-api-access-s2sbg") pod "e22efbed-3cf1-49ef-8e84-16d5e3b9b066" (UID: "e22efbed-3cf1-49ef-8e84-16d5e3b9b066"). InnerVolumeSpecName "kube-api-access-s2sbg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:20:01 crc kubenswrapper[4840]: I0311 10:20:01.612511 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s2sbg\" (UniqueName: \"kubernetes.io/projected/e22efbed-3cf1-49ef-8e84-16d5e3b9b066-kube-api-access-s2sbg\") on node \"crc\" DevicePath \"\"" Mar 11 10:20:01 crc kubenswrapper[4840]: I0311 10:20:01.612557 4840 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e22efbed-3cf1-49ef-8e84-16d5e3b9b066-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 11 10:20:02 crc kubenswrapper[4840]: I0311 10:20:02.015985 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"0edf1389-2132-4752-8ba0-7aebb76e173f","Type":"ContainerStarted","Data":"f085459b0890b14c6469d39d59dd5852aa7121c21595a673fd7174fba6b83d02"} Mar 11 10:20:02 crc kubenswrapper[4840]: I0311 10:20:02.016614 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:20:02 crc kubenswrapper[4840]: I0311 10:20:02.020569 4840 generic.go:334] "Generic (PLEG): container finished" podID="c43faf74-3eed-41b3-b43a-1f88b3e61044" containerID="75f42df2c621d765374bb4403a70a1345a3f6d43262771e1fe8435540cdd63df" exitCode=0 Mar 11 10:20:02 crc kubenswrapper[4840]: I0311 10:20:02.020636 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"c43faf74-3eed-41b3-b43a-1f88b3e61044","Type":"ContainerDied","Data":"75f42df2c621d765374bb4403a70a1345a3f6d43262771e1fe8435540cdd63df"} Mar 11 10:20:02 crc kubenswrapper[4840]: I0311 10:20:02.022666 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-6lfc2" event={"ID":"e22efbed-3cf1-49ef-8e84-16d5e3b9b066","Type":"ContainerDied","Data":"dbf13a201b852643841418cd34b29d60bb10ed74204bd489d8d499cae7361fb2"} Mar 11 10:20:02 crc 
kubenswrapper[4840]: I0311 10:20:02.022699 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dbf13a201b852643841418cd34b29d60bb10ed74204bd489d8d499cae7361fb2" Mar 11 10:20:02 crc kubenswrapper[4840]: I0311 10:20:02.022711 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-6lfc2" Mar 11 10:20:02 crc kubenswrapper[4840]: I0311 10:20:02.050271 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.050115367 podStartE2EDuration="37.050115367s" podCreationTimestamp="2026-03-11 10:19:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 10:20:02.038820611 +0000 UTC m=+5000.704490426" watchObservedRunningTime="2026-03-11 10:20:02.050115367 +0000 UTC m=+5000.715785192" Mar 11 10:20:03 crc kubenswrapper[4840]: I0311 10:20:03.032487 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"c43faf74-3eed-41b3-b43a-1f88b3e61044","Type":"ContainerStarted","Data":"f40e0b1fa05c0a3cc78ac5d9fe3822284c0e888fed4aadf62272ab1b25bff9e9"} Mar 11 10:20:03 crc kubenswrapper[4840]: I0311 10:20:03.033436 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Mar 11 10:20:03 crc kubenswrapper[4840]: I0311 10:20:03.034889 4840 generic.go:334] "Generic (PLEG): container finished" podID="e524ba7e-5fb4-4c6a-964b-6e01712007b5" containerID="d92736068fbb09a50a29c7c36c51cbc8856dbd060f60ddc8177db8baa4b426fb" exitCode=0 Mar 11 10:20:03 crc kubenswrapper[4840]: I0311 10:20:03.034982 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553740-qdkls" 
event={"ID":"e524ba7e-5fb4-4c6a-964b-6e01712007b5","Type":"ContainerDied","Data":"d92736068fbb09a50a29c7c36c51cbc8856dbd060f60ddc8177db8baa4b426fb"} Mar 11 10:20:03 crc kubenswrapper[4840]: I0311 10:20:03.062320 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=38.062299236 podStartE2EDuration="38.062299236s" podCreationTimestamp="2026-03-11 10:19:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 10:20:03.05654733 +0000 UTC m=+5001.722217145" watchObservedRunningTime="2026-03-11 10:20:03.062299236 +0000 UTC m=+5001.727969051" Mar 11 10:20:04 crc kubenswrapper[4840]: I0311 10:20:04.396394 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553740-qdkls" Mar 11 10:20:04 crc kubenswrapper[4840]: I0311 10:20:04.458364 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sk522\" (UniqueName: \"kubernetes.io/projected/e524ba7e-5fb4-4c6a-964b-6e01712007b5-kube-api-access-sk522\") pod \"e524ba7e-5fb4-4c6a-964b-6e01712007b5\" (UID: \"e524ba7e-5fb4-4c6a-964b-6e01712007b5\") " Mar 11 10:20:04 crc kubenswrapper[4840]: I0311 10:20:04.464141 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e524ba7e-5fb4-4c6a-964b-6e01712007b5-kube-api-access-sk522" (OuterVolumeSpecName: "kube-api-access-sk522") pod "e524ba7e-5fb4-4c6a-964b-6e01712007b5" (UID: "e524ba7e-5fb4-4c6a-964b-6e01712007b5"). InnerVolumeSpecName "kube-api-access-sk522". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:20:04 crc kubenswrapper[4840]: I0311 10:20:04.559985 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sk522\" (UniqueName: \"kubernetes.io/projected/e524ba7e-5fb4-4c6a-964b-6e01712007b5-kube-api-access-sk522\") on node \"crc\" DevicePath \"\"" Mar 11 10:20:05 crc kubenswrapper[4840]: I0311 10:20:05.051560 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553740-qdkls" event={"ID":"e524ba7e-5fb4-4c6a-964b-6e01712007b5","Type":"ContainerDied","Data":"82a2d2015c17b8a69c1ff69fc70e9fa63cff39162f71f87a4b359f4f5bd043a9"} Mar 11 10:20:05 crc kubenswrapper[4840]: I0311 10:20:05.051954 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="82a2d2015c17b8a69c1ff69fc70e9fa63cff39162f71f87a4b359f4f5bd043a9" Mar 11 10:20:05 crc kubenswrapper[4840]: I0311 10:20:05.051795 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553740-qdkls" Mar 11 10:20:05 crc kubenswrapper[4840]: I0311 10:20:05.463371 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29553734-94kp7"] Mar 11 10:20:05 crc kubenswrapper[4840]: I0311 10:20:05.470767 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29553734-94kp7"] Mar 11 10:20:06 crc kubenswrapper[4840]: I0311 10:20:06.070154 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50348c92-87ca-4e4d-b358-8815efeb49b6" path="/var/lib/kubelet/pods/50348c92-87ca-4e4d-b358-8815efeb49b6/volumes" Mar 11 10:20:11 crc kubenswrapper[4840]: I0311 10:20:11.985492 4840 scope.go:117] "RemoveContainer" containerID="22c7fb8009726a3b1329676f436a0b905fb219715c632fb98f2ad1a9f9e4ee6f" Mar 11 10:20:16 crc kubenswrapper[4840]: I0311 10:20:16.390758 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:20:17 crc kubenswrapper[4840]: I0311 10:20:17.060813 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Mar 11 10:20:25 crc kubenswrapper[4840]: I0311 10:20:25.894174 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-66d5bf7c87-h2f4v"] Mar 11 10:20:25 crc kubenswrapper[4840]: E0311 10:20:25.895816 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e22efbed-3cf1-49ef-8e84-16d5e3b9b066" containerName="mariadb-account-create-update" Mar 11 10:20:25 crc kubenswrapper[4840]: I0311 10:20:25.895855 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="e22efbed-3cf1-49ef-8e84-16d5e3b9b066" containerName="mariadb-account-create-update" Mar 11 10:20:25 crc kubenswrapper[4840]: E0311 10:20:25.895898 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e524ba7e-5fb4-4c6a-964b-6e01712007b5" containerName="oc" Mar 11 10:20:25 crc kubenswrapper[4840]: I0311 10:20:25.895919 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="e524ba7e-5fb4-4c6a-964b-6e01712007b5" containerName="oc" Mar 11 10:20:25 crc kubenswrapper[4840]: I0311 10:20:25.896349 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="e524ba7e-5fb4-4c6a-964b-6e01712007b5" containerName="oc" Mar 11 10:20:25 crc kubenswrapper[4840]: I0311 10:20:25.896386 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="e22efbed-3cf1-49ef-8e84-16d5e3b9b066" containerName="mariadb-account-create-update" Mar 11 10:20:25 crc kubenswrapper[4840]: I0311 10:20:25.898269 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-66d5bf7c87-h2f4v" Mar 11 10:20:25 crc kubenswrapper[4840]: I0311 10:20:25.908272 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-66d5bf7c87-h2f4v"] Mar 11 10:20:26 crc kubenswrapper[4840]: I0311 10:20:26.037898 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4fj5\" (UniqueName: \"kubernetes.io/projected/cf2b9835-4682-46ed-bc21-509455274aba-kube-api-access-x4fj5\") pod \"dnsmasq-dns-66d5bf7c87-h2f4v\" (UID: \"cf2b9835-4682-46ed-bc21-509455274aba\") " pod="openstack/dnsmasq-dns-66d5bf7c87-h2f4v" Mar 11 10:20:26 crc kubenswrapper[4840]: I0311 10:20:26.038486 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf2b9835-4682-46ed-bc21-509455274aba-config\") pod \"dnsmasq-dns-66d5bf7c87-h2f4v\" (UID: \"cf2b9835-4682-46ed-bc21-509455274aba\") " pod="openstack/dnsmasq-dns-66d5bf7c87-h2f4v" Mar 11 10:20:26 crc kubenswrapper[4840]: I0311 10:20:26.038695 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf2b9835-4682-46ed-bc21-509455274aba-dns-svc\") pod \"dnsmasq-dns-66d5bf7c87-h2f4v\" (UID: \"cf2b9835-4682-46ed-bc21-509455274aba\") " pod="openstack/dnsmasq-dns-66d5bf7c87-h2f4v" Mar 11 10:20:26 crc kubenswrapper[4840]: I0311 10:20:26.139816 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf2b9835-4682-46ed-bc21-509455274aba-config\") pod \"dnsmasq-dns-66d5bf7c87-h2f4v\" (UID: \"cf2b9835-4682-46ed-bc21-509455274aba\") " pod="openstack/dnsmasq-dns-66d5bf7c87-h2f4v" Mar 11 10:20:26 crc kubenswrapper[4840]: I0311 10:20:26.139865 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/cf2b9835-4682-46ed-bc21-509455274aba-dns-svc\") pod \"dnsmasq-dns-66d5bf7c87-h2f4v\" (UID: \"cf2b9835-4682-46ed-bc21-509455274aba\") " pod="openstack/dnsmasq-dns-66d5bf7c87-h2f4v" Mar 11 10:20:26 crc kubenswrapper[4840]: I0311 10:20:26.139968 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x4fj5\" (UniqueName: \"kubernetes.io/projected/cf2b9835-4682-46ed-bc21-509455274aba-kube-api-access-x4fj5\") pod \"dnsmasq-dns-66d5bf7c87-h2f4v\" (UID: \"cf2b9835-4682-46ed-bc21-509455274aba\") " pod="openstack/dnsmasq-dns-66d5bf7c87-h2f4v" Mar 11 10:20:26 crc kubenswrapper[4840]: I0311 10:20:26.141483 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf2b9835-4682-46ed-bc21-509455274aba-config\") pod \"dnsmasq-dns-66d5bf7c87-h2f4v\" (UID: \"cf2b9835-4682-46ed-bc21-509455274aba\") " pod="openstack/dnsmasq-dns-66d5bf7c87-h2f4v" Mar 11 10:20:26 crc kubenswrapper[4840]: I0311 10:20:26.141565 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf2b9835-4682-46ed-bc21-509455274aba-dns-svc\") pod \"dnsmasq-dns-66d5bf7c87-h2f4v\" (UID: \"cf2b9835-4682-46ed-bc21-509455274aba\") " pod="openstack/dnsmasq-dns-66d5bf7c87-h2f4v" Mar 11 10:20:26 crc kubenswrapper[4840]: I0311 10:20:26.164651 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x4fj5\" (UniqueName: \"kubernetes.io/projected/cf2b9835-4682-46ed-bc21-509455274aba-kube-api-access-x4fj5\") pod \"dnsmasq-dns-66d5bf7c87-h2f4v\" (UID: \"cf2b9835-4682-46ed-bc21-509455274aba\") " pod="openstack/dnsmasq-dns-66d5bf7c87-h2f4v" Mar 11 10:20:26 crc kubenswrapper[4840]: I0311 10:20:26.242868 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-66d5bf7c87-h2f4v" Mar 11 10:20:26 crc kubenswrapper[4840]: I0311 10:20:26.717868 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-66d5bf7c87-h2f4v"] Mar 11 10:20:27 crc kubenswrapper[4840]: I0311 10:20:27.131887 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 11 10:20:27 crc kubenswrapper[4840]: I0311 10:20:27.283100 4840 generic.go:334] "Generic (PLEG): container finished" podID="cf2b9835-4682-46ed-bc21-509455274aba" containerID="d03c5fabda1b6aaf929787c91cfa7498b9698020852cebc78244fb808f9e62f2" exitCode=0 Mar 11 10:20:27 crc kubenswrapper[4840]: I0311 10:20:27.283522 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66d5bf7c87-h2f4v" event={"ID":"cf2b9835-4682-46ed-bc21-509455274aba","Type":"ContainerDied","Data":"d03c5fabda1b6aaf929787c91cfa7498b9698020852cebc78244fb808f9e62f2"} Mar 11 10:20:27 crc kubenswrapper[4840]: I0311 10:20:27.283551 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66d5bf7c87-h2f4v" event={"ID":"cf2b9835-4682-46ed-bc21-509455274aba","Type":"ContainerStarted","Data":"55aade4e3599d3daa197a6426524215c64e8328157dc849b9c3e9f17426df1e8"} Mar 11 10:20:27 crc kubenswrapper[4840]: I0311 10:20:27.446323 4840 patch_prober.go:28] interesting pod/machine-config-daemon-brtht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 11 10:20:27 crc kubenswrapper[4840]: I0311 10:20:27.446376 4840 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" Mar 11 10:20:27 crc kubenswrapper[4840]: I0311 10:20:27.446416 4840 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-brtht" Mar 11 10:20:27 crc kubenswrapper[4840]: I0311 10:20:27.447060 4840 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"27425edf605a65072c6ef9c1e0b8cea49f613bc8e8eb94b258e15f23bc4a814b"} pod="openshift-machine-config-operator/machine-config-daemon-brtht" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 11 10:20:27 crc kubenswrapper[4840]: I0311 10:20:27.447124 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" containerName="machine-config-daemon" containerID="cri-o://27425edf605a65072c6ef9c1e0b8cea49f613bc8e8eb94b258e15f23bc4a814b" gracePeriod=600 Mar 11 10:20:27 crc kubenswrapper[4840]: E0311 10:20:27.570248 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 10:20:28 crc kubenswrapper[4840]: I0311 10:20:28.263781 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 11 10:20:28 crc kubenswrapper[4840]: I0311 10:20:28.299082 4840 generic.go:334] "Generic (PLEG): container finished" podID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" containerID="27425edf605a65072c6ef9c1e0b8cea49f613bc8e8eb94b258e15f23bc4a814b" exitCode=0 Mar 11 10:20:28 crc 
kubenswrapper[4840]: I0311 10:20:28.299169 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-brtht" event={"ID":"8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d","Type":"ContainerDied","Data":"27425edf605a65072c6ef9c1e0b8cea49f613bc8e8eb94b258e15f23bc4a814b"} Mar 11 10:20:28 crc kubenswrapper[4840]: I0311 10:20:28.299210 4840 scope.go:117] "RemoveContainer" containerID="32974af3682f2327823dccca8b0498904823622932f646c199fcd97fe88fc627" Mar 11 10:20:28 crc kubenswrapper[4840]: I0311 10:20:28.299904 4840 scope.go:117] "RemoveContainer" containerID="27425edf605a65072c6ef9c1e0b8cea49f613bc8e8eb94b258e15f23bc4a814b" Mar 11 10:20:28 crc kubenswrapper[4840]: E0311 10:20:28.300351 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 10:20:28 crc kubenswrapper[4840]: I0311 10:20:28.309632 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66d5bf7c87-h2f4v" event={"ID":"cf2b9835-4682-46ed-bc21-509455274aba","Type":"ContainerStarted","Data":"4ccbbb08acb3cc73b5c3a9ea8acec564de2673b580ca4ab46e97c9eb7921f7db"} Mar 11 10:20:28 crc kubenswrapper[4840]: I0311 10:20:28.310763 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-66d5bf7c87-h2f4v" Mar 11 10:20:28 crc kubenswrapper[4840]: I0311 10:20:28.392441 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-66d5bf7c87-h2f4v" podStartSLOduration=3.392422564 podStartE2EDuration="3.392422564s" podCreationTimestamp="2026-03-11 10:20:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 10:20:28.385090167 +0000 UTC m=+5027.050759982" watchObservedRunningTime="2026-03-11 10:20:28.392422564 +0000 UTC m=+5027.058092379" Mar 11 10:20:31 crc kubenswrapper[4840]: I0311 10:20:31.382386 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="c43faf74-3eed-41b3-b43a-1f88b3e61044" containerName="rabbitmq" containerID="cri-o://f40e0b1fa05c0a3cc78ac5d9fe3822284c0e888fed4aadf62272ab1b25bff9e9" gracePeriod=604796 Mar 11 10:20:32 crc kubenswrapper[4840]: I0311 10:20:32.768493 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="0edf1389-2132-4752-8ba0-7aebb76e173f" containerName="rabbitmq" containerID="cri-o://f085459b0890b14c6469d39d59dd5852aa7121c21595a673fd7174fba6b83d02" gracePeriod=604796 Mar 11 10:20:36 crc kubenswrapper[4840]: I0311 10:20:36.244781 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-66d5bf7c87-h2f4v" Mar 11 10:20:36 crc kubenswrapper[4840]: I0311 10:20:36.318485 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-ff89b6977-nx62v"] Mar 11 10:20:36 crc kubenswrapper[4840]: I0311 10:20:36.319106 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-ff89b6977-nx62v" podUID="04092458-98a3-458a-acca-d10f849df4dc" containerName="dnsmasq-dns" containerID="cri-o://d4bb505935223181b520529d4f9fd452f2d41b55c815d79e191432956598cd21" gracePeriod=10 Mar 11 10:20:36 crc kubenswrapper[4840]: I0311 10:20:36.387575 4840 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="0edf1389-2132-4752-8ba0-7aebb76e173f" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.1.25:5671: connect: connection refused" Mar 11 10:20:36 crc kubenswrapper[4840]: I0311 
10:20:36.790353 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-ff89b6977-nx62v" Mar 11 10:20:36 crc kubenswrapper[4840]: I0311 10:20:36.838601 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/04092458-98a3-458a-acca-d10f849df4dc-dns-svc\") pod \"04092458-98a3-458a-acca-d10f849df4dc\" (UID: \"04092458-98a3-458a-acca-d10f849df4dc\") " Mar 11 10:20:36 crc kubenswrapper[4840]: I0311 10:20:36.838639 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hjh87\" (UniqueName: \"kubernetes.io/projected/04092458-98a3-458a-acca-d10f849df4dc-kube-api-access-hjh87\") pod \"04092458-98a3-458a-acca-d10f849df4dc\" (UID: \"04092458-98a3-458a-acca-d10f849df4dc\") " Mar 11 10:20:36 crc kubenswrapper[4840]: I0311 10:20:36.838673 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04092458-98a3-458a-acca-d10f849df4dc-config\") pod \"04092458-98a3-458a-acca-d10f849df4dc\" (UID: \"04092458-98a3-458a-acca-d10f849df4dc\") " Mar 11 10:20:36 crc kubenswrapper[4840]: I0311 10:20:36.846221 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04092458-98a3-458a-acca-d10f849df4dc-kube-api-access-hjh87" (OuterVolumeSpecName: "kube-api-access-hjh87") pod "04092458-98a3-458a-acca-d10f849df4dc" (UID: "04092458-98a3-458a-acca-d10f849df4dc"). InnerVolumeSpecName "kube-api-access-hjh87". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:20:36 crc kubenswrapper[4840]: I0311 10:20:36.872384 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04092458-98a3-458a-acca-d10f849df4dc-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "04092458-98a3-458a-acca-d10f849df4dc" (UID: "04092458-98a3-458a-acca-d10f849df4dc"). 
InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 10:20:36 crc kubenswrapper[4840]: I0311 10:20:36.876242 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04092458-98a3-458a-acca-d10f849df4dc-config" (OuterVolumeSpecName: "config") pod "04092458-98a3-458a-acca-d10f849df4dc" (UID: "04092458-98a3-458a-acca-d10f849df4dc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 10:20:36 crc kubenswrapper[4840]: I0311 10:20:36.940823 4840 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/04092458-98a3-458a-acca-d10f849df4dc-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 11 10:20:36 crc kubenswrapper[4840]: I0311 10:20:36.940861 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hjh87\" (UniqueName: \"kubernetes.io/projected/04092458-98a3-458a-acca-d10f849df4dc-kube-api-access-hjh87\") on node \"crc\" DevicePath \"\"" Mar 11 10:20:36 crc kubenswrapper[4840]: I0311 10:20:36.940873 4840 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04092458-98a3-458a-acca-d10f849df4dc-config\") on node \"crc\" DevicePath \"\"" Mar 11 10:20:37 crc kubenswrapper[4840]: I0311 10:20:37.058372 4840 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="c43faf74-3eed-41b3-b43a-1f88b3e61044" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.1.26:5671: connect: connection refused" Mar 11 10:20:37 crc kubenswrapper[4840]: I0311 10:20:37.394804 4840 generic.go:334] "Generic (PLEG): container finished" podID="04092458-98a3-458a-acca-d10f849df4dc" containerID="d4bb505935223181b520529d4f9fd452f2d41b55c815d79e191432956598cd21" exitCode=0 Mar 11 10:20:37 crc kubenswrapper[4840]: I0311 10:20:37.394846 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-ff89b6977-nx62v" Mar 11 10:20:37 crc kubenswrapper[4840]: I0311 10:20:37.394870 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-ff89b6977-nx62v" event={"ID":"04092458-98a3-458a-acca-d10f849df4dc","Type":"ContainerDied","Data":"d4bb505935223181b520529d4f9fd452f2d41b55c815d79e191432956598cd21"} Mar 11 10:20:37 crc kubenswrapper[4840]: I0311 10:20:37.394921 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-ff89b6977-nx62v" event={"ID":"04092458-98a3-458a-acca-d10f849df4dc","Type":"ContainerDied","Data":"5bd8790237b778114eebc6325ea065fda4e36ebd1fe70bcfeb37ea924e6abeb6"} Mar 11 10:20:37 crc kubenswrapper[4840]: I0311 10:20:37.394958 4840 scope.go:117] "RemoveContainer" containerID="d4bb505935223181b520529d4f9fd452f2d41b55c815d79e191432956598cd21" Mar 11 10:20:37 crc kubenswrapper[4840]: I0311 10:20:37.432758 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-ff89b6977-nx62v"] Mar 11 10:20:37 crc kubenswrapper[4840]: I0311 10:20:37.437244 4840 scope.go:117] "RemoveContainer" containerID="59f78d146edc102920d95dbe4074e05e5856b0fba3a215181a547a27b6369101" Mar 11 10:20:37 crc kubenswrapper[4840]: I0311 10:20:37.437770 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-ff89b6977-nx62v"] Mar 11 10:20:37 crc kubenswrapper[4840]: I0311 10:20:37.495755 4840 scope.go:117] "RemoveContainer" containerID="d4bb505935223181b520529d4f9fd452f2d41b55c815d79e191432956598cd21" Mar 11 10:20:37 crc kubenswrapper[4840]: E0311 10:20:37.496439 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d4bb505935223181b520529d4f9fd452f2d41b55c815d79e191432956598cd21\": container with ID starting with d4bb505935223181b520529d4f9fd452f2d41b55c815d79e191432956598cd21 not found: ID does not exist" 
containerID="d4bb505935223181b520529d4f9fd452f2d41b55c815d79e191432956598cd21" Mar 11 10:20:37 crc kubenswrapper[4840]: I0311 10:20:37.496481 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4bb505935223181b520529d4f9fd452f2d41b55c815d79e191432956598cd21"} err="failed to get container status \"d4bb505935223181b520529d4f9fd452f2d41b55c815d79e191432956598cd21\": rpc error: code = NotFound desc = could not find container \"d4bb505935223181b520529d4f9fd452f2d41b55c815d79e191432956598cd21\": container with ID starting with d4bb505935223181b520529d4f9fd452f2d41b55c815d79e191432956598cd21 not found: ID does not exist" Mar 11 10:20:37 crc kubenswrapper[4840]: I0311 10:20:37.496503 4840 scope.go:117] "RemoveContainer" containerID="59f78d146edc102920d95dbe4074e05e5856b0fba3a215181a547a27b6369101" Mar 11 10:20:37 crc kubenswrapper[4840]: E0311 10:20:37.496970 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59f78d146edc102920d95dbe4074e05e5856b0fba3a215181a547a27b6369101\": container with ID starting with 59f78d146edc102920d95dbe4074e05e5856b0fba3a215181a547a27b6369101 not found: ID does not exist" containerID="59f78d146edc102920d95dbe4074e05e5856b0fba3a215181a547a27b6369101" Mar 11 10:20:37 crc kubenswrapper[4840]: I0311 10:20:37.497027 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59f78d146edc102920d95dbe4074e05e5856b0fba3a215181a547a27b6369101"} err="failed to get container status \"59f78d146edc102920d95dbe4074e05e5856b0fba3a215181a547a27b6369101\": rpc error: code = NotFound desc = could not find container \"59f78d146edc102920d95dbe4074e05e5856b0fba3a215181a547a27b6369101\": container with ID starting with 59f78d146edc102920d95dbe4074e05e5856b0fba3a215181a547a27b6369101 not found: ID does not exist" Mar 11 10:20:38 crc kubenswrapper[4840]: I0311 10:20:38.044952 4840 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 11 10:20:38 crc kubenswrapper[4840]: I0311 10:20:38.086042 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04092458-98a3-458a-acca-d10f849df4dc" path="/var/lib/kubelet/pods/04092458-98a3-458a-acca-d10f849df4dc/volumes" Mar 11 10:20:38 crc kubenswrapper[4840]: I0311 10:20:38.172051 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c43faf74-3eed-41b3-b43a-1f88b3e61044-server-conf\") pod \"c43faf74-3eed-41b3-b43a-1f88b3e61044\" (UID: \"c43faf74-3eed-41b3-b43a-1f88b3e61044\") " Mar 11 10:20:38 crc kubenswrapper[4840]: I0311 10:20:38.172373 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-51a03c95-e0c9-425a-99a1-ac20e58b2984\") pod \"c43faf74-3eed-41b3-b43a-1f88b3e61044\" (UID: \"c43faf74-3eed-41b3-b43a-1f88b3e61044\") " Mar 11 10:20:38 crc kubenswrapper[4840]: I0311 10:20:38.172422 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c43faf74-3eed-41b3-b43a-1f88b3e61044-rabbitmq-erlang-cookie\") pod \"c43faf74-3eed-41b3-b43a-1f88b3e61044\" (UID: \"c43faf74-3eed-41b3-b43a-1f88b3e61044\") " Mar 11 10:20:38 crc kubenswrapper[4840]: I0311 10:20:38.172505 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x6xcn\" (UniqueName: \"kubernetes.io/projected/c43faf74-3eed-41b3-b43a-1f88b3e61044-kube-api-access-x6xcn\") pod \"c43faf74-3eed-41b3-b43a-1f88b3e61044\" (UID: \"c43faf74-3eed-41b3-b43a-1f88b3e61044\") " Mar 11 10:20:38 crc kubenswrapper[4840]: I0311 10:20:38.172636 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/c43faf74-3eed-41b3-b43a-1f88b3e61044-plugins-conf\") pod \"c43faf74-3eed-41b3-b43a-1f88b3e61044\" (UID: \"c43faf74-3eed-41b3-b43a-1f88b3e61044\") " Mar 11 10:20:38 crc kubenswrapper[4840]: I0311 10:20:38.172685 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c43faf74-3eed-41b3-b43a-1f88b3e61044-config-data\") pod \"c43faf74-3eed-41b3-b43a-1f88b3e61044\" (UID: \"c43faf74-3eed-41b3-b43a-1f88b3e61044\") " Mar 11 10:20:38 crc kubenswrapper[4840]: I0311 10:20:38.172746 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c43faf74-3eed-41b3-b43a-1f88b3e61044-rabbitmq-tls\") pod \"c43faf74-3eed-41b3-b43a-1f88b3e61044\" (UID: \"c43faf74-3eed-41b3-b43a-1f88b3e61044\") " Mar 11 10:20:38 crc kubenswrapper[4840]: I0311 10:20:38.172779 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c43faf74-3eed-41b3-b43a-1f88b3e61044-rabbitmq-confd\") pod \"c43faf74-3eed-41b3-b43a-1f88b3e61044\" (UID: \"c43faf74-3eed-41b3-b43a-1f88b3e61044\") " Mar 11 10:20:38 crc kubenswrapper[4840]: I0311 10:20:38.172821 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c43faf74-3eed-41b3-b43a-1f88b3e61044-rabbitmq-plugins\") pod \"c43faf74-3eed-41b3-b43a-1f88b3e61044\" (UID: \"c43faf74-3eed-41b3-b43a-1f88b3e61044\") " Mar 11 10:20:38 crc kubenswrapper[4840]: I0311 10:20:38.172858 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c43faf74-3eed-41b3-b43a-1f88b3e61044-pod-info\") pod \"c43faf74-3eed-41b3-b43a-1f88b3e61044\" (UID: \"c43faf74-3eed-41b3-b43a-1f88b3e61044\") " Mar 11 10:20:38 crc kubenswrapper[4840]: I0311 10:20:38.172963 4840 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c43faf74-3eed-41b3-b43a-1f88b3e61044-erlang-cookie-secret\") pod \"c43faf74-3eed-41b3-b43a-1f88b3e61044\" (UID: \"c43faf74-3eed-41b3-b43a-1f88b3e61044\") " Mar 11 10:20:38 crc kubenswrapper[4840]: I0311 10:20:38.175624 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c43faf74-3eed-41b3-b43a-1f88b3e61044-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "c43faf74-3eed-41b3-b43a-1f88b3e61044" (UID: "c43faf74-3eed-41b3-b43a-1f88b3e61044"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 10:20:38 crc kubenswrapper[4840]: I0311 10:20:38.175741 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c43faf74-3eed-41b3-b43a-1f88b3e61044-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "c43faf74-3eed-41b3-b43a-1f88b3e61044" (UID: "c43faf74-3eed-41b3-b43a-1f88b3e61044"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 10:20:38 crc kubenswrapper[4840]: I0311 10:20:38.176088 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c43faf74-3eed-41b3-b43a-1f88b3e61044-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "c43faf74-3eed-41b3-b43a-1f88b3e61044" (UID: "c43faf74-3eed-41b3-b43a-1f88b3e61044"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 10:20:38 crc kubenswrapper[4840]: I0311 10:20:38.179179 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c43faf74-3eed-41b3-b43a-1f88b3e61044-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "c43faf74-3eed-41b3-b43a-1f88b3e61044" (UID: "c43faf74-3eed-41b3-b43a-1f88b3e61044"). 
InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:20:38 crc kubenswrapper[4840]: I0311 10:20:38.179232 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c43faf74-3eed-41b3-b43a-1f88b3e61044-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "c43faf74-3eed-41b3-b43a-1f88b3e61044" (UID: "c43faf74-3eed-41b3-b43a-1f88b3e61044"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 10:20:38 crc kubenswrapper[4840]: I0311 10:20:38.194267 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/c43faf74-3eed-41b3-b43a-1f88b3e61044-pod-info" (OuterVolumeSpecName: "pod-info") pod "c43faf74-3eed-41b3-b43a-1f88b3e61044" (UID: "c43faf74-3eed-41b3-b43a-1f88b3e61044"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Mar 11 10:20:38 crc kubenswrapper[4840]: I0311 10:20:38.194359 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c43faf74-3eed-41b3-b43a-1f88b3e61044-kube-api-access-x6xcn" (OuterVolumeSpecName: "kube-api-access-x6xcn") pod "c43faf74-3eed-41b3-b43a-1f88b3e61044" (UID: "c43faf74-3eed-41b3-b43a-1f88b3e61044"). InnerVolumeSpecName "kube-api-access-x6xcn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:20:38 crc kubenswrapper[4840]: I0311 10:20:38.207562 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-51a03c95-e0c9-425a-99a1-ac20e58b2984" (OuterVolumeSpecName: "persistence") pod "c43faf74-3eed-41b3-b43a-1f88b3e61044" (UID: "c43faf74-3eed-41b3-b43a-1f88b3e61044"). InnerVolumeSpecName "pvc-51a03c95-e0c9-425a-99a1-ac20e58b2984". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 11 10:20:38 crc kubenswrapper[4840]: I0311 10:20:38.210264 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c43faf74-3eed-41b3-b43a-1f88b3e61044-config-data" (OuterVolumeSpecName: "config-data") pod "c43faf74-3eed-41b3-b43a-1f88b3e61044" (UID: "c43faf74-3eed-41b3-b43a-1f88b3e61044"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 10:20:38 crc kubenswrapper[4840]: I0311 10:20:38.228586 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c43faf74-3eed-41b3-b43a-1f88b3e61044-server-conf" (OuterVolumeSpecName: "server-conf") pod "c43faf74-3eed-41b3-b43a-1f88b3e61044" (UID: "c43faf74-3eed-41b3-b43a-1f88b3e61044"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 10:20:38 crc kubenswrapper[4840]: I0311 10:20:38.276222 4840 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-51a03c95-e0c9-425a-99a1-ac20e58b2984\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-51a03c95-e0c9-425a-99a1-ac20e58b2984\") on node \"crc\" " Mar 11 10:20:38 crc kubenswrapper[4840]: I0311 10:20:38.276275 4840 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c43faf74-3eed-41b3-b43a-1f88b3e61044-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Mar 11 10:20:38 crc kubenswrapper[4840]: I0311 10:20:38.276288 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x6xcn\" (UniqueName: \"kubernetes.io/projected/c43faf74-3eed-41b3-b43a-1f88b3e61044-kube-api-access-x6xcn\") on node \"crc\" DevicePath \"\"" Mar 11 10:20:38 crc kubenswrapper[4840]: I0311 10:20:38.276298 4840 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/c43faf74-3eed-41b3-b43a-1f88b3e61044-plugins-conf\") on node \"crc\" DevicePath \"\"" Mar 11 10:20:38 crc kubenswrapper[4840]: I0311 10:20:38.276309 4840 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c43faf74-3eed-41b3-b43a-1f88b3e61044-config-data\") on node \"crc\" DevicePath \"\"" Mar 11 10:20:38 crc kubenswrapper[4840]: I0311 10:20:38.276318 4840 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c43faf74-3eed-41b3-b43a-1f88b3e61044-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Mar 11 10:20:38 crc kubenswrapper[4840]: I0311 10:20:38.276346 4840 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c43faf74-3eed-41b3-b43a-1f88b3e61044-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Mar 11 10:20:38 crc kubenswrapper[4840]: I0311 10:20:38.276555 4840 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c43faf74-3eed-41b3-b43a-1f88b3e61044-pod-info\") on node \"crc\" DevicePath \"\"" Mar 11 10:20:38 crc kubenswrapper[4840]: I0311 10:20:38.276603 4840 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c43faf74-3eed-41b3-b43a-1f88b3e61044-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Mar 11 10:20:38 crc kubenswrapper[4840]: I0311 10:20:38.276623 4840 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c43faf74-3eed-41b3-b43a-1f88b3e61044-server-conf\") on node \"crc\" DevicePath \"\"" Mar 11 10:20:38 crc kubenswrapper[4840]: I0311 10:20:38.285733 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c43faf74-3eed-41b3-b43a-1f88b3e61044-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod 
"c43faf74-3eed-41b3-b43a-1f88b3e61044" (UID: "c43faf74-3eed-41b3-b43a-1f88b3e61044"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:20:38 crc kubenswrapper[4840]: I0311 10:20:38.308017 4840 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Mar 11 10:20:38 crc kubenswrapper[4840]: I0311 10:20:38.308451 4840 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-51a03c95-e0c9-425a-99a1-ac20e58b2984" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-51a03c95-e0c9-425a-99a1-ac20e58b2984") on node "crc" Mar 11 10:20:38 crc kubenswrapper[4840]: I0311 10:20:38.378014 4840 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c43faf74-3eed-41b3-b43a-1f88b3e61044-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Mar 11 10:20:38 crc kubenswrapper[4840]: I0311 10:20:38.378318 4840 reconciler_common.go:293] "Volume detached for volume \"pvc-51a03c95-e0c9-425a-99a1-ac20e58b2984\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-51a03c95-e0c9-425a-99a1-ac20e58b2984\") on node \"crc\" DevicePath \"\"" Mar 11 10:20:38 crc kubenswrapper[4840]: I0311 10:20:38.404714 4840 generic.go:334] "Generic (PLEG): container finished" podID="c43faf74-3eed-41b3-b43a-1f88b3e61044" containerID="f40e0b1fa05c0a3cc78ac5d9fe3822284c0e888fed4aadf62272ab1b25bff9e9" exitCode=0 Mar 11 10:20:38 crc kubenswrapper[4840]: I0311 10:20:38.404771 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"c43faf74-3eed-41b3-b43a-1f88b3e61044","Type":"ContainerDied","Data":"f40e0b1fa05c0a3cc78ac5d9fe3822284c0e888fed4aadf62272ab1b25bff9e9"} Mar 11 10:20:38 crc kubenswrapper[4840]: I0311 10:20:38.404797 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" 
event={"ID":"c43faf74-3eed-41b3-b43a-1f88b3e61044","Type":"ContainerDied","Data":"257a8fcb2aa0f460cb98db3c491dd4cd10206df78a539c66594ac721f279e13c"} Mar 11 10:20:38 crc kubenswrapper[4840]: I0311 10:20:38.404812 4840 scope.go:117] "RemoveContainer" containerID="f40e0b1fa05c0a3cc78ac5d9fe3822284c0e888fed4aadf62272ab1b25bff9e9" Mar 11 10:20:38 crc kubenswrapper[4840]: I0311 10:20:38.404926 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 11 10:20:38 crc kubenswrapper[4840]: I0311 10:20:38.426376 4840 scope.go:117] "RemoveContainer" containerID="75f42df2c621d765374bb4403a70a1345a3f6d43262771e1fe8435540cdd63df" Mar 11 10:20:38 crc kubenswrapper[4840]: I0311 10:20:38.438523 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 11 10:20:38 crc kubenswrapper[4840]: I0311 10:20:38.443627 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 11 10:20:38 crc kubenswrapper[4840]: I0311 10:20:38.479310 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Mar 11 10:20:38 crc kubenswrapper[4840]: E0311 10:20:38.479908 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04092458-98a3-458a-acca-d10f849df4dc" containerName="dnsmasq-dns" Mar 11 10:20:38 crc kubenswrapper[4840]: I0311 10:20:38.479933 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="04092458-98a3-458a-acca-d10f849df4dc" containerName="dnsmasq-dns" Mar 11 10:20:38 crc kubenswrapper[4840]: E0311 10:20:38.479973 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c43faf74-3eed-41b3-b43a-1f88b3e61044" containerName="setup-container" Mar 11 10:20:38 crc kubenswrapper[4840]: I0311 10:20:38.479984 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="c43faf74-3eed-41b3-b43a-1f88b3e61044" containerName="setup-container" Mar 11 10:20:38 crc kubenswrapper[4840]: E0311 10:20:38.480000 4840 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c43faf74-3eed-41b3-b43a-1f88b3e61044" containerName="rabbitmq" Mar 11 10:20:38 crc kubenswrapper[4840]: I0311 10:20:38.480013 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="c43faf74-3eed-41b3-b43a-1f88b3e61044" containerName="rabbitmq" Mar 11 10:20:38 crc kubenswrapper[4840]: E0311 10:20:38.480058 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04092458-98a3-458a-acca-d10f849df4dc" containerName="init" Mar 11 10:20:38 crc kubenswrapper[4840]: I0311 10:20:38.480067 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="04092458-98a3-458a-acca-d10f849df4dc" containerName="init" Mar 11 10:20:38 crc kubenswrapper[4840]: I0311 10:20:38.480565 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="c43faf74-3eed-41b3-b43a-1f88b3e61044" containerName="rabbitmq" Mar 11 10:20:38 crc kubenswrapper[4840]: I0311 10:20:38.480600 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="04092458-98a3-458a-acca-d10f849df4dc" containerName="dnsmasq-dns" Mar 11 10:20:38 crc kubenswrapper[4840]: I0311 10:20:38.483102 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 11 10:20:38 crc kubenswrapper[4840]: I0311 10:20:38.486296 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Mar 11 10:20:38 crc kubenswrapper[4840]: I0311 10:20:38.486516 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Mar 11 10:20:38 crc kubenswrapper[4840]: I0311 10:20:38.486757 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Mar 11 10:20:38 crc kubenswrapper[4840]: I0311 10:20:38.486894 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Mar 11 10:20:38 crc kubenswrapper[4840]: I0311 10:20:38.487028 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Mar 11 10:20:38 crc kubenswrapper[4840]: I0311 10:20:38.487062 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-cnlqc" Mar 11 10:20:38 crc kubenswrapper[4840]: I0311 10:20:38.487780 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Mar 11 10:20:38 crc kubenswrapper[4840]: I0311 10:20:38.499934 4840 scope.go:117] "RemoveContainer" containerID="f40e0b1fa05c0a3cc78ac5d9fe3822284c0e888fed4aadf62272ab1b25bff9e9" Mar 11 10:20:38 crc kubenswrapper[4840]: E0311 10:20:38.500631 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f40e0b1fa05c0a3cc78ac5d9fe3822284c0e888fed4aadf62272ab1b25bff9e9\": container with ID starting with f40e0b1fa05c0a3cc78ac5d9fe3822284c0e888fed4aadf62272ab1b25bff9e9 not found: ID does not exist" containerID="f40e0b1fa05c0a3cc78ac5d9fe3822284c0e888fed4aadf62272ab1b25bff9e9" Mar 11 10:20:38 crc kubenswrapper[4840]: I0311 10:20:38.500676 4840 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"f40e0b1fa05c0a3cc78ac5d9fe3822284c0e888fed4aadf62272ab1b25bff9e9"} err="failed to get container status \"f40e0b1fa05c0a3cc78ac5d9fe3822284c0e888fed4aadf62272ab1b25bff9e9\": rpc error: code = NotFound desc = could not find container \"f40e0b1fa05c0a3cc78ac5d9fe3822284c0e888fed4aadf62272ab1b25bff9e9\": container with ID starting with f40e0b1fa05c0a3cc78ac5d9fe3822284c0e888fed4aadf62272ab1b25bff9e9 not found: ID does not exist" Mar 11 10:20:38 crc kubenswrapper[4840]: I0311 10:20:38.500703 4840 scope.go:117] "RemoveContainer" containerID="75f42df2c621d765374bb4403a70a1345a3f6d43262771e1fe8435540cdd63df" Mar 11 10:20:38 crc kubenswrapper[4840]: E0311 10:20:38.501171 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75f42df2c621d765374bb4403a70a1345a3f6d43262771e1fe8435540cdd63df\": container with ID starting with 75f42df2c621d765374bb4403a70a1345a3f6d43262771e1fe8435540cdd63df not found: ID does not exist" containerID="75f42df2c621d765374bb4403a70a1345a3f6d43262771e1fe8435540cdd63df" Mar 11 10:20:38 crc kubenswrapper[4840]: I0311 10:20:38.501211 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75f42df2c621d765374bb4403a70a1345a3f6d43262771e1fe8435540cdd63df"} err="failed to get container status \"75f42df2c621d765374bb4403a70a1345a3f6d43262771e1fe8435540cdd63df\": rpc error: code = NotFound desc = could not find container \"75f42df2c621d765374bb4403a70a1345a3f6d43262771e1fe8435540cdd63df\": container with ID starting with 75f42df2c621d765374bb4403a70a1345a3f6d43262771e1fe8435540cdd63df not found: ID does not exist" Mar 11 10:20:38 crc kubenswrapper[4840]: I0311 10:20:38.505902 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 11 10:20:38 crc kubenswrapper[4840]: I0311 10:20:38.681628 4840 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0e29f879-7226-4e32-bdac-aa71b0755af5-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"0e29f879-7226-4e32-bdac-aa71b0755af5\") " pod="openstack/rabbitmq-server-0" Mar 11 10:20:38 crc kubenswrapper[4840]: I0311 10:20:38.681676 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0e29f879-7226-4e32-bdac-aa71b0755af5-server-conf\") pod \"rabbitmq-server-0\" (UID: \"0e29f879-7226-4e32-bdac-aa71b0755af5\") " pod="openstack/rabbitmq-server-0" Mar 11 10:20:38 crc kubenswrapper[4840]: I0311 10:20:38.681714 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0e29f879-7226-4e32-bdac-aa71b0755af5-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"0e29f879-7226-4e32-bdac-aa71b0755af5\") " pod="openstack/rabbitmq-server-0" Mar 11 10:20:38 crc kubenswrapper[4840]: I0311 10:20:38.681773 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0e29f879-7226-4e32-bdac-aa71b0755af5-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"0e29f879-7226-4e32-bdac-aa71b0755af5\") " pod="openstack/rabbitmq-server-0" Mar 11 10:20:38 crc kubenswrapper[4840]: I0311 10:20:38.681796 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4qqf\" (UniqueName: \"kubernetes.io/projected/0e29f879-7226-4e32-bdac-aa71b0755af5-kube-api-access-r4qqf\") pod \"rabbitmq-server-0\" (UID: \"0e29f879-7226-4e32-bdac-aa71b0755af5\") " pod="openstack/rabbitmq-server-0" Mar 11 10:20:38 crc kubenswrapper[4840]: I0311 10:20:38.681821 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0e29f879-7226-4e32-bdac-aa71b0755af5-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"0e29f879-7226-4e32-bdac-aa71b0755af5\") " pod="openstack/rabbitmq-server-0" Mar 11 10:20:38 crc kubenswrapper[4840]: I0311 10:20:38.681846 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0e29f879-7226-4e32-bdac-aa71b0755af5-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"0e29f879-7226-4e32-bdac-aa71b0755af5\") " pod="openstack/rabbitmq-server-0" Mar 11 10:20:38 crc kubenswrapper[4840]: I0311 10:20:38.681903 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0e29f879-7226-4e32-bdac-aa71b0755af5-pod-info\") pod \"rabbitmq-server-0\" (UID: \"0e29f879-7226-4e32-bdac-aa71b0755af5\") " pod="openstack/rabbitmq-server-0" Mar 11 10:20:38 crc kubenswrapper[4840]: I0311 10:20:38.681939 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-51a03c95-e0c9-425a-99a1-ac20e58b2984\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-51a03c95-e0c9-425a-99a1-ac20e58b2984\") pod \"rabbitmq-server-0\" (UID: \"0e29f879-7226-4e32-bdac-aa71b0755af5\") " pod="openstack/rabbitmq-server-0" Mar 11 10:20:38 crc kubenswrapper[4840]: I0311 10:20:38.681954 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0e29f879-7226-4e32-bdac-aa71b0755af5-config-data\") pod \"rabbitmq-server-0\" (UID: \"0e29f879-7226-4e32-bdac-aa71b0755af5\") " pod="openstack/rabbitmq-server-0" Mar 11 10:20:38 crc kubenswrapper[4840]: I0311 10:20:38.682119 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0e29f879-7226-4e32-bdac-aa71b0755af5-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"0e29f879-7226-4e32-bdac-aa71b0755af5\") " pod="openstack/rabbitmq-server-0" Mar 11 10:20:38 crc kubenswrapper[4840]: I0311 10:20:38.784004 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0e29f879-7226-4e32-bdac-aa71b0755af5-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"0e29f879-7226-4e32-bdac-aa71b0755af5\") " pod="openstack/rabbitmq-server-0" Mar 11 10:20:38 crc kubenswrapper[4840]: I0311 10:20:38.784137 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0e29f879-7226-4e32-bdac-aa71b0755af5-pod-info\") pod \"rabbitmq-server-0\" (UID: \"0e29f879-7226-4e32-bdac-aa71b0755af5\") " pod="openstack/rabbitmq-server-0" Mar 11 10:20:38 crc kubenswrapper[4840]: I0311 10:20:38.784197 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-51a03c95-e0c9-425a-99a1-ac20e58b2984\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-51a03c95-e0c9-425a-99a1-ac20e58b2984\") pod \"rabbitmq-server-0\" (UID: \"0e29f879-7226-4e32-bdac-aa71b0755af5\") " pod="openstack/rabbitmq-server-0" Mar 11 10:20:38 crc kubenswrapper[4840]: I0311 10:20:38.784230 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0e29f879-7226-4e32-bdac-aa71b0755af5-config-data\") pod \"rabbitmq-server-0\" (UID: \"0e29f879-7226-4e32-bdac-aa71b0755af5\") " pod="openstack/rabbitmq-server-0" Mar 11 10:20:38 crc kubenswrapper[4840]: I0311 10:20:38.784286 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0e29f879-7226-4e32-bdac-aa71b0755af5-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: 
\"0e29f879-7226-4e32-bdac-aa71b0755af5\") " pod="openstack/rabbitmq-server-0" Mar 11 10:20:38 crc kubenswrapper[4840]: I0311 10:20:38.784341 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0e29f879-7226-4e32-bdac-aa71b0755af5-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"0e29f879-7226-4e32-bdac-aa71b0755af5\") " pod="openstack/rabbitmq-server-0" Mar 11 10:20:38 crc kubenswrapper[4840]: I0311 10:20:38.784378 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0e29f879-7226-4e32-bdac-aa71b0755af5-server-conf\") pod \"rabbitmq-server-0\" (UID: \"0e29f879-7226-4e32-bdac-aa71b0755af5\") " pod="openstack/rabbitmq-server-0" Mar 11 10:20:38 crc kubenswrapper[4840]: I0311 10:20:38.784432 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0e29f879-7226-4e32-bdac-aa71b0755af5-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"0e29f879-7226-4e32-bdac-aa71b0755af5\") " pod="openstack/rabbitmq-server-0" Mar 11 10:20:38 crc kubenswrapper[4840]: I0311 10:20:38.784513 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0e29f879-7226-4e32-bdac-aa71b0755af5-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"0e29f879-7226-4e32-bdac-aa71b0755af5\") " pod="openstack/rabbitmq-server-0" Mar 11 10:20:38 crc kubenswrapper[4840]: I0311 10:20:38.784554 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4qqf\" (UniqueName: \"kubernetes.io/projected/0e29f879-7226-4e32-bdac-aa71b0755af5-kube-api-access-r4qqf\") pod \"rabbitmq-server-0\" (UID: \"0e29f879-7226-4e32-bdac-aa71b0755af5\") " pod="openstack/rabbitmq-server-0" Mar 11 10:20:38 crc kubenswrapper[4840]: I0311 10:20:38.784596 4840 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0e29f879-7226-4e32-bdac-aa71b0755af5-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"0e29f879-7226-4e32-bdac-aa71b0755af5\") " pod="openstack/rabbitmq-server-0" Mar 11 10:20:38 crc kubenswrapper[4840]: I0311 10:20:38.785877 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0e29f879-7226-4e32-bdac-aa71b0755af5-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"0e29f879-7226-4e32-bdac-aa71b0755af5\") " pod="openstack/rabbitmq-server-0" Mar 11 10:20:38 crc kubenswrapper[4840]: I0311 10:20:38.785927 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0e29f879-7226-4e32-bdac-aa71b0755af5-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"0e29f879-7226-4e32-bdac-aa71b0755af5\") " pod="openstack/rabbitmq-server-0" Mar 11 10:20:38 crc kubenswrapper[4840]: I0311 10:20:38.786646 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0e29f879-7226-4e32-bdac-aa71b0755af5-config-data\") pod \"rabbitmq-server-0\" (UID: \"0e29f879-7226-4e32-bdac-aa71b0755af5\") " pod="openstack/rabbitmq-server-0" Mar 11 10:20:38 crc kubenswrapper[4840]: I0311 10:20:38.787213 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0e29f879-7226-4e32-bdac-aa71b0755af5-server-conf\") pod \"rabbitmq-server-0\" (UID: \"0e29f879-7226-4e32-bdac-aa71b0755af5\") " pod="openstack/rabbitmq-server-0" Mar 11 10:20:38 crc kubenswrapper[4840]: I0311 10:20:38.787634 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/0e29f879-7226-4e32-bdac-aa71b0755af5-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"0e29f879-7226-4e32-bdac-aa71b0755af5\") " pod="openstack/rabbitmq-server-0" Mar 11 10:20:38 crc kubenswrapper[4840]: I0311 10:20:38.788328 4840 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 11 10:20:38 crc kubenswrapper[4840]: I0311 10:20:38.788367 4840 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-51a03c95-e0c9-425a-99a1-ac20e58b2984\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-51a03c95-e0c9-425a-99a1-ac20e58b2984\") pod \"rabbitmq-server-0\" (UID: \"0e29f879-7226-4e32-bdac-aa71b0755af5\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e661e31f7f828e08a1e2de55395925095e4a489158f296ef6dc6779816098759/globalmount\"" pod="openstack/rabbitmq-server-0" Mar 11 10:20:38 crc kubenswrapper[4840]: I0311 10:20:38.789607 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0e29f879-7226-4e32-bdac-aa71b0755af5-pod-info\") pod \"rabbitmq-server-0\" (UID: \"0e29f879-7226-4e32-bdac-aa71b0755af5\") " pod="openstack/rabbitmq-server-0" Mar 11 10:20:38 crc kubenswrapper[4840]: I0311 10:20:38.790563 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0e29f879-7226-4e32-bdac-aa71b0755af5-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"0e29f879-7226-4e32-bdac-aa71b0755af5\") " pod="openstack/rabbitmq-server-0" Mar 11 10:20:38 crc kubenswrapper[4840]: I0311 10:20:38.790743 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0e29f879-7226-4e32-bdac-aa71b0755af5-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: 
\"0e29f879-7226-4e32-bdac-aa71b0755af5\") " pod="openstack/rabbitmq-server-0" Mar 11 10:20:38 crc kubenswrapper[4840]: I0311 10:20:38.791316 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0e29f879-7226-4e32-bdac-aa71b0755af5-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"0e29f879-7226-4e32-bdac-aa71b0755af5\") " pod="openstack/rabbitmq-server-0" Mar 11 10:20:38 crc kubenswrapper[4840]: I0311 10:20:38.818001 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4qqf\" (UniqueName: \"kubernetes.io/projected/0e29f879-7226-4e32-bdac-aa71b0755af5-kube-api-access-r4qqf\") pod \"rabbitmq-server-0\" (UID: \"0e29f879-7226-4e32-bdac-aa71b0755af5\") " pod="openstack/rabbitmq-server-0" Mar 11 10:20:38 crc kubenswrapper[4840]: I0311 10:20:38.838282 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-51a03c95-e0c9-425a-99a1-ac20e58b2984\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-51a03c95-e0c9-425a-99a1-ac20e58b2984\") pod \"rabbitmq-server-0\" (UID: \"0e29f879-7226-4e32-bdac-aa71b0755af5\") " pod="openstack/rabbitmq-server-0" Mar 11 10:20:39 crc kubenswrapper[4840]: I0311 10:20:39.136106 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 11 10:20:39 crc kubenswrapper[4840]: I0311 10:20:39.278152 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:20:39 crc kubenswrapper[4840]: I0311 10:20:39.392410 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0edf1389-2132-4752-8ba0-7aebb76e173f-config-data\") pod \"0edf1389-2132-4752-8ba0-7aebb76e173f\" (UID: \"0edf1389-2132-4752-8ba0-7aebb76e173f\") " Mar 11 10:20:39 crc kubenswrapper[4840]: I0311 10:20:39.392958 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0edf1389-2132-4752-8ba0-7aebb76e173f-rabbitmq-erlang-cookie\") pod \"0edf1389-2132-4752-8ba0-7aebb76e173f\" (UID: \"0edf1389-2132-4752-8ba0-7aebb76e173f\") " Mar 11 10:20:39 crc kubenswrapper[4840]: I0311 10:20:39.392990 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0edf1389-2132-4752-8ba0-7aebb76e173f-erlang-cookie-secret\") pod \"0edf1389-2132-4752-8ba0-7aebb76e173f\" (UID: \"0edf1389-2132-4752-8ba0-7aebb76e173f\") " Mar 11 10:20:39 crc kubenswrapper[4840]: I0311 10:20:39.393061 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0edf1389-2132-4752-8ba0-7aebb76e173f-rabbitmq-confd\") pod \"0edf1389-2132-4752-8ba0-7aebb76e173f\" (UID: \"0edf1389-2132-4752-8ba0-7aebb76e173f\") " Mar 11 10:20:39 crc kubenswrapper[4840]: I0311 10:20:39.393126 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0edf1389-2132-4752-8ba0-7aebb76e173f-plugins-conf\") pod \"0edf1389-2132-4752-8ba0-7aebb76e173f\" (UID: \"0edf1389-2132-4752-8ba0-7aebb76e173f\") " Mar 11 10:20:39 crc kubenswrapper[4840]: I0311 10:20:39.393179 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0edf1389-2132-4752-8ba0-7aebb76e173f-rabbitmq-plugins\") pod \"0edf1389-2132-4752-8ba0-7aebb76e173f\" (UID: \"0edf1389-2132-4752-8ba0-7aebb76e173f\") " Mar 11 10:20:39 crc kubenswrapper[4840]: I0311 10:20:39.393296 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0edf1389-2132-4752-8ba0-7aebb76e173f-pod-info\") pod \"0edf1389-2132-4752-8ba0-7aebb76e173f\" (UID: \"0edf1389-2132-4752-8ba0-7aebb76e173f\") " Mar 11 10:20:39 crc kubenswrapper[4840]: I0311 10:20:39.393325 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0edf1389-2132-4752-8ba0-7aebb76e173f-server-conf\") pod \"0edf1389-2132-4752-8ba0-7aebb76e173f\" (UID: \"0edf1389-2132-4752-8ba0-7aebb76e173f\") " Mar 11 10:20:39 crc kubenswrapper[4840]: I0311 10:20:39.393355 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qdq8k\" (UniqueName: \"kubernetes.io/projected/0edf1389-2132-4752-8ba0-7aebb76e173f-kube-api-access-qdq8k\") pod \"0edf1389-2132-4752-8ba0-7aebb76e173f\" (UID: \"0edf1389-2132-4752-8ba0-7aebb76e173f\") " Mar 11 10:20:39 crc kubenswrapper[4840]: I0311 10:20:39.393563 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b1db2454-5981-4987-9557-8ca778576b93\") pod \"0edf1389-2132-4752-8ba0-7aebb76e173f\" (UID: \"0edf1389-2132-4752-8ba0-7aebb76e173f\") " Mar 11 10:20:39 crc kubenswrapper[4840]: I0311 10:20:39.393645 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0edf1389-2132-4752-8ba0-7aebb76e173f-rabbitmq-tls\") pod \"0edf1389-2132-4752-8ba0-7aebb76e173f\" (UID: \"0edf1389-2132-4752-8ba0-7aebb76e173f\") " Mar 11 
10:20:39 crc kubenswrapper[4840]: I0311 10:20:39.394124 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0edf1389-2132-4752-8ba0-7aebb76e173f-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "0edf1389-2132-4752-8ba0-7aebb76e173f" (UID: "0edf1389-2132-4752-8ba0-7aebb76e173f"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 10:20:39 crc kubenswrapper[4840]: I0311 10:20:39.396008 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0edf1389-2132-4752-8ba0-7aebb76e173f-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "0edf1389-2132-4752-8ba0-7aebb76e173f" (UID: "0edf1389-2132-4752-8ba0-7aebb76e173f"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 10:20:39 crc kubenswrapper[4840]: I0311 10:20:39.397326 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0edf1389-2132-4752-8ba0-7aebb76e173f-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "0edf1389-2132-4752-8ba0-7aebb76e173f" (UID: "0edf1389-2132-4752-8ba0-7aebb76e173f"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 10:20:39 crc kubenswrapper[4840]: I0311 10:20:39.397572 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0edf1389-2132-4752-8ba0-7aebb76e173f-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "0edf1389-2132-4752-8ba0-7aebb76e173f" (UID: "0edf1389-2132-4752-8ba0-7aebb76e173f"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 10:20:39 crc kubenswrapper[4840]: I0311 10:20:39.398321 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/0edf1389-2132-4752-8ba0-7aebb76e173f-pod-info" (OuterVolumeSpecName: "pod-info") pod "0edf1389-2132-4752-8ba0-7aebb76e173f" (UID: "0edf1389-2132-4752-8ba0-7aebb76e173f"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Mar 11 10:20:39 crc kubenswrapper[4840]: I0311 10:20:39.402460 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0edf1389-2132-4752-8ba0-7aebb76e173f-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "0edf1389-2132-4752-8ba0-7aebb76e173f" (UID: "0edf1389-2132-4752-8ba0-7aebb76e173f"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:20:39 crc kubenswrapper[4840]: I0311 10:20:39.403352 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0edf1389-2132-4752-8ba0-7aebb76e173f-kube-api-access-qdq8k" (OuterVolumeSpecName: "kube-api-access-qdq8k") pod "0edf1389-2132-4752-8ba0-7aebb76e173f" (UID: "0edf1389-2132-4752-8ba0-7aebb76e173f"). InnerVolumeSpecName "kube-api-access-qdq8k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:20:39 crc kubenswrapper[4840]: I0311 10:20:39.453522 4840 generic.go:334] "Generic (PLEG): container finished" podID="0edf1389-2132-4752-8ba0-7aebb76e173f" containerID="f085459b0890b14c6469d39d59dd5852aa7121c21595a673fd7174fba6b83d02" exitCode=0 Mar 11 10:20:39 crc kubenswrapper[4840]: I0311 10:20:39.453608 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"0edf1389-2132-4752-8ba0-7aebb76e173f","Type":"ContainerDied","Data":"f085459b0890b14c6469d39d59dd5852aa7121c21595a673fd7174fba6b83d02"} Mar 11 10:20:39 crc kubenswrapper[4840]: I0311 10:20:39.453646 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"0edf1389-2132-4752-8ba0-7aebb76e173f","Type":"ContainerDied","Data":"446dd7e5ac291728918bf09b546f03eca8b104f0fe99e4174102df0ea27c5caa"} Mar 11 10:20:39 crc kubenswrapper[4840]: I0311 10:20:39.453925 4840 scope.go:117] "RemoveContainer" containerID="f085459b0890b14c6469d39d59dd5852aa7121c21595a673fd7174fba6b83d02" Mar 11 10:20:39 crc kubenswrapper[4840]: I0311 10:20:39.454118 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:20:39 crc kubenswrapper[4840]: I0311 10:20:39.464252 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b1db2454-5981-4987-9557-8ca778576b93" (OuterVolumeSpecName: "persistence") pod "0edf1389-2132-4752-8ba0-7aebb76e173f" (UID: "0edf1389-2132-4752-8ba0-7aebb76e173f"). InnerVolumeSpecName "pvc-b1db2454-5981-4987-9557-8ca778576b93". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 11 10:20:39 crc kubenswrapper[4840]: I0311 10:20:39.509211 4840 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-b1db2454-5981-4987-9557-8ca778576b93\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b1db2454-5981-4987-9557-8ca778576b93\") on node \"crc\" " Mar 11 10:20:39 crc kubenswrapper[4840]: I0311 10:20:39.509245 4840 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0edf1389-2132-4752-8ba0-7aebb76e173f-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Mar 11 10:20:39 crc kubenswrapper[4840]: I0311 10:20:39.509258 4840 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0edf1389-2132-4752-8ba0-7aebb76e173f-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Mar 11 10:20:39 crc kubenswrapper[4840]: I0311 10:20:39.509268 4840 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0edf1389-2132-4752-8ba0-7aebb76e173f-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Mar 11 10:20:39 crc kubenswrapper[4840]: I0311 10:20:39.509277 4840 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0edf1389-2132-4752-8ba0-7aebb76e173f-plugins-conf\") on node \"crc\" DevicePath \"\"" Mar 11 10:20:39 crc kubenswrapper[4840]: I0311 10:20:39.509285 4840 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0edf1389-2132-4752-8ba0-7aebb76e173f-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Mar 11 10:20:39 crc kubenswrapper[4840]: I0311 10:20:39.509293 4840 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0edf1389-2132-4752-8ba0-7aebb76e173f-pod-info\") on node \"crc\" DevicePath \"\"" Mar 11 
10:20:39 crc kubenswrapper[4840]: I0311 10:20:39.509301 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qdq8k\" (UniqueName: \"kubernetes.io/projected/0edf1389-2132-4752-8ba0-7aebb76e173f-kube-api-access-qdq8k\") on node \"crc\" DevicePath \"\"" Mar 11 10:20:39 crc kubenswrapper[4840]: I0311 10:20:39.546109 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0edf1389-2132-4752-8ba0-7aebb76e173f-server-conf" (OuterVolumeSpecName: "server-conf") pod "0edf1389-2132-4752-8ba0-7aebb76e173f" (UID: "0edf1389-2132-4752-8ba0-7aebb76e173f"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 10:20:39 crc kubenswrapper[4840]: I0311 10:20:39.566049 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0edf1389-2132-4752-8ba0-7aebb76e173f-config-data" (OuterVolumeSpecName: "config-data") pod "0edf1389-2132-4752-8ba0-7aebb76e173f" (UID: "0edf1389-2132-4752-8ba0-7aebb76e173f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 10:20:39 crc kubenswrapper[4840]: I0311 10:20:39.606187 4840 scope.go:117] "RemoveContainer" containerID="c2a0b5a84a6bfa65674c2de1a2587ea05b454eda5d2a70a4fe5d0ab1cadeea1f" Mar 11 10:20:39 crc kubenswrapper[4840]: I0311 10:20:39.613285 4840 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0edf1389-2132-4752-8ba0-7aebb76e173f-server-conf\") on node \"crc\" DevicePath \"\"" Mar 11 10:20:39 crc kubenswrapper[4840]: I0311 10:20:39.613314 4840 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0edf1389-2132-4752-8ba0-7aebb76e173f-config-data\") on node \"crc\" DevicePath \"\"" Mar 11 10:20:39 crc kubenswrapper[4840]: I0311 10:20:39.621768 4840 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Mar 11 10:20:39 crc kubenswrapper[4840]: I0311 10:20:39.622019 4840 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-b1db2454-5981-4987-9557-8ca778576b93" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b1db2454-5981-4987-9557-8ca778576b93") on node "crc" Mar 11 10:20:39 crc kubenswrapper[4840]: I0311 10:20:39.642826 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0edf1389-2132-4752-8ba0-7aebb76e173f-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "0edf1389-2132-4752-8ba0-7aebb76e173f" (UID: "0edf1389-2132-4752-8ba0-7aebb76e173f"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:20:39 crc kubenswrapper[4840]: I0311 10:20:39.653610 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 11 10:20:39 crc kubenswrapper[4840]: W0311 10:20:39.663708 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0e29f879_7226_4e32_bdac_aa71b0755af5.slice/crio-86180df475354f6be05cb91fe86501577a3c854ce37d350530d5c3c183f1abd6 WatchSource:0}: Error finding container 86180df475354f6be05cb91fe86501577a3c854ce37d350530d5c3c183f1abd6: Status 404 returned error can't find the container with id 86180df475354f6be05cb91fe86501577a3c854ce37d350530d5c3c183f1abd6 Mar 11 10:20:39 crc kubenswrapper[4840]: I0311 10:20:39.709832 4840 scope.go:117] "RemoveContainer" containerID="f085459b0890b14c6469d39d59dd5852aa7121c21595a673fd7174fba6b83d02" Mar 11 10:20:39 crc kubenswrapper[4840]: E0311 10:20:39.711231 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f085459b0890b14c6469d39d59dd5852aa7121c21595a673fd7174fba6b83d02\": container with ID starting with f085459b0890b14c6469d39d59dd5852aa7121c21595a673fd7174fba6b83d02 not found: ID does not exist" containerID="f085459b0890b14c6469d39d59dd5852aa7121c21595a673fd7174fba6b83d02" Mar 11 10:20:39 crc kubenswrapper[4840]: I0311 10:20:39.711273 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f085459b0890b14c6469d39d59dd5852aa7121c21595a673fd7174fba6b83d02"} err="failed to get container status \"f085459b0890b14c6469d39d59dd5852aa7121c21595a673fd7174fba6b83d02\": rpc error: code = NotFound desc = could not find container \"f085459b0890b14c6469d39d59dd5852aa7121c21595a673fd7174fba6b83d02\": container with ID starting with f085459b0890b14c6469d39d59dd5852aa7121c21595a673fd7174fba6b83d02 not found: ID does not exist" Mar 11 
10:20:39 crc kubenswrapper[4840]: I0311 10:20:39.711299 4840 scope.go:117] "RemoveContainer" containerID="c2a0b5a84a6bfa65674c2de1a2587ea05b454eda5d2a70a4fe5d0ab1cadeea1f" Mar 11 10:20:39 crc kubenswrapper[4840]: E0311 10:20:39.711672 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2a0b5a84a6bfa65674c2de1a2587ea05b454eda5d2a70a4fe5d0ab1cadeea1f\": container with ID starting with c2a0b5a84a6bfa65674c2de1a2587ea05b454eda5d2a70a4fe5d0ab1cadeea1f not found: ID does not exist" containerID="c2a0b5a84a6bfa65674c2de1a2587ea05b454eda5d2a70a4fe5d0ab1cadeea1f" Mar 11 10:20:39 crc kubenswrapper[4840]: I0311 10:20:39.711693 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2a0b5a84a6bfa65674c2de1a2587ea05b454eda5d2a70a4fe5d0ab1cadeea1f"} err="failed to get container status \"c2a0b5a84a6bfa65674c2de1a2587ea05b454eda5d2a70a4fe5d0ab1cadeea1f\": rpc error: code = NotFound desc = could not find container \"c2a0b5a84a6bfa65674c2de1a2587ea05b454eda5d2a70a4fe5d0ab1cadeea1f\": container with ID starting with c2a0b5a84a6bfa65674c2de1a2587ea05b454eda5d2a70a4fe5d0ab1cadeea1f not found: ID does not exist" Mar 11 10:20:39 crc kubenswrapper[4840]: I0311 10:20:39.715641 4840 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0edf1389-2132-4752-8ba0-7aebb76e173f-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Mar 11 10:20:39 crc kubenswrapper[4840]: I0311 10:20:39.715682 4840 reconciler_common.go:293] "Volume detached for volume \"pvc-b1db2454-5981-4987-9557-8ca778576b93\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b1db2454-5981-4987-9557-8ca778576b93\") on node \"crc\" DevicePath \"\"" Mar 11 10:20:39 crc kubenswrapper[4840]: I0311 10:20:39.787259 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 11 10:20:39 crc 
kubenswrapper[4840]: I0311 10:20:39.794228 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 11 10:20:39 crc kubenswrapper[4840]: I0311 10:20:39.811093 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 11 10:20:39 crc kubenswrapper[4840]: E0311 10:20:39.811498 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0edf1389-2132-4752-8ba0-7aebb76e173f" containerName="setup-container" Mar 11 10:20:39 crc kubenswrapper[4840]: I0311 10:20:39.811520 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="0edf1389-2132-4752-8ba0-7aebb76e173f" containerName="setup-container" Mar 11 10:20:39 crc kubenswrapper[4840]: E0311 10:20:39.811538 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0edf1389-2132-4752-8ba0-7aebb76e173f" containerName="rabbitmq" Mar 11 10:20:39 crc kubenswrapper[4840]: I0311 10:20:39.811548 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="0edf1389-2132-4752-8ba0-7aebb76e173f" containerName="rabbitmq" Mar 11 10:20:39 crc kubenswrapper[4840]: I0311 10:20:39.811732 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="0edf1389-2132-4752-8ba0-7aebb76e173f" containerName="rabbitmq" Mar 11 10:20:39 crc kubenswrapper[4840]: I0311 10:20:39.814092 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:20:39 crc kubenswrapper[4840]: I0311 10:20:39.816740 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Mar 11 10:20:39 crc kubenswrapper[4840]: I0311 10:20:39.816741 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-fgrqh" Mar 11 10:20:39 crc kubenswrapper[4840]: I0311 10:20:39.818032 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Mar 11 10:20:39 crc kubenswrapper[4840]: I0311 10:20:39.818055 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Mar 11 10:20:39 crc kubenswrapper[4840]: I0311 10:20:39.818089 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Mar 11 10:20:39 crc kubenswrapper[4840]: I0311 10:20:39.818147 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Mar 11 10:20:39 crc kubenswrapper[4840]: I0311 10:20:39.818184 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Mar 11 10:20:39 crc kubenswrapper[4840]: I0311 10:20:39.830879 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 11 10:20:39 crc kubenswrapper[4840]: I0311 10:20:39.917671 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b1db2454-5981-4987-9557-8ca778576b93\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b1db2454-5981-4987-9557-8ca778576b93\") pod \"rabbitmq-cell1-server-0\" (UID: \"71c42351-b9e3-43cb-b97a-0c28c37b5416\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:20:39 crc kubenswrapper[4840]: I0311 10:20:39.917739 4840 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/71c42351-b9e3-43cb-b97a-0c28c37b5416-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"71c42351-b9e3-43cb-b97a-0c28c37b5416\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:20:39 crc kubenswrapper[4840]: I0311 10:20:39.917767 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/71c42351-b9e3-43cb-b97a-0c28c37b5416-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"71c42351-b9e3-43cb-b97a-0c28c37b5416\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:20:39 crc kubenswrapper[4840]: I0311 10:20:39.917800 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/71c42351-b9e3-43cb-b97a-0c28c37b5416-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"71c42351-b9e3-43cb-b97a-0c28c37b5416\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:20:39 crc kubenswrapper[4840]: I0311 10:20:39.917826 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/71c42351-b9e3-43cb-b97a-0c28c37b5416-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"71c42351-b9e3-43cb-b97a-0c28c37b5416\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:20:39 crc kubenswrapper[4840]: I0311 10:20:39.917844 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/71c42351-b9e3-43cb-b97a-0c28c37b5416-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"71c42351-b9e3-43cb-b97a-0c28c37b5416\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:20:39 crc kubenswrapper[4840]: I0311 10:20:39.918013 4840 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/71c42351-b9e3-43cb-b97a-0c28c37b5416-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"71c42351-b9e3-43cb-b97a-0c28c37b5416\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:20:39 crc kubenswrapper[4840]: I0311 10:20:39.918070 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntz7x\" (UniqueName: \"kubernetes.io/projected/71c42351-b9e3-43cb-b97a-0c28c37b5416-kube-api-access-ntz7x\") pod \"rabbitmq-cell1-server-0\" (UID: \"71c42351-b9e3-43cb-b97a-0c28c37b5416\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:20:39 crc kubenswrapper[4840]: I0311 10:20:39.918103 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/71c42351-b9e3-43cb-b97a-0c28c37b5416-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"71c42351-b9e3-43cb-b97a-0c28c37b5416\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:20:39 crc kubenswrapper[4840]: I0311 10:20:39.918129 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/71c42351-b9e3-43cb-b97a-0c28c37b5416-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"71c42351-b9e3-43cb-b97a-0c28c37b5416\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:20:39 crc kubenswrapper[4840]: I0311 10:20:39.918418 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/71c42351-b9e3-43cb-b97a-0c28c37b5416-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"71c42351-b9e3-43cb-b97a-0c28c37b5416\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:20:40 crc kubenswrapper[4840]: I0311 10:20:40.020978 4840 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/71c42351-b9e3-43cb-b97a-0c28c37b5416-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"71c42351-b9e3-43cb-b97a-0c28c37b5416\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:20:40 crc kubenswrapper[4840]: I0311 10:20:40.021557 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/71c42351-b9e3-43cb-b97a-0c28c37b5416-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"71c42351-b9e3-43cb-b97a-0c28c37b5416\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:20:40 crc kubenswrapper[4840]: I0311 10:20:40.021598 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/71c42351-b9e3-43cb-b97a-0c28c37b5416-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"71c42351-b9e3-43cb-b97a-0c28c37b5416\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:20:40 crc kubenswrapper[4840]: I0311 10:20:40.021662 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntz7x\" (UniqueName: \"kubernetes.io/projected/71c42351-b9e3-43cb-b97a-0c28c37b5416-kube-api-access-ntz7x\") pod \"rabbitmq-cell1-server-0\" (UID: \"71c42351-b9e3-43cb-b97a-0c28c37b5416\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:20:40 crc kubenswrapper[4840]: I0311 10:20:40.021709 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/71c42351-b9e3-43cb-b97a-0c28c37b5416-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"71c42351-b9e3-43cb-b97a-0c28c37b5416\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:20:40 crc kubenswrapper[4840]: I0311 10:20:40.021740 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/71c42351-b9e3-43cb-b97a-0c28c37b5416-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"71c42351-b9e3-43cb-b97a-0c28c37b5416\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:20:40 crc kubenswrapper[4840]: I0311 10:20:40.021769 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/71c42351-b9e3-43cb-b97a-0c28c37b5416-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"71c42351-b9e3-43cb-b97a-0c28c37b5416\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:20:40 crc kubenswrapper[4840]: I0311 10:20:40.022204 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/71c42351-b9e3-43cb-b97a-0c28c37b5416-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"71c42351-b9e3-43cb-b97a-0c28c37b5416\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:20:40 crc kubenswrapper[4840]: I0311 10:20:40.022490 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/71c42351-b9e3-43cb-b97a-0c28c37b5416-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"71c42351-b9e3-43cb-b97a-0c28c37b5416\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:20:40 crc kubenswrapper[4840]: I0311 10:20:40.022648 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b1db2454-5981-4987-9557-8ca778576b93\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b1db2454-5981-4987-9557-8ca778576b93\") pod \"rabbitmq-cell1-server-0\" (UID: \"71c42351-b9e3-43cb-b97a-0c28c37b5416\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:20:40 crc kubenswrapper[4840]: I0311 10:20:40.022743 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/71c42351-b9e3-43cb-b97a-0c28c37b5416-server-conf\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"71c42351-b9e3-43cb-b97a-0c28c37b5416\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:20:40 crc kubenswrapper[4840]: I0311 10:20:40.022795 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/71c42351-b9e3-43cb-b97a-0c28c37b5416-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"71c42351-b9e3-43cb-b97a-0c28c37b5416\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:20:40 crc kubenswrapper[4840]: I0311 10:20:40.022815 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/71c42351-b9e3-43cb-b97a-0c28c37b5416-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"71c42351-b9e3-43cb-b97a-0c28c37b5416\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:20:40 crc kubenswrapper[4840]: I0311 10:20:40.024000 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/71c42351-b9e3-43cb-b97a-0c28c37b5416-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"71c42351-b9e3-43cb-b97a-0c28c37b5416\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:20:40 crc kubenswrapper[4840]: I0311 10:20:40.024210 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/71c42351-b9e3-43cb-b97a-0c28c37b5416-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"71c42351-b9e3-43cb-b97a-0c28c37b5416\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:20:40 crc kubenswrapper[4840]: I0311 10:20:40.024605 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/71c42351-b9e3-43cb-b97a-0c28c37b5416-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"71c42351-b9e3-43cb-b97a-0c28c37b5416\") " 
pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:20:40 crc kubenswrapper[4840]: I0311 10:20:40.026829 4840 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 11 10:20:40 crc kubenswrapper[4840]: I0311 10:20:40.026870 4840 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b1db2454-5981-4987-9557-8ca778576b93\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b1db2454-5981-4987-9557-8ca778576b93\") pod \"rabbitmq-cell1-server-0\" (UID: \"71c42351-b9e3-43cb-b97a-0c28c37b5416\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e62da913419b14067559a50b70b10a79b4726f4c0c525a999b8e94171cadb4bf/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:20:40 crc kubenswrapper[4840]: I0311 10:20:40.027835 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/71c42351-b9e3-43cb-b97a-0c28c37b5416-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"71c42351-b9e3-43cb-b97a-0c28c37b5416\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:20:40 crc kubenswrapper[4840]: I0311 10:20:40.028185 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/71c42351-b9e3-43cb-b97a-0c28c37b5416-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"71c42351-b9e3-43cb-b97a-0c28c37b5416\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:20:40 crc kubenswrapper[4840]: I0311 10:20:40.030222 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/71c42351-b9e3-43cb-b97a-0c28c37b5416-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"71c42351-b9e3-43cb-b97a-0c28c37b5416\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:20:40 crc kubenswrapper[4840]: I0311 
10:20:40.031057 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/71c42351-b9e3-43cb-b97a-0c28c37b5416-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"71c42351-b9e3-43cb-b97a-0c28c37b5416\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:20:40 crc kubenswrapper[4840]: I0311 10:20:40.039727 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntz7x\" (UniqueName: \"kubernetes.io/projected/71c42351-b9e3-43cb-b97a-0c28c37b5416-kube-api-access-ntz7x\") pod \"rabbitmq-cell1-server-0\" (UID: \"71c42351-b9e3-43cb-b97a-0c28c37b5416\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:20:40 crc kubenswrapper[4840]: I0311 10:20:40.059304 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b1db2454-5981-4987-9557-8ca778576b93\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b1db2454-5981-4987-9557-8ca778576b93\") pod \"rabbitmq-cell1-server-0\" (UID: \"71c42351-b9e3-43cb-b97a-0c28c37b5416\") " pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:20:40 crc kubenswrapper[4840]: I0311 10:20:40.060551 4840 scope.go:117] "RemoveContainer" containerID="27425edf605a65072c6ef9c1e0b8cea49f613bc8e8eb94b258e15f23bc4a814b" Mar 11 10:20:40 crc kubenswrapper[4840]: E0311 10:20:40.060827 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 10:20:40 crc kubenswrapper[4840]: I0311 10:20:40.071388 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0edf1389-2132-4752-8ba0-7aebb76e173f" 
path="/var/lib/kubelet/pods/0edf1389-2132-4752-8ba0-7aebb76e173f/volumes" Mar 11 10:20:40 crc kubenswrapper[4840]: I0311 10:20:40.072514 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c43faf74-3eed-41b3-b43a-1f88b3e61044" path="/var/lib/kubelet/pods/c43faf74-3eed-41b3-b43a-1f88b3e61044/volumes" Mar 11 10:20:40 crc kubenswrapper[4840]: I0311 10:20:40.130979 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:20:40 crc kubenswrapper[4840]: W0311 10:20:40.410542 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71c42351_b9e3_43cb_b97a_0c28c37b5416.slice/crio-04a4f6213b390cab47d87f44b19eb2658eee8d878c24dd2eef25dfcf48255378 WatchSource:0}: Error finding container 04a4f6213b390cab47d87f44b19eb2658eee8d878c24dd2eef25dfcf48255378: Status 404 returned error can't find the container with id 04a4f6213b390cab47d87f44b19eb2658eee8d878c24dd2eef25dfcf48255378 Mar 11 10:20:40 crc kubenswrapper[4840]: I0311 10:20:40.411560 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 11 10:20:40 crc kubenswrapper[4840]: I0311 10:20:40.486894 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"71c42351-b9e3-43cb-b97a-0c28c37b5416","Type":"ContainerStarted","Data":"04a4f6213b390cab47d87f44b19eb2658eee8d878c24dd2eef25dfcf48255378"} Mar 11 10:20:40 crc kubenswrapper[4840]: I0311 10:20:40.488254 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0e29f879-7226-4e32-bdac-aa71b0755af5","Type":"ContainerStarted","Data":"86180df475354f6be05cb91fe86501577a3c854ce37d350530d5c3c183f1abd6"} Mar 11 10:20:41 crc kubenswrapper[4840]: I0311 10:20:41.503808 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" 
event={"ID":"0e29f879-7226-4e32-bdac-aa71b0755af5","Type":"ContainerStarted","Data":"cb6380f8ca6e9f63f4f17d0d36e4477e9cac08a6b26b0f2fc435cf050bdc46b0"} Mar 11 10:20:42 crc kubenswrapper[4840]: I0311 10:20:42.522923 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"71c42351-b9e3-43cb-b97a-0c28c37b5416","Type":"ContainerStarted","Data":"f22781d1b04f3562fa2fa485725efc0df1287abc19843e9822b858ae43704ec7"} Mar 11 10:20:45 crc kubenswrapper[4840]: I0311 10:20:45.083576 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-cgkw8"] Mar 11 10:20:45 crc kubenswrapper[4840]: I0311 10:20:45.085377 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cgkw8" Mar 11 10:20:45 crc kubenswrapper[4840]: I0311 10:20:45.099981 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cgkw8"] Mar 11 10:20:45 crc kubenswrapper[4840]: I0311 10:20:45.221651 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgfb8\" (UniqueName: \"kubernetes.io/projected/7c35e1c7-18af-4015-8208-6d75f1dd54a4-kube-api-access-sgfb8\") pod \"redhat-marketplace-cgkw8\" (UID: \"7c35e1c7-18af-4015-8208-6d75f1dd54a4\") " pod="openshift-marketplace/redhat-marketplace-cgkw8" Mar 11 10:20:45 crc kubenswrapper[4840]: I0311 10:20:45.221750 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c35e1c7-18af-4015-8208-6d75f1dd54a4-utilities\") pod \"redhat-marketplace-cgkw8\" (UID: \"7c35e1c7-18af-4015-8208-6d75f1dd54a4\") " pod="openshift-marketplace/redhat-marketplace-cgkw8" Mar 11 10:20:45 crc kubenswrapper[4840]: I0311 10:20:45.221774 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c35e1c7-18af-4015-8208-6d75f1dd54a4-catalog-content\") pod \"redhat-marketplace-cgkw8\" (UID: \"7c35e1c7-18af-4015-8208-6d75f1dd54a4\") " pod="openshift-marketplace/redhat-marketplace-cgkw8" Mar 11 10:20:45 crc kubenswrapper[4840]: I0311 10:20:45.323428 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c35e1c7-18af-4015-8208-6d75f1dd54a4-utilities\") pod \"redhat-marketplace-cgkw8\" (UID: \"7c35e1c7-18af-4015-8208-6d75f1dd54a4\") " pod="openshift-marketplace/redhat-marketplace-cgkw8" Mar 11 10:20:45 crc kubenswrapper[4840]: I0311 10:20:45.323760 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c35e1c7-18af-4015-8208-6d75f1dd54a4-catalog-content\") pod \"redhat-marketplace-cgkw8\" (UID: \"7c35e1c7-18af-4015-8208-6d75f1dd54a4\") " pod="openshift-marketplace/redhat-marketplace-cgkw8" Mar 11 10:20:45 crc kubenswrapper[4840]: I0311 10:20:45.323916 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sgfb8\" (UniqueName: \"kubernetes.io/projected/7c35e1c7-18af-4015-8208-6d75f1dd54a4-kube-api-access-sgfb8\") pod \"redhat-marketplace-cgkw8\" (UID: \"7c35e1c7-18af-4015-8208-6d75f1dd54a4\") " pod="openshift-marketplace/redhat-marketplace-cgkw8" Mar 11 10:20:45 crc kubenswrapper[4840]: I0311 10:20:45.323972 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c35e1c7-18af-4015-8208-6d75f1dd54a4-utilities\") pod \"redhat-marketplace-cgkw8\" (UID: \"7c35e1c7-18af-4015-8208-6d75f1dd54a4\") " pod="openshift-marketplace/redhat-marketplace-cgkw8" Mar 11 10:20:45 crc kubenswrapper[4840]: I0311 10:20:45.324579 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/7c35e1c7-18af-4015-8208-6d75f1dd54a4-catalog-content\") pod \"redhat-marketplace-cgkw8\" (UID: \"7c35e1c7-18af-4015-8208-6d75f1dd54a4\") " pod="openshift-marketplace/redhat-marketplace-cgkw8" Mar 11 10:20:45 crc kubenswrapper[4840]: I0311 10:20:45.342650 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgfb8\" (UniqueName: \"kubernetes.io/projected/7c35e1c7-18af-4015-8208-6d75f1dd54a4-kube-api-access-sgfb8\") pod \"redhat-marketplace-cgkw8\" (UID: \"7c35e1c7-18af-4015-8208-6d75f1dd54a4\") " pod="openshift-marketplace/redhat-marketplace-cgkw8" Mar 11 10:20:45 crc kubenswrapper[4840]: I0311 10:20:45.409231 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cgkw8" Mar 11 10:20:45 crc kubenswrapper[4840]: I0311 10:20:45.649859 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cgkw8"] Mar 11 10:20:46 crc kubenswrapper[4840]: I0311 10:20:46.565582 4840 generic.go:334] "Generic (PLEG): container finished" podID="7c35e1c7-18af-4015-8208-6d75f1dd54a4" containerID="b5907df000940f0f5208829d510dbfb09de0789232251f00e13c0325ff638a0b" exitCode=0 Mar 11 10:20:46 crc kubenswrapper[4840]: I0311 10:20:46.565646 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cgkw8" event={"ID":"7c35e1c7-18af-4015-8208-6d75f1dd54a4","Type":"ContainerDied","Data":"b5907df000940f0f5208829d510dbfb09de0789232251f00e13c0325ff638a0b"} Mar 11 10:20:46 crc kubenswrapper[4840]: I0311 10:20:46.565885 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cgkw8" event={"ID":"7c35e1c7-18af-4015-8208-6d75f1dd54a4","Type":"ContainerStarted","Data":"8baefc3bf1dd279161ad7b1ac85e0edf6e38cf2bc684c008f861770e3da7be44"} Mar 11 10:20:47 crc kubenswrapper[4840]: I0311 10:20:47.578533 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-cgkw8" event={"ID":"7c35e1c7-18af-4015-8208-6d75f1dd54a4","Type":"ContainerStarted","Data":"3d2cffd237f40ecbe09ffeb95767c6522b5d0f954eb05d47ce992585715877d0"} Mar 11 10:20:48 crc kubenswrapper[4840]: I0311 10:20:48.593640 4840 generic.go:334] "Generic (PLEG): container finished" podID="7c35e1c7-18af-4015-8208-6d75f1dd54a4" containerID="3d2cffd237f40ecbe09ffeb95767c6522b5d0f954eb05d47ce992585715877d0" exitCode=0 Mar 11 10:20:48 crc kubenswrapper[4840]: I0311 10:20:48.593870 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cgkw8" event={"ID":"7c35e1c7-18af-4015-8208-6d75f1dd54a4","Type":"ContainerDied","Data":"3d2cffd237f40ecbe09ffeb95767c6522b5d0f954eb05d47ce992585715877d0"} Mar 11 10:20:48 crc kubenswrapper[4840]: I0311 10:20:48.594204 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cgkw8" event={"ID":"7c35e1c7-18af-4015-8208-6d75f1dd54a4","Type":"ContainerStarted","Data":"bbd190fd074a01e77fa9a76cec67f7f6e0300014cb046ae08144668ee36ec05a"} Mar 11 10:20:48 crc kubenswrapper[4840]: I0311 10:20:48.631169 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-cgkw8" podStartSLOduration=2.221358741 podStartE2EDuration="3.631144854s" podCreationTimestamp="2026-03-11 10:20:45 +0000 UTC" firstStartedPulling="2026-03-11 10:20:46.568338752 +0000 UTC m=+5045.234008607" lastFinishedPulling="2026-03-11 10:20:47.978124915 +0000 UTC m=+5046.643794720" observedRunningTime="2026-03-11 10:20:48.616331388 +0000 UTC m=+5047.282001213" watchObservedRunningTime="2026-03-11 10:20:48.631144854 +0000 UTC m=+5047.296814669" Mar 11 10:20:52 crc kubenswrapper[4840]: I0311 10:20:52.067221 4840 scope.go:117] "RemoveContainer" containerID="27425edf605a65072c6ef9c1e0b8cea49f613bc8e8eb94b258e15f23bc4a814b" Mar 11 10:20:52 crc kubenswrapper[4840]: E0311 10:20:52.068085 4840 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 10:20:55 crc kubenswrapper[4840]: I0311 10:20:55.409451 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-cgkw8" Mar 11 10:20:55 crc kubenswrapper[4840]: I0311 10:20:55.410297 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-cgkw8" Mar 11 10:20:55 crc kubenswrapper[4840]: I0311 10:20:55.466528 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-cgkw8" Mar 11 10:20:55 crc kubenswrapper[4840]: I0311 10:20:55.710788 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-cgkw8" Mar 11 10:20:55 crc kubenswrapper[4840]: I0311 10:20:55.758406 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cgkw8"] Mar 11 10:20:57 crc kubenswrapper[4840]: I0311 10:20:57.670916 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-cgkw8" podUID="7c35e1c7-18af-4015-8208-6d75f1dd54a4" containerName="registry-server" containerID="cri-o://bbd190fd074a01e77fa9a76cec67f7f6e0300014cb046ae08144668ee36ec05a" gracePeriod=2 Mar 11 10:20:58 crc kubenswrapper[4840]: I0311 10:20:58.136665 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cgkw8" Mar 11 10:20:58 crc kubenswrapper[4840]: I0311 10:20:58.245856 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sgfb8\" (UniqueName: \"kubernetes.io/projected/7c35e1c7-18af-4015-8208-6d75f1dd54a4-kube-api-access-sgfb8\") pod \"7c35e1c7-18af-4015-8208-6d75f1dd54a4\" (UID: \"7c35e1c7-18af-4015-8208-6d75f1dd54a4\") " Mar 11 10:20:58 crc kubenswrapper[4840]: I0311 10:20:58.246276 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c35e1c7-18af-4015-8208-6d75f1dd54a4-catalog-content\") pod \"7c35e1c7-18af-4015-8208-6d75f1dd54a4\" (UID: \"7c35e1c7-18af-4015-8208-6d75f1dd54a4\") " Mar 11 10:20:58 crc kubenswrapper[4840]: I0311 10:20:58.246387 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c35e1c7-18af-4015-8208-6d75f1dd54a4-utilities\") pod \"7c35e1c7-18af-4015-8208-6d75f1dd54a4\" (UID: \"7c35e1c7-18af-4015-8208-6d75f1dd54a4\") " Mar 11 10:20:58 crc kubenswrapper[4840]: I0311 10:20:58.247253 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c35e1c7-18af-4015-8208-6d75f1dd54a4-utilities" (OuterVolumeSpecName: "utilities") pod "7c35e1c7-18af-4015-8208-6d75f1dd54a4" (UID: "7c35e1c7-18af-4015-8208-6d75f1dd54a4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 10:20:58 crc kubenswrapper[4840]: I0311 10:20:58.253863 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c35e1c7-18af-4015-8208-6d75f1dd54a4-kube-api-access-sgfb8" (OuterVolumeSpecName: "kube-api-access-sgfb8") pod "7c35e1c7-18af-4015-8208-6d75f1dd54a4" (UID: "7c35e1c7-18af-4015-8208-6d75f1dd54a4"). InnerVolumeSpecName "kube-api-access-sgfb8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:20:58 crc kubenswrapper[4840]: I0311 10:20:58.301698 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c35e1c7-18af-4015-8208-6d75f1dd54a4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7c35e1c7-18af-4015-8208-6d75f1dd54a4" (UID: "7c35e1c7-18af-4015-8208-6d75f1dd54a4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 10:20:58 crc kubenswrapper[4840]: I0311 10:20:58.348339 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sgfb8\" (UniqueName: \"kubernetes.io/projected/7c35e1c7-18af-4015-8208-6d75f1dd54a4-kube-api-access-sgfb8\") on node \"crc\" DevicePath \"\"" Mar 11 10:20:58 crc kubenswrapper[4840]: I0311 10:20:58.348457 4840 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c35e1c7-18af-4015-8208-6d75f1dd54a4-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 11 10:20:58 crc kubenswrapper[4840]: I0311 10:20:58.348492 4840 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c35e1c7-18af-4015-8208-6d75f1dd54a4-utilities\") on node \"crc\" DevicePath \"\"" Mar 11 10:20:58 crc kubenswrapper[4840]: I0311 10:20:58.682128 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cgkw8" Mar 11 10:20:58 crc kubenswrapper[4840]: I0311 10:20:58.682151 4840 generic.go:334] "Generic (PLEG): container finished" podID="7c35e1c7-18af-4015-8208-6d75f1dd54a4" containerID="bbd190fd074a01e77fa9a76cec67f7f6e0300014cb046ae08144668ee36ec05a" exitCode=0 Mar 11 10:20:58 crc kubenswrapper[4840]: I0311 10:20:58.682171 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cgkw8" event={"ID":"7c35e1c7-18af-4015-8208-6d75f1dd54a4","Type":"ContainerDied","Data":"bbd190fd074a01e77fa9a76cec67f7f6e0300014cb046ae08144668ee36ec05a"} Mar 11 10:20:58 crc kubenswrapper[4840]: I0311 10:20:58.682246 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cgkw8" event={"ID":"7c35e1c7-18af-4015-8208-6d75f1dd54a4","Type":"ContainerDied","Data":"8baefc3bf1dd279161ad7b1ac85e0edf6e38cf2bc684c008f861770e3da7be44"} Mar 11 10:20:58 crc kubenswrapper[4840]: I0311 10:20:58.682295 4840 scope.go:117] "RemoveContainer" containerID="bbd190fd074a01e77fa9a76cec67f7f6e0300014cb046ae08144668ee36ec05a" Mar 11 10:20:58 crc kubenswrapper[4840]: I0311 10:20:58.704295 4840 scope.go:117] "RemoveContainer" containerID="3d2cffd237f40ecbe09ffeb95767c6522b5d0f954eb05d47ce992585715877d0" Mar 11 10:20:58 crc kubenswrapper[4840]: I0311 10:20:58.731001 4840 scope.go:117] "RemoveContainer" containerID="b5907df000940f0f5208829d510dbfb09de0789232251f00e13c0325ff638a0b" Mar 11 10:20:58 crc kubenswrapper[4840]: I0311 10:20:58.732418 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cgkw8"] Mar 11 10:20:58 crc kubenswrapper[4840]: I0311 10:20:58.739731 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-cgkw8"] Mar 11 10:20:58 crc kubenswrapper[4840]: I0311 10:20:58.759258 4840 scope.go:117] "RemoveContainer" 
containerID="bbd190fd074a01e77fa9a76cec67f7f6e0300014cb046ae08144668ee36ec05a" Mar 11 10:20:58 crc kubenswrapper[4840]: E0311 10:20:58.759803 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bbd190fd074a01e77fa9a76cec67f7f6e0300014cb046ae08144668ee36ec05a\": container with ID starting with bbd190fd074a01e77fa9a76cec67f7f6e0300014cb046ae08144668ee36ec05a not found: ID does not exist" containerID="bbd190fd074a01e77fa9a76cec67f7f6e0300014cb046ae08144668ee36ec05a" Mar 11 10:20:58 crc kubenswrapper[4840]: I0311 10:20:58.759896 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bbd190fd074a01e77fa9a76cec67f7f6e0300014cb046ae08144668ee36ec05a"} err="failed to get container status \"bbd190fd074a01e77fa9a76cec67f7f6e0300014cb046ae08144668ee36ec05a\": rpc error: code = NotFound desc = could not find container \"bbd190fd074a01e77fa9a76cec67f7f6e0300014cb046ae08144668ee36ec05a\": container with ID starting with bbd190fd074a01e77fa9a76cec67f7f6e0300014cb046ae08144668ee36ec05a not found: ID does not exist" Mar 11 10:20:58 crc kubenswrapper[4840]: I0311 10:20:58.759971 4840 scope.go:117] "RemoveContainer" containerID="3d2cffd237f40ecbe09ffeb95767c6522b5d0f954eb05d47ce992585715877d0" Mar 11 10:20:58 crc kubenswrapper[4840]: E0311 10:20:58.760298 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d2cffd237f40ecbe09ffeb95767c6522b5d0f954eb05d47ce992585715877d0\": container with ID starting with 3d2cffd237f40ecbe09ffeb95767c6522b5d0f954eb05d47ce992585715877d0 not found: ID does not exist" containerID="3d2cffd237f40ecbe09ffeb95767c6522b5d0f954eb05d47ce992585715877d0" Mar 11 10:20:58 crc kubenswrapper[4840]: I0311 10:20:58.760337 4840 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"3d2cffd237f40ecbe09ffeb95767c6522b5d0f954eb05d47ce992585715877d0"} err="failed to get container status \"3d2cffd237f40ecbe09ffeb95767c6522b5d0f954eb05d47ce992585715877d0\": rpc error: code = NotFound desc = could not find container \"3d2cffd237f40ecbe09ffeb95767c6522b5d0f954eb05d47ce992585715877d0\": container with ID starting with 3d2cffd237f40ecbe09ffeb95767c6522b5d0f954eb05d47ce992585715877d0 not found: ID does not exist" Mar 11 10:20:58 crc kubenswrapper[4840]: I0311 10:20:58.760365 4840 scope.go:117] "RemoveContainer" containerID="b5907df000940f0f5208829d510dbfb09de0789232251f00e13c0325ff638a0b" Mar 11 10:20:58 crc kubenswrapper[4840]: E0311 10:20:58.760801 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5907df000940f0f5208829d510dbfb09de0789232251f00e13c0325ff638a0b\": container with ID starting with b5907df000940f0f5208829d510dbfb09de0789232251f00e13c0325ff638a0b not found: ID does not exist" containerID="b5907df000940f0f5208829d510dbfb09de0789232251f00e13c0325ff638a0b" Mar 11 10:20:58 crc kubenswrapper[4840]: I0311 10:20:58.760830 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5907df000940f0f5208829d510dbfb09de0789232251f00e13c0325ff638a0b"} err="failed to get container status \"b5907df000940f0f5208829d510dbfb09de0789232251f00e13c0325ff638a0b\": rpc error: code = NotFound desc = could not find container \"b5907df000940f0f5208829d510dbfb09de0789232251f00e13c0325ff638a0b\": container with ID starting with b5907df000940f0f5208829d510dbfb09de0789232251f00e13c0325ff638a0b not found: ID does not exist" Mar 11 10:21:00 crc kubenswrapper[4840]: I0311 10:21:00.074316 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c35e1c7-18af-4015-8208-6d75f1dd54a4" path="/var/lib/kubelet/pods/7c35e1c7-18af-4015-8208-6d75f1dd54a4/volumes" Mar 11 10:21:03 crc kubenswrapper[4840]: I0311 
10:21:03.061641 4840 scope.go:117] "RemoveContainer" containerID="27425edf605a65072c6ef9c1e0b8cea49f613bc8e8eb94b258e15f23bc4a814b" Mar 11 10:21:03 crc kubenswrapper[4840]: E0311 10:21:03.062109 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 10:21:12 crc kubenswrapper[4840]: I0311 10:21:12.377843 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-hqlx9"] Mar 11 10:21:12 crc kubenswrapper[4840]: E0311 10:21:12.379283 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c35e1c7-18af-4015-8208-6d75f1dd54a4" containerName="registry-server" Mar 11 10:21:12 crc kubenswrapper[4840]: I0311 10:21:12.379305 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c35e1c7-18af-4015-8208-6d75f1dd54a4" containerName="registry-server" Mar 11 10:21:12 crc kubenswrapper[4840]: E0311 10:21:12.379338 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c35e1c7-18af-4015-8208-6d75f1dd54a4" containerName="extract-content" Mar 11 10:21:12 crc kubenswrapper[4840]: I0311 10:21:12.379345 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c35e1c7-18af-4015-8208-6d75f1dd54a4" containerName="extract-content" Mar 11 10:21:12 crc kubenswrapper[4840]: E0311 10:21:12.379368 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c35e1c7-18af-4015-8208-6d75f1dd54a4" containerName="extract-utilities" Mar 11 10:21:12 crc kubenswrapper[4840]: I0311 10:21:12.379375 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c35e1c7-18af-4015-8208-6d75f1dd54a4" containerName="extract-utilities" Mar 11 
10:21:12 crc kubenswrapper[4840]: I0311 10:21:12.379590 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c35e1c7-18af-4015-8208-6d75f1dd54a4" containerName="registry-server" Mar 11 10:21:12 crc kubenswrapper[4840]: I0311 10:21:12.381672 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hqlx9" Mar 11 10:21:12 crc kubenswrapper[4840]: I0311 10:21:12.396477 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hqlx9"] Mar 11 10:21:12 crc kubenswrapper[4840]: I0311 10:21:12.497249 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wqcx\" (UniqueName: \"kubernetes.io/projected/206024ac-694b-47dc-bf25-893542856d51-kube-api-access-2wqcx\") pod \"community-operators-hqlx9\" (UID: \"206024ac-694b-47dc-bf25-893542856d51\") " pod="openshift-marketplace/community-operators-hqlx9" Mar 11 10:21:12 crc kubenswrapper[4840]: I0311 10:21:12.497295 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/206024ac-694b-47dc-bf25-893542856d51-utilities\") pod \"community-operators-hqlx9\" (UID: \"206024ac-694b-47dc-bf25-893542856d51\") " pod="openshift-marketplace/community-operators-hqlx9" Mar 11 10:21:12 crc kubenswrapper[4840]: I0311 10:21:12.497411 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/206024ac-694b-47dc-bf25-893542856d51-catalog-content\") pod \"community-operators-hqlx9\" (UID: \"206024ac-694b-47dc-bf25-893542856d51\") " pod="openshift-marketplace/community-operators-hqlx9" Mar 11 10:21:12 crc kubenswrapper[4840]: I0311 10:21:12.598894 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/206024ac-694b-47dc-bf25-893542856d51-utilities\") pod \"community-operators-hqlx9\" (UID: \"206024ac-694b-47dc-bf25-893542856d51\") " pod="openshift-marketplace/community-operators-hqlx9" Mar 11 10:21:12 crc kubenswrapper[4840]: I0311 10:21:12.598973 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/206024ac-694b-47dc-bf25-893542856d51-catalog-content\") pod \"community-operators-hqlx9\" (UID: \"206024ac-694b-47dc-bf25-893542856d51\") " pod="openshift-marketplace/community-operators-hqlx9" Mar 11 10:21:12 crc kubenswrapper[4840]: I0311 10:21:12.599045 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wqcx\" (UniqueName: \"kubernetes.io/projected/206024ac-694b-47dc-bf25-893542856d51-kube-api-access-2wqcx\") pod \"community-operators-hqlx9\" (UID: \"206024ac-694b-47dc-bf25-893542856d51\") " pod="openshift-marketplace/community-operators-hqlx9" Mar 11 10:21:12 crc kubenswrapper[4840]: I0311 10:21:12.599585 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/206024ac-694b-47dc-bf25-893542856d51-catalog-content\") pod \"community-operators-hqlx9\" (UID: \"206024ac-694b-47dc-bf25-893542856d51\") " pod="openshift-marketplace/community-operators-hqlx9" Mar 11 10:21:12 crc kubenswrapper[4840]: I0311 10:21:12.599981 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/206024ac-694b-47dc-bf25-893542856d51-utilities\") pod \"community-operators-hqlx9\" (UID: \"206024ac-694b-47dc-bf25-893542856d51\") " pod="openshift-marketplace/community-operators-hqlx9" Mar 11 10:21:12 crc kubenswrapper[4840]: I0311 10:21:12.624131 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wqcx\" (UniqueName: 
\"kubernetes.io/projected/206024ac-694b-47dc-bf25-893542856d51-kube-api-access-2wqcx\") pod \"community-operators-hqlx9\" (UID: \"206024ac-694b-47dc-bf25-893542856d51\") " pod="openshift-marketplace/community-operators-hqlx9" Mar 11 10:21:12 crc kubenswrapper[4840]: I0311 10:21:12.752982 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hqlx9" Mar 11 10:21:13 crc kubenswrapper[4840]: I0311 10:21:13.264728 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hqlx9"] Mar 11 10:21:13 crc kubenswrapper[4840]: I0311 10:21:13.846221 4840 generic.go:334] "Generic (PLEG): container finished" podID="206024ac-694b-47dc-bf25-893542856d51" containerID="9720d117ebb57a9b0dd0b0fc7e7f8aac9eea2b75d54d463931d61cabfa3d9c7d" exitCode=0 Mar 11 10:21:13 crc kubenswrapper[4840]: I0311 10:21:13.846301 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hqlx9" event={"ID":"206024ac-694b-47dc-bf25-893542856d51","Type":"ContainerDied","Data":"9720d117ebb57a9b0dd0b0fc7e7f8aac9eea2b75d54d463931d61cabfa3d9c7d"} Mar 11 10:21:13 crc kubenswrapper[4840]: I0311 10:21:13.848646 4840 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 11 10:21:13 crc kubenswrapper[4840]: I0311 10:21:13.848652 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hqlx9" event={"ID":"206024ac-694b-47dc-bf25-893542856d51","Type":"ContainerStarted","Data":"a29eaae5c2f97f08f0ec77d12c59261a12b5efa3673219bd53403774eded9897"} Mar 11 10:21:14 crc kubenswrapper[4840]: I0311 10:21:14.862512 4840 generic.go:334] "Generic (PLEG): container finished" podID="0e29f879-7226-4e32-bdac-aa71b0755af5" containerID="cb6380f8ca6e9f63f4f17d0d36e4477e9cac08a6b26b0f2fc435cf050bdc46b0" exitCode=0 Mar 11 10:21:14 crc kubenswrapper[4840]: I0311 10:21:14.862601 4840 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0e29f879-7226-4e32-bdac-aa71b0755af5","Type":"ContainerDied","Data":"cb6380f8ca6e9f63f4f17d0d36e4477e9cac08a6b26b0f2fc435cf050bdc46b0"} Mar 11 10:21:14 crc kubenswrapper[4840]: I0311 10:21:14.867446 4840 generic.go:334] "Generic (PLEG): container finished" podID="71c42351-b9e3-43cb-b97a-0c28c37b5416" containerID="f22781d1b04f3562fa2fa485725efc0df1287abc19843e9822b858ae43704ec7" exitCode=0 Mar 11 10:21:14 crc kubenswrapper[4840]: I0311 10:21:14.867496 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"71c42351-b9e3-43cb-b97a-0c28c37b5416","Type":"ContainerDied","Data":"f22781d1b04f3562fa2fa485725efc0df1287abc19843e9822b858ae43704ec7"} Mar 11 10:21:14 crc kubenswrapper[4840]: I0311 10:21:14.874073 4840 generic.go:334] "Generic (PLEG): container finished" podID="206024ac-694b-47dc-bf25-893542856d51" containerID="4cf3c472998ae2d34714e776f70ab6825c2478a985ff643c0b3161d8e0b60742" exitCode=0 Mar 11 10:21:14 crc kubenswrapper[4840]: I0311 10:21:14.874139 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hqlx9" event={"ID":"206024ac-694b-47dc-bf25-893542856d51","Type":"ContainerDied","Data":"4cf3c472998ae2d34714e776f70ab6825c2478a985ff643c0b3161d8e0b60742"} Mar 11 10:21:15 crc kubenswrapper[4840]: I0311 10:21:15.887243 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0e29f879-7226-4e32-bdac-aa71b0755af5","Type":"ContainerStarted","Data":"4668e0227ca76067c139ff6643c63083ea1c1defe723f8d9882a5cbdbe658cee"} Mar 11 10:21:15 crc kubenswrapper[4840]: I0311 10:21:15.888005 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Mar 11 10:21:15 crc kubenswrapper[4840]: I0311 10:21:15.890263 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"71c42351-b9e3-43cb-b97a-0c28c37b5416","Type":"ContainerStarted","Data":"e0719e4e53cc37ba2903aa5feeffe529864068f6a921da3d88a2480b93d48fd7"} Mar 11 10:21:15 crc kubenswrapper[4840]: I0311 10:21:15.890556 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:21:15 crc kubenswrapper[4840]: I0311 10:21:15.893580 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hqlx9" event={"ID":"206024ac-694b-47dc-bf25-893542856d51","Type":"ContainerStarted","Data":"9fbbf99abe28b535afce4f47dd01ad196ef350c0a27156445ac2e3c85afc1696"} Mar 11 10:21:15 crc kubenswrapper[4840]: I0311 10:21:15.930058 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.930031748 podStartE2EDuration="37.930031748s" podCreationTimestamp="2026-03-11 10:20:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 10:21:15.917861268 +0000 UTC m=+5074.583531113" watchObservedRunningTime="2026-03-11 10:21:15.930031748 +0000 UTC m=+5074.595701593" Mar 11 10:21:15 crc kubenswrapper[4840]: I0311 10:21:15.959403 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=36.959372352 podStartE2EDuration="36.959372352s" podCreationTimestamp="2026-03-11 10:20:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 10:21:15.944111285 +0000 UTC m=+5074.609781120" watchObservedRunningTime="2026-03-11 10:21:15.959372352 +0000 UTC m=+5074.625042177" Mar 11 10:21:15 crc kubenswrapper[4840]: I0311 10:21:15.978675 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-hqlx9" podStartSLOduration=2.53189744 
podStartE2EDuration="3.978657652s" podCreationTimestamp="2026-03-11 10:21:12 +0000 UTC" firstStartedPulling="2026-03-11 10:21:13.848222172 +0000 UTC m=+5072.513892007" lastFinishedPulling="2026-03-11 10:21:15.294982394 +0000 UTC m=+5073.960652219" observedRunningTime="2026-03-11 10:21:15.970000182 +0000 UTC m=+5074.635670007" watchObservedRunningTime="2026-03-11 10:21:15.978657652 +0000 UTC m=+5074.644327477" Mar 11 10:21:18 crc kubenswrapper[4840]: I0311 10:21:18.060860 4840 scope.go:117] "RemoveContainer" containerID="27425edf605a65072c6ef9c1e0b8cea49f613bc8e8eb94b258e15f23bc4a814b" Mar 11 10:21:18 crc kubenswrapper[4840]: E0311 10:21:18.061329 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 10:21:22 crc kubenswrapper[4840]: I0311 10:21:22.753233 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-hqlx9" Mar 11 10:21:22 crc kubenswrapper[4840]: I0311 10:21:22.755089 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hqlx9" Mar 11 10:21:22 crc kubenswrapper[4840]: I0311 10:21:22.806778 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hqlx9" Mar 11 10:21:23 crc kubenswrapper[4840]: I0311 10:21:23.000759 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-hqlx9" Mar 11 10:21:23 crc kubenswrapper[4840]: I0311 10:21:23.056526 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-hqlx9"] Mar 11 10:21:24 crc kubenswrapper[4840]: I0311 10:21:24.975201 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-hqlx9" podUID="206024ac-694b-47dc-bf25-893542856d51" containerName="registry-server" containerID="cri-o://9fbbf99abe28b535afce4f47dd01ad196ef350c0a27156445ac2e3c85afc1696" gracePeriod=2 Mar 11 10:21:25 crc kubenswrapper[4840]: I0311 10:21:25.988045 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hqlx9" Mar 11 10:21:25 crc kubenswrapper[4840]: I0311 10:21:25.988614 4840 generic.go:334] "Generic (PLEG): container finished" podID="206024ac-694b-47dc-bf25-893542856d51" containerID="9fbbf99abe28b535afce4f47dd01ad196ef350c0a27156445ac2e3c85afc1696" exitCode=0 Mar 11 10:21:25 crc kubenswrapper[4840]: I0311 10:21:25.988687 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hqlx9" event={"ID":"206024ac-694b-47dc-bf25-893542856d51","Type":"ContainerDied","Data":"9fbbf99abe28b535afce4f47dd01ad196ef350c0a27156445ac2e3c85afc1696"} Mar 11 10:21:25 crc kubenswrapper[4840]: I0311 10:21:25.988808 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hqlx9" event={"ID":"206024ac-694b-47dc-bf25-893542856d51","Type":"ContainerDied","Data":"a29eaae5c2f97f08f0ec77d12c59261a12b5efa3673219bd53403774eded9897"} Mar 11 10:21:25 crc kubenswrapper[4840]: I0311 10:21:25.988848 4840 scope.go:117] "RemoveContainer" containerID="9fbbf99abe28b535afce4f47dd01ad196ef350c0a27156445ac2e3c85afc1696" Mar 11 10:21:26 crc kubenswrapper[4840]: I0311 10:21:26.025141 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2wqcx\" (UniqueName: \"kubernetes.io/projected/206024ac-694b-47dc-bf25-893542856d51-kube-api-access-2wqcx\") pod 
\"206024ac-694b-47dc-bf25-893542856d51\" (UID: \"206024ac-694b-47dc-bf25-893542856d51\") " Mar 11 10:21:26 crc kubenswrapper[4840]: I0311 10:21:26.025746 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/206024ac-694b-47dc-bf25-893542856d51-utilities\") pod \"206024ac-694b-47dc-bf25-893542856d51\" (UID: \"206024ac-694b-47dc-bf25-893542856d51\") " Mar 11 10:21:26 crc kubenswrapper[4840]: I0311 10:21:26.025808 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/206024ac-694b-47dc-bf25-893542856d51-catalog-content\") pod \"206024ac-694b-47dc-bf25-893542856d51\" (UID: \"206024ac-694b-47dc-bf25-893542856d51\") " Mar 11 10:21:26 crc kubenswrapper[4840]: I0311 10:21:26.027661 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/206024ac-694b-47dc-bf25-893542856d51-utilities" (OuterVolumeSpecName: "utilities") pod "206024ac-694b-47dc-bf25-893542856d51" (UID: "206024ac-694b-47dc-bf25-893542856d51"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 10:21:26 crc kubenswrapper[4840]: I0311 10:21:26.037188 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/206024ac-694b-47dc-bf25-893542856d51-kube-api-access-2wqcx" (OuterVolumeSpecName: "kube-api-access-2wqcx") pod "206024ac-694b-47dc-bf25-893542856d51" (UID: "206024ac-694b-47dc-bf25-893542856d51"). InnerVolumeSpecName "kube-api-access-2wqcx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:21:26 crc kubenswrapper[4840]: I0311 10:21:26.040257 4840 scope.go:117] "RemoveContainer" containerID="4cf3c472998ae2d34714e776f70ab6825c2478a985ff643c0b3161d8e0b60742" Mar 11 10:21:26 crc kubenswrapper[4840]: I0311 10:21:26.091403 4840 scope.go:117] "RemoveContainer" containerID="9720d117ebb57a9b0dd0b0fc7e7f8aac9eea2b75d54d463931d61cabfa3d9c7d" Mar 11 10:21:26 crc kubenswrapper[4840]: I0311 10:21:26.097184 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/206024ac-694b-47dc-bf25-893542856d51-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "206024ac-694b-47dc-bf25-893542856d51" (UID: "206024ac-694b-47dc-bf25-893542856d51"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 10:21:26 crc kubenswrapper[4840]: I0311 10:21:26.115225 4840 scope.go:117] "RemoveContainer" containerID="9fbbf99abe28b535afce4f47dd01ad196ef350c0a27156445ac2e3c85afc1696" Mar 11 10:21:26 crc kubenswrapper[4840]: E0311 10:21:26.115850 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9fbbf99abe28b535afce4f47dd01ad196ef350c0a27156445ac2e3c85afc1696\": container with ID starting with 9fbbf99abe28b535afce4f47dd01ad196ef350c0a27156445ac2e3c85afc1696 not found: ID does not exist" containerID="9fbbf99abe28b535afce4f47dd01ad196ef350c0a27156445ac2e3c85afc1696" Mar 11 10:21:26 crc kubenswrapper[4840]: I0311 10:21:26.115906 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9fbbf99abe28b535afce4f47dd01ad196ef350c0a27156445ac2e3c85afc1696"} err="failed to get container status \"9fbbf99abe28b535afce4f47dd01ad196ef350c0a27156445ac2e3c85afc1696\": rpc error: code = NotFound desc = could not find container \"9fbbf99abe28b535afce4f47dd01ad196ef350c0a27156445ac2e3c85afc1696\": container with ID starting 
with 9fbbf99abe28b535afce4f47dd01ad196ef350c0a27156445ac2e3c85afc1696 not found: ID does not exist" Mar 11 10:21:26 crc kubenswrapper[4840]: I0311 10:21:26.115956 4840 scope.go:117] "RemoveContainer" containerID="4cf3c472998ae2d34714e776f70ab6825c2478a985ff643c0b3161d8e0b60742" Mar 11 10:21:26 crc kubenswrapper[4840]: E0311 10:21:26.116284 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4cf3c472998ae2d34714e776f70ab6825c2478a985ff643c0b3161d8e0b60742\": container with ID starting with 4cf3c472998ae2d34714e776f70ab6825c2478a985ff643c0b3161d8e0b60742 not found: ID does not exist" containerID="4cf3c472998ae2d34714e776f70ab6825c2478a985ff643c0b3161d8e0b60742" Mar 11 10:21:26 crc kubenswrapper[4840]: I0311 10:21:26.116337 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4cf3c472998ae2d34714e776f70ab6825c2478a985ff643c0b3161d8e0b60742"} err="failed to get container status \"4cf3c472998ae2d34714e776f70ab6825c2478a985ff643c0b3161d8e0b60742\": rpc error: code = NotFound desc = could not find container \"4cf3c472998ae2d34714e776f70ab6825c2478a985ff643c0b3161d8e0b60742\": container with ID starting with 4cf3c472998ae2d34714e776f70ab6825c2478a985ff643c0b3161d8e0b60742 not found: ID does not exist" Mar 11 10:21:26 crc kubenswrapper[4840]: I0311 10:21:26.116369 4840 scope.go:117] "RemoveContainer" containerID="9720d117ebb57a9b0dd0b0fc7e7f8aac9eea2b75d54d463931d61cabfa3d9c7d" Mar 11 10:21:26 crc kubenswrapper[4840]: E0311 10:21:26.116729 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9720d117ebb57a9b0dd0b0fc7e7f8aac9eea2b75d54d463931d61cabfa3d9c7d\": container with ID starting with 9720d117ebb57a9b0dd0b0fc7e7f8aac9eea2b75d54d463931d61cabfa3d9c7d not found: ID does not exist" containerID="9720d117ebb57a9b0dd0b0fc7e7f8aac9eea2b75d54d463931d61cabfa3d9c7d" Mar 11 10:21:26 
crc kubenswrapper[4840]: I0311 10:21:26.116785 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9720d117ebb57a9b0dd0b0fc7e7f8aac9eea2b75d54d463931d61cabfa3d9c7d"} err="failed to get container status \"9720d117ebb57a9b0dd0b0fc7e7f8aac9eea2b75d54d463931d61cabfa3d9c7d\": rpc error: code = NotFound desc = could not find container \"9720d117ebb57a9b0dd0b0fc7e7f8aac9eea2b75d54d463931d61cabfa3d9c7d\": container with ID starting with 9720d117ebb57a9b0dd0b0fc7e7f8aac9eea2b75d54d463931d61cabfa3d9c7d not found: ID does not exist" Mar 11 10:21:26 crc kubenswrapper[4840]: I0311 10:21:26.127962 4840 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/206024ac-694b-47dc-bf25-893542856d51-utilities\") on node \"crc\" DevicePath \"\"" Mar 11 10:21:26 crc kubenswrapper[4840]: I0311 10:21:26.128023 4840 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/206024ac-694b-47dc-bf25-893542856d51-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 11 10:21:26 crc kubenswrapper[4840]: I0311 10:21:26.128039 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2wqcx\" (UniqueName: \"kubernetes.io/projected/206024ac-694b-47dc-bf25-893542856d51-kube-api-access-2wqcx\") on node \"crc\" DevicePath \"\"" Mar 11 10:21:26 crc kubenswrapper[4840]: I0311 10:21:26.999543 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hqlx9" Mar 11 10:21:27 crc kubenswrapper[4840]: I0311 10:21:27.054164 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hqlx9"] Mar 11 10:21:27 crc kubenswrapper[4840]: I0311 10:21:27.060418 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-hqlx9"] Mar 11 10:21:28 crc kubenswrapper[4840]: I0311 10:21:28.078668 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="206024ac-694b-47dc-bf25-893542856d51" path="/var/lib/kubelet/pods/206024ac-694b-47dc-bf25-893542856d51/volumes" Mar 11 10:21:29 crc kubenswrapper[4840]: I0311 10:21:29.139758 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Mar 11 10:21:30 crc kubenswrapper[4840]: I0311 10:21:30.060420 4840 scope.go:117] "RemoveContainer" containerID="27425edf605a65072c6ef9c1e0b8cea49f613bc8e8eb94b258e15f23bc4a814b" Mar 11 10:21:30 crc kubenswrapper[4840]: E0311 10:21:30.060924 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 10:21:30 crc kubenswrapper[4840]: I0311 10:21:30.134733 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Mar 11 10:21:33 crc kubenswrapper[4840]: I0311 10:21:33.116136 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"] Mar 11 10:21:33 crc kubenswrapper[4840]: E0311 10:21:33.118011 4840 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="206024ac-694b-47dc-bf25-893542856d51" containerName="extract-content" Mar 11 10:21:33 crc kubenswrapper[4840]: I0311 10:21:33.118044 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="206024ac-694b-47dc-bf25-893542856d51" containerName="extract-content" Mar 11 10:21:33 crc kubenswrapper[4840]: E0311 10:21:33.118084 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="206024ac-694b-47dc-bf25-893542856d51" containerName="extract-utilities" Mar 11 10:21:33 crc kubenswrapper[4840]: I0311 10:21:33.118098 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="206024ac-694b-47dc-bf25-893542856d51" containerName="extract-utilities" Mar 11 10:21:33 crc kubenswrapper[4840]: E0311 10:21:33.118145 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="206024ac-694b-47dc-bf25-893542856d51" containerName="registry-server" Mar 11 10:21:33 crc kubenswrapper[4840]: I0311 10:21:33.118160 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="206024ac-694b-47dc-bf25-893542856d51" containerName="registry-server" Mar 11 10:21:33 crc kubenswrapper[4840]: I0311 10:21:33.118526 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="206024ac-694b-47dc-bf25-893542856d51" containerName="registry-server" Mar 11 10:21:33 crc kubenswrapper[4840]: I0311 10:21:33.119615 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Mar 11 10:21:33 crc kubenswrapper[4840]: I0311 10:21:33.126373 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-bxkdr" Mar 11 10:21:33 crc kubenswrapper[4840]: I0311 10:21:33.128772 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Mar 11 10:21:33 crc kubenswrapper[4840]: I0311 10:21:33.159061 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8rkq\" (UniqueName: \"kubernetes.io/projected/f090ad83-acb4-4160-8119-efc155a61dff-kube-api-access-j8rkq\") pod \"mariadb-client\" (UID: \"f090ad83-acb4-4160-8119-efc155a61dff\") " pod="openstack/mariadb-client" Mar 11 10:21:33 crc kubenswrapper[4840]: I0311 10:21:33.261604 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8rkq\" (UniqueName: \"kubernetes.io/projected/f090ad83-acb4-4160-8119-efc155a61dff-kube-api-access-j8rkq\") pod \"mariadb-client\" (UID: \"f090ad83-acb4-4160-8119-efc155a61dff\") " pod="openstack/mariadb-client" Mar 11 10:21:33 crc kubenswrapper[4840]: I0311 10:21:33.299706 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8rkq\" (UniqueName: \"kubernetes.io/projected/f090ad83-acb4-4160-8119-efc155a61dff-kube-api-access-j8rkq\") pod \"mariadb-client\" (UID: \"f090ad83-acb4-4160-8119-efc155a61dff\") " pod="openstack/mariadb-client" Mar 11 10:21:33 crc kubenswrapper[4840]: I0311 10:21:33.457909 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Mar 11 10:21:33 crc kubenswrapper[4840]: I0311 10:21:33.819263 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Mar 11 10:21:34 crc kubenswrapper[4840]: I0311 10:21:34.091843 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"f090ad83-acb4-4160-8119-efc155a61dff","Type":"ContainerStarted","Data":"1939ba27f7886191e1bf861f409c5ae745d1a3c6b500b996dcdca0782854f261"} Mar 11 10:21:40 crc kubenswrapper[4840]: I0311 10:21:40.141338 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"f090ad83-acb4-4160-8119-efc155a61dff","Type":"ContainerStarted","Data":"15058553525da6b8fcac6acdfad389086b1b3e1c37470cc845928afee6422860"} Mar 11 10:21:40 crc kubenswrapper[4840]: I0311 10:21:40.158997 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mariadb-client" podStartSLOduration=1.169570482 podStartE2EDuration="7.158970837s" podCreationTimestamp="2026-03-11 10:21:33 +0000 UTC" firstStartedPulling="2026-03-11 10:21:33.824243554 +0000 UTC m=+5092.489913369" lastFinishedPulling="2026-03-11 10:21:39.813643909 +0000 UTC m=+5098.479313724" observedRunningTime="2026-03-11 10:21:40.155391546 +0000 UTC m=+5098.821061371" watchObservedRunningTime="2026-03-11 10:21:40.158970837 +0000 UTC m=+5098.824640692" Mar 11 10:21:41 crc kubenswrapper[4840]: I0311 10:21:41.062935 4840 scope.go:117] "RemoveContainer" containerID="27425edf605a65072c6ef9c1e0b8cea49f613bc8e8eb94b258e15f23bc4a814b" Mar 11 10:21:41 crc kubenswrapper[4840]: E0311 10:21:41.063746 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 10:21:52 crc kubenswrapper[4840]: I0311 10:21:52.066541 4840 scope.go:117] "RemoveContainer" containerID="27425edf605a65072c6ef9c1e0b8cea49f613bc8e8eb94b258e15f23bc4a814b" Mar 11 10:21:52 crc kubenswrapper[4840]: E0311 10:21:52.067486 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 10:21:53 crc kubenswrapper[4840]: I0311 10:21:53.490429 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"] Mar 11 10:21:53 crc kubenswrapper[4840]: I0311 10:21:53.491176 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/mariadb-client" podUID="f090ad83-acb4-4160-8119-efc155a61dff" containerName="mariadb-client" containerID="cri-o://15058553525da6b8fcac6acdfad389086b1b3e1c37470cc845928afee6422860" gracePeriod=30 Mar 11 10:21:54 crc kubenswrapper[4840]: I0311 10:21:54.108894 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Mar 11 10:21:54 crc kubenswrapper[4840]: I0311 10:21:54.227525 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j8rkq\" (UniqueName: \"kubernetes.io/projected/f090ad83-acb4-4160-8119-efc155a61dff-kube-api-access-j8rkq\") pod \"f090ad83-acb4-4160-8119-efc155a61dff\" (UID: \"f090ad83-acb4-4160-8119-efc155a61dff\") " Mar 11 10:21:54 crc kubenswrapper[4840]: I0311 10:21:54.236884 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f090ad83-acb4-4160-8119-efc155a61dff-kube-api-access-j8rkq" (OuterVolumeSpecName: "kube-api-access-j8rkq") pod "f090ad83-acb4-4160-8119-efc155a61dff" (UID: "f090ad83-acb4-4160-8119-efc155a61dff"). InnerVolumeSpecName "kube-api-access-j8rkq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:21:54 crc kubenswrapper[4840]: I0311 10:21:54.279423 4840 generic.go:334] "Generic (PLEG): container finished" podID="f090ad83-acb4-4160-8119-efc155a61dff" containerID="15058553525da6b8fcac6acdfad389086b1b3e1c37470cc845928afee6422860" exitCode=143 Mar 11 10:21:54 crc kubenswrapper[4840]: I0311 10:21:54.279494 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"f090ad83-acb4-4160-8119-efc155a61dff","Type":"ContainerDied","Data":"15058553525da6b8fcac6acdfad389086b1b3e1c37470cc845928afee6422860"} Mar 11 10:21:54 crc kubenswrapper[4840]: I0311 10:21:54.279537 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"f090ad83-acb4-4160-8119-efc155a61dff","Type":"ContainerDied","Data":"1939ba27f7886191e1bf861f409c5ae745d1a3c6b500b996dcdca0782854f261"} Mar 11 10:21:54 crc kubenswrapper[4840]: I0311 10:21:54.279595 4840 scope.go:117] "RemoveContainer" containerID="15058553525da6b8fcac6acdfad389086b1b3e1c37470cc845928afee6422860" Mar 11 10:21:54 crc kubenswrapper[4840]: I0311 10:21:54.279595 4840 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Mar 11 10:21:54 crc kubenswrapper[4840]: I0311 10:21:54.309023 4840 scope.go:117] "RemoveContainer" containerID="15058553525da6b8fcac6acdfad389086b1b3e1c37470cc845928afee6422860" Mar 11 10:21:54 crc kubenswrapper[4840]: E0311 10:21:54.310088 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15058553525da6b8fcac6acdfad389086b1b3e1c37470cc845928afee6422860\": container with ID starting with 15058553525da6b8fcac6acdfad389086b1b3e1c37470cc845928afee6422860 not found: ID does not exist" containerID="15058553525da6b8fcac6acdfad389086b1b3e1c37470cc845928afee6422860" Mar 11 10:21:54 crc kubenswrapper[4840]: I0311 10:21:54.310160 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15058553525da6b8fcac6acdfad389086b1b3e1c37470cc845928afee6422860"} err="failed to get container status \"15058553525da6b8fcac6acdfad389086b1b3e1c37470cc845928afee6422860\": rpc error: code = NotFound desc = could not find container \"15058553525da6b8fcac6acdfad389086b1b3e1c37470cc845928afee6422860\": container with ID starting with 15058553525da6b8fcac6acdfad389086b1b3e1c37470cc845928afee6422860 not found: ID does not exist" Mar 11 10:21:54 crc kubenswrapper[4840]: I0311 10:21:54.329506 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j8rkq\" (UniqueName: \"kubernetes.io/projected/f090ad83-acb4-4160-8119-efc155a61dff-kube-api-access-j8rkq\") on node \"crc\" DevicePath \"\"" Mar 11 10:21:54 crc kubenswrapper[4840]: I0311 10:21:54.329524 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"] Mar 11 10:21:54 crc kubenswrapper[4840]: I0311 10:21:54.335388 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"] Mar 11 10:21:56 crc kubenswrapper[4840]: I0311 10:21:56.071596 4840 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f090ad83-acb4-4160-8119-efc155a61dff" path="/var/lib/kubelet/pods/f090ad83-acb4-4160-8119-efc155a61dff/volumes" Mar 11 10:22:00 crc kubenswrapper[4840]: I0311 10:22:00.152639 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29553742-dx2mq"] Mar 11 10:22:00 crc kubenswrapper[4840]: E0311 10:22:00.153233 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f090ad83-acb4-4160-8119-efc155a61dff" containerName="mariadb-client" Mar 11 10:22:00 crc kubenswrapper[4840]: I0311 10:22:00.153249 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="f090ad83-acb4-4160-8119-efc155a61dff" containerName="mariadb-client" Mar 11 10:22:00 crc kubenswrapper[4840]: I0311 10:22:00.153398 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="f090ad83-acb4-4160-8119-efc155a61dff" containerName="mariadb-client" Mar 11 10:22:00 crc kubenswrapper[4840]: I0311 10:22:00.153887 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553742-dx2mq" Mar 11 10:22:00 crc kubenswrapper[4840]: I0311 10:22:00.157198 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 11 10:22:00 crc kubenswrapper[4840]: I0311 10:22:00.157334 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-q6lwc" Mar 11 10:22:00 crc kubenswrapper[4840]: I0311 10:22:00.159162 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 11 10:22:00 crc kubenswrapper[4840]: I0311 10:22:00.168607 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553742-dx2mq"] Mar 11 10:22:00 crc kubenswrapper[4840]: I0311 10:22:00.345339 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scnls\" (UniqueName: \"kubernetes.io/projected/12b5624a-876b-4f92-8019-6d096072ffbf-kube-api-access-scnls\") pod \"auto-csr-approver-29553742-dx2mq\" (UID: \"12b5624a-876b-4f92-8019-6d096072ffbf\") " pod="openshift-infra/auto-csr-approver-29553742-dx2mq" Mar 11 10:22:00 crc kubenswrapper[4840]: I0311 10:22:00.448026 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-scnls\" (UniqueName: \"kubernetes.io/projected/12b5624a-876b-4f92-8019-6d096072ffbf-kube-api-access-scnls\") pod \"auto-csr-approver-29553742-dx2mq\" (UID: \"12b5624a-876b-4f92-8019-6d096072ffbf\") " pod="openshift-infra/auto-csr-approver-29553742-dx2mq" Mar 11 10:22:00 crc kubenswrapper[4840]: I0311 10:22:00.478977 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-scnls\" (UniqueName: \"kubernetes.io/projected/12b5624a-876b-4f92-8019-6d096072ffbf-kube-api-access-scnls\") pod \"auto-csr-approver-29553742-dx2mq\" (UID: \"12b5624a-876b-4f92-8019-6d096072ffbf\") " 
pod="openshift-infra/auto-csr-approver-29553742-dx2mq" Mar 11 10:22:00 crc kubenswrapper[4840]: I0311 10:22:00.776101 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553742-dx2mq" Mar 11 10:22:01 crc kubenswrapper[4840]: I0311 10:22:01.205146 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553742-dx2mq"] Mar 11 10:22:01 crc kubenswrapper[4840]: I0311 10:22:01.515922 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553742-dx2mq" event={"ID":"12b5624a-876b-4f92-8019-6d096072ffbf","Type":"ContainerStarted","Data":"4588f9e44bb0f2714c88f3d8de3e3f4c6b7c002a27151a353877f77a655a36ed"} Mar 11 10:22:03 crc kubenswrapper[4840]: I0311 10:22:03.530340 4840 generic.go:334] "Generic (PLEG): container finished" podID="12b5624a-876b-4f92-8019-6d096072ffbf" containerID="825406a9ebb76948474ff5d1f1c62bd07d50dc438d94522761491cd8a3abf229" exitCode=0 Mar 11 10:22:03 crc kubenswrapper[4840]: I0311 10:22:03.530435 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553742-dx2mq" event={"ID":"12b5624a-876b-4f92-8019-6d096072ffbf","Type":"ContainerDied","Data":"825406a9ebb76948474ff5d1f1c62bd07d50dc438d94522761491cd8a3abf229"} Mar 11 10:22:04 crc kubenswrapper[4840]: I0311 10:22:04.060930 4840 scope.go:117] "RemoveContainer" containerID="27425edf605a65072c6ef9c1e0b8cea49f613bc8e8eb94b258e15f23bc4a814b" Mar 11 10:22:04 crc kubenswrapper[4840]: E0311 10:22:04.061278 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" 
Mar 11 10:22:04 crc kubenswrapper[4840]: I0311 10:22:04.843956 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553742-dx2mq" Mar 11 10:22:05 crc kubenswrapper[4840]: I0311 10:22:05.026627 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-scnls\" (UniqueName: \"kubernetes.io/projected/12b5624a-876b-4f92-8019-6d096072ffbf-kube-api-access-scnls\") pod \"12b5624a-876b-4f92-8019-6d096072ffbf\" (UID: \"12b5624a-876b-4f92-8019-6d096072ffbf\") " Mar 11 10:22:05 crc kubenswrapper[4840]: I0311 10:22:05.032573 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12b5624a-876b-4f92-8019-6d096072ffbf-kube-api-access-scnls" (OuterVolumeSpecName: "kube-api-access-scnls") pod "12b5624a-876b-4f92-8019-6d096072ffbf" (UID: "12b5624a-876b-4f92-8019-6d096072ffbf"). InnerVolumeSpecName "kube-api-access-scnls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:22:05 crc kubenswrapper[4840]: I0311 10:22:05.128337 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-scnls\" (UniqueName: \"kubernetes.io/projected/12b5624a-876b-4f92-8019-6d096072ffbf-kube-api-access-scnls\") on node \"crc\" DevicePath \"\"" Mar 11 10:22:05 crc kubenswrapper[4840]: I0311 10:22:05.547353 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553742-dx2mq" event={"ID":"12b5624a-876b-4f92-8019-6d096072ffbf","Type":"ContainerDied","Data":"4588f9e44bb0f2714c88f3d8de3e3f4c6b7c002a27151a353877f77a655a36ed"} Mar 11 10:22:05 crc kubenswrapper[4840]: I0311 10:22:05.547396 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4588f9e44bb0f2714c88f3d8de3e3f4c6b7c002a27151a353877f77a655a36ed" Mar 11 10:22:05 crc kubenswrapper[4840]: I0311 10:22:05.547407 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553742-dx2mq" Mar 11 10:22:05 crc kubenswrapper[4840]: I0311 10:22:05.914527 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29553736-lwkg5"] Mar 11 10:22:05 crc kubenswrapper[4840]: I0311 10:22:05.919501 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29553736-lwkg5"] Mar 11 10:22:06 crc kubenswrapper[4840]: I0311 10:22:06.081767 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6590ca4e-9b8d-474d-8aef-d6b3a5bf54b7" path="/var/lib/kubelet/pods/6590ca4e-9b8d-474d-8aef-d6b3a5bf54b7/volumes" Mar 11 10:22:12 crc kubenswrapper[4840]: I0311 10:22:12.248735 4840 scope.go:117] "RemoveContainer" containerID="d642ec9544691ec500cebd88a599552d6407c816c56c9252a4ff6edcbe768fe4" Mar 11 10:22:17 crc kubenswrapper[4840]: I0311 10:22:17.060686 4840 scope.go:117] "RemoveContainer" containerID="27425edf605a65072c6ef9c1e0b8cea49f613bc8e8eb94b258e15f23bc4a814b" Mar 11 10:22:17 crc kubenswrapper[4840]: E0311 10:22:17.061829 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 10:22:28 crc kubenswrapper[4840]: I0311 10:22:28.060923 4840 scope.go:117] "RemoveContainer" containerID="27425edf605a65072c6ef9c1e0b8cea49f613bc8e8eb94b258e15f23bc4a814b" Mar 11 10:22:28 crc kubenswrapper[4840]: E0311 10:22:28.061943 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 10:22:41 crc kubenswrapper[4840]: I0311 10:22:41.060595 4840 scope.go:117] "RemoveContainer" containerID="27425edf605a65072c6ef9c1e0b8cea49f613bc8e8eb94b258e15f23bc4a814b" Mar 11 10:22:41 crc kubenswrapper[4840]: E0311 10:22:41.061626 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 10:22:52 crc kubenswrapper[4840]: I0311 10:22:52.068513 4840 scope.go:117] "RemoveContainer" containerID="27425edf605a65072c6ef9c1e0b8cea49f613bc8e8eb94b258e15f23bc4a814b" Mar 11 10:22:52 crc kubenswrapper[4840]: E0311 10:22:52.069685 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 10:23:06 crc kubenswrapper[4840]: I0311 10:23:06.060622 4840 scope.go:117] "RemoveContainer" containerID="27425edf605a65072c6ef9c1e0b8cea49f613bc8e8eb94b258e15f23bc4a814b" Mar 11 10:23:06 crc kubenswrapper[4840]: E0311 10:23:06.061568 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 10:23:17 crc kubenswrapper[4840]: I0311 10:23:17.060006 4840 scope.go:117] "RemoveContainer" containerID="27425edf605a65072c6ef9c1e0b8cea49f613bc8e8eb94b258e15f23bc4a814b" Mar 11 10:23:17 crc kubenswrapper[4840]: E0311 10:23:17.060843 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 10:23:32 crc kubenswrapper[4840]: I0311 10:23:32.065754 4840 scope.go:117] "RemoveContainer" containerID="27425edf605a65072c6ef9c1e0b8cea49f613bc8e8eb94b258e15f23bc4a814b" Mar 11 10:23:32 crc kubenswrapper[4840]: E0311 10:23:32.066895 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 10:23:45 crc kubenswrapper[4840]: I0311 10:23:45.060310 4840 scope.go:117] "RemoveContainer" containerID="27425edf605a65072c6ef9c1e0b8cea49f613bc8e8eb94b258e15f23bc4a814b" Mar 11 10:23:45 crc kubenswrapper[4840]: E0311 10:23:45.061579 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 10:23:56 crc kubenswrapper[4840]: I0311 10:23:56.061047 4840 scope.go:117] "RemoveContainer" containerID="27425edf605a65072c6ef9c1e0b8cea49f613bc8e8eb94b258e15f23bc4a814b" Mar 11 10:23:56 crc kubenswrapper[4840]: E0311 10:23:56.061940 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 10:24:00 crc kubenswrapper[4840]: I0311 10:24:00.145631 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29553744-lkgm2"] Mar 11 10:24:00 crc kubenswrapper[4840]: E0311 10:24:00.146336 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12b5624a-876b-4f92-8019-6d096072ffbf" containerName="oc" Mar 11 10:24:00 crc kubenswrapper[4840]: I0311 10:24:00.146353 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="12b5624a-876b-4f92-8019-6d096072ffbf" containerName="oc" Mar 11 10:24:00 crc kubenswrapper[4840]: I0311 10:24:00.146611 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="12b5624a-876b-4f92-8019-6d096072ffbf" containerName="oc" Mar 11 10:24:00 crc kubenswrapper[4840]: I0311 10:24:00.147413 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553744-lkgm2" Mar 11 10:24:00 crc kubenswrapper[4840]: I0311 10:24:00.150310 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-q6lwc" Mar 11 10:24:00 crc kubenswrapper[4840]: I0311 10:24:00.150316 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 11 10:24:00 crc kubenswrapper[4840]: I0311 10:24:00.153989 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 11 10:24:00 crc kubenswrapper[4840]: I0311 10:24:00.166045 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553744-lkgm2"] Mar 11 10:24:00 crc kubenswrapper[4840]: I0311 10:24:00.195437 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ft7q\" (UniqueName: \"kubernetes.io/projected/069f80b6-5230-4478-bef5-17a562bf0cad-kube-api-access-4ft7q\") pod \"auto-csr-approver-29553744-lkgm2\" (UID: \"069f80b6-5230-4478-bef5-17a562bf0cad\") " pod="openshift-infra/auto-csr-approver-29553744-lkgm2" Mar 11 10:24:00 crc kubenswrapper[4840]: I0311 10:24:00.297525 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ft7q\" (UniqueName: \"kubernetes.io/projected/069f80b6-5230-4478-bef5-17a562bf0cad-kube-api-access-4ft7q\") pod \"auto-csr-approver-29553744-lkgm2\" (UID: \"069f80b6-5230-4478-bef5-17a562bf0cad\") " pod="openshift-infra/auto-csr-approver-29553744-lkgm2" Mar 11 10:24:00 crc kubenswrapper[4840]: I0311 10:24:00.330369 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ft7q\" (UniqueName: \"kubernetes.io/projected/069f80b6-5230-4478-bef5-17a562bf0cad-kube-api-access-4ft7q\") pod \"auto-csr-approver-29553744-lkgm2\" (UID: \"069f80b6-5230-4478-bef5-17a562bf0cad\") " 
pod="openshift-infra/auto-csr-approver-29553744-lkgm2" Mar 11 10:24:00 crc kubenswrapper[4840]: I0311 10:24:00.467375 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553744-lkgm2" Mar 11 10:24:00 crc kubenswrapper[4840]: I0311 10:24:00.938281 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553744-lkgm2"] Mar 11 10:24:01 crc kubenswrapper[4840]: I0311 10:24:01.594243 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553744-lkgm2" event={"ID":"069f80b6-5230-4478-bef5-17a562bf0cad","Type":"ContainerStarted","Data":"cb6206fc284501f87cadf6ec5963b2b9cfa4283823b803065cef4ec955722976"} Mar 11 10:24:02 crc kubenswrapper[4840]: I0311 10:24:02.608282 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553744-lkgm2" event={"ID":"069f80b6-5230-4478-bef5-17a562bf0cad","Type":"ContainerStarted","Data":"457633fffec461d81f61871a7542a4ad6c93ced02061c56490b90986f8a1067b"} Mar 11 10:24:02 crc kubenswrapper[4840]: I0311 10:24:02.626846 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29553744-lkgm2" podStartSLOduration=1.284850174 podStartE2EDuration="2.626824336s" podCreationTimestamp="2026-03-11 10:24:00 +0000 UTC" firstStartedPulling="2026-03-11 10:24:00.944451581 +0000 UTC m=+5239.610121396" lastFinishedPulling="2026-03-11 10:24:02.286425743 +0000 UTC m=+5240.952095558" observedRunningTime="2026-03-11 10:24:02.624921447 +0000 UTC m=+5241.290591282" watchObservedRunningTime="2026-03-11 10:24:02.626824336 +0000 UTC m=+5241.292494151" Mar 11 10:24:03 crc kubenswrapper[4840]: I0311 10:24:03.624982 4840 generic.go:334] "Generic (PLEG): container finished" podID="069f80b6-5230-4478-bef5-17a562bf0cad" containerID="457633fffec461d81f61871a7542a4ad6c93ced02061c56490b90986f8a1067b" exitCode=0 Mar 11 10:24:03 crc 
kubenswrapper[4840]: I0311 10:24:03.625071 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553744-lkgm2" event={"ID":"069f80b6-5230-4478-bef5-17a562bf0cad","Type":"ContainerDied","Data":"457633fffec461d81f61871a7542a4ad6c93ced02061c56490b90986f8a1067b"} Mar 11 10:24:04 crc kubenswrapper[4840]: I0311 10:24:04.991409 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553744-lkgm2" Mar 11 10:24:05 crc kubenswrapper[4840]: I0311 10:24:05.081154 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4ft7q\" (UniqueName: \"kubernetes.io/projected/069f80b6-5230-4478-bef5-17a562bf0cad-kube-api-access-4ft7q\") pod \"069f80b6-5230-4478-bef5-17a562bf0cad\" (UID: \"069f80b6-5230-4478-bef5-17a562bf0cad\") " Mar 11 10:24:05 crc kubenswrapper[4840]: I0311 10:24:05.089141 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/069f80b6-5230-4478-bef5-17a562bf0cad-kube-api-access-4ft7q" (OuterVolumeSpecName: "kube-api-access-4ft7q") pod "069f80b6-5230-4478-bef5-17a562bf0cad" (UID: "069f80b6-5230-4478-bef5-17a562bf0cad"). InnerVolumeSpecName "kube-api-access-4ft7q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:24:05 crc kubenswrapper[4840]: I0311 10:24:05.162566 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29553738-d5xp4"] Mar 11 10:24:05 crc kubenswrapper[4840]: I0311 10:24:05.171441 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29553738-d5xp4"] Mar 11 10:24:05 crc kubenswrapper[4840]: I0311 10:24:05.183619 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4ft7q\" (UniqueName: \"kubernetes.io/projected/069f80b6-5230-4478-bef5-17a562bf0cad-kube-api-access-4ft7q\") on node \"crc\" DevicePath \"\"" Mar 11 10:24:05 crc kubenswrapper[4840]: I0311 10:24:05.644791 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553744-lkgm2" event={"ID":"069f80b6-5230-4478-bef5-17a562bf0cad","Type":"ContainerDied","Data":"cb6206fc284501f87cadf6ec5963b2b9cfa4283823b803065cef4ec955722976"} Mar 11 10:24:05 crc kubenswrapper[4840]: I0311 10:24:05.644829 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb6206fc284501f87cadf6ec5963b2b9cfa4283823b803065cef4ec955722976" Mar 11 10:24:05 crc kubenswrapper[4840]: I0311 10:24:05.644841 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553744-lkgm2" Mar 11 10:24:06 crc kubenswrapper[4840]: I0311 10:24:06.069342 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="970dbd8c-cbf5-4050-a910-97ea944101b8" path="/var/lib/kubelet/pods/970dbd8c-cbf5-4050-a910-97ea944101b8/volumes" Mar 11 10:24:07 crc kubenswrapper[4840]: I0311 10:24:07.060503 4840 scope.go:117] "RemoveContainer" containerID="27425edf605a65072c6ef9c1e0b8cea49f613bc8e8eb94b258e15f23bc4a814b" Mar 11 10:24:07 crc kubenswrapper[4840]: E0311 10:24:07.060967 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 10:24:12 crc kubenswrapper[4840]: I0311 10:24:12.373926 4840 scope.go:117] "RemoveContainer" containerID="3474ec326266b4f62ee3c7845d45d33b39e7789d0992660c14c9ae8b8b4744f1" Mar 11 10:24:12 crc kubenswrapper[4840]: I0311 10:24:12.432645 4840 scope.go:117] "RemoveContainer" containerID="5bdf42074f3582c3af387a3dcd78ec21ade2cd471791b224242b4bfadbaa9446" Mar 11 10:24:22 crc kubenswrapper[4840]: I0311 10:24:22.068932 4840 scope.go:117] "RemoveContainer" containerID="27425edf605a65072c6ef9c1e0b8cea49f613bc8e8eb94b258e15f23bc4a814b" Mar 11 10:24:22 crc kubenswrapper[4840]: E0311 10:24:22.071991 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" 
podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 10:24:35 crc kubenswrapper[4840]: I0311 10:24:35.060201 4840 scope.go:117] "RemoveContainer" containerID="27425edf605a65072c6ef9c1e0b8cea49f613bc8e8eb94b258e15f23bc4a814b" Mar 11 10:24:35 crc kubenswrapper[4840]: E0311 10:24:35.061000 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 10:24:48 crc kubenswrapper[4840]: I0311 10:24:48.060323 4840 scope.go:117] "RemoveContainer" containerID="27425edf605a65072c6ef9c1e0b8cea49f613bc8e8eb94b258e15f23bc4a814b" Mar 11 10:24:48 crc kubenswrapper[4840]: E0311 10:24:48.061794 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 10:25:03 crc kubenswrapper[4840]: I0311 10:25:03.060699 4840 scope.go:117] "RemoveContainer" containerID="27425edf605a65072c6ef9c1e0b8cea49f613bc8e8eb94b258e15f23bc4a814b" Mar 11 10:25:03 crc kubenswrapper[4840]: E0311 10:25:03.061990 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 10:25:16 crc kubenswrapper[4840]: I0311 10:25:16.060689 4840 scope.go:117] "RemoveContainer" containerID="27425edf605a65072c6ef9c1e0b8cea49f613bc8e8eb94b258e15f23bc4a814b" Mar 11 10:25:16 crc kubenswrapper[4840]: E0311 10:25:16.061787 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 10:25:28 crc kubenswrapper[4840]: I0311 10:25:28.589711 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-copy-data"] Mar 11 10:25:28 crc kubenswrapper[4840]: E0311 10:25:28.591130 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="069f80b6-5230-4478-bef5-17a562bf0cad" containerName="oc" Mar 11 10:25:28 crc kubenswrapper[4840]: I0311 10:25:28.591154 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="069f80b6-5230-4478-bef5-17a562bf0cad" containerName="oc" Mar 11 10:25:28 crc kubenswrapper[4840]: I0311 10:25:28.591403 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="069f80b6-5230-4478-bef5-17a562bf0cad" containerName="oc" Mar 11 10:25:28 crc kubenswrapper[4840]: I0311 10:25:28.592237 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-copy-data" Mar 11 10:25:28 crc kubenswrapper[4840]: I0311 10:25:28.594506 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-bxkdr" Mar 11 10:25:28 crc kubenswrapper[4840]: I0311 10:25:28.630718 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-copy-data"] Mar 11 10:25:28 crc kubenswrapper[4840]: I0311 10:25:28.731261 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xktsk\" (UniqueName: \"kubernetes.io/projected/455e7fd5-57d3-417d-888b-f5f16756b1a9-kube-api-access-xktsk\") pod \"mariadb-copy-data\" (UID: \"455e7fd5-57d3-417d-888b-f5f16756b1a9\") " pod="openstack/mariadb-copy-data" Mar 11 10:25:28 crc kubenswrapper[4840]: I0311 10:25:28.731330 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-1ced4a91-b52b-4284-b501-419cd48afc9c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1ced4a91-b52b-4284-b501-419cd48afc9c\") pod \"mariadb-copy-data\" (UID: \"455e7fd5-57d3-417d-888b-f5f16756b1a9\") " pod="openstack/mariadb-copy-data" Mar 11 10:25:28 crc kubenswrapper[4840]: I0311 10:25:28.832603 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xktsk\" (UniqueName: \"kubernetes.io/projected/455e7fd5-57d3-417d-888b-f5f16756b1a9-kube-api-access-xktsk\") pod \"mariadb-copy-data\" (UID: \"455e7fd5-57d3-417d-888b-f5f16756b1a9\") " pod="openstack/mariadb-copy-data" Mar 11 10:25:28 crc kubenswrapper[4840]: I0311 10:25:28.832745 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-1ced4a91-b52b-4284-b501-419cd48afc9c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1ced4a91-b52b-4284-b501-419cd48afc9c\") pod \"mariadb-copy-data\" (UID: \"455e7fd5-57d3-417d-888b-f5f16756b1a9\") " pod="openstack/mariadb-copy-data" 
Mar 11 10:25:28 crc kubenswrapper[4840]: I0311 10:25:28.836019 4840 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 11 10:25:28 crc kubenswrapper[4840]: I0311 10:25:28.836058 4840 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-1ced4a91-b52b-4284-b501-419cd48afc9c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1ced4a91-b52b-4284-b501-419cd48afc9c\") pod \"mariadb-copy-data\" (UID: \"455e7fd5-57d3-417d-888b-f5f16756b1a9\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/8448ac4a6da35a67d222d477ef7bb29d038cc4205bb2ca6c159c3f98be0f891b/globalmount\"" pod="openstack/mariadb-copy-data" Mar 11 10:25:28 crc kubenswrapper[4840]: I0311 10:25:28.854851 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xktsk\" (UniqueName: \"kubernetes.io/projected/455e7fd5-57d3-417d-888b-f5f16756b1a9-kube-api-access-xktsk\") pod \"mariadb-copy-data\" (UID: \"455e7fd5-57d3-417d-888b-f5f16756b1a9\") " pod="openstack/mariadb-copy-data" Mar 11 10:25:28 crc kubenswrapper[4840]: I0311 10:25:28.863377 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-1ced4a91-b52b-4284-b501-419cd48afc9c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1ced4a91-b52b-4284-b501-419cd48afc9c\") pod \"mariadb-copy-data\" (UID: \"455e7fd5-57d3-417d-888b-f5f16756b1a9\") " pod="openstack/mariadb-copy-data" Mar 11 10:25:28 crc kubenswrapper[4840]: I0311 10:25:28.956175 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-copy-data" Mar 11 10:25:29 crc kubenswrapper[4840]: I0311 10:25:29.497598 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-copy-data"] Mar 11 10:25:30 crc kubenswrapper[4840]: I0311 10:25:30.059866 4840 scope.go:117] "RemoveContainer" containerID="27425edf605a65072c6ef9c1e0b8cea49f613bc8e8eb94b258e15f23bc4a814b" Mar 11 10:25:30 crc kubenswrapper[4840]: I0311 10:25:30.392353 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-brtht" event={"ID":"8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d","Type":"ContainerStarted","Data":"8ecd5e8792daa32307a0f073aedb0fb34456f86b6adb94e207aa2934b241391a"} Mar 11 10:25:30 crc kubenswrapper[4840]: I0311 10:25:30.393964 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"455e7fd5-57d3-417d-888b-f5f16756b1a9","Type":"ContainerStarted","Data":"1bc98fac90629bb8f14df17476b5fa43b80f235e0ac1fd971989379032a423de"} Mar 11 10:25:30 crc kubenswrapper[4840]: I0311 10:25:30.394003 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"455e7fd5-57d3-417d-888b-f5f16756b1a9","Type":"ContainerStarted","Data":"5a2c0eba850c9a9bae7b8cb6bd0cec8fb42a128bcb9f4c09340c1518f803133e"} Mar 11 10:25:33 crc kubenswrapper[4840]: I0311 10:25:33.301668 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mariadb-copy-data" podStartSLOduration=6.301640901 podStartE2EDuration="6.301640901s" podCreationTimestamp="2026-03-11 10:25:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 10:25:30.431731166 +0000 UTC m=+5329.097400981" watchObservedRunningTime="2026-03-11 10:25:33.301640901 +0000 UTC m=+5331.967310756" Mar 11 10:25:33 crc kubenswrapper[4840]: I0311 10:25:33.314866 4840 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/mariadb-client"] Mar 11 10:25:33 crc kubenswrapper[4840]: I0311 10:25:33.316654 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Mar 11 10:25:33 crc kubenswrapper[4840]: I0311 10:25:33.327115 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Mar 11 10:25:33 crc kubenswrapper[4840]: I0311 10:25:33.408995 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tsv7v\" (UniqueName: \"kubernetes.io/projected/a9241679-2907-4d08-8f1e-9955df2e2de2-kube-api-access-tsv7v\") pod \"mariadb-client\" (UID: \"a9241679-2907-4d08-8f1e-9955df2e2de2\") " pod="openstack/mariadb-client" Mar 11 10:25:33 crc kubenswrapper[4840]: I0311 10:25:33.510683 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tsv7v\" (UniqueName: \"kubernetes.io/projected/a9241679-2907-4d08-8f1e-9955df2e2de2-kube-api-access-tsv7v\") pod \"mariadb-client\" (UID: \"a9241679-2907-4d08-8f1e-9955df2e2de2\") " pod="openstack/mariadb-client" Mar 11 10:25:33 crc kubenswrapper[4840]: I0311 10:25:33.536802 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tsv7v\" (UniqueName: \"kubernetes.io/projected/a9241679-2907-4d08-8f1e-9955df2e2de2-kube-api-access-tsv7v\") pod \"mariadb-client\" (UID: \"a9241679-2907-4d08-8f1e-9955df2e2de2\") " pod="openstack/mariadb-client" Mar 11 10:25:33 crc kubenswrapper[4840]: I0311 10:25:33.657128 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Mar 11 10:25:34 crc kubenswrapper[4840]: I0311 10:25:34.129845 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Mar 11 10:25:34 crc kubenswrapper[4840]: W0311 10:25:34.141529 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda9241679_2907_4d08_8f1e_9955df2e2de2.slice/crio-9891b96f614c3ad541db6ee3fed6dabec23a7b8cc26526e1a2d70a027aebdbb1 WatchSource:0}: Error finding container 9891b96f614c3ad541db6ee3fed6dabec23a7b8cc26526e1a2d70a027aebdbb1: Status 404 returned error can't find the container with id 9891b96f614c3ad541db6ee3fed6dabec23a7b8cc26526e1a2d70a027aebdbb1 Mar 11 10:25:34 crc kubenswrapper[4840]: I0311 10:25:34.438632 4840 generic.go:334] "Generic (PLEG): container finished" podID="a9241679-2907-4d08-8f1e-9955df2e2de2" containerID="4d56525e71a857634a9feb2ab8fc78cbba046d51b434ba943d67c7beb1c0098c" exitCode=0 Mar 11 10:25:34 crc kubenswrapper[4840]: I0311 10:25:34.438756 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"a9241679-2907-4d08-8f1e-9955df2e2de2","Type":"ContainerDied","Data":"4d56525e71a857634a9feb2ab8fc78cbba046d51b434ba943d67c7beb1c0098c"} Mar 11 10:25:34 crc kubenswrapper[4840]: I0311 10:25:34.439001 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"a9241679-2907-4d08-8f1e-9955df2e2de2","Type":"ContainerStarted","Data":"9891b96f614c3ad541db6ee3fed6dabec23a7b8cc26526e1a2d70a027aebdbb1"} Mar 11 10:25:35 crc kubenswrapper[4840]: I0311 10:25:35.825965 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Mar 11 10:25:35 crc kubenswrapper[4840]: I0311 10:25:35.850987 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client_a9241679-2907-4d08-8f1e-9955df2e2de2/mariadb-client/0.log" Mar 11 10:25:35 crc kubenswrapper[4840]: I0311 10:25:35.879684 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"] Mar 11 10:25:35 crc kubenswrapper[4840]: I0311 10:25:35.887583 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"] Mar 11 10:25:35 crc kubenswrapper[4840]: I0311 10:25:35.951887 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tsv7v\" (UniqueName: \"kubernetes.io/projected/a9241679-2907-4d08-8f1e-9955df2e2de2-kube-api-access-tsv7v\") pod \"a9241679-2907-4d08-8f1e-9955df2e2de2\" (UID: \"a9241679-2907-4d08-8f1e-9955df2e2de2\") " Mar 11 10:25:35 crc kubenswrapper[4840]: I0311 10:25:35.957724 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9241679-2907-4d08-8f1e-9955df2e2de2-kube-api-access-tsv7v" (OuterVolumeSpecName: "kube-api-access-tsv7v") pod "a9241679-2907-4d08-8f1e-9955df2e2de2" (UID: "a9241679-2907-4d08-8f1e-9955df2e2de2"). InnerVolumeSpecName "kube-api-access-tsv7v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:25:36 crc kubenswrapper[4840]: I0311 10:25:36.012100 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"] Mar 11 10:25:36 crc kubenswrapper[4840]: E0311 10:25:36.012516 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9241679-2907-4d08-8f1e-9955df2e2de2" containerName="mariadb-client" Mar 11 10:25:36 crc kubenswrapper[4840]: I0311 10:25:36.012533 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9241679-2907-4d08-8f1e-9955df2e2de2" containerName="mariadb-client" Mar 11 10:25:36 crc kubenswrapper[4840]: I0311 10:25:36.012839 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9241679-2907-4d08-8f1e-9955df2e2de2" containerName="mariadb-client" Mar 11 10:25:36 crc kubenswrapper[4840]: I0311 10:25:36.013549 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Mar 11 10:25:36 crc kubenswrapper[4840]: I0311 10:25:36.022238 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Mar 11 10:25:36 crc kubenswrapper[4840]: I0311 10:25:36.054244 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tsv7v\" (UniqueName: \"kubernetes.io/projected/a9241679-2907-4d08-8f1e-9955df2e2de2-kube-api-access-tsv7v\") on node \"crc\" DevicePath \"\"" Mar 11 10:25:36 crc kubenswrapper[4840]: I0311 10:25:36.070377 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9241679-2907-4d08-8f1e-9955df2e2de2" path="/var/lib/kubelet/pods/a9241679-2907-4d08-8f1e-9955df2e2de2/volumes" Mar 11 10:25:36 crc kubenswrapper[4840]: I0311 10:25:36.155459 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgkpr\" (UniqueName: \"kubernetes.io/projected/69b9d3e2-a78c-4e0d-8f58-dee36501d3b4-kube-api-access-lgkpr\") pod \"mariadb-client\" (UID: 
\"69b9d3e2-a78c-4e0d-8f58-dee36501d3b4\") " pod="openstack/mariadb-client" Mar 11 10:25:36 crc kubenswrapper[4840]: I0311 10:25:36.257098 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgkpr\" (UniqueName: \"kubernetes.io/projected/69b9d3e2-a78c-4e0d-8f58-dee36501d3b4-kube-api-access-lgkpr\") pod \"mariadb-client\" (UID: \"69b9d3e2-a78c-4e0d-8f58-dee36501d3b4\") " pod="openstack/mariadb-client" Mar 11 10:25:36 crc kubenswrapper[4840]: I0311 10:25:36.292841 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgkpr\" (UniqueName: \"kubernetes.io/projected/69b9d3e2-a78c-4e0d-8f58-dee36501d3b4-kube-api-access-lgkpr\") pod \"mariadb-client\" (UID: \"69b9d3e2-a78c-4e0d-8f58-dee36501d3b4\") " pod="openstack/mariadb-client" Mar 11 10:25:36 crc kubenswrapper[4840]: I0311 10:25:36.337300 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Mar 11 10:25:36 crc kubenswrapper[4840]: I0311 10:25:36.467142 4840 scope.go:117] "RemoveContainer" containerID="4d56525e71a857634a9feb2ab8fc78cbba046d51b434ba943d67c7beb1c0098c" Mar 11 10:25:36 crc kubenswrapper[4840]: I0311 10:25:36.467175 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Mar 11 10:25:36 crc kubenswrapper[4840]: I0311 10:25:36.807549 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Mar 11 10:25:36 crc kubenswrapper[4840]: W0311 10:25:36.814411 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod69b9d3e2_a78c_4e0d_8f58_dee36501d3b4.slice/crio-5162b3479ae7ca8a8770d7e724d0be51febdb278d490b9eaf13ece6945b6a5c5 WatchSource:0}: Error finding container 5162b3479ae7ca8a8770d7e724d0be51febdb278d490b9eaf13ece6945b6a5c5: Status 404 returned error can't find the container with id 5162b3479ae7ca8a8770d7e724d0be51febdb278d490b9eaf13ece6945b6a5c5 Mar 11 10:25:37 crc kubenswrapper[4840]: I0311 10:25:37.482506 4840 generic.go:334] "Generic (PLEG): container finished" podID="69b9d3e2-a78c-4e0d-8f58-dee36501d3b4" containerID="513b5527e4ba800c455c6d9f966ecf1ed8db6ae4b1831555b71535ec9a9bd388" exitCode=0 Mar 11 10:25:37 crc kubenswrapper[4840]: I0311 10:25:37.482754 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"69b9d3e2-a78c-4e0d-8f58-dee36501d3b4","Type":"ContainerDied","Data":"513b5527e4ba800c455c6d9f966ecf1ed8db6ae4b1831555b71535ec9a9bd388"} Mar 11 10:25:37 crc kubenswrapper[4840]: I0311 10:25:37.482877 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"69b9d3e2-a78c-4e0d-8f58-dee36501d3b4","Type":"ContainerStarted","Data":"5162b3479ae7ca8a8770d7e724d0be51febdb278d490b9eaf13ece6945b6a5c5"} Mar 11 10:25:38 crc kubenswrapper[4840]: I0311 10:25:38.913063 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Mar 11 10:25:38 crc kubenswrapper[4840]: I0311 10:25:38.939275 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client_69b9d3e2-a78c-4e0d-8f58-dee36501d3b4/mariadb-client/0.log" Mar 11 10:25:38 crc kubenswrapper[4840]: I0311 10:25:38.973853 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"] Mar 11 10:25:38 crc kubenswrapper[4840]: I0311 10:25:38.981248 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"] Mar 11 10:25:39 crc kubenswrapper[4840]: I0311 10:25:39.001210 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lgkpr\" (UniqueName: \"kubernetes.io/projected/69b9d3e2-a78c-4e0d-8f58-dee36501d3b4-kube-api-access-lgkpr\") pod \"69b9d3e2-a78c-4e0d-8f58-dee36501d3b4\" (UID: \"69b9d3e2-a78c-4e0d-8f58-dee36501d3b4\") " Mar 11 10:25:39 crc kubenswrapper[4840]: I0311 10:25:39.010745 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69b9d3e2-a78c-4e0d-8f58-dee36501d3b4-kube-api-access-lgkpr" (OuterVolumeSpecName: "kube-api-access-lgkpr") pod "69b9d3e2-a78c-4e0d-8f58-dee36501d3b4" (UID: "69b9d3e2-a78c-4e0d-8f58-dee36501d3b4"). InnerVolumeSpecName "kube-api-access-lgkpr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:25:39 crc kubenswrapper[4840]: I0311 10:25:39.104749 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lgkpr\" (UniqueName: \"kubernetes.io/projected/69b9d3e2-a78c-4e0d-8f58-dee36501d3b4-kube-api-access-lgkpr\") on node \"crc\" DevicePath \"\"" Mar 11 10:25:39 crc kubenswrapper[4840]: I0311 10:25:39.501691 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5162b3479ae7ca8a8770d7e724d0be51febdb278d490b9eaf13ece6945b6a5c5" Mar 11 10:25:39 crc kubenswrapper[4840]: I0311 10:25:39.501770 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Mar 11 10:25:40 crc kubenswrapper[4840]: I0311 10:25:40.079109 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69b9d3e2-a78c-4e0d-8f58-dee36501d3b4" path="/var/lib/kubelet/pods/69b9d3e2-a78c-4e0d-8f58-dee36501d3b4/volumes" Mar 11 10:26:00 crc kubenswrapper[4840]: I0311 10:26:00.154779 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29553746-hhwj8"] Mar 11 10:26:00 crc kubenswrapper[4840]: E0311 10:26:00.155769 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69b9d3e2-a78c-4e0d-8f58-dee36501d3b4" containerName="mariadb-client" Mar 11 10:26:00 crc kubenswrapper[4840]: I0311 10:26:00.155785 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="69b9d3e2-a78c-4e0d-8f58-dee36501d3b4" containerName="mariadb-client" Mar 11 10:26:00 crc kubenswrapper[4840]: I0311 10:26:00.155982 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="69b9d3e2-a78c-4e0d-8f58-dee36501d3b4" containerName="mariadb-client" Mar 11 10:26:00 crc kubenswrapper[4840]: I0311 10:26:00.157058 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553746-hhwj8" Mar 11 10:26:00 crc kubenswrapper[4840]: I0311 10:26:00.159398 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 11 10:26:00 crc kubenswrapper[4840]: I0311 10:26:00.159795 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 11 10:26:00 crc kubenswrapper[4840]: I0311 10:26:00.160515 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-q6lwc" Mar 11 10:26:00 crc kubenswrapper[4840]: I0311 10:26:00.170449 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553746-hhwj8"] Mar 11 10:26:00 crc kubenswrapper[4840]: I0311 10:26:00.270898 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldtmg\" (UniqueName: \"kubernetes.io/projected/939d8d3a-89e2-438a-8c26-f54960a3b0b7-kube-api-access-ldtmg\") pod \"auto-csr-approver-29553746-hhwj8\" (UID: \"939d8d3a-89e2-438a-8c26-f54960a3b0b7\") " pod="openshift-infra/auto-csr-approver-29553746-hhwj8" Mar 11 10:26:00 crc kubenswrapper[4840]: I0311 10:26:00.372487 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldtmg\" (UniqueName: \"kubernetes.io/projected/939d8d3a-89e2-438a-8c26-f54960a3b0b7-kube-api-access-ldtmg\") pod \"auto-csr-approver-29553746-hhwj8\" (UID: \"939d8d3a-89e2-438a-8c26-f54960a3b0b7\") " pod="openshift-infra/auto-csr-approver-29553746-hhwj8" Mar 11 10:26:00 crc kubenswrapper[4840]: I0311 10:26:00.391991 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldtmg\" (UniqueName: \"kubernetes.io/projected/939d8d3a-89e2-438a-8c26-f54960a3b0b7-kube-api-access-ldtmg\") pod \"auto-csr-approver-29553746-hhwj8\" (UID: \"939d8d3a-89e2-438a-8c26-f54960a3b0b7\") " 
pod="openshift-infra/auto-csr-approver-29553746-hhwj8" Mar 11 10:26:00 crc kubenswrapper[4840]: I0311 10:26:00.489596 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553746-hhwj8" Mar 11 10:26:00 crc kubenswrapper[4840]: I0311 10:26:00.710884 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553746-hhwj8"] Mar 11 10:26:01 crc kubenswrapper[4840]: I0311 10:26:01.706557 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553746-hhwj8" event={"ID":"939d8d3a-89e2-438a-8c26-f54960a3b0b7","Type":"ContainerStarted","Data":"3a5f619789b2d21a6f79e526397c659c83057f18ab30a818859f151ad7428f2e"} Mar 11 10:26:02 crc kubenswrapper[4840]: I0311 10:26:02.719216 4840 generic.go:334] "Generic (PLEG): container finished" podID="939d8d3a-89e2-438a-8c26-f54960a3b0b7" containerID="58c50eafa84688b091e12d0027ebb087a753a472bc3d686484f0f31baa67ec96" exitCode=0 Mar 11 10:26:02 crc kubenswrapper[4840]: I0311 10:26:02.719304 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553746-hhwj8" event={"ID":"939d8d3a-89e2-438a-8c26-f54960a3b0b7","Type":"ContainerDied","Data":"58c50eafa84688b091e12d0027ebb087a753a472bc3d686484f0f31baa67ec96"} Mar 11 10:26:04 crc kubenswrapper[4840]: I0311 10:26:04.065809 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553746-hhwj8" Mar 11 10:26:04 crc kubenswrapper[4840]: I0311 10:26:04.139127 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ldtmg\" (UniqueName: \"kubernetes.io/projected/939d8d3a-89e2-438a-8c26-f54960a3b0b7-kube-api-access-ldtmg\") pod \"939d8d3a-89e2-438a-8c26-f54960a3b0b7\" (UID: \"939d8d3a-89e2-438a-8c26-f54960a3b0b7\") " Mar 11 10:26:04 crc kubenswrapper[4840]: I0311 10:26:04.148512 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/939d8d3a-89e2-438a-8c26-f54960a3b0b7-kube-api-access-ldtmg" (OuterVolumeSpecName: "kube-api-access-ldtmg") pod "939d8d3a-89e2-438a-8c26-f54960a3b0b7" (UID: "939d8d3a-89e2-438a-8c26-f54960a3b0b7"). InnerVolumeSpecName "kube-api-access-ldtmg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:26:04 crc kubenswrapper[4840]: I0311 10:26:04.241525 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ldtmg\" (UniqueName: \"kubernetes.io/projected/939d8d3a-89e2-438a-8c26-f54960a3b0b7-kube-api-access-ldtmg\") on node \"crc\" DevicePath \"\"" Mar 11 10:26:04 crc kubenswrapper[4840]: I0311 10:26:04.747685 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553746-hhwj8" event={"ID":"939d8d3a-89e2-438a-8c26-f54960a3b0b7","Type":"ContainerDied","Data":"3a5f619789b2d21a6f79e526397c659c83057f18ab30a818859f151ad7428f2e"} Mar 11 10:26:04 crc kubenswrapper[4840]: I0311 10:26:04.748043 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3a5f619789b2d21a6f79e526397c659c83057f18ab30a818859f151ad7428f2e" Mar 11 10:26:04 crc kubenswrapper[4840]: I0311 10:26:04.747751 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553746-hhwj8" Mar 11 10:26:05 crc kubenswrapper[4840]: I0311 10:26:05.153938 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29553740-qdkls"] Mar 11 10:26:05 crc kubenswrapper[4840]: I0311 10:26:05.162334 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29553740-qdkls"] Mar 11 10:26:06 crc kubenswrapper[4840]: I0311 10:26:06.071379 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e524ba7e-5fb4-4c6a-964b-6e01712007b5" path="/var/lib/kubelet/pods/e524ba7e-5fb4-4c6a-964b-6e01712007b5/volumes" Mar 11 10:26:12 crc kubenswrapper[4840]: I0311 10:26:12.519635 4840 scope.go:117] "RemoveContainer" containerID="d92736068fbb09a50a29c7c36c51cbc8856dbd060f60ddc8177db8baa4b426fb" Mar 11 10:26:12 crc kubenswrapper[4840]: I0311 10:26:12.563065 4840 scope.go:117] "RemoveContainer" containerID="0193ec3b541f59d58245e25327d5673b7e89484bf5a497e35ea7e201c8ae588f" Mar 11 10:26:15 crc kubenswrapper[4840]: I0311 10:26:15.420037 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Mar 11 10:26:15 crc kubenswrapper[4840]: E0311 10:26:15.423313 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="939d8d3a-89e2-438a-8c26-f54960a3b0b7" containerName="oc" Mar 11 10:26:15 crc kubenswrapper[4840]: I0311 10:26:15.423357 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="939d8d3a-89e2-438a-8c26-f54960a3b0b7" containerName="oc" Mar 11 10:26:15 crc kubenswrapper[4840]: I0311 10:26:15.423653 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="939d8d3a-89e2-438a-8c26-f54960a3b0b7" containerName="oc" Mar 11 10:26:15 crc kubenswrapper[4840]: I0311 10:26:15.425986 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Mar 11 10:26:15 crc kubenswrapper[4840]: I0311 10:26:15.431237 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Mar 11 10:26:15 crc kubenswrapper[4840]: I0311 10:26:15.432105 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Mar 11 10:26:15 crc kubenswrapper[4840]: I0311 10:26:15.432393 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-wxfcw" Mar 11 10:26:15 crc kubenswrapper[4840]: I0311 10:26:15.433213 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Mar 11 10:26:15 crc kubenswrapper[4840]: I0311 10:26:15.433484 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Mar 11 10:26:15 crc kubenswrapper[4840]: I0311 10:26:15.434133 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Mar 11 10:26:15 crc kubenswrapper[4840]: I0311 10:26:15.458259 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-1"] Mar 11 10:26:15 crc kubenswrapper[4840]: I0311 10:26:15.460719 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-1" Mar 11 10:26:15 crc kubenswrapper[4840]: I0311 10:26:15.464981 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-2"] Mar 11 10:26:15 crc kubenswrapper[4840]: I0311 10:26:15.479861 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-2" Mar 11 10:26:15 crc kubenswrapper[4840]: I0311 10:26:15.489256 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-2"] Mar 11 10:26:15 crc kubenswrapper[4840]: I0311 10:26:15.494226 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-1"] Mar 11 10:26:15 crc kubenswrapper[4840]: I0311 10:26:15.532973 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/000957b4-d3c7-4076-a0d3-21a679bfe061-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"000957b4-d3c7-4076-a0d3-21a679bfe061\") " pod="openstack/ovsdbserver-nb-2" Mar 11 10:26:15 crc kubenswrapper[4840]: I0311 10:26:15.533014 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0fcde2cc-f714-46c5-a15d-dff200639b33\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0fcde2cc-f714-46c5-a15d-dff200639b33\") pod \"ovsdbserver-nb-1\" (UID: \"80b7e9d7-53ff-47ca-8b87-6d03cb1abe2a\") " pod="openstack/ovsdbserver-nb-1" Mar 11 10:26:15 crc kubenswrapper[4840]: I0311 10:26:15.533034 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/80b7e9d7-53ff-47ca-8b87-6d03cb1abe2a-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"80b7e9d7-53ff-47ca-8b87-6d03cb1abe2a\") " pod="openstack/ovsdbserver-nb-1" Mar 11 10:26:15 crc kubenswrapper[4840]: I0311 10:26:15.533053 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6h7f\" (UniqueName: \"kubernetes.io/projected/000957b4-d3c7-4076-a0d3-21a679bfe061-kube-api-access-j6h7f\") pod \"ovsdbserver-nb-2\" (UID: \"000957b4-d3c7-4076-a0d3-21a679bfe061\") " pod="openstack/ovsdbserver-nb-2" Mar 11 10:26:15 crc kubenswrapper[4840]: I0311 
10:26:15.533209 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/000957b4-d3c7-4076-a0d3-21a679bfe061-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"000957b4-d3c7-4076-a0d3-21a679bfe061\") " pod="openstack/ovsdbserver-nb-2" Mar 11 10:26:15 crc kubenswrapper[4840]: I0311 10:26:15.533266 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f0eda589-bbba-45c5-9a8e-013d9764b6bd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f0eda589-bbba-45c5-9a8e-013d9764b6bd\") pod \"ovsdbserver-nb-2\" (UID: \"000957b4-d3c7-4076-a0d3-21a679bfe061\") " pod="openstack/ovsdbserver-nb-2" Mar 11 10:26:15 crc kubenswrapper[4840]: I0311 10:26:15.533313 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-eba2a6be-9f90-44cd-b17e-be560c693aff\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-eba2a6be-9f90-44cd-b17e-be560c693aff\") pod \"ovsdbserver-nb-0\" (UID: \"efbbc77c-f1d4-4756-9e19-6f53b03c275b\") " pod="openstack/ovsdbserver-nb-0" Mar 11 10:26:15 crc kubenswrapper[4840]: I0311 10:26:15.533350 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/efbbc77c-f1d4-4756-9e19-6f53b03c275b-config\") pod \"ovsdbserver-nb-0\" (UID: \"efbbc77c-f1d4-4756-9e19-6f53b03c275b\") " pod="openstack/ovsdbserver-nb-0" Mar 11 10:26:15 crc kubenswrapper[4840]: I0311 10:26:15.533376 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7bbd\" (UniqueName: \"kubernetes.io/projected/80b7e9d7-53ff-47ca-8b87-6d03cb1abe2a-kube-api-access-p7bbd\") pod \"ovsdbserver-nb-1\" (UID: \"80b7e9d7-53ff-47ca-8b87-6d03cb1abe2a\") " pod="openstack/ovsdbserver-nb-1" Mar 11 10:26:15 crc 
kubenswrapper[4840]: I0311 10:26:15.533409 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6b8h\" (UniqueName: \"kubernetes.io/projected/efbbc77c-f1d4-4756-9e19-6f53b03c275b-kube-api-access-t6b8h\") pod \"ovsdbserver-nb-0\" (UID: \"efbbc77c-f1d4-4756-9e19-6f53b03c275b\") " pod="openstack/ovsdbserver-nb-0" Mar 11 10:26:15 crc kubenswrapper[4840]: I0311 10:26:15.533425 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/000957b4-d3c7-4076-a0d3-21a679bfe061-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"000957b4-d3c7-4076-a0d3-21a679bfe061\") " pod="openstack/ovsdbserver-nb-2" Mar 11 10:26:15 crc kubenswrapper[4840]: I0311 10:26:15.533478 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/80b7e9d7-53ff-47ca-8b87-6d03cb1abe2a-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"80b7e9d7-53ff-47ca-8b87-6d03cb1abe2a\") " pod="openstack/ovsdbserver-nb-1" Mar 11 10:26:15 crc kubenswrapper[4840]: I0311 10:26:15.533502 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efbbc77c-f1d4-4756-9e19-6f53b03c275b-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"efbbc77c-f1d4-4756-9e19-6f53b03c275b\") " pod="openstack/ovsdbserver-nb-0" Mar 11 10:26:15 crc kubenswrapper[4840]: I0311 10:26:15.533525 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/000957b4-d3c7-4076-a0d3-21a679bfe061-config\") pod \"ovsdbserver-nb-2\" (UID: \"000957b4-d3c7-4076-a0d3-21a679bfe061\") " pod="openstack/ovsdbserver-nb-2" Mar 11 10:26:15 crc kubenswrapper[4840]: I0311 10:26:15.533550 4840 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/80b7e9d7-53ff-47ca-8b87-6d03cb1abe2a-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"80b7e9d7-53ff-47ca-8b87-6d03cb1abe2a\") " pod="openstack/ovsdbserver-nb-1" Mar 11 10:26:15 crc kubenswrapper[4840]: I0311 10:26:15.533610 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80b7e9d7-53ff-47ca-8b87-6d03cb1abe2a-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"80b7e9d7-53ff-47ca-8b87-6d03cb1abe2a\") " pod="openstack/ovsdbserver-nb-1" Mar 11 10:26:15 crc kubenswrapper[4840]: I0311 10:26:15.533647 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/80b7e9d7-53ff-47ca-8b87-6d03cb1abe2a-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"80b7e9d7-53ff-47ca-8b87-6d03cb1abe2a\") " pod="openstack/ovsdbserver-nb-1" Mar 11 10:26:15 crc kubenswrapper[4840]: I0311 10:26:15.533670 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/efbbc77c-f1d4-4756-9e19-6f53b03c275b-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"efbbc77c-f1d4-4756-9e19-6f53b03c275b\") " pod="openstack/ovsdbserver-nb-0" Mar 11 10:26:15 crc kubenswrapper[4840]: I0311 10:26:15.533705 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/efbbc77c-f1d4-4756-9e19-6f53b03c275b-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"efbbc77c-f1d4-4756-9e19-6f53b03c275b\") " pod="openstack/ovsdbserver-nb-0" Mar 11 10:26:15 crc kubenswrapper[4840]: I0311 10:26:15.533733 4840 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80b7e9d7-53ff-47ca-8b87-6d03cb1abe2a-config\") pod \"ovsdbserver-nb-1\" (UID: \"80b7e9d7-53ff-47ca-8b87-6d03cb1abe2a\") " pod="openstack/ovsdbserver-nb-1" Mar 11 10:26:15 crc kubenswrapper[4840]: I0311 10:26:15.533760 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/efbbc77c-f1d4-4756-9e19-6f53b03c275b-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"efbbc77c-f1d4-4756-9e19-6f53b03c275b\") " pod="openstack/ovsdbserver-nb-0" Mar 11 10:26:15 crc kubenswrapper[4840]: I0311 10:26:15.533779 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/efbbc77c-f1d4-4756-9e19-6f53b03c275b-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"efbbc77c-f1d4-4756-9e19-6f53b03c275b\") " pod="openstack/ovsdbserver-nb-0" Mar 11 10:26:15 crc kubenswrapper[4840]: I0311 10:26:15.533800 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/000957b4-d3c7-4076-a0d3-21a679bfe061-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"000957b4-d3c7-4076-a0d3-21a679bfe061\") " pod="openstack/ovsdbserver-nb-2" Mar 11 10:26:15 crc kubenswrapper[4840]: I0311 10:26:15.533819 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/000957b4-d3c7-4076-a0d3-21a679bfe061-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"000957b4-d3c7-4076-a0d3-21a679bfe061\") " pod="openstack/ovsdbserver-nb-2" Mar 11 10:26:15 crc kubenswrapper[4840]: I0311 10:26:15.635599 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/000957b4-d3c7-4076-a0d3-21a679bfe061-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"000957b4-d3c7-4076-a0d3-21a679bfe061\") " pod="openstack/ovsdbserver-nb-2" Mar 11 10:26:15 crc kubenswrapper[4840]: I0311 10:26:15.635649 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/80b7e9d7-53ff-47ca-8b87-6d03cb1abe2a-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"80b7e9d7-53ff-47ca-8b87-6d03cb1abe2a\") " pod="openstack/ovsdbserver-nb-1" Mar 11 10:26:15 crc kubenswrapper[4840]: I0311 10:26:15.635675 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efbbc77c-f1d4-4756-9e19-6f53b03c275b-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"efbbc77c-f1d4-4756-9e19-6f53b03c275b\") " pod="openstack/ovsdbserver-nb-0" Mar 11 10:26:15 crc kubenswrapper[4840]: I0311 10:26:15.635697 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/000957b4-d3c7-4076-a0d3-21a679bfe061-config\") pod \"ovsdbserver-nb-2\" (UID: \"000957b4-d3c7-4076-a0d3-21a679bfe061\") " pod="openstack/ovsdbserver-nb-2" Mar 11 10:26:15 crc kubenswrapper[4840]: I0311 10:26:15.635718 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/80b7e9d7-53ff-47ca-8b87-6d03cb1abe2a-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"80b7e9d7-53ff-47ca-8b87-6d03cb1abe2a\") " pod="openstack/ovsdbserver-nb-1" Mar 11 10:26:15 crc kubenswrapper[4840]: I0311 10:26:15.635752 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80b7e9d7-53ff-47ca-8b87-6d03cb1abe2a-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"80b7e9d7-53ff-47ca-8b87-6d03cb1abe2a\") " 
pod="openstack/ovsdbserver-nb-1" Mar 11 10:26:15 crc kubenswrapper[4840]: I0311 10:26:15.635770 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/80b7e9d7-53ff-47ca-8b87-6d03cb1abe2a-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"80b7e9d7-53ff-47ca-8b87-6d03cb1abe2a\") " pod="openstack/ovsdbserver-nb-1" Mar 11 10:26:15 crc kubenswrapper[4840]: I0311 10:26:15.635788 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/efbbc77c-f1d4-4756-9e19-6f53b03c275b-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"efbbc77c-f1d4-4756-9e19-6f53b03c275b\") " pod="openstack/ovsdbserver-nb-0" Mar 11 10:26:15 crc kubenswrapper[4840]: I0311 10:26:15.635813 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/efbbc77c-f1d4-4756-9e19-6f53b03c275b-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"efbbc77c-f1d4-4756-9e19-6f53b03c275b\") " pod="openstack/ovsdbserver-nb-0" Mar 11 10:26:15 crc kubenswrapper[4840]: I0311 10:26:15.635836 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80b7e9d7-53ff-47ca-8b87-6d03cb1abe2a-config\") pod \"ovsdbserver-nb-1\" (UID: \"80b7e9d7-53ff-47ca-8b87-6d03cb1abe2a\") " pod="openstack/ovsdbserver-nb-1" Mar 11 10:26:15 crc kubenswrapper[4840]: I0311 10:26:15.635858 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/efbbc77c-f1d4-4756-9e19-6f53b03c275b-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"efbbc77c-f1d4-4756-9e19-6f53b03c275b\") " pod="openstack/ovsdbserver-nb-0" Mar 11 10:26:15 crc kubenswrapper[4840]: I0311 10:26:15.635877 4840 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/efbbc77c-f1d4-4756-9e19-6f53b03c275b-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"efbbc77c-f1d4-4756-9e19-6f53b03c275b\") " pod="openstack/ovsdbserver-nb-0" Mar 11 10:26:15 crc kubenswrapper[4840]: I0311 10:26:15.635894 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/000957b4-d3c7-4076-a0d3-21a679bfe061-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"000957b4-d3c7-4076-a0d3-21a679bfe061\") " pod="openstack/ovsdbserver-nb-2" Mar 11 10:26:15 crc kubenswrapper[4840]: I0311 10:26:15.635915 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/000957b4-d3c7-4076-a0d3-21a679bfe061-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"000957b4-d3c7-4076-a0d3-21a679bfe061\") " pod="openstack/ovsdbserver-nb-2" Mar 11 10:26:15 crc kubenswrapper[4840]: I0311 10:26:15.635929 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/000957b4-d3c7-4076-a0d3-21a679bfe061-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"000957b4-d3c7-4076-a0d3-21a679bfe061\") " pod="openstack/ovsdbserver-nb-2" Mar 11 10:26:15 crc kubenswrapper[4840]: I0311 10:26:15.635948 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-0fcde2cc-f714-46c5-a15d-dff200639b33\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0fcde2cc-f714-46c5-a15d-dff200639b33\") pod \"ovsdbserver-nb-1\" (UID: \"80b7e9d7-53ff-47ca-8b87-6d03cb1abe2a\") " pod="openstack/ovsdbserver-nb-1" Mar 11 10:26:15 crc kubenswrapper[4840]: I0311 10:26:15.635963 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/80b7e9d7-53ff-47ca-8b87-6d03cb1abe2a-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"80b7e9d7-53ff-47ca-8b87-6d03cb1abe2a\") " pod="openstack/ovsdbserver-nb-1" Mar 11 10:26:15 crc kubenswrapper[4840]: I0311 10:26:15.635981 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6h7f\" (UniqueName: \"kubernetes.io/projected/000957b4-d3c7-4076-a0d3-21a679bfe061-kube-api-access-j6h7f\") pod \"ovsdbserver-nb-2\" (UID: \"000957b4-d3c7-4076-a0d3-21a679bfe061\") " pod="openstack/ovsdbserver-nb-2" Mar 11 10:26:15 crc kubenswrapper[4840]: I0311 10:26:15.635995 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/000957b4-d3c7-4076-a0d3-21a679bfe061-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"000957b4-d3c7-4076-a0d3-21a679bfe061\") " pod="openstack/ovsdbserver-nb-2" Mar 11 10:26:15 crc kubenswrapper[4840]: I0311 10:26:15.636046 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-f0eda589-bbba-45c5-9a8e-013d9764b6bd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f0eda589-bbba-45c5-9a8e-013d9764b6bd\") pod \"ovsdbserver-nb-2\" (UID: \"000957b4-d3c7-4076-a0d3-21a679bfe061\") " pod="openstack/ovsdbserver-nb-2" Mar 11 10:26:15 crc kubenswrapper[4840]: I0311 10:26:15.636083 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-eba2a6be-9f90-44cd-b17e-be560c693aff\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-eba2a6be-9f90-44cd-b17e-be560c693aff\") pod \"ovsdbserver-nb-0\" (UID: \"efbbc77c-f1d4-4756-9e19-6f53b03c275b\") " pod="openstack/ovsdbserver-nb-0" Mar 11 10:26:15 crc kubenswrapper[4840]: I0311 10:26:15.636105 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/efbbc77c-f1d4-4756-9e19-6f53b03c275b-config\") pod 
\"ovsdbserver-nb-0\" (UID: \"efbbc77c-f1d4-4756-9e19-6f53b03c275b\") " pod="openstack/ovsdbserver-nb-0" Mar 11 10:26:15 crc kubenswrapper[4840]: I0311 10:26:15.636127 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7bbd\" (UniqueName: \"kubernetes.io/projected/80b7e9d7-53ff-47ca-8b87-6d03cb1abe2a-kube-api-access-p7bbd\") pod \"ovsdbserver-nb-1\" (UID: \"80b7e9d7-53ff-47ca-8b87-6d03cb1abe2a\") " pod="openstack/ovsdbserver-nb-1" Mar 11 10:26:15 crc kubenswrapper[4840]: I0311 10:26:15.636150 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6b8h\" (UniqueName: \"kubernetes.io/projected/efbbc77c-f1d4-4756-9e19-6f53b03c275b-kube-api-access-t6b8h\") pod \"ovsdbserver-nb-0\" (UID: \"efbbc77c-f1d4-4756-9e19-6f53b03c275b\") " pod="openstack/ovsdbserver-nb-0" Mar 11 10:26:15 crc kubenswrapper[4840]: I0311 10:26:15.636301 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/80b7e9d7-53ff-47ca-8b87-6d03cb1abe2a-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"80b7e9d7-53ff-47ca-8b87-6d03cb1abe2a\") " pod="openstack/ovsdbserver-nb-1" Mar 11 10:26:15 crc kubenswrapper[4840]: I0311 10:26:15.637709 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/000957b4-d3c7-4076-a0d3-21a679bfe061-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"000957b4-d3c7-4076-a0d3-21a679bfe061\") " pod="openstack/ovsdbserver-nb-2" Mar 11 10:26:15 crc kubenswrapper[4840]: I0311 10:26:15.638086 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/000957b4-d3c7-4076-a0d3-21a679bfe061-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"000957b4-d3c7-4076-a0d3-21a679bfe061\") " pod="openstack/ovsdbserver-nb-2" Mar 11 10:26:15 crc kubenswrapper[4840]: I0311 10:26:15.638873 4840 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/000957b4-d3c7-4076-a0d3-21a679bfe061-config\") pod \"ovsdbserver-nb-2\" (UID: \"000957b4-d3c7-4076-a0d3-21a679bfe061\") " pod="openstack/ovsdbserver-nb-2" Mar 11 10:26:15 crc kubenswrapper[4840]: I0311 10:26:15.639206 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80b7e9d7-53ff-47ca-8b87-6d03cb1abe2a-config\") pod \"ovsdbserver-nb-1\" (UID: \"80b7e9d7-53ff-47ca-8b87-6d03cb1abe2a\") " pod="openstack/ovsdbserver-nb-1" Mar 11 10:26:15 crc kubenswrapper[4840]: I0311 10:26:15.639210 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/efbbc77c-f1d4-4756-9e19-6f53b03c275b-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"efbbc77c-f1d4-4756-9e19-6f53b03c275b\") " pod="openstack/ovsdbserver-nb-0" Mar 11 10:26:15 crc kubenswrapper[4840]: I0311 10:26:15.642735 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/000957b4-d3c7-4076-a0d3-21a679bfe061-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"000957b4-d3c7-4076-a0d3-21a679bfe061\") " pod="openstack/ovsdbserver-nb-2" Mar 11 10:26:15 crc kubenswrapper[4840]: I0311 10:26:15.642784 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/efbbc77c-f1d4-4756-9e19-6f53b03c275b-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"efbbc77c-f1d4-4756-9e19-6f53b03c275b\") " pod="openstack/ovsdbserver-nb-0" Mar 11 10:26:15 crc kubenswrapper[4840]: I0311 10:26:15.642915 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/efbbc77c-f1d4-4756-9e19-6f53b03c275b-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: 
\"efbbc77c-f1d4-4756-9e19-6f53b03c275b\") " pod="openstack/ovsdbserver-nb-0" Mar 11 10:26:15 crc kubenswrapper[4840]: I0311 10:26:15.643207 4840 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 11 10:26:15 crc kubenswrapper[4840]: I0311 10:26:15.643231 4840 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-f0eda589-bbba-45c5-9a8e-013d9764b6bd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f0eda589-bbba-45c5-9a8e-013d9764b6bd\") pod \"ovsdbserver-nb-2\" (UID: \"000957b4-d3c7-4076-a0d3-21a679bfe061\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/78c7662a24d4298f83328f49bcc3293659d9b82d72fdec9d19337c5b3d21ea22/globalmount\"" pod="openstack/ovsdbserver-nb-2" Mar 11 10:26:15 crc kubenswrapper[4840]: I0311 10:26:15.643503 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efbbc77c-f1d4-4756-9e19-6f53b03c275b-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"efbbc77c-f1d4-4756-9e19-6f53b03c275b\") " pod="openstack/ovsdbserver-nb-0" Mar 11 10:26:15 crc kubenswrapper[4840]: I0311 10:26:15.643629 4840 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 11 10:26:15 crc kubenswrapper[4840]: I0311 10:26:15.643656 4840 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 11 10:26:15 crc kubenswrapper[4840]: I0311 10:26:15.643695 4840 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-eba2a6be-9f90-44cd-b17e-be560c693aff\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-eba2a6be-9f90-44cd-b17e-be560c693aff\") pod \"ovsdbserver-nb-0\" (UID: \"efbbc77c-f1d4-4756-9e19-6f53b03c275b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/2fcaa24b7588c2de0e912d7954bc3eb1810babe7e89a799ecce2ee828dbeb47e/globalmount\"" pod="openstack/ovsdbserver-nb-0" Mar 11 10:26:15 crc kubenswrapper[4840]: I0311 10:26:15.643655 4840 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-0fcde2cc-f714-46c5-a15d-dff200639b33\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0fcde2cc-f714-46c5-a15d-dff200639b33\") pod \"ovsdbserver-nb-1\" (UID: \"80b7e9d7-53ff-47ca-8b87-6d03cb1abe2a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/44e3c2fb6c5ee99d3fdceb32c853d6ca50224f200bc73c4a314bfb4eac9feb97/globalmount\"" pod="openstack/ovsdbserver-nb-1" Mar 11 10:26:15 crc kubenswrapper[4840]: I0311 10:26:15.644037 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/80b7e9d7-53ff-47ca-8b87-6d03cb1abe2a-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"80b7e9d7-53ff-47ca-8b87-6d03cb1abe2a\") " pod="openstack/ovsdbserver-nb-1" Mar 11 10:26:15 crc kubenswrapper[4840]: I0311 10:26:15.645078 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/80b7e9d7-53ff-47ca-8b87-6d03cb1abe2a-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"80b7e9d7-53ff-47ca-8b87-6d03cb1abe2a\") " pod="openstack/ovsdbserver-nb-1" Mar 11 10:26:15 crc kubenswrapper[4840]: I0311 10:26:15.646398 4840 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/efbbc77c-f1d4-4756-9e19-6f53b03c275b-config\") pod \"ovsdbserver-nb-0\" (UID: \"efbbc77c-f1d4-4756-9e19-6f53b03c275b\") " pod="openstack/ovsdbserver-nb-0" Mar 11 10:26:15 crc kubenswrapper[4840]: I0311 10:26:15.649031 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/000957b4-d3c7-4076-a0d3-21a679bfe061-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"000957b4-d3c7-4076-a0d3-21a679bfe061\") " pod="openstack/ovsdbserver-nb-2" Mar 11 10:26:15 crc kubenswrapper[4840]: I0311 10:26:15.650936 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80b7e9d7-53ff-47ca-8b87-6d03cb1abe2a-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"80b7e9d7-53ff-47ca-8b87-6d03cb1abe2a\") " pod="openstack/ovsdbserver-nb-1" Mar 11 10:26:15 crc kubenswrapper[4840]: I0311 10:26:15.651842 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6b8h\" (UniqueName: \"kubernetes.io/projected/efbbc77c-f1d4-4756-9e19-6f53b03c275b-kube-api-access-t6b8h\") pod \"ovsdbserver-nb-0\" (UID: \"efbbc77c-f1d4-4756-9e19-6f53b03c275b\") " pod="openstack/ovsdbserver-nb-0" Mar 11 10:26:15 crc kubenswrapper[4840]: I0311 10:26:15.653779 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/80b7e9d7-53ff-47ca-8b87-6d03cb1abe2a-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"80b7e9d7-53ff-47ca-8b87-6d03cb1abe2a\") " pod="openstack/ovsdbserver-nb-1" Mar 11 10:26:15 crc kubenswrapper[4840]: I0311 10:26:15.654338 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/efbbc77c-f1d4-4756-9e19-6f53b03c275b-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: 
\"efbbc77c-f1d4-4756-9e19-6f53b03c275b\") " pod="openstack/ovsdbserver-nb-0" Mar 11 10:26:15 crc kubenswrapper[4840]: I0311 10:26:15.655574 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7bbd\" (UniqueName: \"kubernetes.io/projected/80b7e9d7-53ff-47ca-8b87-6d03cb1abe2a-kube-api-access-p7bbd\") pod \"ovsdbserver-nb-1\" (UID: \"80b7e9d7-53ff-47ca-8b87-6d03cb1abe2a\") " pod="openstack/ovsdbserver-nb-1" Mar 11 10:26:15 crc kubenswrapper[4840]: I0311 10:26:15.657577 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6h7f\" (UniqueName: \"kubernetes.io/projected/000957b4-d3c7-4076-a0d3-21a679bfe061-kube-api-access-j6h7f\") pod \"ovsdbserver-nb-2\" (UID: \"000957b4-d3c7-4076-a0d3-21a679bfe061\") " pod="openstack/ovsdbserver-nb-2" Mar 11 10:26:15 crc kubenswrapper[4840]: I0311 10:26:15.675447 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-0fcde2cc-f714-46c5-a15d-dff200639b33\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0fcde2cc-f714-46c5-a15d-dff200639b33\") pod \"ovsdbserver-nb-1\" (UID: \"80b7e9d7-53ff-47ca-8b87-6d03cb1abe2a\") " pod="openstack/ovsdbserver-nb-1" Mar 11 10:26:15 crc kubenswrapper[4840]: I0311 10:26:15.675801 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/000957b4-d3c7-4076-a0d3-21a679bfe061-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"000957b4-d3c7-4076-a0d3-21a679bfe061\") " pod="openstack/ovsdbserver-nb-2" Mar 11 10:26:15 crc kubenswrapper[4840]: I0311 10:26:15.691496 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-eba2a6be-9f90-44cd-b17e-be560c693aff\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-eba2a6be-9f90-44cd-b17e-be560c693aff\") pod \"ovsdbserver-nb-0\" (UID: \"efbbc77c-f1d4-4756-9e19-6f53b03c275b\") " pod="openstack/ovsdbserver-nb-0" Mar 11 
10:26:15 crc kubenswrapper[4840]: I0311 10:26:15.691451 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-f0eda589-bbba-45c5-9a8e-013d9764b6bd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f0eda589-bbba-45c5-9a8e-013d9764b6bd\") pod \"ovsdbserver-nb-2\" (UID: \"000957b4-d3c7-4076-a0d3-21a679bfe061\") " pod="openstack/ovsdbserver-nb-2" Mar 11 10:26:15 crc kubenswrapper[4840]: I0311 10:26:15.789044 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Mar 11 10:26:15 crc kubenswrapper[4840]: I0311 10:26:15.800233 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-1" Mar 11 10:26:15 crc kubenswrapper[4840]: I0311 10:26:15.812160 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-2" Mar 11 10:26:16 crc kubenswrapper[4840]: I0311 10:26:16.182461 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-2"] Mar 11 10:26:16 crc kubenswrapper[4840]: W0311 10:26:16.186282 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod000957b4_d3c7_4076_a0d3_21a679bfe061.slice/crio-4d76688ae9fb888d6ea5780188e3368431c2c12b4dc16361b91653b6e558458f WatchSource:0}: Error finding container 4d76688ae9fb888d6ea5780188e3368431c2c12b4dc16361b91653b6e558458f: Status 404 returned error can't find the container with id 4d76688ae9fb888d6ea5780188e3368431c2c12b4dc16361b91653b6e558458f Mar 11 10:26:16 crc kubenswrapper[4840]: I0311 10:26:16.296031 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Mar 11 10:26:16 crc kubenswrapper[4840]: I0311 10:26:16.301257 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Mar 11 10:26:16 crc kubenswrapper[4840]: I0311 10:26:16.304630 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Mar 11 10:26:16 crc kubenswrapper[4840]: I0311 10:26:16.305551 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Mar 11 10:26:16 crc kubenswrapper[4840]: I0311 10:26:16.305694 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-kvwgc" Mar 11 10:26:16 crc kubenswrapper[4840]: I0311 10:26:16.305812 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Mar 11 10:26:16 crc kubenswrapper[4840]: I0311 10:26:16.311711 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-1"] Mar 11 10:26:16 crc kubenswrapper[4840]: I0311 10:26:16.313239 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-1" Mar 11 10:26:16 crc kubenswrapper[4840]: I0311 10:26:16.333829 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Mar 11 10:26:16 crc kubenswrapper[4840]: I0311 10:26:16.353244 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-2"] Mar 11 10:26:16 crc kubenswrapper[4840]: I0311 10:26:16.354830 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-2" Mar 11 10:26:16 crc kubenswrapper[4840]: I0311 10:26:16.356483 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/69010a76-692d-46c4-bf9b-db918d487b4c-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"69010a76-692d-46c4-bf9b-db918d487b4c\") " pod="openstack/ovsdbserver-sb-0" Mar 11 10:26:16 crc kubenswrapper[4840]: I0311 10:26:16.356512 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/69010a76-692d-46c4-bf9b-db918d487b4c-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"69010a76-692d-46c4-bf9b-db918d487b4c\") " pod="openstack/ovsdbserver-sb-0" Mar 11 10:26:16 crc kubenswrapper[4840]: I0311 10:26:16.356554 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-4e4f0e5b-e6d0-4559-add9-b23e72caf80d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4e4f0e5b-e6d0-4559-add9-b23e72caf80d\") pod \"ovsdbserver-sb-1\" (UID: \"c7ad5868-055d-4e77-bc60-a7a2bab42b89\") " pod="openstack/ovsdbserver-sb-1" Mar 11 10:26:16 crc kubenswrapper[4840]: I0311 10:26:16.356573 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x29f5\" (UniqueName: \"kubernetes.io/projected/69010a76-692d-46c4-bf9b-db918d487b4c-kube-api-access-x29f5\") pod \"ovsdbserver-sb-0\" (UID: \"69010a76-692d-46c4-bf9b-db918d487b4c\") " pod="openstack/ovsdbserver-sb-0" Mar 11 10:26:16 crc kubenswrapper[4840]: I0311 10:26:16.356595 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7ad5868-055d-4e77-bc60-a7a2bab42b89-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: 
\"c7ad5868-055d-4e77-bc60-a7a2bab42b89\") " pod="openstack/ovsdbserver-sb-1" Mar 11 10:26:16 crc kubenswrapper[4840]: I0311 10:26:16.356612 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4z4jj\" (UniqueName: \"kubernetes.io/projected/c7ad5868-055d-4e77-bc60-a7a2bab42b89-kube-api-access-4z4jj\") pod \"ovsdbserver-sb-1\" (UID: \"c7ad5868-055d-4e77-bc60-a7a2bab42b89\") " pod="openstack/ovsdbserver-sb-1" Mar 11 10:26:16 crc kubenswrapper[4840]: I0311 10:26:16.356628 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69010a76-692d-46c4-bf9b-db918d487b4c-config\") pod \"ovsdbserver-sb-0\" (UID: \"69010a76-692d-46c4-bf9b-db918d487b4c\") " pod="openstack/ovsdbserver-sb-0" Mar 11 10:26:16 crc kubenswrapper[4840]: I0311 10:26:16.356653 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c7ad5868-055d-4e77-bc60-a7a2bab42b89-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"c7ad5868-055d-4e77-bc60-a7a2bab42b89\") " pod="openstack/ovsdbserver-sb-1" Mar 11 10:26:16 crc kubenswrapper[4840]: I0311 10:26:16.356671 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/69010a76-692d-46c4-bf9b-db918d487b4c-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"69010a76-692d-46c4-bf9b-db918d487b4c\") " pod="openstack/ovsdbserver-sb-0" Mar 11 10:26:16 crc kubenswrapper[4840]: I0311 10:26:16.356686 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c7ad5868-055d-4e77-bc60-a7a2bab42b89-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"c7ad5868-055d-4e77-bc60-a7a2bab42b89\") " pod="openstack/ovsdbserver-sb-1" Mar 11 10:26:16 crc 
kubenswrapper[4840]: I0311 10:26:16.356703 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7ad5868-055d-4e77-bc60-a7a2bab42b89-config\") pod \"ovsdbserver-sb-1\" (UID: \"c7ad5868-055d-4e77-bc60-a7a2bab42b89\") " pod="openstack/ovsdbserver-sb-1" Mar 11 10:26:16 crc kubenswrapper[4840]: I0311 10:26:16.356730 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7ad5868-055d-4e77-bc60-a7a2bab42b89-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"c7ad5868-055d-4e77-bc60-a7a2bab42b89\") " pod="openstack/ovsdbserver-sb-1" Mar 11 10:26:16 crc kubenswrapper[4840]: I0311 10:26:16.356746 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/69010a76-692d-46c4-bf9b-db918d487b4c-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"69010a76-692d-46c4-bf9b-db918d487b4c\") " pod="openstack/ovsdbserver-sb-0" Mar 11 10:26:16 crc kubenswrapper[4840]: I0311 10:26:16.356764 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-3e048ad2-bab8-4142-bf00-fd016a727499\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3e048ad2-bab8-4142-bf00-fd016a727499\") pod \"ovsdbserver-sb-0\" (UID: \"69010a76-692d-46c4-bf9b-db918d487b4c\") " pod="openstack/ovsdbserver-sb-0" Mar 11 10:26:16 crc kubenswrapper[4840]: I0311 10:26:16.356779 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69010a76-692d-46c4-bf9b-db918d487b4c-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"69010a76-692d-46c4-bf9b-db918d487b4c\") " pod="openstack/ovsdbserver-sb-0" Mar 11 10:26:16 crc kubenswrapper[4840]: I0311 
10:26:16.356799 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7ad5868-055d-4e77-bc60-a7a2bab42b89-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"c7ad5868-055d-4e77-bc60-a7a2bab42b89\") " pod="openstack/ovsdbserver-sb-1" Mar 11 10:26:16 crc kubenswrapper[4840]: W0311 10:26:16.364705 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podefbbc77c_f1d4_4756_9e19_6f53b03c275b.slice/crio-83922156b6016ddb0b4901375a60e0daaca766513997fc2da905f2a9fdbe9f13 WatchSource:0}: Error finding container 83922156b6016ddb0b4901375a60e0daaca766513997fc2da905f2a9fdbe9f13: Status 404 returned error can't find the container with id 83922156b6016ddb0b4901375a60e0daaca766513997fc2da905f2a9fdbe9f13 Mar 11 10:26:16 crc kubenswrapper[4840]: I0311 10:26:16.374517 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-1"] Mar 11 10:26:16 crc kubenswrapper[4840]: I0311 10:26:16.384512 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-2"] Mar 11 10:26:16 crc kubenswrapper[4840]: I0311 10:26:16.400554 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Mar 11 10:26:16 crc kubenswrapper[4840]: I0311 10:26:16.458289 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/69010a76-692d-46c4-bf9b-db918d487b4c-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"69010a76-692d-46c4-bf9b-db918d487b4c\") " pod="openstack/ovsdbserver-sb-0" Mar 11 10:26:16 crc kubenswrapper[4840]: I0311 10:26:16.458326 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/69010a76-692d-46c4-bf9b-db918d487b4c-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" 
(UID: \"69010a76-692d-46c4-bf9b-db918d487b4c\") " pod="openstack/ovsdbserver-sb-0" Mar 11 10:26:16 crc kubenswrapper[4840]: I0311 10:26:16.458373 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/86a8de5d-3566-4a15-a21e-59379813ed0d-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"86a8de5d-3566-4a15-a21e-59379813ed0d\") " pod="openstack/ovsdbserver-sb-2" Mar 11 10:26:16 crc kubenswrapper[4840]: I0311 10:26:16.458398 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-4e4f0e5b-e6d0-4559-add9-b23e72caf80d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4e4f0e5b-e6d0-4559-add9-b23e72caf80d\") pod \"ovsdbserver-sb-1\" (UID: \"c7ad5868-055d-4e77-bc60-a7a2bab42b89\") " pod="openstack/ovsdbserver-sb-1" Mar 11 10:26:16 crc kubenswrapper[4840]: I0311 10:26:16.458421 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x29f5\" (UniqueName: \"kubernetes.io/projected/69010a76-692d-46c4-bf9b-db918d487b4c-kube-api-access-x29f5\") pod \"ovsdbserver-sb-0\" (UID: \"69010a76-692d-46c4-bf9b-db918d487b4c\") " pod="openstack/ovsdbserver-sb-0" Mar 11 10:26:16 crc kubenswrapper[4840]: I0311 10:26:16.458445 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7ad5868-055d-4e77-bc60-a7a2bab42b89-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"c7ad5868-055d-4e77-bc60-a7a2bab42b89\") " pod="openstack/ovsdbserver-sb-1" Mar 11 10:26:16 crc kubenswrapper[4840]: I0311 10:26:16.458460 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4z4jj\" (UniqueName: \"kubernetes.io/projected/c7ad5868-055d-4e77-bc60-a7a2bab42b89-kube-api-access-4z4jj\") pod \"ovsdbserver-sb-1\" (UID: \"c7ad5868-055d-4e77-bc60-a7a2bab42b89\") " 
pod="openstack/ovsdbserver-sb-1" Mar 11 10:26:16 crc kubenswrapper[4840]: I0311 10:26:16.458489 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69010a76-692d-46c4-bf9b-db918d487b4c-config\") pod \"ovsdbserver-sb-0\" (UID: \"69010a76-692d-46c4-bf9b-db918d487b4c\") " pod="openstack/ovsdbserver-sb-0" Mar 11 10:26:16 crc kubenswrapper[4840]: I0311 10:26:16.458514 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrjr9\" (UniqueName: \"kubernetes.io/projected/86a8de5d-3566-4a15-a21e-59379813ed0d-kube-api-access-jrjr9\") pod \"ovsdbserver-sb-2\" (UID: \"86a8de5d-3566-4a15-a21e-59379813ed0d\") " pod="openstack/ovsdbserver-sb-2" Mar 11 10:26:16 crc kubenswrapper[4840]: I0311 10:26:16.458540 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c7ad5868-055d-4e77-bc60-a7a2bab42b89-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"c7ad5868-055d-4e77-bc60-a7a2bab42b89\") " pod="openstack/ovsdbserver-sb-1" Mar 11 10:26:16 crc kubenswrapper[4840]: I0311 10:26:16.458558 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/69010a76-692d-46c4-bf9b-db918d487b4c-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"69010a76-692d-46c4-bf9b-db918d487b4c\") " pod="openstack/ovsdbserver-sb-0" Mar 11 10:26:16 crc kubenswrapper[4840]: I0311 10:26:16.458575 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c7ad5868-055d-4e77-bc60-a7a2bab42b89-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"c7ad5868-055d-4e77-bc60-a7a2bab42b89\") " pod="openstack/ovsdbserver-sb-1" Mar 11 10:26:16 crc kubenswrapper[4840]: I0311 10:26:16.458593 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/c7ad5868-055d-4e77-bc60-a7a2bab42b89-config\") pod \"ovsdbserver-sb-1\" (UID: \"c7ad5868-055d-4e77-bc60-a7a2bab42b89\") " pod="openstack/ovsdbserver-sb-1" Mar 11 10:26:16 crc kubenswrapper[4840]: I0311 10:26:16.458621 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7ad5868-055d-4e77-bc60-a7a2bab42b89-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"c7ad5868-055d-4e77-bc60-a7a2bab42b89\") " pod="openstack/ovsdbserver-sb-1" Mar 11 10:26:16 crc kubenswrapper[4840]: I0311 10:26:16.458635 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/69010a76-692d-46c4-bf9b-db918d487b4c-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"69010a76-692d-46c4-bf9b-db918d487b4c\") " pod="openstack/ovsdbserver-sb-0" Mar 11 10:26:16 crc kubenswrapper[4840]: I0311 10:26:16.458652 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/86a8de5d-3566-4a15-a21e-59379813ed0d-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"86a8de5d-3566-4a15-a21e-59379813ed0d\") " pod="openstack/ovsdbserver-sb-2" Mar 11 10:26:16 crc kubenswrapper[4840]: I0311 10:26:16.458673 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-3e048ad2-bab8-4142-bf00-fd016a727499\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3e048ad2-bab8-4142-bf00-fd016a727499\") pod \"ovsdbserver-sb-0\" (UID: \"69010a76-692d-46c4-bf9b-db918d487b4c\") " pod="openstack/ovsdbserver-sb-0" Mar 11 10:26:16 crc kubenswrapper[4840]: I0311 10:26:16.458713 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/69010a76-692d-46c4-bf9b-db918d487b4c-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"69010a76-692d-46c4-bf9b-db918d487b4c\") " pod="openstack/ovsdbserver-sb-0" Mar 11 10:26:16 crc kubenswrapper[4840]: I0311 10:26:16.458736 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/86a8de5d-3566-4a15-a21e-59379813ed0d-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"86a8de5d-3566-4a15-a21e-59379813ed0d\") " pod="openstack/ovsdbserver-sb-2" Mar 11 10:26:16 crc kubenswrapper[4840]: I0311 10:26:16.458759 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7ad5868-055d-4e77-bc60-a7a2bab42b89-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"c7ad5868-055d-4e77-bc60-a7a2bab42b89\") " pod="openstack/ovsdbserver-sb-1" Mar 11 10:26:16 crc kubenswrapper[4840]: I0311 10:26:16.458825 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86a8de5d-3566-4a15-a21e-59379813ed0d-config\") pod \"ovsdbserver-sb-2\" (UID: \"86a8de5d-3566-4a15-a21e-59379813ed0d\") " pod="openstack/ovsdbserver-sb-2" Mar 11 10:26:16 crc kubenswrapper[4840]: I0311 10:26:16.458841 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86a8de5d-3566-4a15-a21e-59379813ed0d-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"86a8de5d-3566-4a15-a21e-59379813ed0d\") " pod="openstack/ovsdbserver-sb-2" Mar 11 10:26:16 crc kubenswrapper[4840]: I0311 10:26:16.458888 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-bb8c19bc-73a9-414a-8c61-cc7b0fbce0a0\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bb8c19bc-73a9-414a-8c61-cc7b0fbce0a0\") pod \"ovsdbserver-sb-2\" (UID: \"86a8de5d-3566-4a15-a21e-59379813ed0d\") " pod="openstack/ovsdbserver-sb-2" Mar 11 10:26:16 crc kubenswrapper[4840]: I0311 10:26:16.458911 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/86a8de5d-3566-4a15-a21e-59379813ed0d-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"86a8de5d-3566-4a15-a21e-59379813ed0d\") " pod="openstack/ovsdbserver-sb-2" Mar 11 10:26:16 crc kubenswrapper[4840]: I0311 10:26:16.459683 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/69010a76-692d-46c4-bf9b-db918d487b4c-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"69010a76-692d-46c4-bf9b-db918d487b4c\") " pod="openstack/ovsdbserver-sb-0" Mar 11 10:26:16 crc kubenswrapper[4840]: I0311 10:26:16.460634 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7ad5868-055d-4e77-bc60-a7a2bab42b89-config\") pod \"ovsdbserver-sb-1\" (UID: \"c7ad5868-055d-4e77-bc60-a7a2bab42b89\") " pod="openstack/ovsdbserver-sb-1" Mar 11 10:26:16 crc kubenswrapper[4840]: I0311 10:26:16.460693 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-1"] Mar 11 10:26:16 crc kubenswrapper[4840]: I0311 10:26:16.461621 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69010a76-692d-46c4-bf9b-db918d487b4c-config\") pod \"ovsdbserver-sb-0\" (UID: \"69010a76-692d-46c4-bf9b-db918d487b4c\") " pod="openstack/ovsdbserver-sb-0" Mar 11 10:26:16 crc kubenswrapper[4840]: I0311 10:26:16.462002 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: 
\"kubernetes.io/empty-dir/69010a76-692d-46c4-bf9b-db918d487b4c-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"69010a76-692d-46c4-bf9b-db918d487b4c\") " pod="openstack/ovsdbserver-sb-0" Mar 11 10:26:16 crc kubenswrapper[4840]: I0311 10:26:16.462084 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c7ad5868-055d-4e77-bc60-a7a2bab42b89-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"c7ad5868-055d-4e77-bc60-a7a2bab42b89\") " pod="openstack/ovsdbserver-sb-1" Mar 11 10:26:16 crc kubenswrapper[4840]: I0311 10:26:16.462831 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c7ad5868-055d-4e77-bc60-a7a2bab42b89-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"c7ad5868-055d-4e77-bc60-a7a2bab42b89\") " pod="openstack/ovsdbserver-sb-1" Mar 11 10:26:16 crc kubenswrapper[4840]: I0311 10:26:16.463965 4840 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 11 10:26:16 crc kubenswrapper[4840]: I0311 10:26:16.463999 4840 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-4e4f0e5b-e6d0-4559-add9-b23e72caf80d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4e4f0e5b-e6d0-4559-add9-b23e72caf80d\") pod \"ovsdbserver-sb-1\" (UID: \"c7ad5868-055d-4e77-bc60-a7a2bab42b89\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/adf2b9f8191163ee2d9d2ac1b3f962be70303274f5cd8eb2405bdc8057c342e2/globalmount\"" pod="openstack/ovsdbserver-sb-1" Mar 11 10:26:16 crc kubenswrapper[4840]: I0311 10:26:16.466701 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69010a76-692d-46c4-bf9b-db918d487b4c-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"69010a76-692d-46c4-bf9b-db918d487b4c\") " pod="openstack/ovsdbserver-sb-0" Mar 11 10:26:16 crc kubenswrapper[4840]: I0311 10:26:16.467609 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/69010a76-692d-46c4-bf9b-db918d487b4c-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"69010a76-692d-46c4-bf9b-db918d487b4c\") " pod="openstack/ovsdbserver-sb-0" Mar 11 10:26:16 crc kubenswrapper[4840]: I0311 10:26:16.467800 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7ad5868-055d-4e77-bc60-a7a2bab42b89-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"c7ad5868-055d-4e77-bc60-a7a2bab42b89\") " pod="openstack/ovsdbserver-sb-1" Mar 11 10:26:16 crc kubenswrapper[4840]: I0311 10:26:16.468610 4840 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 11 10:26:16 crc kubenswrapper[4840]: I0311 10:26:16.468645 4840 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-3e048ad2-bab8-4142-bf00-fd016a727499\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3e048ad2-bab8-4142-bf00-fd016a727499\") pod \"ovsdbserver-sb-0\" (UID: \"69010a76-692d-46c4-bf9b-db918d487b4c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/bf35e9d81f96e0be293f3ce7d96d20a0f95533e9154e7a76392f312d1b31ab64/globalmount\"" pod="openstack/ovsdbserver-sb-0" Mar 11 10:26:16 crc kubenswrapper[4840]: I0311 10:26:16.472451 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7ad5868-055d-4e77-bc60-a7a2bab42b89-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"c7ad5868-055d-4e77-bc60-a7a2bab42b89\") " pod="openstack/ovsdbserver-sb-1" Mar 11 10:26:16 crc kubenswrapper[4840]: I0311 10:26:16.473451 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/69010a76-692d-46c4-bf9b-db918d487b4c-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"69010a76-692d-46c4-bf9b-db918d487b4c\") " pod="openstack/ovsdbserver-sb-0" Mar 11 10:26:16 crc kubenswrapper[4840]: I0311 10:26:16.473616 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7ad5868-055d-4e77-bc60-a7a2bab42b89-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"c7ad5868-055d-4e77-bc60-a7a2bab42b89\") " pod="openstack/ovsdbserver-sb-1" Mar 11 10:26:16 crc kubenswrapper[4840]: I0311 10:26:16.478971 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4z4jj\" (UniqueName: \"kubernetes.io/projected/c7ad5868-055d-4e77-bc60-a7a2bab42b89-kube-api-access-4z4jj\") pod \"ovsdbserver-sb-1\" (UID: 
\"c7ad5868-055d-4e77-bc60-a7a2bab42b89\") " pod="openstack/ovsdbserver-sb-1" Mar 11 10:26:16 crc kubenswrapper[4840]: I0311 10:26:16.479505 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x29f5\" (UniqueName: \"kubernetes.io/projected/69010a76-692d-46c4-bf9b-db918d487b4c-kube-api-access-x29f5\") pod \"ovsdbserver-sb-0\" (UID: \"69010a76-692d-46c4-bf9b-db918d487b4c\") " pod="openstack/ovsdbserver-sb-0" Mar 11 10:26:16 crc kubenswrapper[4840]: I0311 10:26:16.505402 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-4e4f0e5b-e6d0-4559-add9-b23e72caf80d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4e4f0e5b-e6d0-4559-add9-b23e72caf80d\") pod \"ovsdbserver-sb-1\" (UID: \"c7ad5868-055d-4e77-bc60-a7a2bab42b89\") " pod="openstack/ovsdbserver-sb-1" Mar 11 10:26:16 crc kubenswrapper[4840]: I0311 10:26:16.513375 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-3e048ad2-bab8-4142-bf00-fd016a727499\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3e048ad2-bab8-4142-bf00-fd016a727499\") pod \"ovsdbserver-sb-0\" (UID: \"69010a76-692d-46c4-bf9b-db918d487b4c\") " pod="openstack/ovsdbserver-sb-0" Mar 11 10:26:16 crc kubenswrapper[4840]: I0311 10:26:16.559983 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/86a8de5d-3566-4a15-a21e-59379813ed0d-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"86a8de5d-3566-4a15-a21e-59379813ed0d\") " pod="openstack/ovsdbserver-sb-2" Mar 11 10:26:16 crc kubenswrapper[4840]: I0311 10:26:16.560083 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrjr9\" (UniqueName: \"kubernetes.io/projected/86a8de5d-3566-4a15-a21e-59379813ed0d-kube-api-access-jrjr9\") pod \"ovsdbserver-sb-2\" (UID: \"86a8de5d-3566-4a15-a21e-59379813ed0d\") " pod="openstack/ovsdbserver-sb-2" Mar 11 
10:26:16 crc kubenswrapper[4840]: I0311 10:26:16.560141 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/86a8de5d-3566-4a15-a21e-59379813ed0d-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"86a8de5d-3566-4a15-a21e-59379813ed0d\") " pod="openstack/ovsdbserver-sb-2" Mar 11 10:26:16 crc kubenswrapper[4840]: I0311 10:26:16.560168 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/86a8de5d-3566-4a15-a21e-59379813ed0d-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"86a8de5d-3566-4a15-a21e-59379813ed0d\") " pod="openstack/ovsdbserver-sb-2" Mar 11 10:26:16 crc kubenswrapper[4840]: I0311 10:26:16.560201 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86a8de5d-3566-4a15-a21e-59379813ed0d-config\") pod \"ovsdbserver-sb-2\" (UID: \"86a8de5d-3566-4a15-a21e-59379813ed0d\") " pod="openstack/ovsdbserver-sb-2" Mar 11 10:26:16 crc kubenswrapper[4840]: I0311 10:26:16.560218 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86a8de5d-3566-4a15-a21e-59379813ed0d-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"86a8de5d-3566-4a15-a21e-59379813ed0d\") " pod="openstack/ovsdbserver-sb-2" Mar 11 10:26:16 crc kubenswrapper[4840]: I0311 10:26:16.560256 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-bb8c19bc-73a9-414a-8c61-cc7b0fbce0a0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bb8c19bc-73a9-414a-8c61-cc7b0fbce0a0\") pod \"ovsdbserver-sb-2\" (UID: \"86a8de5d-3566-4a15-a21e-59379813ed0d\") " pod="openstack/ovsdbserver-sb-2" Mar 11 10:26:16 crc kubenswrapper[4840]: I0311 10:26:16.560281 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/86a8de5d-3566-4a15-a21e-59379813ed0d-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"86a8de5d-3566-4a15-a21e-59379813ed0d\") " pod="openstack/ovsdbserver-sb-2" Mar 11 10:26:16 crc kubenswrapper[4840]: I0311 10:26:16.560438 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/86a8de5d-3566-4a15-a21e-59379813ed0d-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"86a8de5d-3566-4a15-a21e-59379813ed0d\") " pod="openstack/ovsdbserver-sb-2" Mar 11 10:26:16 crc kubenswrapper[4840]: I0311 10:26:16.561126 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86a8de5d-3566-4a15-a21e-59379813ed0d-config\") pod \"ovsdbserver-sb-2\" (UID: \"86a8de5d-3566-4a15-a21e-59379813ed0d\") " pod="openstack/ovsdbserver-sb-2" Mar 11 10:26:16 crc kubenswrapper[4840]: I0311 10:26:16.561628 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/86a8de5d-3566-4a15-a21e-59379813ed0d-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"86a8de5d-3566-4a15-a21e-59379813ed0d\") " pod="openstack/ovsdbserver-sb-2" Mar 11 10:26:16 crc kubenswrapper[4840]: I0311 10:26:16.562630 4840 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 11 10:26:16 crc kubenswrapper[4840]: I0311 10:26:16.562662 4840 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-bb8c19bc-73a9-414a-8c61-cc7b0fbce0a0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bb8c19bc-73a9-414a-8c61-cc7b0fbce0a0\") pod \"ovsdbserver-sb-2\" (UID: \"86a8de5d-3566-4a15-a21e-59379813ed0d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/dd831bb67c57c98166cb31e8796c1ec7cc935238aa94d5467aa86f9444d1bb10/globalmount\"" pod="openstack/ovsdbserver-sb-2" Mar 11 10:26:16 crc kubenswrapper[4840]: I0311 10:26:16.565089 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86a8de5d-3566-4a15-a21e-59379813ed0d-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"86a8de5d-3566-4a15-a21e-59379813ed0d\") " pod="openstack/ovsdbserver-sb-2" Mar 11 10:26:16 crc kubenswrapper[4840]: I0311 10:26:16.565265 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/86a8de5d-3566-4a15-a21e-59379813ed0d-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"86a8de5d-3566-4a15-a21e-59379813ed0d\") " pod="openstack/ovsdbserver-sb-2" Mar 11 10:26:16 crc kubenswrapper[4840]: I0311 10:26:16.571017 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/86a8de5d-3566-4a15-a21e-59379813ed0d-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"86a8de5d-3566-4a15-a21e-59379813ed0d\") " pod="openstack/ovsdbserver-sb-2" Mar 11 10:26:16 crc kubenswrapper[4840]: I0311 10:26:16.588557 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrjr9\" (UniqueName: \"kubernetes.io/projected/86a8de5d-3566-4a15-a21e-59379813ed0d-kube-api-access-jrjr9\") pod \"ovsdbserver-sb-2\" (UID: 
\"86a8de5d-3566-4a15-a21e-59379813ed0d\") " pod="openstack/ovsdbserver-sb-2" Mar 11 10:26:16 crc kubenswrapper[4840]: I0311 10:26:16.600993 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-bb8c19bc-73a9-414a-8c61-cc7b0fbce0a0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-bb8c19bc-73a9-414a-8c61-cc7b0fbce0a0\") pod \"ovsdbserver-sb-2\" (UID: \"86a8de5d-3566-4a15-a21e-59379813ed0d\") " pod="openstack/ovsdbserver-sb-2" Mar 11 10:26:16 crc kubenswrapper[4840]: I0311 10:26:16.622808 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Mar 11 10:26:16 crc kubenswrapper[4840]: I0311 10:26:16.640775 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-1" Mar 11 10:26:16 crc kubenswrapper[4840]: I0311 10:26:16.687275 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-2" Mar 11 10:26:16 crc kubenswrapper[4840]: I0311 10:26:16.850260 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"80b7e9d7-53ff-47ca-8b87-6d03cb1abe2a","Type":"ContainerStarted","Data":"bf25b41922d5abe3ffe4a472a015ccb97906d0f0aab2e44a44b6fd42b96f52dc"} Mar 11 10:26:16 crc kubenswrapper[4840]: I0311 10:26:16.850532 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"80b7e9d7-53ff-47ca-8b87-6d03cb1abe2a","Type":"ContainerStarted","Data":"e0020bcd7cdfc7ebf8d1ea80d46a0d34a4be4cba77a6d86ff7e9c72b08d9e490"} Mar 11 10:26:16 crc kubenswrapper[4840]: I0311 10:26:16.853554 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"efbbc77c-f1d4-4756-9e19-6f53b03c275b","Type":"ContainerStarted","Data":"700fa0e121423a34dd9b0c46dac8621924efbab1d7413f599ead5ebfbb0cda6b"} Mar 11 10:26:16 crc kubenswrapper[4840]: I0311 10:26:16.853583 4840 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"efbbc77c-f1d4-4756-9e19-6f53b03c275b","Type":"ContainerStarted","Data":"f57eb0f88ec6a64ecffd154cf4f65e189049f8fd1d38783b4cb67d78245d5f47"} Mar 11 10:26:16 crc kubenswrapper[4840]: I0311 10:26:16.853593 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"efbbc77c-f1d4-4756-9e19-6f53b03c275b","Type":"ContainerStarted","Data":"83922156b6016ddb0b4901375a60e0daaca766513997fc2da905f2a9fdbe9f13"} Mar 11 10:26:16 crc kubenswrapper[4840]: I0311 10:26:16.857730 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"000957b4-d3c7-4076-a0d3-21a679bfe061","Type":"ContainerStarted","Data":"14f102684588d3c92456f936e852d8144842ee7c536a3683256d5b17c4e1ab1f"} Mar 11 10:26:16 crc kubenswrapper[4840]: I0311 10:26:16.857763 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"000957b4-d3c7-4076-a0d3-21a679bfe061","Type":"ContainerStarted","Data":"ca20afae93f7247207dd3e751347198f95a9e1c2488c360e605dcb6a0bca9cb2"} Mar 11 10:26:16 crc kubenswrapper[4840]: I0311 10:26:16.857772 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"000957b4-d3c7-4076-a0d3-21a679bfe061","Type":"ContainerStarted","Data":"4d76688ae9fb888d6ea5780188e3368431c2c12b4dc16361b91653b6e558458f"} Mar 11 10:26:16 crc kubenswrapper[4840]: I0311 10:26:16.877355 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=2.877340345 podStartE2EDuration="2.877340345s" podCreationTimestamp="2026-03-11 10:26:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 10:26:16.873156968 +0000 UTC m=+5375.538826783" watchObservedRunningTime="2026-03-11 10:26:16.877340345 +0000 UTC m=+5375.543010160" Mar 11 10:26:16 crc 
kubenswrapper[4840]: I0311 10:26:16.921694 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-2" podStartSLOduration=2.9216747400000003 podStartE2EDuration="2.92167474s" podCreationTimestamp="2026-03-11 10:26:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 10:26:16.915896344 +0000 UTC m=+5375.581566159" watchObservedRunningTime="2026-03-11 10:26:16.92167474 +0000 UTC m=+5375.587344555" Mar 11 10:26:17 crc kubenswrapper[4840]: I0311 10:26:17.027968 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-1"] Mar 11 10:26:17 crc kubenswrapper[4840]: I0311 10:26:17.118318 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Mar 11 10:26:17 crc kubenswrapper[4840]: I0311 10:26:17.312180 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-2"] Mar 11 10:26:17 crc kubenswrapper[4840]: W0311 10:26:17.322690 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod86a8de5d_3566_4a15_a21e_59379813ed0d.slice/crio-5ab0ee9e9d9c5dbe230a5983b223fce726b90d56153bba54bd118d51e798b3fc WatchSource:0}: Error finding container 5ab0ee9e9d9c5dbe230a5983b223fce726b90d56153bba54bd118d51e798b3fc: Status 404 returned error can't find the container with id 5ab0ee9e9d9c5dbe230a5983b223fce726b90d56153bba54bd118d51e798b3fc Mar 11 10:26:17 crc kubenswrapper[4840]: I0311 10:26:17.870668 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"c7ad5868-055d-4e77-bc60-a7a2bab42b89","Type":"ContainerStarted","Data":"8cc147fc130268a27383ae8c12d5ddff5aa2f88e6fb3d64751e7729b204747ca"} Mar 11 10:26:17 crc kubenswrapper[4840]: I0311 10:26:17.871054 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" 
event={"ID":"c7ad5868-055d-4e77-bc60-a7a2bab42b89","Type":"ContainerStarted","Data":"d29af28369b802c4f9dfd29e9d472ab2fd8aab60485bd73b3bee9ff6a635a3d8"} Mar 11 10:26:17 crc kubenswrapper[4840]: I0311 10:26:17.871066 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"c7ad5868-055d-4e77-bc60-a7a2bab42b89","Type":"ContainerStarted","Data":"5ebb8b095161b1178635809c7d0654b01c541d27b8092b4c809f49e97111ab8e"} Mar 11 10:26:17 crc kubenswrapper[4840]: I0311 10:26:17.872511 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"80b7e9d7-53ff-47ca-8b87-6d03cb1abe2a","Type":"ContainerStarted","Data":"5bbb7f3c832523db5d5435a6a62be5841868c8a4fd80cc050a8172d69c7321fd"} Mar 11 10:26:17 crc kubenswrapper[4840]: I0311 10:26:17.875246 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"86a8de5d-3566-4a15-a21e-59379813ed0d","Type":"ContainerStarted","Data":"bd2e27418fe0e073b46b7f4b6ad444cae14ffd9ea3453f5aaf45133b3cb60fea"} Mar 11 10:26:17 crc kubenswrapper[4840]: I0311 10:26:17.875277 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"86a8de5d-3566-4a15-a21e-59379813ed0d","Type":"ContainerStarted","Data":"243d1896e2d9ad96f2d54e1b4cc223feb65f2a3f8a5c4c3da5a3c2942ee948e3"} Mar 11 10:26:17 crc kubenswrapper[4840]: I0311 10:26:17.875291 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"86a8de5d-3566-4a15-a21e-59379813ed0d","Type":"ContainerStarted","Data":"5ab0ee9e9d9c5dbe230a5983b223fce726b90d56153bba54bd118d51e798b3fc"} Mar 11 10:26:17 crc kubenswrapper[4840]: I0311 10:26:17.878820 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"69010a76-692d-46c4-bf9b-db918d487b4c","Type":"ContainerStarted","Data":"8143c8f9d5761cbd76ee0a651865e8a312cc50c479d9f277509bcdeac7bae7f1"} Mar 11 10:26:17 crc 
kubenswrapper[4840]: I0311 10:26:17.878851 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"69010a76-692d-46c4-bf9b-db918d487b4c","Type":"ContainerStarted","Data":"ea2b285699516976b7bbd8e2f37caab55862e853a1df6b4d3fb70583462a6e38"} Mar 11 10:26:17 crc kubenswrapper[4840]: I0311 10:26:17.878865 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"69010a76-692d-46c4-bf9b-db918d487b4c","Type":"ContainerStarted","Data":"89fb6439b9bd6bff6ddd16ecd6ab6996b3c5853152955def41dacef400290d93"} Mar 11 10:26:17 crc kubenswrapper[4840]: I0311 10:26:17.902442 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-1" podStartSLOduration=2.90240763 podStartE2EDuration="2.90240763s" podCreationTimestamp="2026-03-11 10:26:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 10:26:17.896660085 +0000 UTC m=+5376.562329910" watchObservedRunningTime="2026-03-11 10:26:17.90240763 +0000 UTC m=+5376.568077465" Mar 11 10:26:17 crc kubenswrapper[4840]: I0311 10:26:17.926121 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-1" podStartSLOduration=3.9261012920000002 podStartE2EDuration="3.926101292s" podCreationTimestamp="2026-03-11 10:26:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 10:26:17.923023214 +0000 UTC m=+5376.588693069" watchObservedRunningTime="2026-03-11 10:26:17.926101292 +0000 UTC m=+5376.591771107" Mar 11 10:26:17 crc kubenswrapper[4840]: I0311 10:26:17.944038 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=2.944017827 podStartE2EDuration="2.944017827s" podCreationTimestamp="2026-03-11 10:26:15 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 10:26:17.942498398 +0000 UTC m=+5376.608168223" watchObservedRunningTime="2026-03-11 10:26:17.944017827 +0000 UTC m=+5376.609687642" Mar 11 10:26:17 crc kubenswrapper[4840]: I0311 10:26:17.974375 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-2" podStartSLOduration=2.974352597 podStartE2EDuration="2.974352597s" podCreationTimestamp="2026-03-11 10:26:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 10:26:17.968389466 +0000 UTC m=+5376.634059301" watchObservedRunningTime="2026-03-11 10:26:17.974352597 +0000 UTC m=+5376.640022412" Mar 11 10:26:18 crc kubenswrapper[4840]: I0311 10:26:18.789454 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Mar 11 10:26:18 crc kubenswrapper[4840]: I0311 10:26:18.801687 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-1" Mar 11 10:26:18 crc kubenswrapper[4840]: I0311 10:26:18.813963 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-2" Mar 11 10:26:19 crc kubenswrapper[4840]: I0311 10:26:19.623592 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Mar 11 10:26:19 crc kubenswrapper[4840]: I0311 10:26:19.641453 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-1" Mar 11 10:26:19 crc kubenswrapper[4840]: I0311 10:26:19.688316 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-2" Mar 11 10:26:20 crc kubenswrapper[4840]: I0311 10:26:20.790113 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/ovsdbserver-nb-0" Mar 11 10:26:20 crc kubenswrapper[4840]: I0311 10:26:20.801668 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-1" Mar 11 10:26:20 crc kubenswrapper[4840]: I0311 10:26:20.814622 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-2" Mar 11 10:26:21 crc kubenswrapper[4840]: I0311 10:26:21.623127 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Mar 11 10:26:21 crc kubenswrapper[4840]: I0311 10:26:21.641799 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-1" Mar 11 10:26:21 crc kubenswrapper[4840]: I0311 10:26:21.687990 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-2" Mar 11 10:26:21 crc kubenswrapper[4840]: I0311 10:26:21.847160 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Mar 11 10:26:21 crc kubenswrapper[4840]: I0311 10:26:21.858761 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-2" Mar 11 10:26:21 crc kubenswrapper[4840]: I0311 10:26:21.867525 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-1" Mar 11 10:26:21 crc kubenswrapper[4840]: I0311 10:26:21.921912 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Mar 11 10:26:21 crc kubenswrapper[4840]: I0311 10:26:21.938524 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-2" Mar 11 10:26:21 crc kubenswrapper[4840]: I0311 10:26:21.979584 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-1" Mar 11 10:26:22 crc kubenswrapper[4840]: I0311 10:26:22.246295 4840 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/dnsmasq-dns-65f49d9b4c-brmwz"] Mar 11 10:26:22 crc kubenswrapper[4840]: I0311 10:26:22.249301 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-65f49d9b4c-brmwz" Mar 11 10:26:22 crc kubenswrapper[4840]: I0311 10:26:22.256697 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Mar 11 10:26:22 crc kubenswrapper[4840]: I0311 10:26:22.285800 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-65f49d9b4c-brmwz"] Mar 11 10:26:22 crc kubenswrapper[4840]: I0311 10:26:22.378426 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97ab0530-6013-4fad-ba49-408f87956dfe-config\") pod \"dnsmasq-dns-65f49d9b4c-brmwz\" (UID: \"97ab0530-6013-4fad-ba49-408f87956dfe\") " pod="openstack/dnsmasq-dns-65f49d9b4c-brmwz" Mar 11 10:26:22 crc kubenswrapper[4840]: I0311 10:26:22.378618 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkb6x\" (UniqueName: \"kubernetes.io/projected/97ab0530-6013-4fad-ba49-408f87956dfe-kube-api-access-dkb6x\") pod \"dnsmasq-dns-65f49d9b4c-brmwz\" (UID: \"97ab0530-6013-4fad-ba49-408f87956dfe\") " pod="openstack/dnsmasq-dns-65f49d9b4c-brmwz" Mar 11 10:26:22 crc kubenswrapper[4840]: I0311 10:26:22.378651 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/97ab0530-6013-4fad-ba49-408f87956dfe-dns-svc\") pod \"dnsmasq-dns-65f49d9b4c-brmwz\" (UID: \"97ab0530-6013-4fad-ba49-408f87956dfe\") " pod="openstack/dnsmasq-dns-65f49d9b4c-brmwz" Mar 11 10:26:22 crc kubenswrapper[4840]: I0311 10:26:22.378673 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/97ab0530-6013-4fad-ba49-408f87956dfe-ovsdbserver-nb\") pod \"dnsmasq-dns-65f49d9b4c-brmwz\" (UID: \"97ab0530-6013-4fad-ba49-408f87956dfe\") " pod="openstack/dnsmasq-dns-65f49d9b4c-brmwz" Mar 11 10:26:22 crc kubenswrapper[4840]: I0311 10:26:22.479572 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkb6x\" (UniqueName: \"kubernetes.io/projected/97ab0530-6013-4fad-ba49-408f87956dfe-kube-api-access-dkb6x\") pod \"dnsmasq-dns-65f49d9b4c-brmwz\" (UID: \"97ab0530-6013-4fad-ba49-408f87956dfe\") " pod="openstack/dnsmasq-dns-65f49d9b4c-brmwz" Mar 11 10:26:22 crc kubenswrapper[4840]: I0311 10:26:22.479642 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/97ab0530-6013-4fad-ba49-408f87956dfe-dns-svc\") pod \"dnsmasq-dns-65f49d9b4c-brmwz\" (UID: \"97ab0530-6013-4fad-ba49-408f87956dfe\") " pod="openstack/dnsmasq-dns-65f49d9b4c-brmwz" Mar 11 10:26:22 crc kubenswrapper[4840]: I0311 10:26:22.479661 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/97ab0530-6013-4fad-ba49-408f87956dfe-ovsdbserver-nb\") pod \"dnsmasq-dns-65f49d9b4c-brmwz\" (UID: \"97ab0530-6013-4fad-ba49-408f87956dfe\") " pod="openstack/dnsmasq-dns-65f49d9b4c-brmwz" Mar 11 10:26:22 crc kubenswrapper[4840]: I0311 10:26:22.479698 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97ab0530-6013-4fad-ba49-408f87956dfe-config\") pod \"dnsmasq-dns-65f49d9b4c-brmwz\" (UID: \"97ab0530-6013-4fad-ba49-408f87956dfe\") " pod="openstack/dnsmasq-dns-65f49d9b4c-brmwz" Mar 11 10:26:22 crc kubenswrapper[4840]: I0311 10:26:22.480637 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97ab0530-6013-4fad-ba49-408f87956dfe-config\") pod 
\"dnsmasq-dns-65f49d9b4c-brmwz\" (UID: \"97ab0530-6013-4fad-ba49-408f87956dfe\") " pod="openstack/dnsmasq-dns-65f49d9b4c-brmwz" Mar 11 10:26:22 crc kubenswrapper[4840]: I0311 10:26:22.481369 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/97ab0530-6013-4fad-ba49-408f87956dfe-dns-svc\") pod \"dnsmasq-dns-65f49d9b4c-brmwz\" (UID: \"97ab0530-6013-4fad-ba49-408f87956dfe\") " pod="openstack/dnsmasq-dns-65f49d9b4c-brmwz" Mar 11 10:26:22 crc kubenswrapper[4840]: I0311 10:26:22.481949 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/97ab0530-6013-4fad-ba49-408f87956dfe-ovsdbserver-nb\") pod \"dnsmasq-dns-65f49d9b4c-brmwz\" (UID: \"97ab0530-6013-4fad-ba49-408f87956dfe\") " pod="openstack/dnsmasq-dns-65f49d9b4c-brmwz" Mar 11 10:26:22 crc kubenswrapper[4840]: I0311 10:26:22.499842 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkb6x\" (UniqueName: \"kubernetes.io/projected/97ab0530-6013-4fad-ba49-408f87956dfe-kube-api-access-dkb6x\") pod \"dnsmasq-dns-65f49d9b4c-brmwz\" (UID: \"97ab0530-6013-4fad-ba49-408f87956dfe\") " pod="openstack/dnsmasq-dns-65f49d9b4c-brmwz" Mar 11 10:26:22 crc kubenswrapper[4840]: I0311 10:26:22.566189 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-65f49d9b4c-brmwz" Mar 11 10:26:22 crc kubenswrapper[4840]: I0311 10:26:22.670427 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Mar 11 10:26:22 crc kubenswrapper[4840]: I0311 10:26:22.696328 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-1" Mar 11 10:26:22 crc kubenswrapper[4840]: I0311 10:26:22.737344 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Mar 11 10:26:22 crc kubenswrapper[4840]: I0311 10:26:22.739408 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-2" Mar 11 10:26:22 crc kubenswrapper[4840]: I0311 10:26:22.756688 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-1" Mar 11 10:26:22 crc kubenswrapper[4840]: I0311 10:26:22.787176 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-2" Mar 11 10:26:22 crc kubenswrapper[4840]: I0311 10:26:22.931570 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-65f49d9b4c-brmwz"] Mar 11 10:26:22 crc kubenswrapper[4840]: I0311 10:26:22.972647 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-cf4cccc77-szj74"] Mar 11 10:26:22 crc kubenswrapper[4840]: I0311 10:26:22.981079 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cf4cccc77-szj74" Mar 11 10:26:22 crc kubenswrapper[4840]: I0311 10:26:22.984246 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Mar 11 10:26:22 crc kubenswrapper[4840]: I0311 10:26:22.994114 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cf4cccc77-szj74"] Mar 11 10:26:23 crc kubenswrapper[4840]: I0311 10:26:23.013491 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-65f49d9b4c-brmwz"] Mar 11 10:26:23 crc kubenswrapper[4840]: I0311 10:26:23.094606 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cg6w7\" (UniqueName: \"kubernetes.io/projected/84734188-5a59-409c-b48d-7c68235005b5-kube-api-access-cg6w7\") pod \"dnsmasq-dns-cf4cccc77-szj74\" (UID: \"84734188-5a59-409c-b48d-7c68235005b5\") " pod="openstack/dnsmasq-dns-cf4cccc77-szj74" Mar 11 10:26:23 crc kubenswrapper[4840]: I0311 10:26:23.094984 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84734188-5a59-409c-b48d-7c68235005b5-config\") pod \"dnsmasq-dns-cf4cccc77-szj74\" (UID: \"84734188-5a59-409c-b48d-7c68235005b5\") " pod="openstack/dnsmasq-dns-cf4cccc77-szj74" Mar 11 10:26:23 crc kubenswrapper[4840]: I0311 10:26:23.095062 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/84734188-5a59-409c-b48d-7c68235005b5-dns-svc\") pod \"dnsmasq-dns-cf4cccc77-szj74\" (UID: \"84734188-5a59-409c-b48d-7c68235005b5\") " pod="openstack/dnsmasq-dns-cf4cccc77-szj74" Mar 11 10:26:23 crc kubenswrapper[4840]: I0311 10:26:23.095137 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/84734188-5a59-409c-b48d-7c68235005b5-ovsdbserver-nb\") pod \"dnsmasq-dns-cf4cccc77-szj74\" (UID: \"84734188-5a59-409c-b48d-7c68235005b5\") " pod="openstack/dnsmasq-dns-cf4cccc77-szj74" Mar 11 10:26:23 crc kubenswrapper[4840]: I0311 10:26:23.095598 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/84734188-5a59-409c-b48d-7c68235005b5-ovsdbserver-sb\") pod \"dnsmasq-dns-cf4cccc77-szj74\" (UID: \"84734188-5a59-409c-b48d-7c68235005b5\") " pod="openstack/dnsmasq-dns-cf4cccc77-szj74" Mar 11 10:26:23 crc kubenswrapper[4840]: I0311 10:26:23.198101 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/84734188-5a59-409c-b48d-7c68235005b5-ovsdbserver-sb\") pod \"dnsmasq-dns-cf4cccc77-szj74\" (UID: \"84734188-5a59-409c-b48d-7c68235005b5\") " pod="openstack/dnsmasq-dns-cf4cccc77-szj74" Mar 11 10:26:23 crc kubenswrapper[4840]: I0311 10:26:23.198147 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cg6w7\" (UniqueName: \"kubernetes.io/projected/84734188-5a59-409c-b48d-7c68235005b5-kube-api-access-cg6w7\") pod \"dnsmasq-dns-cf4cccc77-szj74\" (UID: \"84734188-5a59-409c-b48d-7c68235005b5\") " pod="openstack/dnsmasq-dns-cf4cccc77-szj74" Mar 11 10:26:23 crc kubenswrapper[4840]: I0311 10:26:23.198864 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84734188-5a59-409c-b48d-7c68235005b5-config\") pod \"dnsmasq-dns-cf4cccc77-szj74\" (UID: \"84734188-5a59-409c-b48d-7c68235005b5\") " pod="openstack/dnsmasq-dns-cf4cccc77-szj74" Mar 11 10:26:23 crc kubenswrapper[4840]: I0311 10:26:23.198920 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/84734188-5a59-409c-b48d-7c68235005b5-dns-svc\") pod \"dnsmasq-dns-cf4cccc77-szj74\" (UID: \"84734188-5a59-409c-b48d-7c68235005b5\") " pod="openstack/dnsmasq-dns-cf4cccc77-szj74" Mar 11 10:26:23 crc kubenswrapper[4840]: I0311 10:26:23.198956 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/84734188-5a59-409c-b48d-7c68235005b5-ovsdbserver-nb\") pod \"dnsmasq-dns-cf4cccc77-szj74\" (UID: \"84734188-5a59-409c-b48d-7c68235005b5\") " pod="openstack/dnsmasq-dns-cf4cccc77-szj74" Mar 11 10:26:23 crc kubenswrapper[4840]: I0311 10:26:23.199171 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84734188-5a59-409c-b48d-7c68235005b5-config\") pod \"dnsmasq-dns-cf4cccc77-szj74\" (UID: \"84734188-5a59-409c-b48d-7c68235005b5\") " pod="openstack/dnsmasq-dns-cf4cccc77-szj74" Mar 11 10:26:23 crc kubenswrapper[4840]: I0311 10:26:23.199220 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/84734188-5a59-409c-b48d-7c68235005b5-ovsdbserver-sb\") pod \"dnsmasq-dns-cf4cccc77-szj74\" (UID: \"84734188-5a59-409c-b48d-7c68235005b5\") " pod="openstack/dnsmasq-dns-cf4cccc77-szj74" Mar 11 10:26:23 crc kubenswrapper[4840]: I0311 10:26:23.199777 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/84734188-5a59-409c-b48d-7c68235005b5-ovsdbserver-nb\") pod \"dnsmasq-dns-cf4cccc77-szj74\" (UID: \"84734188-5a59-409c-b48d-7c68235005b5\") " pod="openstack/dnsmasq-dns-cf4cccc77-szj74" Mar 11 10:26:23 crc kubenswrapper[4840]: I0311 10:26:23.200274 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/84734188-5a59-409c-b48d-7c68235005b5-dns-svc\") pod \"dnsmasq-dns-cf4cccc77-szj74\" (UID: 
\"84734188-5a59-409c-b48d-7c68235005b5\") " pod="openstack/dnsmasq-dns-cf4cccc77-szj74" Mar 11 10:26:23 crc kubenswrapper[4840]: I0311 10:26:23.213949 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cg6w7\" (UniqueName: \"kubernetes.io/projected/84734188-5a59-409c-b48d-7c68235005b5-kube-api-access-cg6w7\") pod \"dnsmasq-dns-cf4cccc77-szj74\" (UID: \"84734188-5a59-409c-b48d-7c68235005b5\") " pod="openstack/dnsmasq-dns-cf4cccc77-szj74" Mar 11 10:26:23 crc kubenswrapper[4840]: I0311 10:26:23.309719 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cf4cccc77-szj74" Mar 11 10:26:23 crc kubenswrapper[4840]: I0311 10:26:23.774920 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cf4cccc77-szj74"] Mar 11 10:26:23 crc kubenswrapper[4840]: I0311 10:26:23.953503 4840 generic.go:334] "Generic (PLEG): container finished" podID="97ab0530-6013-4fad-ba49-408f87956dfe" containerID="f8b6fcb57fa790ccb2dae5450a204a3d412eff447ec04a1c3fc0ab74fe936d1e" exitCode=0 Mar 11 10:26:23 crc kubenswrapper[4840]: I0311 10:26:23.953614 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65f49d9b4c-brmwz" event={"ID":"97ab0530-6013-4fad-ba49-408f87956dfe","Type":"ContainerDied","Data":"f8b6fcb57fa790ccb2dae5450a204a3d412eff447ec04a1c3fc0ab74fe936d1e"} Mar 11 10:26:23 crc kubenswrapper[4840]: I0311 10:26:23.953649 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65f49d9b4c-brmwz" event={"ID":"97ab0530-6013-4fad-ba49-408f87956dfe","Type":"ContainerStarted","Data":"7f324d7812c1c47625a9767a70487e1738eca74010b1d0bf3c57149de579cb18"} Mar 11 10:26:23 crc kubenswrapper[4840]: I0311 10:26:23.955750 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf4cccc77-szj74" 
event={"ID":"84734188-5a59-409c-b48d-7c68235005b5","Type":"ContainerStarted","Data":"cc037ac93f78df62a6ef08fc441c1e739bdba0acd5fef201e8d8cf429b5e44d4"} Mar 11 10:26:24 crc kubenswrapper[4840]: I0311 10:26:24.230130 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-65f49d9b4c-brmwz" Mar 11 10:26:24 crc kubenswrapper[4840]: I0311 10:26:24.317638 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dkb6x\" (UniqueName: \"kubernetes.io/projected/97ab0530-6013-4fad-ba49-408f87956dfe-kube-api-access-dkb6x\") pod \"97ab0530-6013-4fad-ba49-408f87956dfe\" (UID: \"97ab0530-6013-4fad-ba49-408f87956dfe\") " Mar 11 10:26:24 crc kubenswrapper[4840]: I0311 10:26:24.317690 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/97ab0530-6013-4fad-ba49-408f87956dfe-dns-svc\") pod \"97ab0530-6013-4fad-ba49-408f87956dfe\" (UID: \"97ab0530-6013-4fad-ba49-408f87956dfe\") " Mar 11 10:26:24 crc kubenswrapper[4840]: I0311 10:26:24.317798 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/97ab0530-6013-4fad-ba49-408f87956dfe-ovsdbserver-nb\") pod \"97ab0530-6013-4fad-ba49-408f87956dfe\" (UID: \"97ab0530-6013-4fad-ba49-408f87956dfe\") " Mar 11 10:26:24 crc kubenswrapper[4840]: I0311 10:26:24.317914 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97ab0530-6013-4fad-ba49-408f87956dfe-config\") pod \"97ab0530-6013-4fad-ba49-408f87956dfe\" (UID: \"97ab0530-6013-4fad-ba49-408f87956dfe\") " Mar 11 10:26:24 crc kubenswrapper[4840]: I0311 10:26:24.321998 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97ab0530-6013-4fad-ba49-408f87956dfe-kube-api-access-dkb6x" (OuterVolumeSpecName: 
"kube-api-access-dkb6x") pod "97ab0530-6013-4fad-ba49-408f87956dfe" (UID: "97ab0530-6013-4fad-ba49-408f87956dfe"). InnerVolumeSpecName "kube-api-access-dkb6x". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:26:24 crc kubenswrapper[4840]: I0311 10:26:24.335915 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97ab0530-6013-4fad-ba49-408f87956dfe-config" (OuterVolumeSpecName: "config") pod "97ab0530-6013-4fad-ba49-408f87956dfe" (UID: "97ab0530-6013-4fad-ba49-408f87956dfe"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 10:26:24 crc kubenswrapper[4840]: I0311 10:26:24.336843 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97ab0530-6013-4fad-ba49-408f87956dfe-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "97ab0530-6013-4fad-ba49-408f87956dfe" (UID: "97ab0530-6013-4fad-ba49-408f87956dfe"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 10:26:24 crc kubenswrapper[4840]: I0311 10:26:24.340203 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97ab0530-6013-4fad-ba49-408f87956dfe-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "97ab0530-6013-4fad-ba49-408f87956dfe" (UID: "97ab0530-6013-4fad-ba49-408f87956dfe"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 10:26:24 crc kubenswrapper[4840]: I0311 10:26:24.419401 4840 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97ab0530-6013-4fad-ba49-408f87956dfe-config\") on node \"crc\" DevicePath \"\"" Mar 11 10:26:24 crc kubenswrapper[4840]: I0311 10:26:24.419445 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dkb6x\" (UniqueName: \"kubernetes.io/projected/97ab0530-6013-4fad-ba49-408f87956dfe-kube-api-access-dkb6x\") on node \"crc\" DevicePath \"\"" Mar 11 10:26:24 crc kubenswrapper[4840]: I0311 10:26:24.419461 4840 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/97ab0530-6013-4fad-ba49-408f87956dfe-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 11 10:26:24 crc kubenswrapper[4840]: I0311 10:26:24.419490 4840 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/97ab0530-6013-4fad-ba49-408f87956dfe-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 11 10:26:24 crc kubenswrapper[4840]: I0311 10:26:24.965443 4840 generic.go:334] "Generic (PLEG): container finished" podID="84734188-5a59-409c-b48d-7c68235005b5" containerID="3c6ddef317340f1dc7f72e6b129251da4be9c8aee050bc037d7a6f11136b046b" exitCode=0 Mar 11 10:26:24 crc kubenswrapper[4840]: I0311 10:26:24.965507 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf4cccc77-szj74" event={"ID":"84734188-5a59-409c-b48d-7c68235005b5","Type":"ContainerDied","Data":"3c6ddef317340f1dc7f72e6b129251da4be9c8aee050bc037d7a6f11136b046b"} Mar 11 10:26:24 crc kubenswrapper[4840]: I0311 10:26:24.967837 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65f49d9b4c-brmwz" event={"ID":"97ab0530-6013-4fad-ba49-408f87956dfe","Type":"ContainerDied","Data":"7f324d7812c1c47625a9767a70487e1738eca74010b1d0bf3c57149de579cb18"} Mar 11 
10:26:24 crc kubenswrapper[4840]: I0311 10:26:24.967892 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-65f49d9b4c-brmwz" Mar 11 10:26:24 crc kubenswrapper[4840]: I0311 10:26:24.967898 4840 scope.go:117] "RemoveContainer" containerID="f8b6fcb57fa790ccb2dae5450a204a3d412eff447ec04a1c3fc0ab74fe936d1e" Mar 11 10:26:25 crc kubenswrapper[4840]: I0311 10:26:25.165357 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-65f49d9b4c-brmwz"] Mar 11 10:26:25 crc kubenswrapper[4840]: I0311 10:26:25.172288 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-65f49d9b4c-brmwz"] Mar 11 10:26:25 crc kubenswrapper[4840]: I0311 10:26:25.219927 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-copy-data"] Mar 11 10:26:25 crc kubenswrapper[4840]: E0311 10:26:25.220271 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97ab0530-6013-4fad-ba49-408f87956dfe" containerName="init" Mar 11 10:26:25 crc kubenswrapper[4840]: I0311 10:26:25.220282 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="97ab0530-6013-4fad-ba49-408f87956dfe" containerName="init" Mar 11 10:26:25 crc kubenswrapper[4840]: I0311 10:26:25.220478 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="97ab0530-6013-4fad-ba49-408f87956dfe" containerName="init" Mar 11 10:26:25 crc kubenswrapper[4840]: I0311 10:26:25.221010 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-copy-data" Mar 11 10:26:25 crc kubenswrapper[4840]: I0311 10:26:25.225939 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovn-data-cert" Mar 11 10:26:25 crc kubenswrapper[4840]: I0311 10:26:25.233223 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-copy-data"] Mar 11 10:26:25 crc kubenswrapper[4840]: I0311 10:26:25.334894 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2msc\" (UniqueName: \"kubernetes.io/projected/e8c95ae1-037a-453e-8910-774fc6c665cb-kube-api-access-b2msc\") pod \"ovn-copy-data\" (UID: \"e8c95ae1-037a-453e-8910-774fc6c665cb\") " pod="openstack/ovn-copy-data" Mar 11 10:26:25 crc kubenswrapper[4840]: I0311 10:26:25.334952 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a5764ed8-d6d6-474a-b868-0d0d0939c38d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a5764ed8-d6d6-474a-b868-0d0d0939c38d\") pod \"ovn-copy-data\" (UID: \"e8c95ae1-037a-453e-8910-774fc6c665cb\") " pod="openstack/ovn-copy-data" Mar 11 10:26:25 crc kubenswrapper[4840]: I0311 10:26:25.335258 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/e8c95ae1-037a-453e-8910-774fc6c665cb-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"e8c95ae1-037a-453e-8910-774fc6c665cb\") " pod="openstack/ovn-copy-data" Mar 11 10:26:25 crc kubenswrapper[4840]: I0311 10:26:25.436944 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/e8c95ae1-037a-453e-8910-774fc6c665cb-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"e8c95ae1-037a-453e-8910-774fc6c665cb\") " pod="openstack/ovn-copy-data" Mar 11 10:26:25 crc kubenswrapper[4840]: I0311 10:26:25.437143 4840 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-b2msc\" (UniqueName: \"kubernetes.io/projected/e8c95ae1-037a-453e-8910-774fc6c665cb-kube-api-access-b2msc\") pod \"ovn-copy-data\" (UID: \"e8c95ae1-037a-453e-8910-774fc6c665cb\") " pod="openstack/ovn-copy-data" Mar 11 10:26:25 crc kubenswrapper[4840]: I0311 10:26:25.437193 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-a5764ed8-d6d6-474a-b868-0d0d0939c38d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a5764ed8-d6d6-474a-b868-0d0d0939c38d\") pod \"ovn-copy-data\" (UID: \"e8c95ae1-037a-453e-8910-774fc6c665cb\") " pod="openstack/ovn-copy-data" Mar 11 10:26:25 crc kubenswrapper[4840]: I0311 10:26:25.440159 4840 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 11 10:26:25 crc kubenswrapper[4840]: I0311 10:26:25.440294 4840 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-a5764ed8-d6d6-474a-b868-0d0d0939c38d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a5764ed8-d6d6-474a-b868-0d0d0939c38d\") pod \"ovn-copy-data\" (UID: \"e8c95ae1-037a-453e-8910-774fc6c665cb\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/29acc9d75a7ba642b33376b3355c8e90483e14a969861a4f5fb23243c0dfe447/globalmount\"" pod="openstack/ovn-copy-data" Mar 11 10:26:25 crc kubenswrapper[4840]: I0311 10:26:25.448214 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/e8c95ae1-037a-453e-8910-774fc6c665cb-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"e8c95ae1-037a-453e-8910-774fc6c665cb\") " pod="openstack/ovn-copy-data" Mar 11 10:26:25 crc kubenswrapper[4840]: I0311 10:26:25.463563 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2msc\" (UniqueName: 
\"kubernetes.io/projected/e8c95ae1-037a-453e-8910-774fc6c665cb-kube-api-access-b2msc\") pod \"ovn-copy-data\" (UID: \"e8c95ae1-037a-453e-8910-774fc6c665cb\") " pod="openstack/ovn-copy-data" Mar 11 10:26:25 crc kubenswrapper[4840]: I0311 10:26:25.471229 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-a5764ed8-d6d6-474a-b868-0d0d0939c38d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a5764ed8-d6d6-474a-b868-0d0d0939c38d\") pod \"ovn-copy-data\" (UID: \"e8c95ae1-037a-453e-8910-774fc6c665cb\") " pod="openstack/ovn-copy-data" Mar 11 10:26:25 crc kubenswrapper[4840]: I0311 10:26:25.546623 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-copy-data" Mar 11 10:26:25 crc kubenswrapper[4840]: I0311 10:26:25.980172 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf4cccc77-szj74" event={"ID":"84734188-5a59-409c-b48d-7c68235005b5","Type":"ContainerStarted","Data":"a8c8ca8c645618865f874401e3808bf0c9bd5bc7c9d42267da19faf426f66ec4"} Mar 11 10:26:25 crc kubenswrapper[4840]: I0311 10:26:25.981339 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-cf4cccc77-szj74" Mar 11 10:26:26 crc kubenswrapper[4840]: I0311 10:26:26.004228 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-cf4cccc77-szj74" podStartSLOduration=4.004202067 podStartE2EDuration="4.004202067s" podCreationTimestamp="2026-03-11 10:26:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 10:26:26.00271917 +0000 UTC m=+5384.668388995" watchObservedRunningTime="2026-03-11 10:26:26.004202067 +0000 UTC m=+5384.669871902" Mar 11 10:26:26 crc kubenswrapper[4840]: I0311 10:26:26.089668 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97ab0530-6013-4fad-ba49-408f87956dfe" 
path="/var/lib/kubelet/pods/97ab0530-6013-4fad-ba49-408f87956dfe/volumes" Mar 11 10:26:26 crc kubenswrapper[4840]: I0311 10:26:26.096626 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-copy-data"] Mar 11 10:26:26 crc kubenswrapper[4840]: I0311 10:26:26.099055 4840 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 11 10:26:26 crc kubenswrapper[4840]: I0311 10:26:26.994007 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"e8c95ae1-037a-453e-8910-774fc6c665cb","Type":"ContainerStarted","Data":"ea946339a8e9b98a274b4a9699bfb3b92f2ae69157d06d0bd8f74a19a94fccf0"} Mar 11 10:26:29 crc kubenswrapper[4840]: I0311 10:26:29.012422 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"e8c95ae1-037a-453e-8910-774fc6c665cb","Type":"ContainerStarted","Data":"3a2212c7b222bab650fc1ded45567c3892da2d4a05dd1f30d7399bf76798a23f"} Mar 11 10:26:29 crc kubenswrapper[4840]: I0311 10:26:29.038130 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-copy-data" podStartSLOduration=2.40092477 podStartE2EDuration="5.038104045s" podCreationTimestamp="2026-03-11 10:26:24 +0000 UTC" firstStartedPulling="2026-03-11 10:26:26.097167048 +0000 UTC m=+5384.762836863" lastFinishedPulling="2026-03-11 10:26:28.734346323 +0000 UTC m=+5387.400016138" observedRunningTime="2026-03-11 10:26:29.026304365 +0000 UTC m=+5387.691974170" watchObservedRunningTime="2026-03-11 10:26:29.038104045 +0000 UTC m=+5387.703773890" Mar 11 10:26:33 crc kubenswrapper[4840]: E0311 10:26:33.190613 4840 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.30:60188->38.102.83.30:45639: write tcp 38.102.83.30:60188->38.102.83.30:45639: write: broken pipe Mar 11 10:26:33 crc kubenswrapper[4840]: I0311 10:26:33.310661 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/dnsmasq-dns-cf4cccc77-szj74" Mar 11 10:26:33 crc kubenswrapper[4840]: I0311 10:26:33.380363 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-66d5bf7c87-h2f4v"] Mar 11 10:26:33 crc kubenswrapper[4840]: I0311 10:26:33.385571 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-66d5bf7c87-h2f4v" podUID="cf2b9835-4682-46ed-bc21-509455274aba" containerName="dnsmasq-dns" containerID="cri-o://4ccbbb08acb3cc73b5c3a9ea8acec564de2673b580ca4ab46e97c9eb7921f7db" gracePeriod=10 Mar 11 10:26:33 crc kubenswrapper[4840]: I0311 10:26:33.854658 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-66d5bf7c87-h2f4v" Mar 11 10:26:34 crc kubenswrapper[4840]: I0311 10:26:34.002195 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf2b9835-4682-46ed-bc21-509455274aba-dns-svc\") pod \"cf2b9835-4682-46ed-bc21-509455274aba\" (UID: \"cf2b9835-4682-46ed-bc21-509455274aba\") " Mar 11 10:26:34 crc kubenswrapper[4840]: I0311 10:26:34.002809 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf2b9835-4682-46ed-bc21-509455274aba-config\") pod \"cf2b9835-4682-46ed-bc21-509455274aba\" (UID: \"cf2b9835-4682-46ed-bc21-509455274aba\") " Mar 11 10:26:34 crc kubenswrapper[4840]: I0311 10:26:34.002969 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4fj5\" (UniqueName: \"kubernetes.io/projected/cf2b9835-4682-46ed-bc21-509455274aba-kube-api-access-x4fj5\") pod \"cf2b9835-4682-46ed-bc21-509455274aba\" (UID: \"cf2b9835-4682-46ed-bc21-509455274aba\") " Mar 11 10:26:34 crc kubenswrapper[4840]: I0311 10:26:34.018903 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/cf2b9835-4682-46ed-bc21-509455274aba-kube-api-access-x4fj5" (OuterVolumeSpecName: "kube-api-access-x4fj5") pod "cf2b9835-4682-46ed-bc21-509455274aba" (UID: "cf2b9835-4682-46ed-bc21-509455274aba"). InnerVolumeSpecName "kube-api-access-x4fj5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:26:34 crc kubenswrapper[4840]: I0311 10:26:34.051867 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf2b9835-4682-46ed-bc21-509455274aba-config" (OuterVolumeSpecName: "config") pod "cf2b9835-4682-46ed-bc21-509455274aba" (UID: "cf2b9835-4682-46ed-bc21-509455274aba"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 10:26:34 crc kubenswrapper[4840]: I0311 10:26:34.053360 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf2b9835-4682-46ed-bc21-509455274aba-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "cf2b9835-4682-46ed-bc21-509455274aba" (UID: "cf2b9835-4682-46ed-bc21-509455274aba"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 10:26:34 crc kubenswrapper[4840]: I0311 10:26:34.054700 4840 generic.go:334] "Generic (PLEG): container finished" podID="cf2b9835-4682-46ed-bc21-509455274aba" containerID="4ccbbb08acb3cc73b5c3a9ea8acec564de2673b580ca4ab46e97c9eb7921f7db" exitCode=0 Mar 11 10:26:34 crc kubenswrapper[4840]: I0311 10:26:34.054772 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66d5bf7c87-h2f4v" event={"ID":"cf2b9835-4682-46ed-bc21-509455274aba","Type":"ContainerDied","Data":"4ccbbb08acb3cc73b5c3a9ea8acec564de2673b580ca4ab46e97c9eb7921f7db"} Mar 11 10:26:34 crc kubenswrapper[4840]: I0311 10:26:34.054804 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-66d5bf7c87-h2f4v" Mar 11 10:26:34 crc kubenswrapper[4840]: I0311 10:26:34.054820 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66d5bf7c87-h2f4v" event={"ID":"cf2b9835-4682-46ed-bc21-509455274aba","Type":"ContainerDied","Data":"55aade4e3599d3daa197a6426524215c64e8328157dc849b9c3e9f17426df1e8"} Mar 11 10:26:34 crc kubenswrapper[4840]: I0311 10:26:34.054843 4840 scope.go:117] "RemoveContainer" containerID="4ccbbb08acb3cc73b5c3a9ea8acec564de2673b580ca4ab46e97c9eb7921f7db" Mar 11 10:26:34 crc kubenswrapper[4840]: I0311 10:26:34.122672 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Mar 11 10:26:34 crc kubenswrapper[4840]: E0311 10:26:34.123455 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf2b9835-4682-46ed-bc21-509455274aba" containerName="init" Mar 11 10:26:34 crc kubenswrapper[4840]: I0311 10:26:34.123478 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf2b9835-4682-46ed-bc21-509455274aba" containerName="init" Mar 11 10:26:34 crc kubenswrapper[4840]: E0311 10:26:34.123518 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf2b9835-4682-46ed-bc21-509455274aba" containerName="dnsmasq-dns" Mar 11 10:26:34 crc kubenswrapper[4840]: I0311 10:26:34.123526 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf2b9835-4682-46ed-bc21-509455274aba" containerName="dnsmasq-dns" Mar 11 10:26:34 crc kubenswrapper[4840]: I0311 10:26:34.123958 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf2b9835-4682-46ed-bc21-509455274aba" containerName="dnsmasq-dns" Mar 11 10:26:34 crc kubenswrapper[4840]: I0311 10:26:34.127639 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4fj5\" (UniqueName: \"kubernetes.io/projected/cf2b9835-4682-46ed-bc21-509455274aba-kube-api-access-x4fj5\") on node \"crc\" DevicePath \"\"" Mar 11 10:26:34 crc kubenswrapper[4840]: I0311 
10:26:34.127670 4840 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf2b9835-4682-46ed-bc21-509455274aba-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 11 10:26:34 crc kubenswrapper[4840]: I0311 10:26:34.127683 4840 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf2b9835-4682-46ed-bc21-509455274aba-config\") on node \"crc\" DevicePath \"\"" Mar 11 10:26:34 crc kubenswrapper[4840]: I0311 10:26:34.127894 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Mar 11 10:26:34 crc kubenswrapper[4840]: I0311 10:26:34.136212 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Mar 11 10:26:34 crc kubenswrapper[4840]: I0311 10:26:34.138295 4840 scope.go:117] "RemoveContainer" containerID="d03c5fabda1b6aaf929787c91cfa7498b9698020852cebc78244fb808f9e62f2" Mar 11 10:26:34 crc kubenswrapper[4840]: I0311 10:26:34.141413 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-sbg45" Mar 11 10:26:34 crc kubenswrapper[4840]: I0311 10:26:34.141458 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Mar 11 10:26:34 crc kubenswrapper[4840]: I0311 10:26:34.141553 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Mar 11 10:26:34 crc kubenswrapper[4840]: I0311 10:26:34.141415 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Mar 11 10:26:34 crc kubenswrapper[4840]: I0311 10:26:34.175398 4840 scope.go:117] "RemoveContainer" containerID="4ccbbb08acb3cc73b5c3a9ea8acec564de2673b580ca4ab46e97c9eb7921f7db" Mar 11 10:26:34 crc kubenswrapper[4840]: E0311 10:26:34.176284 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"4ccbbb08acb3cc73b5c3a9ea8acec564de2673b580ca4ab46e97c9eb7921f7db\": container with ID starting with 4ccbbb08acb3cc73b5c3a9ea8acec564de2673b580ca4ab46e97c9eb7921f7db not found: ID does not exist" containerID="4ccbbb08acb3cc73b5c3a9ea8acec564de2673b580ca4ab46e97c9eb7921f7db" Mar 11 10:26:34 crc kubenswrapper[4840]: I0311 10:26:34.176327 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ccbbb08acb3cc73b5c3a9ea8acec564de2673b580ca4ab46e97c9eb7921f7db"} err="failed to get container status \"4ccbbb08acb3cc73b5c3a9ea8acec564de2673b580ca4ab46e97c9eb7921f7db\": rpc error: code = NotFound desc = could not find container \"4ccbbb08acb3cc73b5c3a9ea8acec564de2673b580ca4ab46e97c9eb7921f7db\": container with ID starting with 4ccbbb08acb3cc73b5c3a9ea8acec564de2673b580ca4ab46e97c9eb7921f7db not found: ID does not exist" Mar 11 10:26:34 crc kubenswrapper[4840]: I0311 10:26:34.176353 4840 scope.go:117] "RemoveContainer" containerID="d03c5fabda1b6aaf929787c91cfa7498b9698020852cebc78244fb808f9e62f2" Mar 11 10:26:34 crc kubenswrapper[4840]: E0311 10:26:34.177996 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d03c5fabda1b6aaf929787c91cfa7498b9698020852cebc78244fb808f9e62f2\": container with ID starting with d03c5fabda1b6aaf929787c91cfa7498b9698020852cebc78244fb808f9e62f2 not found: ID does not exist" containerID="d03c5fabda1b6aaf929787c91cfa7498b9698020852cebc78244fb808f9e62f2" Mar 11 10:26:34 crc kubenswrapper[4840]: I0311 10:26:34.178072 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d03c5fabda1b6aaf929787c91cfa7498b9698020852cebc78244fb808f9e62f2"} err="failed to get container status \"d03c5fabda1b6aaf929787c91cfa7498b9698020852cebc78244fb808f9e62f2\": rpc error: code = NotFound desc = could not find container \"d03c5fabda1b6aaf929787c91cfa7498b9698020852cebc78244fb808f9e62f2\": container with ID 
starting with d03c5fabda1b6aaf929787c91cfa7498b9698020852cebc78244fb808f9e62f2 not found: ID does not exist" Mar 11 10:26:34 crc kubenswrapper[4840]: I0311 10:26:34.181984 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-66d5bf7c87-h2f4v"] Mar 11 10:26:34 crc kubenswrapper[4840]: I0311 10:26:34.187712 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-66d5bf7c87-h2f4v"] Mar 11 10:26:34 crc kubenswrapper[4840]: I0311 10:26:34.228873 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2d77cda-f495-462c-941d-90edf6abb3ca-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"a2d77cda-f495-462c-941d-90edf6abb3ca\") " pod="openstack/ovn-northd-0" Mar 11 10:26:34 crc kubenswrapper[4840]: I0311 10:26:34.228949 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a2d77cda-f495-462c-941d-90edf6abb3ca-scripts\") pod \"ovn-northd-0\" (UID: \"a2d77cda-f495-462c-941d-90edf6abb3ca\") " pod="openstack/ovn-northd-0" Mar 11 10:26:34 crc kubenswrapper[4840]: I0311 10:26:34.228975 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlxkz\" (UniqueName: \"kubernetes.io/projected/a2d77cda-f495-462c-941d-90edf6abb3ca-kube-api-access-wlxkz\") pod \"ovn-northd-0\" (UID: \"a2d77cda-f495-462c-941d-90edf6abb3ca\") " pod="openstack/ovn-northd-0" Mar 11 10:26:34 crc kubenswrapper[4840]: I0311 10:26:34.229010 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a2d77cda-f495-462c-941d-90edf6abb3ca-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"a2d77cda-f495-462c-941d-90edf6abb3ca\") " pod="openstack/ovn-northd-0" Mar 11 10:26:34 crc kubenswrapper[4840]: I0311 
10:26:34.229034 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2d77cda-f495-462c-941d-90edf6abb3ca-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"a2d77cda-f495-462c-941d-90edf6abb3ca\") " pod="openstack/ovn-northd-0" Mar 11 10:26:34 crc kubenswrapper[4840]: I0311 10:26:34.229055 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2d77cda-f495-462c-941d-90edf6abb3ca-config\") pod \"ovn-northd-0\" (UID: \"a2d77cda-f495-462c-941d-90edf6abb3ca\") " pod="openstack/ovn-northd-0" Mar 11 10:26:34 crc kubenswrapper[4840]: I0311 10:26:34.229105 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2d77cda-f495-462c-941d-90edf6abb3ca-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"a2d77cda-f495-462c-941d-90edf6abb3ca\") " pod="openstack/ovn-northd-0" Mar 11 10:26:34 crc kubenswrapper[4840]: I0311 10:26:34.330963 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a2d77cda-f495-462c-941d-90edf6abb3ca-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"a2d77cda-f495-462c-941d-90edf6abb3ca\") " pod="openstack/ovn-northd-0" Mar 11 10:26:34 crc kubenswrapper[4840]: I0311 10:26:34.331038 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2d77cda-f495-462c-941d-90edf6abb3ca-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"a2d77cda-f495-462c-941d-90edf6abb3ca\") " pod="openstack/ovn-northd-0" Mar 11 10:26:34 crc kubenswrapper[4840]: I0311 10:26:34.331078 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/a2d77cda-f495-462c-941d-90edf6abb3ca-config\") pod \"ovn-northd-0\" (UID: \"a2d77cda-f495-462c-941d-90edf6abb3ca\") " pod="openstack/ovn-northd-0" Mar 11 10:26:34 crc kubenswrapper[4840]: I0311 10:26:34.331134 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2d77cda-f495-462c-941d-90edf6abb3ca-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"a2d77cda-f495-462c-941d-90edf6abb3ca\") " pod="openstack/ovn-northd-0" Mar 11 10:26:34 crc kubenswrapper[4840]: I0311 10:26:34.331194 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2d77cda-f495-462c-941d-90edf6abb3ca-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"a2d77cda-f495-462c-941d-90edf6abb3ca\") " pod="openstack/ovn-northd-0" Mar 11 10:26:34 crc kubenswrapper[4840]: I0311 10:26:34.331249 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a2d77cda-f495-462c-941d-90edf6abb3ca-scripts\") pod \"ovn-northd-0\" (UID: \"a2d77cda-f495-462c-941d-90edf6abb3ca\") " pod="openstack/ovn-northd-0" Mar 11 10:26:34 crc kubenswrapper[4840]: I0311 10:26:34.331271 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlxkz\" (UniqueName: \"kubernetes.io/projected/a2d77cda-f495-462c-941d-90edf6abb3ca-kube-api-access-wlxkz\") pod \"ovn-northd-0\" (UID: \"a2d77cda-f495-462c-941d-90edf6abb3ca\") " pod="openstack/ovn-northd-0" Mar 11 10:26:34 crc kubenswrapper[4840]: I0311 10:26:34.332137 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a2d77cda-f495-462c-941d-90edf6abb3ca-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"a2d77cda-f495-462c-941d-90edf6abb3ca\") " pod="openstack/ovn-northd-0" Mar 11 10:26:34 crc 
kubenswrapper[4840]: I0311 10:26:34.333339 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a2d77cda-f495-462c-941d-90edf6abb3ca-scripts\") pod \"ovn-northd-0\" (UID: \"a2d77cda-f495-462c-941d-90edf6abb3ca\") " pod="openstack/ovn-northd-0" Mar 11 10:26:34 crc kubenswrapper[4840]: I0311 10:26:34.333496 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2d77cda-f495-462c-941d-90edf6abb3ca-config\") pod \"ovn-northd-0\" (UID: \"a2d77cda-f495-462c-941d-90edf6abb3ca\") " pod="openstack/ovn-northd-0" Mar 11 10:26:34 crc kubenswrapper[4840]: I0311 10:26:34.336400 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2d77cda-f495-462c-941d-90edf6abb3ca-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"a2d77cda-f495-462c-941d-90edf6abb3ca\") " pod="openstack/ovn-northd-0" Mar 11 10:26:34 crc kubenswrapper[4840]: I0311 10:26:34.336610 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2d77cda-f495-462c-941d-90edf6abb3ca-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"a2d77cda-f495-462c-941d-90edf6abb3ca\") " pod="openstack/ovn-northd-0" Mar 11 10:26:34 crc kubenswrapper[4840]: I0311 10:26:34.336854 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2d77cda-f495-462c-941d-90edf6abb3ca-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"a2d77cda-f495-462c-941d-90edf6abb3ca\") " pod="openstack/ovn-northd-0" Mar 11 10:26:34 crc kubenswrapper[4840]: I0311 10:26:34.349182 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlxkz\" (UniqueName: \"kubernetes.io/projected/a2d77cda-f495-462c-941d-90edf6abb3ca-kube-api-access-wlxkz\") pod 
\"ovn-northd-0\" (UID: \"a2d77cda-f495-462c-941d-90edf6abb3ca\") " pod="openstack/ovn-northd-0" Mar 11 10:26:34 crc kubenswrapper[4840]: I0311 10:26:34.479432 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Mar 11 10:26:34 crc kubenswrapper[4840]: I0311 10:26:34.920015 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Mar 11 10:26:35 crc kubenswrapper[4840]: I0311 10:26:35.066975 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"a2d77cda-f495-462c-941d-90edf6abb3ca","Type":"ContainerStarted","Data":"ed8c1098a77ea15b9e57e39ea220c7b0c944c7e33e2851edd1298aae52048e1e"} Mar 11 10:26:36 crc kubenswrapper[4840]: I0311 10:26:36.069756 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf2b9835-4682-46ed-bc21-509455274aba" path="/var/lib/kubelet/pods/cf2b9835-4682-46ed-bc21-509455274aba/volumes" Mar 11 10:26:36 crc kubenswrapper[4840]: I0311 10:26:36.078556 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"a2d77cda-f495-462c-941d-90edf6abb3ca","Type":"ContainerStarted","Data":"df30569353deb4faa8ce77078c0f39260972f1bae2bd997f9bf164736baa462e"} Mar 11 10:26:36 crc kubenswrapper[4840]: I0311 10:26:36.078681 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"a2d77cda-f495-462c-941d-90edf6abb3ca","Type":"ContainerStarted","Data":"ad76bc42783718bae8ef505526274b01403080dd5626719f793a4f59c24f4ab0"} Mar 11 10:26:36 crc kubenswrapper[4840]: I0311 10:26:36.079002 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Mar 11 10:26:36 crc kubenswrapper[4840]: I0311 10:26:36.106045 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.106024862 podStartE2EDuration="2.106024862s" podCreationTimestamp="2026-03-11 10:26:34 +0000 
UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 10:26:36.102855962 +0000 UTC m=+5394.768525777" watchObservedRunningTime="2026-03-11 10:26:36.106024862 +0000 UTC m=+5394.771694677" Mar 11 10:26:39 crc kubenswrapper[4840]: I0311 10:26:39.414197 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-qbnnf"] Mar 11 10:26:39 crc kubenswrapper[4840]: I0311 10:26:39.415835 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-qbnnf" Mar 11 10:26:39 crc kubenswrapper[4840]: I0311 10:26:39.423863 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-qbnnf"] Mar 11 10:26:39 crc kubenswrapper[4840]: I0311 10:26:39.515958 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-699f-account-create-update-ncdx2"] Mar 11 10:26:39 crc kubenswrapper[4840]: I0311 10:26:39.516915 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-699f-account-create-update-ncdx2" Mar 11 10:26:39 crc kubenswrapper[4840]: I0311 10:26:39.518627 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Mar 11 10:26:39 crc kubenswrapper[4840]: I0311 10:26:39.525005 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-699f-account-create-update-ncdx2"] Mar 11 10:26:39 crc kubenswrapper[4840]: I0311 10:26:39.532298 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3fc84158-c763-463c-8d98-317545c9f29b-operator-scripts\") pod \"keystone-db-create-qbnnf\" (UID: \"3fc84158-c763-463c-8d98-317545c9f29b\") " pod="openstack/keystone-db-create-qbnnf" Mar 11 10:26:39 crc kubenswrapper[4840]: I0311 10:26:39.532384 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-km4g6\" (UniqueName: \"kubernetes.io/projected/3fc84158-c763-463c-8d98-317545c9f29b-kube-api-access-km4g6\") pod \"keystone-db-create-qbnnf\" (UID: \"3fc84158-c763-463c-8d98-317545c9f29b\") " pod="openstack/keystone-db-create-qbnnf" Mar 11 10:26:39 crc kubenswrapper[4840]: I0311 10:26:39.634719 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3fc84158-c763-463c-8d98-317545c9f29b-operator-scripts\") pod \"keystone-db-create-qbnnf\" (UID: \"3fc84158-c763-463c-8d98-317545c9f29b\") " pod="openstack/keystone-db-create-qbnnf" Mar 11 10:26:39 crc kubenswrapper[4840]: I0311 10:26:39.634793 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4jdc\" (UniqueName: \"kubernetes.io/projected/c951d98c-4b85-4bb6-adcb-ddd8e8159304-kube-api-access-s4jdc\") pod \"keystone-699f-account-create-update-ncdx2\" (UID: 
\"c951d98c-4b85-4bb6-adcb-ddd8e8159304\") " pod="openstack/keystone-699f-account-create-update-ncdx2" Mar 11 10:26:39 crc kubenswrapper[4840]: I0311 10:26:39.634831 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c951d98c-4b85-4bb6-adcb-ddd8e8159304-operator-scripts\") pod \"keystone-699f-account-create-update-ncdx2\" (UID: \"c951d98c-4b85-4bb6-adcb-ddd8e8159304\") " pod="openstack/keystone-699f-account-create-update-ncdx2" Mar 11 10:26:39 crc kubenswrapper[4840]: I0311 10:26:39.634917 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-km4g6\" (UniqueName: \"kubernetes.io/projected/3fc84158-c763-463c-8d98-317545c9f29b-kube-api-access-km4g6\") pod \"keystone-db-create-qbnnf\" (UID: \"3fc84158-c763-463c-8d98-317545c9f29b\") " pod="openstack/keystone-db-create-qbnnf" Mar 11 10:26:39 crc kubenswrapper[4840]: I0311 10:26:39.635649 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3fc84158-c763-463c-8d98-317545c9f29b-operator-scripts\") pod \"keystone-db-create-qbnnf\" (UID: \"3fc84158-c763-463c-8d98-317545c9f29b\") " pod="openstack/keystone-db-create-qbnnf" Mar 11 10:26:39 crc kubenswrapper[4840]: I0311 10:26:39.664862 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-km4g6\" (UniqueName: \"kubernetes.io/projected/3fc84158-c763-463c-8d98-317545c9f29b-kube-api-access-km4g6\") pod \"keystone-db-create-qbnnf\" (UID: \"3fc84158-c763-463c-8d98-317545c9f29b\") " pod="openstack/keystone-db-create-qbnnf" Mar 11 10:26:39 crc kubenswrapper[4840]: I0311 10:26:39.736978 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4jdc\" (UniqueName: \"kubernetes.io/projected/c951d98c-4b85-4bb6-adcb-ddd8e8159304-kube-api-access-s4jdc\") pod 
\"keystone-699f-account-create-update-ncdx2\" (UID: \"c951d98c-4b85-4bb6-adcb-ddd8e8159304\") " pod="openstack/keystone-699f-account-create-update-ncdx2" Mar 11 10:26:39 crc kubenswrapper[4840]: I0311 10:26:39.737038 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c951d98c-4b85-4bb6-adcb-ddd8e8159304-operator-scripts\") pod \"keystone-699f-account-create-update-ncdx2\" (UID: \"c951d98c-4b85-4bb6-adcb-ddd8e8159304\") " pod="openstack/keystone-699f-account-create-update-ncdx2" Mar 11 10:26:39 crc kubenswrapper[4840]: I0311 10:26:39.737895 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c951d98c-4b85-4bb6-adcb-ddd8e8159304-operator-scripts\") pod \"keystone-699f-account-create-update-ncdx2\" (UID: \"c951d98c-4b85-4bb6-adcb-ddd8e8159304\") " pod="openstack/keystone-699f-account-create-update-ncdx2" Mar 11 10:26:39 crc kubenswrapper[4840]: I0311 10:26:39.739227 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-qbnnf" Mar 11 10:26:39 crc kubenswrapper[4840]: I0311 10:26:39.754259 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4jdc\" (UniqueName: \"kubernetes.io/projected/c951d98c-4b85-4bb6-adcb-ddd8e8159304-kube-api-access-s4jdc\") pod \"keystone-699f-account-create-update-ncdx2\" (UID: \"c951d98c-4b85-4bb6-adcb-ddd8e8159304\") " pod="openstack/keystone-699f-account-create-update-ncdx2" Mar 11 10:26:39 crc kubenswrapper[4840]: I0311 10:26:39.830416 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-699f-account-create-update-ncdx2" Mar 11 10:26:40 crc kubenswrapper[4840]: I0311 10:26:40.200703 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-qbnnf"] Mar 11 10:26:40 crc kubenswrapper[4840]: W0311 10:26:40.202929 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3fc84158_c763_463c_8d98_317545c9f29b.slice/crio-aeed78634053cfa83a30f3cfb63e922f5670dba686e87505e9a5f7c30616d313 WatchSource:0}: Error finding container aeed78634053cfa83a30f3cfb63e922f5670dba686e87505e9a5f7c30616d313: Status 404 returned error can't find the container with id aeed78634053cfa83a30f3cfb63e922f5670dba686e87505e9a5f7c30616d313 Mar 11 10:26:40 crc kubenswrapper[4840]: I0311 10:26:40.258278 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-699f-account-create-update-ncdx2"] Mar 11 10:26:40 crc kubenswrapper[4840]: W0311 10:26:40.265445 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc951d98c_4b85_4bb6_adcb_ddd8e8159304.slice/crio-a399b0c158e74d3e577703b0e4155aac9e9ec795ceb3a7897961cf840a1b4b5e WatchSource:0}: Error finding container a399b0c158e74d3e577703b0e4155aac9e9ec795ceb3a7897961cf840a1b4b5e: Status 404 returned error can't find the container with id a399b0c158e74d3e577703b0e4155aac9e9ec795ceb3a7897961cf840a1b4b5e Mar 11 10:26:41 crc kubenswrapper[4840]: I0311 10:26:41.124822 4840 generic.go:334] "Generic (PLEG): container finished" podID="3fc84158-c763-463c-8d98-317545c9f29b" containerID="96bf277b1f3889e50d403c29447ca50d41f6154676a73247e7537804f9a8a703" exitCode=0 Mar 11 10:26:41 crc kubenswrapper[4840]: I0311 10:26:41.124943 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-qbnnf" 
event={"ID":"3fc84158-c763-463c-8d98-317545c9f29b","Type":"ContainerDied","Data":"96bf277b1f3889e50d403c29447ca50d41f6154676a73247e7537804f9a8a703"} Mar 11 10:26:41 crc kubenswrapper[4840]: I0311 10:26:41.125185 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-qbnnf" event={"ID":"3fc84158-c763-463c-8d98-317545c9f29b","Type":"ContainerStarted","Data":"aeed78634053cfa83a30f3cfb63e922f5670dba686e87505e9a5f7c30616d313"} Mar 11 10:26:41 crc kubenswrapper[4840]: I0311 10:26:41.133135 4840 generic.go:334] "Generic (PLEG): container finished" podID="c951d98c-4b85-4bb6-adcb-ddd8e8159304" containerID="ad2751d7126f9b67f4929c11bc060b50e402baada7ca7612ac500a76bd29e02f" exitCode=0 Mar 11 10:26:41 crc kubenswrapper[4840]: I0311 10:26:41.133180 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-699f-account-create-update-ncdx2" event={"ID":"c951d98c-4b85-4bb6-adcb-ddd8e8159304","Type":"ContainerDied","Data":"ad2751d7126f9b67f4929c11bc060b50e402baada7ca7612ac500a76bd29e02f"} Mar 11 10:26:41 crc kubenswrapper[4840]: I0311 10:26:41.133218 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-699f-account-create-update-ncdx2" event={"ID":"c951d98c-4b85-4bb6-adcb-ddd8e8159304","Type":"ContainerStarted","Data":"a399b0c158e74d3e577703b0e4155aac9e9ec795ceb3a7897961cf840a1b4b5e"} Mar 11 10:26:42 crc kubenswrapper[4840]: I0311 10:26:42.591061 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-699f-account-create-update-ncdx2" Mar 11 10:26:42 crc kubenswrapper[4840]: I0311 10:26:42.597999 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-qbnnf" Mar 11 10:26:42 crc kubenswrapper[4840]: I0311 10:26:42.793619 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4jdc\" (UniqueName: \"kubernetes.io/projected/c951d98c-4b85-4bb6-adcb-ddd8e8159304-kube-api-access-s4jdc\") pod \"c951d98c-4b85-4bb6-adcb-ddd8e8159304\" (UID: \"c951d98c-4b85-4bb6-adcb-ddd8e8159304\") " Mar 11 10:26:42 crc kubenswrapper[4840]: I0311 10:26:42.793720 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3fc84158-c763-463c-8d98-317545c9f29b-operator-scripts\") pod \"3fc84158-c763-463c-8d98-317545c9f29b\" (UID: \"3fc84158-c763-463c-8d98-317545c9f29b\") " Mar 11 10:26:42 crc kubenswrapper[4840]: I0311 10:26:42.793749 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c951d98c-4b85-4bb6-adcb-ddd8e8159304-operator-scripts\") pod \"c951d98c-4b85-4bb6-adcb-ddd8e8159304\" (UID: \"c951d98c-4b85-4bb6-adcb-ddd8e8159304\") " Mar 11 10:26:42 crc kubenswrapper[4840]: I0311 10:26:42.793921 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-km4g6\" (UniqueName: \"kubernetes.io/projected/3fc84158-c763-463c-8d98-317545c9f29b-kube-api-access-km4g6\") pod \"3fc84158-c763-463c-8d98-317545c9f29b\" (UID: \"3fc84158-c763-463c-8d98-317545c9f29b\") " Mar 11 10:26:42 crc kubenswrapper[4840]: I0311 10:26:42.794598 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3fc84158-c763-463c-8d98-317545c9f29b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3fc84158-c763-463c-8d98-317545c9f29b" (UID: "3fc84158-c763-463c-8d98-317545c9f29b"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 10:26:42 crc kubenswrapper[4840]: I0311 10:26:42.794693 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c951d98c-4b85-4bb6-adcb-ddd8e8159304-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c951d98c-4b85-4bb6-adcb-ddd8e8159304" (UID: "c951d98c-4b85-4bb6-adcb-ddd8e8159304"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 10:26:42 crc kubenswrapper[4840]: I0311 10:26:42.798342 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c951d98c-4b85-4bb6-adcb-ddd8e8159304-kube-api-access-s4jdc" (OuterVolumeSpecName: "kube-api-access-s4jdc") pod "c951d98c-4b85-4bb6-adcb-ddd8e8159304" (UID: "c951d98c-4b85-4bb6-adcb-ddd8e8159304"). InnerVolumeSpecName "kube-api-access-s4jdc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:26:42 crc kubenswrapper[4840]: I0311 10:26:42.800632 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3fc84158-c763-463c-8d98-317545c9f29b-kube-api-access-km4g6" (OuterVolumeSpecName: "kube-api-access-km4g6") pod "3fc84158-c763-463c-8d98-317545c9f29b" (UID: "3fc84158-c763-463c-8d98-317545c9f29b"). InnerVolumeSpecName "kube-api-access-km4g6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:26:42 crc kubenswrapper[4840]: I0311 10:26:42.895647 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4jdc\" (UniqueName: \"kubernetes.io/projected/c951d98c-4b85-4bb6-adcb-ddd8e8159304-kube-api-access-s4jdc\") on node \"crc\" DevicePath \"\"" Mar 11 10:26:42 crc kubenswrapper[4840]: I0311 10:26:42.895682 4840 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3fc84158-c763-463c-8d98-317545c9f29b-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 11 10:26:42 crc kubenswrapper[4840]: I0311 10:26:42.895699 4840 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c951d98c-4b85-4bb6-adcb-ddd8e8159304-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 11 10:26:42 crc kubenswrapper[4840]: I0311 10:26:42.895714 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-km4g6\" (UniqueName: \"kubernetes.io/projected/3fc84158-c763-463c-8d98-317545c9f29b-kube-api-access-km4g6\") on node \"crc\" DevicePath \"\"" Mar 11 10:26:43 crc kubenswrapper[4840]: I0311 10:26:43.152950 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-qbnnf" event={"ID":"3fc84158-c763-463c-8d98-317545c9f29b","Type":"ContainerDied","Data":"aeed78634053cfa83a30f3cfb63e922f5670dba686e87505e9a5f7c30616d313"} Mar 11 10:26:43 crc kubenswrapper[4840]: I0311 10:26:43.152997 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aeed78634053cfa83a30f3cfb63e922f5670dba686e87505e9a5f7c30616d313" Mar 11 10:26:43 crc kubenswrapper[4840]: I0311 10:26:43.153063 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-qbnnf" Mar 11 10:26:43 crc kubenswrapper[4840]: I0311 10:26:43.156923 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-699f-account-create-update-ncdx2" event={"ID":"c951d98c-4b85-4bb6-adcb-ddd8e8159304","Type":"ContainerDied","Data":"a399b0c158e74d3e577703b0e4155aac9e9ec795ceb3a7897961cf840a1b4b5e"} Mar 11 10:26:43 crc kubenswrapper[4840]: I0311 10:26:43.156967 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a399b0c158e74d3e577703b0e4155aac9e9ec795ceb3a7897961cf840a1b4b5e" Mar 11 10:26:43 crc kubenswrapper[4840]: I0311 10:26:43.157017 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-699f-account-create-update-ncdx2" Mar 11 10:26:44 crc kubenswrapper[4840]: I0311 10:26:44.962072 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-4wd8s"] Mar 11 10:26:44 crc kubenswrapper[4840]: E0311 10:26:44.962763 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c951d98c-4b85-4bb6-adcb-ddd8e8159304" containerName="mariadb-account-create-update" Mar 11 10:26:44 crc kubenswrapper[4840]: I0311 10:26:44.962779 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="c951d98c-4b85-4bb6-adcb-ddd8e8159304" containerName="mariadb-account-create-update" Mar 11 10:26:44 crc kubenswrapper[4840]: E0311 10:26:44.962816 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fc84158-c763-463c-8d98-317545c9f29b" containerName="mariadb-database-create" Mar 11 10:26:44 crc kubenswrapper[4840]: I0311 10:26:44.962824 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fc84158-c763-463c-8d98-317545c9f29b" containerName="mariadb-database-create" Mar 11 10:26:44 crc kubenswrapper[4840]: I0311 10:26:44.963031 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="c951d98c-4b85-4bb6-adcb-ddd8e8159304" 
containerName="mariadb-account-create-update" Mar 11 10:26:44 crc kubenswrapper[4840]: I0311 10:26:44.963061 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="3fc84158-c763-463c-8d98-317545c9f29b" containerName="mariadb-database-create" Mar 11 10:26:44 crc kubenswrapper[4840]: I0311 10:26:44.963900 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-4wd8s" Mar 11 10:26:44 crc kubenswrapper[4840]: I0311 10:26:44.966780 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Mar 11 10:26:44 crc kubenswrapper[4840]: I0311 10:26:44.968674 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Mar 11 10:26:44 crc kubenswrapper[4840]: I0311 10:26:44.971365 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-4wd8s"] Mar 11 10:26:44 crc kubenswrapper[4840]: I0311 10:26:44.971850 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Mar 11 10:26:44 crc kubenswrapper[4840]: I0311 10:26:44.972599 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-cvj5p" Mar 11 10:26:45 crc kubenswrapper[4840]: I0311 10:26:45.034152 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5752daf0-741c-47a3-a28d-0997d6c400f1-combined-ca-bundle\") pod \"keystone-db-sync-4wd8s\" (UID: \"5752daf0-741c-47a3-a28d-0997d6c400f1\") " pod="openstack/keystone-db-sync-4wd8s" Mar 11 10:26:45 crc kubenswrapper[4840]: I0311 10:26:45.034198 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5752daf0-741c-47a3-a28d-0997d6c400f1-config-data\") pod \"keystone-db-sync-4wd8s\" (UID: \"5752daf0-741c-47a3-a28d-0997d6c400f1\") " 
pod="openstack/keystone-db-sync-4wd8s" Mar 11 10:26:45 crc kubenswrapper[4840]: I0311 10:26:45.034220 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjxkw\" (UniqueName: \"kubernetes.io/projected/5752daf0-741c-47a3-a28d-0997d6c400f1-kube-api-access-gjxkw\") pod \"keystone-db-sync-4wd8s\" (UID: \"5752daf0-741c-47a3-a28d-0997d6c400f1\") " pod="openstack/keystone-db-sync-4wd8s" Mar 11 10:26:45 crc kubenswrapper[4840]: I0311 10:26:45.136281 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5752daf0-741c-47a3-a28d-0997d6c400f1-combined-ca-bundle\") pod \"keystone-db-sync-4wd8s\" (UID: \"5752daf0-741c-47a3-a28d-0997d6c400f1\") " pod="openstack/keystone-db-sync-4wd8s" Mar 11 10:26:45 crc kubenswrapper[4840]: I0311 10:26:45.136332 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5752daf0-741c-47a3-a28d-0997d6c400f1-config-data\") pod \"keystone-db-sync-4wd8s\" (UID: \"5752daf0-741c-47a3-a28d-0997d6c400f1\") " pod="openstack/keystone-db-sync-4wd8s" Mar 11 10:26:45 crc kubenswrapper[4840]: I0311 10:26:45.136357 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjxkw\" (UniqueName: \"kubernetes.io/projected/5752daf0-741c-47a3-a28d-0997d6c400f1-kube-api-access-gjxkw\") pod \"keystone-db-sync-4wd8s\" (UID: \"5752daf0-741c-47a3-a28d-0997d6c400f1\") " pod="openstack/keystone-db-sync-4wd8s" Mar 11 10:26:45 crc kubenswrapper[4840]: I0311 10:26:45.142243 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5752daf0-741c-47a3-a28d-0997d6c400f1-config-data\") pod \"keystone-db-sync-4wd8s\" (UID: \"5752daf0-741c-47a3-a28d-0997d6c400f1\") " pod="openstack/keystone-db-sync-4wd8s" Mar 11 10:26:45 crc kubenswrapper[4840]: I0311 
10:26:45.146000 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5752daf0-741c-47a3-a28d-0997d6c400f1-combined-ca-bundle\") pod \"keystone-db-sync-4wd8s\" (UID: \"5752daf0-741c-47a3-a28d-0997d6c400f1\") " pod="openstack/keystone-db-sync-4wd8s" Mar 11 10:26:45 crc kubenswrapper[4840]: I0311 10:26:45.155373 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjxkw\" (UniqueName: \"kubernetes.io/projected/5752daf0-741c-47a3-a28d-0997d6c400f1-kube-api-access-gjxkw\") pod \"keystone-db-sync-4wd8s\" (UID: \"5752daf0-741c-47a3-a28d-0997d6c400f1\") " pod="openstack/keystone-db-sync-4wd8s" Mar 11 10:26:45 crc kubenswrapper[4840]: I0311 10:26:45.280869 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-4wd8s" Mar 11 10:26:45 crc kubenswrapper[4840]: I0311 10:26:45.766910 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-4wd8s"] Mar 11 10:26:46 crc kubenswrapper[4840]: I0311 10:26:46.184503 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-4wd8s" event={"ID":"5752daf0-741c-47a3-a28d-0997d6c400f1","Type":"ContainerStarted","Data":"fb5b8bf1ef92e6a289d0ec6deea20a306b2bbf422b714bea6aeec3e817b44140"} Mar 11 10:26:46 crc kubenswrapper[4840]: I0311 10:26:46.184680 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-4wd8s" event={"ID":"5752daf0-741c-47a3-a28d-0997d6c400f1","Type":"ContainerStarted","Data":"69803bbdb27e863158c9b97d2d05382499c148c4a7a817c5db27da5bbd15292f"} Mar 11 10:26:46 crc kubenswrapper[4840]: I0311 10:26:46.209969 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-4wd8s" podStartSLOduration=2.2098968 podStartE2EDuration="2.2098968s" podCreationTimestamp="2026-03-11 10:26:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 10:26:46.202973034 +0000 UTC m=+5404.868642879" watchObservedRunningTime="2026-03-11 10:26:46.2098968 +0000 UTC m=+5404.875566655" Mar 11 10:26:48 crc kubenswrapper[4840]: I0311 10:26:48.207624 4840 generic.go:334] "Generic (PLEG): container finished" podID="5752daf0-741c-47a3-a28d-0997d6c400f1" containerID="fb5b8bf1ef92e6a289d0ec6deea20a306b2bbf422b714bea6aeec3e817b44140" exitCode=0 Mar 11 10:26:48 crc kubenswrapper[4840]: I0311 10:26:48.207693 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-4wd8s" event={"ID":"5752daf0-741c-47a3-a28d-0997d6c400f1","Type":"ContainerDied","Data":"fb5b8bf1ef92e6a289d0ec6deea20a306b2bbf422b714bea6aeec3e817b44140"} Mar 11 10:26:49 crc kubenswrapper[4840]: I0311 10:26:49.560708 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-4wd8s" Mar 11 10:26:49 crc kubenswrapper[4840]: I0311 10:26:49.626794 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5752daf0-741c-47a3-a28d-0997d6c400f1-combined-ca-bundle\") pod \"5752daf0-741c-47a3-a28d-0997d6c400f1\" (UID: \"5752daf0-741c-47a3-a28d-0997d6c400f1\") " Mar 11 10:26:49 crc kubenswrapper[4840]: I0311 10:26:49.627189 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gjxkw\" (UniqueName: \"kubernetes.io/projected/5752daf0-741c-47a3-a28d-0997d6c400f1-kube-api-access-gjxkw\") pod \"5752daf0-741c-47a3-a28d-0997d6c400f1\" (UID: \"5752daf0-741c-47a3-a28d-0997d6c400f1\") " Mar 11 10:26:49 crc kubenswrapper[4840]: I0311 10:26:49.627402 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5752daf0-741c-47a3-a28d-0997d6c400f1-config-data\") pod \"5752daf0-741c-47a3-a28d-0997d6c400f1\" (UID: 
\"5752daf0-741c-47a3-a28d-0997d6c400f1\") " Mar 11 10:26:49 crc kubenswrapper[4840]: I0311 10:26:49.635555 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5752daf0-741c-47a3-a28d-0997d6c400f1-kube-api-access-gjxkw" (OuterVolumeSpecName: "kube-api-access-gjxkw") pod "5752daf0-741c-47a3-a28d-0997d6c400f1" (UID: "5752daf0-741c-47a3-a28d-0997d6c400f1"). InnerVolumeSpecName "kube-api-access-gjxkw". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:26:49 crc kubenswrapper[4840]: I0311 10:26:49.663634 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5752daf0-741c-47a3-a28d-0997d6c400f1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5752daf0-741c-47a3-a28d-0997d6c400f1" (UID: "5752daf0-741c-47a3-a28d-0997d6c400f1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 10:26:49 crc kubenswrapper[4840]: I0311 10:26:49.669619 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5752daf0-741c-47a3-a28d-0997d6c400f1-config-data" (OuterVolumeSpecName: "config-data") pod "5752daf0-741c-47a3-a28d-0997d6c400f1" (UID: "5752daf0-741c-47a3-a28d-0997d6c400f1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 10:26:49 crc kubenswrapper[4840]: I0311 10:26:49.729801 4840 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5752daf0-741c-47a3-a28d-0997d6c400f1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 10:26:49 crc kubenswrapper[4840]: I0311 10:26:49.729828 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gjxkw\" (UniqueName: \"kubernetes.io/projected/5752daf0-741c-47a3-a28d-0997d6c400f1-kube-api-access-gjxkw\") on node \"crc\" DevicePath \"\"" Mar 11 10:26:49 crc kubenswrapper[4840]: I0311 10:26:49.729840 4840 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5752daf0-741c-47a3-a28d-0997d6c400f1-config-data\") on node \"crc\" DevicePath \"\"" Mar 11 10:26:50 crc kubenswrapper[4840]: I0311 10:26:50.230519 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-4wd8s" Mar 11 10:26:50 crc kubenswrapper[4840]: I0311 10:26:50.231199 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-4wd8s" event={"ID":"5752daf0-741c-47a3-a28d-0997d6c400f1","Type":"ContainerDied","Data":"69803bbdb27e863158c9b97d2d05382499c148c4a7a817c5db27da5bbd15292f"} Mar 11 10:26:50 crc kubenswrapper[4840]: I0311 10:26:50.231249 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="69803bbdb27e863158c9b97d2d05382499c148c4a7a817c5db27da5bbd15292f" Mar 11 10:26:50 crc kubenswrapper[4840]: I0311 10:26:50.490904 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-cbc5d9c5f-w6q8t"] Mar 11 10:26:50 crc kubenswrapper[4840]: E0311 10:26:50.492924 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5752daf0-741c-47a3-a28d-0997d6c400f1" containerName="keystone-db-sync" Mar 11 10:26:50 crc kubenswrapper[4840]: I0311 
10:26:50.492974 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="5752daf0-741c-47a3-a28d-0997d6c400f1" containerName="keystone-db-sync" Mar 11 10:26:50 crc kubenswrapper[4840]: I0311 10:26:50.493336 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="5752daf0-741c-47a3-a28d-0997d6c400f1" containerName="keystone-db-sync" Mar 11 10:26:50 crc kubenswrapper[4840]: I0311 10:26:50.494787 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cbc5d9c5f-w6q8t" Mar 11 10:26:50 crc kubenswrapper[4840]: I0311 10:26:50.507150 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cbc5d9c5f-w6q8t"] Mar 11 10:26:50 crc kubenswrapper[4840]: I0311 10:26:50.526409 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-85h8r"] Mar 11 10:26:50 crc kubenswrapper[4840]: I0311 10:26:50.528142 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-85h8r" Mar 11 10:26:50 crc kubenswrapper[4840]: I0311 10:26:50.534181 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-cvj5p" Mar 11 10:26:50 crc kubenswrapper[4840]: I0311 10:26:50.534286 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Mar 11 10:26:50 crc kubenswrapper[4840]: I0311 10:26:50.534444 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Mar 11 10:26:50 crc kubenswrapper[4840]: I0311 10:26:50.534643 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Mar 11 10:26:50 crc kubenswrapper[4840]: I0311 10:26:50.534804 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Mar 11 10:26:50 crc kubenswrapper[4840]: I0311 10:26:50.544966 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0ba44e89-3431-47ff-a0e0-b8daf1cf7ce1-ovsdbserver-nb\") pod \"dnsmasq-dns-cbc5d9c5f-w6q8t\" (UID: \"0ba44e89-3431-47ff-a0e0-b8daf1cf7ce1\") " pod="openstack/dnsmasq-dns-cbc5d9c5f-w6q8t" Mar 11 10:26:50 crc kubenswrapper[4840]: I0311 10:26:50.545040 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cfk8\" (UniqueName: \"kubernetes.io/projected/b533d00d-701f-441f-b2b3-46664c7f23bb-kube-api-access-2cfk8\") pod \"keystone-bootstrap-85h8r\" (UID: \"b533d00d-701f-441f-b2b3-46664c7f23bb\") " pod="openstack/keystone-bootstrap-85h8r" Mar 11 10:26:50 crc kubenswrapper[4840]: I0311 10:26:50.545078 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b533d00d-701f-441f-b2b3-46664c7f23bb-combined-ca-bundle\") pod \"keystone-bootstrap-85h8r\" (UID: \"b533d00d-701f-441f-b2b3-46664c7f23bb\") " pod="openstack/keystone-bootstrap-85h8r" Mar 11 10:26:50 crc kubenswrapper[4840]: I0311 10:26:50.545107 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b533d00d-701f-441f-b2b3-46664c7f23bb-scripts\") pod \"keystone-bootstrap-85h8r\" (UID: \"b533d00d-701f-441f-b2b3-46664c7f23bb\") " pod="openstack/keystone-bootstrap-85h8r" Mar 11 10:26:50 crc kubenswrapper[4840]: I0311 10:26:50.545133 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b533d00d-701f-441f-b2b3-46664c7f23bb-fernet-keys\") pod \"keystone-bootstrap-85h8r\" (UID: \"b533d00d-701f-441f-b2b3-46664c7f23bb\") " pod="openstack/keystone-bootstrap-85h8r" Mar 11 10:26:50 crc kubenswrapper[4840]: I0311 10:26:50.545167 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ba44e89-3431-47ff-a0e0-b8daf1cf7ce1-config\") pod \"dnsmasq-dns-cbc5d9c5f-w6q8t\" (UID: \"0ba44e89-3431-47ff-a0e0-b8daf1cf7ce1\") " pod="openstack/dnsmasq-dns-cbc5d9c5f-w6q8t" Mar 11 10:26:50 crc kubenswrapper[4840]: I0311 10:26:50.545213 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l26wz\" (UniqueName: \"kubernetes.io/projected/0ba44e89-3431-47ff-a0e0-b8daf1cf7ce1-kube-api-access-l26wz\") pod \"dnsmasq-dns-cbc5d9c5f-w6q8t\" (UID: \"0ba44e89-3431-47ff-a0e0-b8daf1cf7ce1\") " pod="openstack/dnsmasq-dns-cbc5d9c5f-w6q8t" Mar 11 10:26:50 crc kubenswrapper[4840]: I0311 10:26:50.545271 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0ba44e89-3431-47ff-a0e0-b8daf1cf7ce1-dns-svc\") pod \"dnsmasq-dns-cbc5d9c5f-w6q8t\" (UID: \"0ba44e89-3431-47ff-a0e0-b8daf1cf7ce1\") " pod="openstack/dnsmasq-dns-cbc5d9c5f-w6q8t" Mar 11 10:26:50 crc kubenswrapper[4840]: I0311 10:26:50.545294 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b533d00d-701f-441f-b2b3-46664c7f23bb-credential-keys\") pod \"keystone-bootstrap-85h8r\" (UID: \"b533d00d-701f-441f-b2b3-46664c7f23bb\") " pod="openstack/keystone-bootstrap-85h8r" Mar 11 10:26:50 crc kubenswrapper[4840]: I0311 10:26:50.545337 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0ba44e89-3431-47ff-a0e0-b8daf1cf7ce1-ovsdbserver-sb\") pod \"dnsmasq-dns-cbc5d9c5f-w6q8t\" (UID: \"0ba44e89-3431-47ff-a0e0-b8daf1cf7ce1\") " pod="openstack/dnsmasq-dns-cbc5d9c5f-w6q8t" Mar 11 10:26:50 crc kubenswrapper[4840]: I0311 10:26:50.545363 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b533d00d-701f-441f-b2b3-46664c7f23bb-config-data\") pod \"keystone-bootstrap-85h8r\" (UID: \"b533d00d-701f-441f-b2b3-46664c7f23bb\") " pod="openstack/keystone-bootstrap-85h8r" Mar 11 10:26:50 crc kubenswrapper[4840]: I0311 10:26:50.561534 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-85h8r"] Mar 11 10:26:50 crc kubenswrapper[4840]: I0311 10:26:50.647109 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0ba44e89-3431-47ff-a0e0-b8daf1cf7ce1-dns-svc\") pod \"dnsmasq-dns-cbc5d9c5f-w6q8t\" (UID: \"0ba44e89-3431-47ff-a0e0-b8daf1cf7ce1\") " pod="openstack/dnsmasq-dns-cbc5d9c5f-w6q8t" Mar 11 10:26:50 crc kubenswrapper[4840]: I0311 10:26:50.647149 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b533d00d-701f-441f-b2b3-46664c7f23bb-credential-keys\") pod \"keystone-bootstrap-85h8r\" (UID: \"b533d00d-701f-441f-b2b3-46664c7f23bb\") " pod="openstack/keystone-bootstrap-85h8r" Mar 11 10:26:50 crc kubenswrapper[4840]: I0311 10:26:50.647200 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0ba44e89-3431-47ff-a0e0-b8daf1cf7ce1-ovsdbserver-sb\") pod \"dnsmasq-dns-cbc5d9c5f-w6q8t\" (UID: \"0ba44e89-3431-47ff-a0e0-b8daf1cf7ce1\") " pod="openstack/dnsmasq-dns-cbc5d9c5f-w6q8t" Mar 11 10:26:50 crc kubenswrapper[4840]: I0311 10:26:50.647271 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b533d00d-701f-441f-b2b3-46664c7f23bb-config-data\") pod \"keystone-bootstrap-85h8r\" (UID: \"b533d00d-701f-441f-b2b3-46664c7f23bb\") " pod="openstack/keystone-bootstrap-85h8r" Mar 11 10:26:50 crc kubenswrapper[4840]: I0311 10:26:50.647309 4840 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0ba44e89-3431-47ff-a0e0-b8daf1cf7ce1-ovsdbserver-nb\") pod \"dnsmasq-dns-cbc5d9c5f-w6q8t\" (UID: \"0ba44e89-3431-47ff-a0e0-b8daf1cf7ce1\") " pod="openstack/dnsmasq-dns-cbc5d9c5f-w6q8t" Mar 11 10:26:50 crc kubenswrapper[4840]: I0311 10:26:50.647330 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cfk8\" (UniqueName: \"kubernetes.io/projected/b533d00d-701f-441f-b2b3-46664c7f23bb-kube-api-access-2cfk8\") pod \"keystone-bootstrap-85h8r\" (UID: \"b533d00d-701f-441f-b2b3-46664c7f23bb\") " pod="openstack/keystone-bootstrap-85h8r" Mar 11 10:26:50 crc kubenswrapper[4840]: I0311 10:26:50.647356 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b533d00d-701f-441f-b2b3-46664c7f23bb-combined-ca-bundle\") pod \"keystone-bootstrap-85h8r\" (UID: \"b533d00d-701f-441f-b2b3-46664c7f23bb\") " pod="openstack/keystone-bootstrap-85h8r" Mar 11 10:26:50 crc kubenswrapper[4840]: I0311 10:26:50.647377 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b533d00d-701f-441f-b2b3-46664c7f23bb-scripts\") pod \"keystone-bootstrap-85h8r\" (UID: \"b533d00d-701f-441f-b2b3-46664c7f23bb\") " pod="openstack/keystone-bootstrap-85h8r" Mar 11 10:26:50 crc kubenswrapper[4840]: I0311 10:26:50.647394 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b533d00d-701f-441f-b2b3-46664c7f23bb-fernet-keys\") pod \"keystone-bootstrap-85h8r\" (UID: \"b533d00d-701f-441f-b2b3-46664c7f23bb\") " pod="openstack/keystone-bootstrap-85h8r" Mar 11 10:26:50 crc kubenswrapper[4840]: I0311 10:26:50.647413 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/0ba44e89-3431-47ff-a0e0-b8daf1cf7ce1-config\") pod \"dnsmasq-dns-cbc5d9c5f-w6q8t\" (UID: \"0ba44e89-3431-47ff-a0e0-b8daf1cf7ce1\") " pod="openstack/dnsmasq-dns-cbc5d9c5f-w6q8t" Mar 11 10:26:50 crc kubenswrapper[4840]: I0311 10:26:50.647446 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l26wz\" (UniqueName: \"kubernetes.io/projected/0ba44e89-3431-47ff-a0e0-b8daf1cf7ce1-kube-api-access-l26wz\") pod \"dnsmasq-dns-cbc5d9c5f-w6q8t\" (UID: \"0ba44e89-3431-47ff-a0e0-b8daf1cf7ce1\") " pod="openstack/dnsmasq-dns-cbc5d9c5f-w6q8t" Mar 11 10:26:50 crc kubenswrapper[4840]: I0311 10:26:50.649809 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ba44e89-3431-47ff-a0e0-b8daf1cf7ce1-config\") pod \"dnsmasq-dns-cbc5d9c5f-w6q8t\" (UID: \"0ba44e89-3431-47ff-a0e0-b8daf1cf7ce1\") " pod="openstack/dnsmasq-dns-cbc5d9c5f-w6q8t" Mar 11 10:26:50 crc kubenswrapper[4840]: I0311 10:26:50.650293 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0ba44e89-3431-47ff-a0e0-b8daf1cf7ce1-dns-svc\") pod \"dnsmasq-dns-cbc5d9c5f-w6q8t\" (UID: \"0ba44e89-3431-47ff-a0e0-b8daf1cf7ce1\") " pod="openstack/dnsmasq-dns-cbc5d9c5f-w6q8t" Mar 11 10:26:50 crc kubenswrapper[4840]: I0311 10:26:50.650358 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0ba44e89-3431-47ff-a0e0-b8daf1cf7ce1-ovsdbserver-sb\") pod \"dnsmasq-dns-cbc5d9c5f-w6q8t\" (UID: \"0ba44e89-3431-47ff-a0e0-b8daf1cf7ce1\") " pod="openstack/dnsmasq-dns-cbc5d9c5f-w6q8t" Mar 11 10:26:50 crc kubenswrapper[4840]: I0311 10:26:50.650615 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0ba44e89-3431-47ff-a0e0-b8daf1cf7ce1-ovsdbserver-nb\") pod 
\"dnsmasq-dns-cbc5d9c5f-w6q8t\" (UID: \"0ba44e89-3431-47ff-a0e0-b8daf1cf7ce1\") " pod="openstack/dnsmasq-dns-cbc5d9c5f-w6q8t" Mar 11 10:26:50 crc kubenswrapper[4840]: I0311 10:26:50.652747 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b533d00d-701f-441f-b2b3-46664c7f23bb-combined-ca-bundle\") pod \"keystone-bootstrap-85h8r\" (UID: \"b533d00d-701f-441f-b2b3-46664c7f23bb\") " pod="openstack/keystone-bootstrap-85h8r" Mar 11 10:26:50 crc kubenswrapper[4840]: I0311 10:26:50.654271 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b533d00d-701f-441f-b2b3-46664c7f23bb-credential-keys\") pod \"keystone-bootstrap-85h8r\" (UID: \"b533d00d-701f-441f-b2b3-46664c7f23bb\") " pod="openstack/keystone-bootstrap-85h8r" Mar 11 10:26:50 crc kubenswrapper[4840]: I0311 10:26:50.656055 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b533d00d-701f-441f-b2b3-46664c7f23bb-scripts\") pod \"keystone-bootstrap-85h8r\" (UID: \"b533d00d-701f-441f-b2b3-46664c7f23bb\") " pod="openstack/keystone-bootstrap-85h8r" Mar 11 10:26:50 crc kubenswrapper[4840]: I0311 10:26:50.659269 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b533d00d-701f-441f-b2b3-46664c7f23bb-config-data\") pod \"keystone-bootstrap-85h8r\" (UID: \"b533d00d-701f-441f-b2b3-46664c7f23bb\") " pod="openstack/keystone-bootstrap-85h8r" Mar 11 10:26:50 crc kubenswrapper[4840]: I0311 10:26:50.667073 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l26wz\" (UniqueName: \"kubernetes.io/projected/0ba44e89-3431-47ff-a0e0-b8daf1cf7ce1-kube-api-access-l26wz\") pod \"dnsmasq-dns-cbc5d9c5f-w6q8t\" (UID: \"0ba44e89-3431-47ff-a0e0-b8daf1cf7ce1\") " pod="openstack/dnsmasq-dns-cbc5d9c5f-w6q8t" Mar 11 
10:26:50 crc kubenswrapper[4840]: I0311 10:26:50.668347 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b533d00d-701f-441f-b2b3-46664c7f23bb-fernet-keys\") pod \"keystone-bootstrap-85h8r\" (UID: \"b533d00d-701f-441f-b2b3-46664c7f23bb\") " pod="openstack/keystone-bootstrap-85h8r" Mar 11 10:26:50 crc kubenswrapper[4840]: I0311 10:26:50.682214 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2cfk8\" (UniqueName: \"kubernetes.io/projected/b533d00d-701f-441f-b2b3-46664c7f23bb-kube-api-access-2cfk8\") pod \"keystone-bootstrap-85h8r\" (UID: \"b533d00d-701f-441f-b2b3-46664c7f23bb\") " pod="openstack/keystone-bootstrap-85h8r" Mar 11 10:26:50 crc kubenswrapper[4840]: I0311 10:26:50.820837 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cbc5d9c5f-w6q8t" Mar 11 10:26:50 crc kubenswrapper[4840]: I0311 10:26:50.851135 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-85h8r" Mar 11 10:26:51 crc kubenswrapper[4840]: I0311 10:26:51.313326 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cbc5d9c5f-w6q8t"] Mar 11 10:26:51 crc kubenswrapper[4840]: W0311 10:26:51.316404 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0ba44e89_3431_47ff_a0e0_b8daf1cf7ce1.slice/crio-7337a9aeeff64a1317b3a8178b7d0cd850f96dc600a50a008355e6caeb073c23 WatchSource:0}: Error finding container 7337a9aeeff64a1317b3a8178b7d0cd850f96dc600a50a008355e6caeb073c23: Status 404 returned error can't find the container with id 7337a9aeeff64a1317b3a8178b7d0cd850f96dc600a50a008355e6caeb073c23 Mar 11 10:26:51 crc kubenswrapper[4840]: I0311 10:26:51.387131 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-85h8r"] Mar 11 10:26:51 crc kubenswrapper[4840]: W0311 10:26:51.396764 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb533d00d_701f_441f_b2b3_46664c7f23bb.slice/crio-ba3f02f27a30be79c68fe5a6b45755b9c6b50d175a56c15180bbc81c4ac7753d WatchSource:0}: Error finding container ba3f02f27a30be79c68fe5a6b45755b9c6b50d175a56c15180bbc81c4ac7753d: Status 404 returned error can't find the container with id ba3f02f27a30be79c68fe5a6b45755b9c6b50d175a56c15180bbc81c4ac7753d Mar 11 10:26:52 crc kubenswrapper[4840]: I0311 10:26:52.247043 4840 generic.go:334] "Generic (PLEG): container finished" podID="0ba44e89-3431-47ff-a0e0-b8daf1cf7ce1" containerID="ec0073f56ea36edbb8031cc0e1874c7aa2695c733949e81f7be1309521afad27" exitCode=0 Mar 11 10:26:52 crc kubenswrapper[4840]: I0311 10:26:52.247360 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cbc5d9c5f-w6q8t" 
event={"ID":"0ba44e89-3431-47ff-a0e0-b8daf1cf7ce1","Type":"ContainerDied","Data":"ec0073f56ea36edbb8031cc0e1874c7aa2695c733949e81f7be1309521afad27"} Mar 11 10:26:52 crc kubenswrapper[4840]: I0311 10:26:52.247388 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cbc5d9c5f-w6q8t" event={"ID":"0ba44e89-3431-47ff-a0e0-b8daf1cf7ce1","Type":"ContainerStarted","Data":"7337a9aeeff64a1317b3a8178b7d0cd850f96dc600a50a008355e6caeb073c23"} Mar 11 10:26:52 crc kubenswrapper[4840]: I0311 10:26:52.256263 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-85h8r" event={"ID":"b533d00d-701f-441f-b2b3-46664c7f23bb","Type":"ContainerStarted","Data":"adbda499061563c301fc0914bd80ca5042a475f1f0f215cff5038698a761579c"} Mar 11 10:26:52 crc kubenswrapper[4840]: I0311 10:26:52.256308 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-85h8r" event={"ID":"b533d00d-701f-441f-b2b3-46664c7f23bb","Type":"ContainerStarted","Data":"ba3f02f27a30be79c68fe5a6b45755b9c6b50d175a56c15180bbc81c4ac7753d"} Mar 11 10:26:52 crc kubenswrapper[4840]: I0311 10:26:52.302071 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-85h8r" podStartSLOduration=2.302054715 podStartE2EDuration="2.302054715s" podCreationTimestamp="2026-03-11 10:26:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 10:26:52.298628188 +0000 UTC m=+5410.964298013" watchObservedRunningTime="2026-03-11 10:26:52.302054715 +0000 UTC m=+5410.967724530" Mar 11 10:26:53 crc kubenswrapper[4840]: I0311 10:26:53.266430 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cbc5d9c5f-w6q8t" event={"ID":"0ba44e89-3431-47ff-a0e0-b8daf1cf7ce1","Type":"ContainerStarted","Data":"b0af2108c2a5cb43731d23604c1736f6b0d9cc7334351c91b9de8c9d567d2c22"} Mar 11 10:26:53 crc 
kubenswrapper[4840]: I0311 10:26:53.266652 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-cbc5d9c5f-w6q8t" Mar 11 10:26:53 crc kubenswrapper[4840]: I0311 10:26:53.286640 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-cbc5d9c5f-w6q8t" podStartSLOduration=3.286620601 podStartE2EDuration="3.286620601s" podCreationTimestamp="2026-03-11 10:26:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 10:26:53.284759494 +0000 UTC m=+5411.950429339" watchObservedRunningTime="2026-03-11 10:26:53.286620601 +0000 UTC m=+5411.952290416" Mar 11 10:26:54 crc kubenswrapper[4840]: I0311 10:26:54.534343 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Mar 11 10:26:55 crc kubenswrapper[4840]: I0311 10:26:55.283950 4840 generic.go:334] "Generic (PLEG): container finished" podID="b533d00d-701f-441f-b2b3-46664c7f23bb" containerID="adbda499061563c301fc0914bd80ca5042a475f1f0f215cff5038698a761579c" exitCode=0 Mar 11 10:26:55 crc kubenswrapper[4840]: I0311 10:26:55.284005 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-85h8r" event={"ID":"b533d00d-701f-441f-b2b3-46664c7f23bb","Type":"ContainerDied","Data":"adbda499061563c301fc0914bd80ca5042a475f1f0f215cff5038698a761579c"} Mar 11 10:26:56 crc kubenswrapper[4840]: I0311 10:26:56.879571 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-85h8r" Mar 11 10:26:56 crc kubenswrapper[4840]: I0311 10:26:56.958845 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b533d00d-701f-441f-b2b3-46664c7f23bb-scripts\") pod \"b533d00d-701f-441f-b2b3-46664c7f23bb\" (UID: \"b533d00d-701f-441f-b2b3-46664c7f23bb\") " Mar 11 10:26:56 crc kubenswrapper[4840]: I0311 10:26:56.958890 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b533d00d-701f-441f-b2b3-46664c7f23bb-combined-ca-bundle\") pod \"b533d00d-701f-441f-b2b3-46664c7f23bb\" (UID: \"b533d00d-701f-441f-b2b3-46664c7f23bb\") " Mar 11 10:26:56 crc kubenswrapper[4840]: I0311 10:26:56.958908 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b533d00d-701f-441f-b2b3-46664c7f23bb-fernet-keys\") pod \"b533d00d-701f-441f-b2b3-46664c7f23bb\" (UID: \"b533d00d-701f-441f-b2b3-46664c7f23bb\") " Mar 11 10:26:56 crc kubenswrapper[4840]: I0311 10:26:56.958939 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b533d00d-701f-441f-b2b3-46664c7f23bb-credential-keys\") pod \"b533d00d-701f-441f-b2b3-46664c7f23bb\" (UID: \"b533d00d-701f-441f-b2b3-46664c7f23bb\") " Mar 11 10:26:56 crc kubenswrapper[4840]: I0311 10:26:56.958999 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b533d00d-701f-441f-b2b3-46664c7f23bb-config-data\") pod \"b533d00d-701f-441f-b2b3-46664c7f23bb\" (UID: \"b533d00d-701f-441f-b2b3-46664c7f23bb\") " Mar 11 10:26:56 crc kubenswrapper[4840]: I0311 10:26:56.959036 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2cfk8\" (UniqueName: 
\"kubernetes.io/projected/b533d00d-701f-441f-b2b3-46664c7f23bb-kube-api-access-2cfk8\") pod \"b533d00d-701f-441f-b2b3-46664c7f23bb\" (UID: \"b533d00d-701f-441f-b2b3-46664c7f23bb\") " Mar 11 10:26:56 crc kubenswrapper[4840]: I0311 10:26:56.964686 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b533d00d-701f-441f-b2b3-46664c7f23bb-scripts" (OuterVolumeSpecName: "scripts") pod "b533d00d-701f-441f-b2b3-46664c7f23bb" (UID: "b533d00d-701f-441f-b2b3-46664c7f23bb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 10:26:56 crc kubenswrapper[4840]: I0311 10:26:56.964780 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b533d00d-701f-441f-b2b3-46664c7f23bb-kube-api-access-2cfk8" (OuterVolumeSpecName: "kube-api-access-2cfk8") pod "b533d00d-701f-441f-b2b3-46664c7f23bb" (UID: "b533d00d-701f-441f-b2b3-46664c7f23bb"). InnerVolumeSpecName "kube-api-access-2cfk8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:26:56 crc kubenswrapper[4840]: I0311 10:26:56.966568 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b533d00d-701f-441f-b2b3-46664c7f23bb-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "b533d00d-701f-441f-b2b3-46664c7f23bb" (UID: "b533d00d-701f-441f-b2b3-46664c7f23bb"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 10:26:56 crc kubenswrapper[4840]: I0311 10:26:56.978150 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b533d00d-701f-441f-b2b3-46664c7f23bb-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "b533d00d-701f-441f-b2b3-46664c7f23bb" (UID: "b533d00d-701f-441f-b2b3-46664c7f23bb"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 10:26:56 crc kubenswrapper[4840]: I0311 10:26:56.983908 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b533d00d-701f-441f-b2b3-46664c7f23bb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b533d00d-701f-441f-b2b3-46664c7f23bb" (UID: "b533d00d-701f-441f-b2b3-46664c7f23bb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 10:26:56 crc kubenswrapper[4840]: I0311 10:26:56.985428 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b533d00d-701f-441f-b2b3-46664c7f23bb-config-data" (OuterVolumeSpecName: "config-data") pod "b533d00d-701f-441f-b2b3-46664c7f23bb" (UID: "b533d00d-701f-441f-b2b3-46664c7f23bb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 10:26:57 crc kubenswrapper[4840]: I0311 10:26:57.060460 4840 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b533d00d-701f-441f-b2b3-46664c7f23bb-scripts\") on node \"crc\" DevicePath \"\"" Mar 11 10:26:57 crc kubenswrapper[4840]: I0311 10:26:57.060510 4840 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b533d00d-701f-441f-b2b3-46664c7f23bb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 10:26:57 crc kubenswrapper[4840]: I0311 10:26:57.060522 4840 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b533d00d-701f-441f-b2b3-46664c7f23bb-fernet-keys\") on node \"crc\" DevicePath \"\"" Mar 11 10:26:57 crc kubenswrapper[4840]: I0311 10:26:57.060530 4840 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b533d00d-701f-441f-b2b3-46664c7f23bb-credential-keys\") on node \"crc\" DevicePath \"\"" Mar 11 10:26:57 crc 
kubenswrapper[4840]: I0311 10:26:57.060540 4840 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b533d00d-701f-441f-b2b3-46664c7f23bb-config-data\") on node \"crc\" DevicePath \"\"" Mar 11 10:26:57 crc kubenswrapper[4840]: I0311 10:26:57.060565 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2cfk8\" (UniqueName: \"kubernetes.io/projected/b533d00d-701f-441f-b2b3-46664c7f23bb-kube-api-access-2cfk8\") on node \"crc\" DevicePath \"\"" Mar 11 10:26:57 crc kubenswrapper[4840]: I0311 10:26:57.305500 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-85h8r" event={"ID":"b533d00d-701f-441f-b2b3-46664c7f23bb","Type":"ContainerDied","Data":"ba3f02f27a30be79c68fe5a6b45755b9c6b50d175a56c15180bbc81c4ac7753d"} Mar 11 10:26:57 crc kubenswrapper[4840]: I0311 10:26:57.305546 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ba3f02f27a30be79c68fe5a6b45755b9c6b50d175a56c15180bbc81c4ac7753d" Mar 11 10:26:57 crc kubenswrapper[4840]: I0311 10:26:57.305574 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-85h8r" Mar 11 10:26:57 crc kubenswrapper[4840]: I0311 10:26:57.479405 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-85h8r"] Mar 11 10:26:57 crc kubenswrapper[4840]: I0311 10:26:57.486331 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-85h8r"] Mar 11 10:26:57 crc kubenswrapper[4840]: I0311 10:26:57.604848 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-mmzff"] Mar 11 10:26:57 crc kubenswrapper[4840]: E0311 10:26:57.605397 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b533d00d-701f-441f-b2b3-46664c7f23bb" containerName="keystone-bootstrap" Mar 11 10:26:57 crc kubenswrapper[4840]: I0311 10:26:57.605434 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="b533d00d-701f-441f-b2b3-46664c7f23bb" containerName="keystone-bootstrap" Mar 11 10:26:57 crc kubenswrapper[4840]: I0311 10:26:57.605872 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="b533d00d-701f-441f-b2b3-46664c7f23bb" containerName="keystone-bootstrap" Mar 11 10:26:57 crc kubenswrapper[4840]: I0311 10:26:57.606879 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-mmzff" Mar 11 10:26:57 crc kubenswrapper[4840]: I0311 10:26:57.609638 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Mar 11 10:26:57 crc kubenswrapper[4840]: I0311 10:26:57.609648 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Mar 11 10:26:57 crc kubenswrapper[4840]: I0311 10:26:57.610210 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-cvj5p" Mar 11 10:26:57 crc kubenswrapper[4840]: I0311 10:26:57.610936 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Mar 11 10:26:57 crc kubenswrapper[4840]: I0311 10:26:57.611137 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Mar 11 10:26:57 crc kubenswrapper[4840]: I0311 10:26:57.623116 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-mmzff"] Mar 11 10:26:57 crc kubenswrapper[4840]: I0311 10:26:57.670154 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a00cf890-ffdb-45d6-9d2a-dfaa6efc0b68-combined-ca-bundle\") pod \"keystone-bootstrap-mmzff\" (UID: \"a00cf890-ffdb-45d6-9d2a-dfaa6efc0b68\") " pod="openstack/keystone-bootstrap-mmzff" Mar 11 10:26:57 crc kubenswrapper[4840]: I0311 10:26:57.670388 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a00cf890-ffdb-45d6-9d2a-dfaa6efc0b68-scripts\") pod \"keystone-bootstrap-mmzff\" (UID: \"a00cf890-ffdb-45d6-9d2a-dfaa6efc0b68\") " pod="openstack/keystone-bootstrap-mmzff" Mar 11 10:26:57 crc kubenswrapper[4840]: I0311 10:26:57.670639 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-24kht\" (UniqueName: \"kubernetes.io/projected/a00cf890-ffdb-45d6-9d2a-dfaa6efc0b68-kube-api-access-24kht\") pod \"keystone-bootstrap-mmzff\" (UID: \"a00cf890-ffdb-45d6-9d2a-dfaa6efc0b68\") " pod="openstack/keystone-bootstrap-mmzff" Mar 11 10:26:57 crc kubenswrapper[4840]: I0311 10:26:57.670715 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a00cf890-ffdb-45d6-9d2a-dfaa6efc0b68-fernet-keys\") pod \"keystone-bootstrap-mmzff\" (UID: \"a00cf890-ffdb-45d6-9d2a-dfaa6efc0b68\") " pod="openstack/keystone-bootstrap-mmzff" Mar 11 10:26:57 crc kubenswrapper[4840]: I0311 10:26:57.670756 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a00cf890-ffdb-45d6-9d2a-dfaa6efc0b68-config-data\") pod \"keystone-bootstrap-mmzff\" (UID: \"a00cf890-ffdb-45d6-9d2a-dfaa6efc0b68\") " pod="openstack/keystone-bootstrap-mmzff" Mar 11 10:26:57 crc kubenswrapper[4840]: I0311 10:26:57.670851 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a00cf890-ffdb-45d6-9d2a-dfaa6efc0b68-credential-keys\") pod \"keystone-bootstrap-mmzff\" (UID: \"a00cf890-ffdb-45d6-9d2a-dfaa6efc0b68\") " pod="openstack/keystone-bootstrap-mmzff" Mar 11 10:26:57 crc kubenswrapper[4840]: I0311 10:26:57.772023 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24kht\" (UniqueName: \"kubernetes.io/projected/a00cf890-ffdb-45d6-9d2a-dfaa6efc0b68-kube-api-access-24kht\") pod \"keystone-bootstrap-mmzff\" (UID: \"a00cf890-ffdb-45d6-9d2a-dfaa6efc0b68\") " pod="openstack/keystone-bootstrap-mmzff" Mar 11 10:26:57 crc kubenswrapper[4840]: I0311 10:26:57.772191 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/a00cf890-ffdb-45d6-9d2a-dfaa6efc0b68-fernet-keys\") pod \"keystone-bootstrap-mmzff\" (UID: \"a00cf890-ffdb-45d6-9d2a-dfaa6efc0b68\") " pod="openstack/keystone-bootstrap-mmzff" Mar 11 10:26:57 crc kubenswrapper[4840]: I0311 10:26:57.773294 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a00cf890-ffdb-45d6-9d2a-dfaa6efc0b68-config-data\") pod \"keystone-bootstrap-mmzff\" (UID: \"a00cf890-ffdb-45d6-9d2a-dfaa6efc0b68\") " pod="openstack/keystone-bootstrap-mmzff" Mar 11 10:26:57 crc kubenswrapper[4840]: I0311 10:26:57.773396 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a00cf890-ffdb-45d6-9d2a-dfaa6efc0b68-credential-keys\") pod \"keystone-bootstrap-mmzff\" (UID: \"a00cf890-ffdb-45d6-9d2a-dfaa6efc0b68\") " pod="openstack/keystone-bootstrap-mmzff" Mar 11 10:26:57 crc kubenswrapper[4840]: I0311 10:26:57.773581 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a00cf890-ffdb-45d6-9d2a-dfaa6efc0b68-combined-ca-bundle\") pod \"keystone-bootstrap-mmzff\" (UID: \"a00cf890-ffdb-45d6-9d2a-dfaa6efc0b68\") " pod="openstack/keystone-bootstrap-mmzff" Mar 11 10:26:57 crc kubenswrapper[4840]: I0311 10:26:57.773689 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a00cf890-ffdb-45d6-9d2a-dfaa6efc0b68-scripts\") pod \"keystone-bootstrap-mmzff\" (UID: \"a00cf890-ffdb-45d6-9d2a-dfaa6efc0b68\") " pod="openstack/keystone-bootstrap-mmzff" Mar 11 10:26:57 crc kubenswrapper[4840]: I0311 10:26:57.777939 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a00cf890-ffdb-45d6-9d2a-dfaa6efc0b68-combined-ca-bundle\") pod \"keystone-bootstrap-mmzff\" (UID: 
\"a00cf890-ffdb-45d6-9d2a-dfaa6efc0b68\") " pod="openstack/keystone-bootstrap-mmzff" Mar 11 10:26:57 crc kubenswrapper[4840]: I0311 10:26:57.777962 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a00cf890-ffdb-45d6-9d2a-dfaa6efc0b68-fernet-keys\") pod \"keystone-bootstrap-mmzff\" (UID: \"a00cf890-ffdb-45d6-9d2a-dfaa6efc0b68\") " pod="openstack/keystone-bootstrap-mmzff" Mar 11 10:26:57 crc kubenswrapper[4840]: I0311 10:26:57.778100 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a00cf890-ffdb-45d6-9d2a-dfaa6efc0b68-credential-keys\") pod \"keystone-bootstrap-mmzff\" (UID: \"a00cf890-ffdb-45d6-9d2a-dfaa6efc0b68\") " pod="openstack/keystone-bootstrap-mmzff" Mar 11 10:26:57 crc kubenswrapper[4840]: I0311 10:26:57.780220 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a00cf890-ffdb-45d6-9d2a-dfaa6efc0b68-config-data\") pod \"keystone-bootstrap-mmzff\" (UID: \"a00cf890-ffdb-45d6-9d2a-dfaa6efc0b68\") " pod="openstack/keystone-bootstrap-mmzff" Mar 11 10:26:57 crc kubenswrapper[4840]: I0311 10:26:57.781288 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a00cf890-ffdb-45d6-9d2a-dfaa6efc0b68-scripts\") pod \"keystone-bootstrap-mmzff\" (UID: \"a00cf890-ffdb-45d6-9d2a-dfaa6efc0b68\") " pod="openstack/keystone-bootstrap-mmzff" Mar 11 10:26:57 crc kubenswrapper[4840]: I0311 10:26:57.796683 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24kht\" (UniqueName: \"kubernetes.io/projected/a00cf890-ffdb-45d6-9d2a-dfaa6efc0b68-kube-api-access-24kht\") pod \"keystone-bootstrap-mmzff\" (UID: \"a00cf890-ffdb-45d6-9d2a-dfaa6efc0b68\") " pod="openstack/keystone-bootstrap-mmzff" Mar 11 10:26:57 crc kubenswrapper[4840]: I0311 10:26:57.936624 4840 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-mmzff" Mar 11 10:26:58 crc kubenswrapper[4840]: I0311 10:26:58.080318 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b533d00d-701f-441f-b2b3-46664c7f23bb" path="/var/lib/kubelet/pods/b533d00d-701f-441f-b2b3-46664c7f23bb/volumes" Mar 11 10:26:58 crc kubenswrapper[4840]: I0311 10:26:58.865450 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-mmzff"] Mar 11 10:26:59 crc kubenswrapper[4840]: I0311 10:26:59.325394 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-mmzff" event={"ID":"a00cf890-ffdb-45d6-9d2a-dfaa6efc0b68","Type":"ContainerStarted","Data":"f6604405be5d3d01272e1c73ef7e5372f5edce5bd15205b9ea41f1de14273938"} Mar 11 10:26:59 crc kubenswrapper[4840]: I0311 10:26:59.325820 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-mmzff" event={"ID":"a00cf890-ffdb-45d6-9d2a-dfaa6efc0b68","Type":"ContainerStarted","Data":"168e799e5c7b94fb9a8176d9f627823130489d60273de64185fb111158fe580b"} Mar 11 10:26:59 crc kubenswrapper[4840]: I0311 10:26:59.356811 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-mmzff" podStartSLOduration=2.356788067 podStartE2EDuration="2.356788067s" podCreationTimestamp="2026-03-11 10:26:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 10:26:59.349532183 +0000 UTC m=+5418.015202058" watchObservedRunningTime="2026-03-11 10:26:59.356788067 +0000 UTC m=+5418.022457922" Mar 11 10:27:00 crc kubenswrapper[4840]: I0311 10:27:00.823734 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-cbc5d9c5f-w6q8t" Mar 11 10:27:00 crc kubenswrapper[4840]: I0311 10:27:00.896613 4840 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/dnsmasq-dns-cf4cccc77-szj74"] Mar 11 10:27:00 crc kubenswrapper[4840]: I0311 10:27:00.896877 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-cf4cccc77-szj74" podUID="84734188-5a59-409c-b48d-7c68235005b5" containerName="dnsmasq-dns" containerID="cri-o://a8c8ca8c645618865f874401e3808bf0c9bd5bc7c9d42267da19faf426f66ec4" gracePeriod=10 Mar 11 10:27:01 crc kubenswrapper[4840]: I0311 10:27:01.349721 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cf4cccc77-szj74" Mar 11 10:27:01 crc kubenswrapper[4840]: I0311 10:27:01.355319 4840 generic.go:334] "Generic (PLEG): container finished" podID="84734188-5a59-409c-b48d-7c68235005b5" containerID="a8c8ca8c645618865f874401e3808bf0c9bd5bc7c9d42267da19faf426f66ec4" exitCode=0 Mar 11 10:27:01 crc kubenswrapper[4840]: I0311 10:27:01.355426 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf4cccc77-szj74" event={"ID":"84734188-5a59-409c-b48d-7c68235005b5","Type":"ContainerDied","Data":"a8c8ca8c645618865f874401e3808bf0c9bd5bc7c9d42267da19faf426f66ec4"} Mar 11 10:27:01 crc kubenswrapper[4840]: I0311 10:27:01.355519 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf4cccc77-szj74" event={"ID":"84734188-5a59-409c-b48d-7c68235005b5","Type":"ContainerDied","Data":"cc037ac93f78df62a6ef08fc441c1e739bdba0acd5fef201e8d8cf429b5e44d4"} Mar 11 10:27:01 crc kubenswrapper[4840]: I0311 10:27:01.355594 4840 scope.go:117] "RemoveContainer" containerID="a8c8ca8c645618865f874401e3808bf0c9bd5bc7c9d42267da19faf426f66ec4" Mar 11 10:27:01 crc kubenswrapper[4840]: I0311 10:27:01.355734 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cf4cccc77-szj74" Mar 11 10:27:01 crc kubenswrapper[4840]: I0311 10:27:01.383862 4840 scope.go:117] "RemoveContainer" containerID="3c6ddef317340f1dc7f72e6b129251da4be9c8aee050bc037d7a6f11136b046b" Mar 11 10:27:01 crc kubenswrapper[4840]: I0311 10:27:01.420126 4840 scope.go:117] "RemoveContainer" containerID="a8c8ca8c645618865f874401e3808bf0c9bd5bc7c9d42267da19faf426f66ec4" Mar 11 10:27:01 crc kubenswrapper[4840]: E0311 10:27:01.420699 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8c8ca8c645618865f874401e3808bf0c9bd5bc7c9d42267da19faf426f66ec4\": container with ID starting with a8c8ca8c645618865f874401e3808bf0c9bd5bc7c9d42267da19faf426f66ec4 not found: ID does not exist" containerID="a8c8ca8c645618865f874401e3808bf0c9bd5bc7c9d42267da19faf426f66ec4" Mar 11 10:27:01 crc kubenswrapper[4840]: I0311 10:27:01.420749 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8c8ca8c645618865f874401e3808bf0c9bd5bc7c9d42267da19faf426f66ec4"} err="failed to get container status \"a8c8ca8c645618865f874401e3808bf0c9bd5bc7c9d42267da19faf426f66ec4\": rpc error: code = NotFound desc = could not find container \"a8c8ca8c645618865f874401e3808bf0c9bd5bc7c9d42267da19faf426f66ec4\": container with ID starting with a8c8ca8c645618865f874401e3808bf0c9bd5bc7c9d42267da19faf426f66ec4 not found: ID does not exist" Mar 11 10:27:01 crc kubenswrapper[4840]: I0311 10:27:01.420780 4840 scope.go:117] "RemoveContainer" containerID="3c6ddef317340f1dc7f72e6b129251da4be9c8aee050bc037d7a6f11136b046b" Mar 11 10:27:01 crc kubenswrapper[4840]: E0311 10:27:01.421036 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c6ddef317340f1dc7f72e6b129251da4be9c8aee050bc037d7a6f11136b046b\": container with ID starting with 
3c6ddef317340f1dc7f72e6b129251da4be9c8aee050bc037d7a6f11136b046b not found: ID does not exist" containerID="3c6ddef317340f1dc7f72e6b129251da4be9c8aee050bc037d7a6f11136b046b" Mar 11 10:27:01 crc kubenswrapper[4840]: I0311 10:27:01.421062 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c6ddef317340f1dc7f72e6b129251da4be9c8aee050bc037d7a6f11136b046b"} err="failed to get container status \"3c6ddef317340f1dc7f72e6b129251da4be9c8aee050bc037d7a6f11136b046b\": rpc error: code = NotFound desc = could not find container \"3c6ddef317340f1dc7f72e6b129251da4be9c8aee050bc037d7a6f11136b046b\": container with ID starting with 3c6ddef317340f1dc7f72e6b129251da4be9c8aee050bc037d7a6f11136b046b not found: ID does not exist" Mar 11 10:27:01 crc kubenswrapper[4840]: I0311 10:27:01.434257 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cg6w7\" (UniqueName: \"kubernetes.io/projected/84734188-5a59-409c-b48d-7c68235005b5-kube-api-access-cg6w7\") pod \"84734188-5a59-409c-b48d-7c68235005b5\" (UID: \"84734188-5a59-409c-b48d-7c68235005b5\") " Mar 11 10:27:01 crc kubenswrapper[4840]: I0311 10:27:01.434530 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/84734188-5a59-409c-b48d-7c68235005b5-ovsdbserver-sb\") pod \"84734188-5a59-409c-b48d-7c68235005b5\" (UID: \"84734188-5a59-409c-b48d-7c68235005b5\") " Mar 11 10:27:01 crc kubenswrapper[4840]: I0311 10:27:01.434644 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/84734188-5a59-409c-b48d-7c68235005b5-ovsdbserver-nb\") pod \"84734188-5a59-409c-b48d-7c68235005b5\" (UID: \"84734188-5a59-409c-b48d-7c68235005b5\") " Mar 11 10:27:01 crc kubenswrapper[4840]: I0311 10:27:01.434832 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/84734188-5a59-409c-b48d-7c68235005b5-dns-svc\") pod \"84734188-5a59-409c-b48d-7c68235005b5\" (UID: \"84734188-5a59-409c-b48d-7c68235005b5\") " Mar 11 10:27:01 crc kubenswrapper[4840]: I0311 10:27:01.434912 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84734188-5a59-409c-b48d-7c68235005b5-config\") pod \"84734188-5a59-409c-b48d-7c68235005b5\" (UID: \"84734188-5a59-409c-b48d-7c68235005b5\") " Mar 11 10:27:01 crc kubenswrapper[4840]: I0311 10:27:01.445662 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84734188-5a59-409c-b48d-7c68235005b5-kube-api-access-cg6w7" (OuterVolumeSpecName: "kube-api-access-cg6w7") pod "84734188-5a59-409c-b48d-7c68235005b5" (UID: "84734188-5a59-409c-b48d-7c68235005b5"). InnerVolumeSpecName "kube-api-access-cg6w7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:27:01 crc kubenswrapper[4840]: I0311 10:27:01.472049 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84734188-5a59-409c-b48d-7c68235005b5-config" (OuterVolumeSpecName: "config") pod "84734188-5a59-409c-b48d-7c68235005b5" (UID: "84734188-5a59-409c-b48d-7c68235005b5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 10:27:01 crc kubenswrapper[4840]: I0311 10:27:01.476255 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84734188-5a59-409c-b48d-7c68235005b5-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "84734188-5a59-409c-b48d-7c68235005b5" (UID: "84734188-5a59-409c-b48d-7c68235005b5"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 10:27:01 crc kubenswrapper[4840]: I0311 10:27:01.500067 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84734188-5a59-409c-b48d-7c68235005b5-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "84734188-5a59-409c-b48d-7c68235005b5" (UID: "84734188-5a59-409c-b48d-7c68235005b5"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 10:27:01 crc kubenswrapper[4840]: I0311 10:27:01.509066 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84734188-5a59-409c-b48d-7c68235005b5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "84734188-5a59-409c-b48d-7c68235005b5" (UID: "84734188-5a59-409c-b48d-7c68235005b5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 10:27:01 crc kubenswrapper[4840]: I0311 10:27:01.537439 4840 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/84734188-5a59-409c-b48d-7c68235005b5-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 11 10:27:01 crc kubenswrapper[4840]: I0311 10:27:01.537492 4840 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84734188-5a59-409c-b48d-7c68235005b5-config\") on node \"crc\" DevicePath \"\"" Mar 11 10:27:01 crc kubenswrapper[4840]: I0311 10:27:01.537509 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cg6w7\" (UniqueName: \"kubernetes.io/projected/84734188-5a59-409c-b48d-7c68235005b5-kube-api-access-cg6w7\") on node \"crc\" DevicePath \"\"" Mar 11 10:27:01 crc kubenswrapper[4840]: I0311 10:27:01.537525 4840 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/84734188-5a59-409c-b48d-7c68235005b5-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 11 10:27:01 crc 
kubenswrapper[4840]: I0311 10:27:01.537540 4840 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/84734188-5a59-409c-b48d-7c68235005b5-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 11 10:27:01 crc kubenswrapper[4840]: I0311 10:27:01.688213 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cf4cccc77-szj74"] Mar 11 10:27:01 crc kubenswrapper[4840]: I0311 10:27:01.694769 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-cf4cccc77-szj74"] Mar 11 10:27:02 crc kubenswrapper[4840]: I0311 10:27:02.072090 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84734188-5a59-409c-b48d-7c68235005b5" path="/var/lib/kubelet/pods/84734188-5a59-409c-b48d-7c68235005b5/volumes" Mar 11 10:27:02 crc kubenswrapper[4840]: I0311 10:27:02.370583 4840 generic.go:334] "Generic (PLEG): container finished" podID="a00cf890-ffdb-45d6-9d2a-dfaa6efc0b68" containerID="f6604405be5d3d01272e1c73ef7e5372f5edce5bd15205b9ea41f1de14273938" exitCode=0 Mar 11 10:27:02 crc kubenswrapper[4840]: I0311 10:27:02.370713 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-mmzff" event={"ID":"a00cf890-ffdb-45d6-9d2a-dfaa6efc0b68","Type":"ContainerDied","Data":"f6604405be5d3d01272e1c73ef7e5372f5edce5bd15205b9ea41f1de14273938"} Mar 11 10:27:03 crc kubenswrapper[4840]: I0311 10:27:03.784453 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-mmzff" Mar 11 10:27:03 crc kubenswrapper[4840]: I0311 10:27:03.879947 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a00cf890-ffdb-45d6-9d2a-dfaa6efc0b68-combined-ca-bundle\") pod \"a00cf890-ffdb-45d6-9d2a-dfaa6efc0b68\" (UID: \"a00cf890-ffdb-45d6-9d2a-dfaa6efc0b68\") " Mar 11 10:27:03 crc kubenswrapper[4840]: I0311 10:27:03.880572 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a00cf890-ffdb-45d6-9d2a-dfaa6efc0b68-fernet-keys\") pod \"a00cf890-ffdb-45d6-9d2a-dfaa6efc0b68\" (UID: \"a00cf890-ffdb-45d6-9d2a-dfaa6efc0b68\") " Mar 11 10:27:03 crc kubenswrapper[4840]: I0311 10:27:03.880678 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a00cf890-ffdb-45d6-9d2a-dfaa6efc0b68-config-data\") pod \"a00cf890-ffdb-45d6-9d2a-dfaa6efc0b68\" (UID: \"a00cf890-ffdb-45d6-9d2a-dfaa6efc0b68\") " Mar 11 10:27:03 crc kubenswrapper[4840]: I0311 10:27:03.880822 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-24kht\" (UniqueName: \"kubernetes.io/projected/a00cf890-ffdb-45d6-9d2a-dfaa6efc0b68-kube-api-access-24kht\") pod \"a00cf890-ffdb-45d6-9d2a-dfaa6efc0b68\" (UID: \"a00cf890-ffdb-45d6-9d2a-dfaa6efc0b68\") " Mar 11 10:27:03 crc kubenswrapper[4840]: I0311 10:27:03.880930 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a00cf890-ffdb-45d6-9d2a-dfaa6efc0b68-scripts\") pod \"a00cf890-ffdb-45d6-9d2a-dfaa6efc0b68\" (UID: \"a00cf890-ffdb-45d6-9d2a-dfaa6efc0b68\") " Mar 11 10:27:03 crc kubenswrapper[4840]: I0311 10:27:03.881066 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/a00cf890-ffdb-45d6-9d2a-dfaa6efc0b68-credential-keys\") pod \"a00cf890-ffdb-45d6-9d2a-dfaa6efc0b68\" (UID: \"a00cf890-ffdb-45d6-9d2a-dfaa6efc0b68\") " Mar 11 10:27:03 crc kubenswrapper[4840]: I0311 10:27:03.884998 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a00cf890-ffdb-45d6-9d2a-dfaa6efc0b68-kube-api-access-24kht" (OuterVolumeSpecName: "kube-api-access-24kht") pod "a00cf890-ffdb-45d6-9d2a-dfaa6efc0b68" (UID: "a00cf890-ffdb-45d6-9d2a-dfaa6efc0b68"). InnerVolumeSpecName "kube-api-access-24kht". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:27:03 crc kubenswrapper[4840]: I0311 10:27:03.885171 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a00cf890-ffdb-45d6-9d2a-dfaa6efc0b68-scripts" (OuterVolumeSpecName: "scripts") pod "a00cf890-ffdb-45d6-9d2a-dfaa6efc0b68" (UID: "a00cf890-ffdb-45d6-9d2a-dfaa6efc0b68"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 10:27:03 crc kubenswrapper[4840]: I0311 10:27:03.885542 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a00cf890-ffdb-45d6-9d2a-dfaa6efc0b68-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "a00cf890-ffdb-45d6-9d2a-dfaa6efc0b68" (UID: "a00cf890-ffdb-45d6-9d2a-dfaa6efc0b68"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 10:27:03 crc kubenswrapper[4840]: I0311 10:27:03.885680 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a00cf890-ffdb-45d6-9d2a-dfaa6efc0b68-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "a00cf890-ffdb-45d6-9d2a-dfaa6efc0b68" (UID: "a00cf890-ffdb-45d6-9d2a-dfaa6efc0b68"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 10:27:03 crc kubenswrapper[4840]: I0311 10:27:03.901305 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a00cf890-ffdb-45d6-9d2a-dfaa6efc0b68-config-data" (OuterVolumeSpecName: "config-data") pod "a00cf890-ffdb-45d6-9d2a-dfaa6efc0b68" (UID: "a00cf890-ffdb-45d6-9d2a-dfaa6efc0b68"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 10:27:03 crc kubenswrapper[4840]: I0311 10:27:03.905900 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a00cf890-ffdb-45d6-9d2a-dfaa6efc0b68-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a00cf890-ffdb-45d6-9d2a-dfaa6efc0b68" (UID: "a00cf890-ffdb-45d6-9d2a-dfaa6efc0b68"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 10:27:03 crc kubenswrapper[4840]: I0311 10:27:03.982698 4840 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a00cf890-ffdb-45d6-9d2a-dfaa6efc0b68-config-data\") on node \"crc\" DevicePath \"\"" Mar 11 10:27:03 crc kubenswrapper[4840]: I0311 10:27:03.982729 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-24kht\" (UniqueName: \"kubernetes.io/projected/a00cf890-ffdb-45d6-9d2a-dfaa6efc0b68-kube-api-access-24kht\") on node \"crc\" DevicePath \"\"" Mar 11 10:27:03 crc kubenswrapper[4840]: I0311 10:27:03.982738 4840 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a00cf890-ffdb-45d6-9d2a-dfaa6efc0b68-scripts\") on node \"crc\" DevicePath \"\"" Mar 11 10:27:03 crc kubenswrapper[4840]: I0311 10:27:03.982747 4840 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a00cf890-ffdb-45d6-9d2a-dfaa6efc0b68-credential-keys\") on node \"crc\" DevicePath \"\"" Mar 11 
10:27:03 crc kubenswrapper[4840]: I0311 10:27:03.982758 4840 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a00cf890-ffdb-45d6-9d2a-dfaa6efc0b68-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 11 10:27:03 crc kubenswrapper[4840]: I0311 10:27:03.982768 4840 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a00cf890-ffdb-45d6-9d2a-dfaa6efc0b68-fernet-keys\") on node \"crc\" DevicePath \"\"" Mar 11 10:27:04 crc kubenswrapper[4840]: I0311 10:27:04.396538 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-mmzff" event={"ID":"a00cf890-ffdb-45d6-9d2a-dfaa6efc0b68","Type":"ContainerDied","Data":"168e799e5c7b94fb9a8176d9f627823130489d60273de64185fb111158fe580b"} Mar 11 10:27:04 crc kubenswrapper[4840]: I0311 10:27:04.396583 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="168e799e5c7b94fb9a8176d9f627823130489d60273de64185fb111158fe580b" Mar 11 10:27:04 crc kubenswrapper[4840]: I0311 10:27:04.396649 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-mmzff" Mar 11 10:27:04 crc kubenswrapper[4840]: I0311 10:27:04.501622 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-76548565b-rrmpc"] Mar 11 10:27:04 crc kubenswrapper[4840]: E0311 10:27:04.502142 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84734188-5a59-409c-b48d-7c68235005b5" containerName="init" Mar 11 10:27:04 crc kubenswrapper[4840]: I0311 10:27:04.502170 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="84734188-5a59-409c-b48d-7c68235005b5" containerName="init" Mar 11 10:27:04 crc kubenswrapper[4840]: E0311 10:27:04.502219 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a00cf890-ffdb-45d6-9d2a-dfaa6efc0b68" containerName="keystone-bootstrap" Mar 11 10:27:04 crc kubenswrapper[4840]: I0311 10:27:04.502236 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="a00cf890-ffdb-45d6-9d2a-dfaa6efc0b68" containerName="keystone-bootstrap" Mar 11 10:27:04 crc kubenswrapper[4840]: E0311 10:27:04.502276 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84734188-5a59-409c-b48d-7c68235005b5" containerName="dnsmasq-dns" Mar 11 10:27:04 crc kubenswrapper[4840]: I0311 10:27:04.502289 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="84734188-5a59-409c-b48d-7c68235005b5" containerName="dnsmasq-dns" Mar 11 10:27:04 crc kubenswrapper[4840]: I0311 10:27:04.502708 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="a00cf890-ffdb-45d6-9d2a-dfaa6efc0b68" containerName="keystone-bootstrap" Mar 11 10:27:04 crc kubenswrapper[4840]: I0311 10:27:04.502752 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="84734188-5a59-409c-b48d-7c68235005b5" containerName="dnsmasq-dns" Mar 11 10:27:04 crc kubenswrapper[4840]: I0311 10:27:04.503738 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-76548565b-rrmpc" Mar 11 10:27:04 crc kubenswrapper[4840]: I0311 10:27:04.507848 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Mar 11 10:27:04 crc kubenswrapper[4840]: I0311 10:27:04.508430 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Mar 11 10:27:04 crc kubenswrapper[4840]: I0311 10:27:04.509349 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Mar 11 10:27:04 crc kubenswrapper[4840]: I0311 10:27:04.513129 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Mar 11 10:27:04 crc kubenswrapper[4840]: I0311 10:27:04.513198 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Mar 11 10:27:04 crc kubenswrapper[4840]: I0311 10:27:04.513736 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-cvj5p" Mar 11 10:27:04 crc kubenswrapper[4840]: I0311 10:27:04.527257 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-76548565b-rrmpc"] Mar 11 10:27:04 crc kubenswrapper[4840]: I0311 10:27:04.592752 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8225c1e8-6b6f-4d6b-a744-29b0fdc802df-config-data\") pod \"keystone-76548565b-rrmpc\" (UID: \"8225c1e8-6b6f-4d6b-a744-29b0fdc802df\") " pod="openstack/keystone-76548565b-rrmpc" Mar 11 10:27:04 crc kubenswrapper[4840]: I0311 10:27:04.592824 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8225c1e8-6b6f-4d6b-a744-29b0fdc802df-combined-ca-bundle\") pod \"keystone-76548565b-rrmpc\" (UID: \"8225c1e8-6b6f-4d6b-a744-29b0fdc802df\") " 
pod="openstack/keystone-76548565b-rrmpc" Mar 11 10:27:04 crc kubenswrapper[4840]: I0311 10:27:04.592862 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8225c1e8-6b6f-4d6b-a744-29b0fdc802df-fernet-keys\") pod \"keystone-76548565b-rrmpc\" (UID: \"8225c1e8-6b6f-4d6b-a744-29b0fdc802df\") " pod="openstack/keystone-76548565b-rrmpc" Mar 11 10:27:04 crc kubenswrapper[4840]: I0311 10:27:04.592941 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8225c1e8-6b6f-4d6b-a744-29b0fdc802df-scripts\") pod \"keystone-76548565b-rrmpc\" (UID: \"8225c1e8-6b6f-4d6b-a744-29b0fdc802df\") " pod="openstack/keystone-76548565b-rrmpc" Mar 11 10:27:04 crc kubenswrapper[4840]: I0311 10:27:04.592971 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hqj4\" (UniqueName: \"kubernetes.io/projected/8225c1e8-6b6f-4d6b-a744-29b0fdc802df-kube-api-access-5hqj4\") pod \"keystone-76548565b-rrmpc\" (UID: \"8225c1e8-6b6f-4d6b-a744-29b0fdc802df\") " pod="openstack/keystone-76548565b-rrmpc" Mar 11 10:27:04 crc kubenswrapper[4840]: I0311 10:27:04.593045 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8225c1e8-6b6f-4d6b-a744-29b0fdc802df-internal-tls-certs\") pod \"keystone-76548565b-rrmpc\" (UID: \"8225c1e8-6b6f-4d6b-a744-29b0fdc802df\") " pod="openstack/keystone-76548565b-rrmpc" Mar 11 10:27:04 crc kubenswrapper[4840]: I0311 10:27:04.593070 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8225c1e8-6b6f-4d6b-a744-29b0fdc802df-credential-keys\") pod \"keystone-76548565b-rrmpc\" (UID: \"8225c1e8-6b6f-4d6b-a744-29b0fdc802df\") " 
pod="openstack/keystone-76548565b-rrmpc" Mar 11 10:27:04 crc kubenswrapper[4840]: I0311 10:27:04.593096 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8225c1e8-6b6f-4d6b-a744-29b0fdc802df-public-tls-certs\") pod \"keystone-76548565b-rrmpc\" (UID: \"8225c1e8-6b6f-4d6b-a744-29b0fdc802df\") " pod="openstack/keystone-76548565b-rrmpc" Mar 11 10:27:04 crc kubenswrapper[4840]: I0311 10:27:04.695831 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8225c1e8-6b6f-4d6b-a744-29b0fdc802df-scripts\") pod \"keystone-76548565b-rrmpc\" (UID: \"8225c1e8-6b6f-4d6b-a744-29b0fdc802df\") " pod="openstack/keystone-76548565b-rrmpc" Mar 11 10:27:04 crc kubenswrapper[4840]: I0311 10:27:04.695870 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5hqj4\" (UniqueName: \"kubernetes.io/projected/8225c1e8-6b6f-4d6b-a744-29b0fdc802df-kube-api-access-5hqj4\") pod \"keystone-76548565b-rrmpc\" (UID: \"8225c1e8-6b6f-4d6b-a744-29b0fdc802df\") " pod="openstack/keystone-76548565b-rrmpc" Mar 11 10:27:04 crc kubenswrapper[4840]: I0311 10:27:04.695912 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8225c1e8-6b6f-4d6b-a744-29b0fdc802df-internal-tls-certs\") pod \"keystone-76548565b-rrmpc\" (UID: \"8225c1e8-6b6f-4d6b-a744-29b0fdc802df\") " pod="openstack/keystone-76548565b-rrmpc" Mar 11 10:27:04 crc kubenswrapper[4840]: I0311 10:27:04.695931 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8225c1e8-6b6f-4d6b-a744-29b0fdc802df-credential-keys\") pod \"keystone-76548565b-rrmpc\" (UID: \"8225c1e8-6b6f-4d6b-a744-29b0fdc802df\") " pod="openstack/keystone-76548565b-rrmpc" Mar 11 10:27:04 crc 
kubenswrapper[4840]: I0311 10:27:04.695950 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8225c1e8-6b6f-4d6b-a744-29b0fdc802df-public-tls-certs\") pod \"keystone-76548565b-rrmpc\" (UID: \"8225c1e8-6b6f-4d6b-a744-29b0fdc802df\") " pod="openstack/keystone-76548565b-rrmpc" Mar 11 10:27:04 crc kubenswrapper[4840]: I0311 10:27:04.695990 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8225c1e8-6b6f-4d6b-a744-29b0fdc802df-config-data\") pod \"keystone-76548565b-rrmpc\" (UID: \"8225c1e8-6b6f-4d6b-a744-29b0fdc802df\") " pod="openstack/keystone-76548565b-rrmpc" Mar 11 10:27:04 crc kubenswrapper[4840]: I0311 10:27:04.696015 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8225c1e8-6b6f-4d6b-a744-29b0fdc802df-combined-ca-bundle\") pod \"keystone-76548565b-rrmpc\" (UID: \"8225c1e8-6b6f-4d6b-a744-29b0fdc802df\") " pod="openstack/keystone-76548565b-rrmpc" Mar 11 10:27:04 crc kubenswrapper[4840]: I0311 10:27:04.696038 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8225c1e8-6b6f-4d6b-a744-29b0fdc802df-fernet-keys\") pod \"keystone-76548565b-rrmpc\" (UID: \"8225c1e8-6b6f-4d6b-a744-29b0fdc802df\") " pod="openstack/keystone-76548565b-rrmpc" Mar 11 10:27:04 crc kubenswrapper[4840]: I0311 10:27:04.701372 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8225c1e8-6b6f-4d6b-a744-29b0fdc802df-fernet-keys\") pod \"keystone-76548565b-rrmpc\" (UID: \"8225c1e8-6b6f-4d6b-a744-29b0fdc802df\") " pod="openstack/keystone-76548565b-rrmpc" Mar 11 10:27:04 crc kubenswrapper[4840]: I0311 10:27:04.707822 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8225c1e8-6b6f-4d6b-a744-29b0fdc802df-combined-ca-bundle\") pod \"keystone-76548565b-rrmpc\" (UID: \"8225c1e8-6b6f-4d6b-a744-29b0fdc802df\") " pod="openstack/keystone-76548565b-rrmpc" Mar 11 10:27:04 crc kubenswrapper[4840]: I0311 10:27:04.711974 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8225c1e8-6b6f-4d6b-a744-29b0fdc802df-credential-keys\") pod \"keystone-76548565b-rrmpc\" (UID: \"8225c1e8-6b6f-4d6b-a744-29b0fdc802df\") " pod="openstack/keystone-76548565b-rrmpc" Mar 11 10:27:04 crc kubenswrapper[4840]: I0311 10:27:04.719798 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8225c1e8-6b6f-4d6b-a744-29b0fdc802df-config-data\") pod \"keystone-76548565b-rrmpc\" (UID: \"8225c1e8-6b6f-4d6b-a744-29b0fdc802df\") " pod="openstack/keystone-76548565b-rrmpc" Mar 11 10:27:04 crc kubenswrapper[4840]: I0311 10:27:04.722740 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8225c1e8-6b6f-4d6b-a744-29b0fdc802df-public-tls-certs\") pod \"keystone-76548565b-rrmpc\" (UID: \"8225c1e8-6b6f-4d6b-a744-29b0fdc802df\") " pod="openstack/keystone-76548565b-rrmpc" Mar 11 10:27:04 crc kubenswrapper[4840]: I0311 10:27:04.723158 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8225c1e8-6b6f-4d6b-a744-29b0fdc802df-internal-tls-certs\") pod \"keystone-76548565b-rrmpc\" (UID: \"8225c1e8-6b6f-4d6b-a744-29b0fdc802df\") " pod="openstack/keystone-76548565b-rrmpc" Mar 11 10:27:04 crc kubenswrapper[4840]: I0311 10:27:04.724328 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8225c1e8-6b6f-4d6b-a744-29b0fdc802df-scripts\") pod \"keystone-76548565b-rrmpc\" 
(UID: \"8225c1e8-6b6f-4d6b-a744-29b0fdc802df\") " pod="openstack/keystone-76548565b-rrmpc" Mar 11 10:27:04 crc kubenswrapper[4840]: I0311 10:27:04.733342 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5hqj4\" (UniqueName: \"kubernetes.io/projected/8225c1e8-6b6f-4d6b-a744-29b0fdc802df-kube-api-access-5hqj4\") pod \"keystone-76548565b-rrmpc\" (UID: \"8225c1e8-6b6f-4d6b-a744-29b0fdc802df\") " pod="openstack/keystone-76548565b-rrmpc" Mar 11 10:27:04 crc kubenswrapper[4840]: I0311 10:27:04.852968 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-76548565b-rrmpc" Mar 11 10:27:05 crc kubenswrapper[4840]: I0311 10:27:05.292921 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-76548565b-rrmpc"] Mar 11 10:27:05 crc kubenswrapper[4840]: I0311 10:27:05.406430 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-76548565b-rrmpc" event={"ID":"8225c1e8-6b6f-4d6b-a744-29b0fdc802df","Type":"ContainerStarted","Data":"ff75c22c4782bca76c29be3366f673a5fad90f3bc02b4563933e7b7668e14f86"} Mar 11 10:27:06 crc kubenswrapper[4840]: I0311 10:27:06.418833 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-76548565b-rrmpc" event={"ID":"8225c1e8-6b6f-4d6b-a744-29b0fdc802df","Type":"ContainerStarted","Data":"b5be03f925b18fae4b40efac5b14ddfedffaa17dc0863de21373c8cf59b40035"} Mar 11 10:27:06 crc kubenswrapper[4840]: I0311 10:27:06.420769 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-76548565b-rrmpc" Mar 11 10:27:06 crc kubenswrapper[4840]: I0311 10:27:06.452055 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-76548565b-rrmpc" podStartSLOduration=2.452031479 podStartE2EDuration="2.452031479s" podCreationTimestamp="2026-03-11 10:27:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-03-11 10:27:06.442508277 +0000 UTC m=+5425.108178132" watchObservedRunningTime="2026-03-11 10:27:06.452031479 +0000 UTC m=+5425.117701304" Mar 11 10:27:36 crc kubenswrapper[4840]: I0311 10:27:36.365171 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-76548565b-rrmpc" Mar 11 10:27:39 crc kubenswrapper[4840]: I0311 10:27:39.449778 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Mar 11 10:27:39 crc kubenswrapper[4840]: I0311 10:27:39.452306 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Mar 11 10:27:39 crc kubenswrapper[4840]: I0311 10:27:39.454959 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Mar 11 10:27:39 crc kubenswrapper[4840]: I0311 10:27:39.455041 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-qk4wj" Mar 11 10:27:39 crc kubenswrapper[4840]: I0311 10:27:39.455129 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Mar 11 10:27:39 crc kubenswrapper[4840]: I0311 10:27:39.465902 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Mar 11 10:27:39 crc kubenswrapper[4840]: I0311 10:27:39.576220 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4zbf\" (UniqueName: \"kubernetes.io/projected/366976d6-0ec3-444e-864b-0e721ec24799-kube-api-access-v4zbf\") pod \"openstackclient\" (UID: \"366976d6-0ec3-444e-864b-0e721ec24799\") " pod="openstack/openstackclient" Mar 11 10:27:39 crc kubenswrapper[4840]: I0311 10:27:39.576272 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/366976d6-0ec3-444e-864b-0e721ec24799-combined-ca-bundle\") pod \"openstackclient\" (UID: \"366976d6-0ec3-444e-864b-0e721ec24799\") " pod="openstack/openstackclient" Mar 11 10:27:39 crc kubenswrapper[4840]: I0311 10:27:39.576314 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/366976d6-0ec3-444e-864b-0e721ec24799-openstack-config\") pod \"openstackclient\" (UID: \"366976d6-0ec3-444e-864b-0e721ec24799\") " pod="openstack/openstackclient" Mar 11 10:27:39 crc kubenswrapper[4840]: I0311 10:27:39.576398 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/366976d6-0ec3-444e-864b-0e721ec24799-openstack-config-secret\") pod \"openstackclient\" (UID: \"366976d6-0ec3-444e-864b-0e721ec24799\") " pod="openstack/openstackclient" Mar 11 10:27:39 crc kubenswrapper[4840]: I0311 10:27:39.678276 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/366976d6-0ec3-444e-864b-0e721ec24799-openstack-config-secret\") pod \"openstackclient\" (UID: \"366976d6-0ec3-444e-864b-0e721ec24799\") " pod="openstack/openstackclient" Mar 11 10:27:39 crc kubenswrapper[4840]: I0311 10:27:39.678621 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4zbf\" (UniqueName: \"kubernetes.io/projected/366976d6-0ec3-444e-864b-0e721ec24799-kube-api-access-v4zbf\") pod \"openstackclient\" (UID: \"366976d6-0ec3-444e-864b-0e721ec24799\") " pod="openstack/openstackclient" Mar 11 10:27:39 crc kubenswrapper[4840]: I0311 10:27:39.678641 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/366976d6-0ec3-444e-864b-0e721ec24799-combined-ca-bundle\") pod 
\"openstackclient\" (UID: \"366976d6-0ec3-444e-864b-0e721ec24799\") " pod="openstack/openstackclient" Mar 11 10:27:39 crc kubenswrapper[4840]: I0311 10:27:39.678670 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/366976d6-0ec3-444e-864b-0e721ec24799-openstack-config\") pod \"openstackclient\" (UID: \"366976d6-0ec3-444e-864b-0e721ec24799\") " pod="openstack/openstackclient" Mar 11 10:27:39 crc kubenswrapper[4840]: I0311 10:27:39.679551 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/366976d6-0ec3-444e-864b-0e721ec24799-openstack-config\") pod \"openstackclient\" (UID: \"366976d6-0ec3-444e-864b-0e721ec24799\") " pod="openstack/openstackclient" Mar 11 10:27:39 crc kubenswrapper[4840]: I0311 10:27:39.684526 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/366976d6-0ec3-444e-864b-0e721ec24799-openstack-config-secret\") pod \"openstackclient\" (UID: \"366976d6-0ec3-444e-864b-0e721ec24799\") " pod="openstack/openstackclient" Mar 11 10:27:39 crc kubenswrapper[4840]: I0311 10:27:39.686542 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/366976d6-0ec3-444e-864b-0e721ec24799-combined-ca-bundle\") pod \"openstackclient\" (UID: \"366976d6-0ec3-444e-864b-0e721ec24799\") " pod="openstack/openstackclient" Mar 11 10:27:39 crc kubenswrapper[4840]: I0311 10:27:39.698004 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4zbf\" (UniqueName: \"kubernetes.io/projected/366976d6-0ec3-444e-864b-0e721ec24799-kube-api-access-v4zbf\") pod \"openstackclient\" (UID: \"366976d6-0ec3-444e-864b-0e721ec24799\") " pod="openstack/openstackclient" Mar 11 10:27:39 crc kubenswrapper[4840]: I0311 10:27:39.782202 4840 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Mar 11 10:27:40 crc kubenswrapper[4840]: I0311 10:27:40.223704 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Mar 11 10:27:40 crc kubenswrapper[4840]: I0311 10:27:40.766064 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"366976d6-0ec3-444e-864b-0e721ec24799","Type":"ContainerStarted","Data":"3752cd014cb08c225fa8ffc580359a678ec689128aaf6f0e4d3ce68fc7933f2b"} Mar 11 10:27:40 crc kubenswrapper[4840]: I0311 10:27:40.766443 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"366976d6-0ec3-444e-864b-0e721ec24799","Type":"ContainerStarted","Data":"ca9c1be14c3a8fc0b60787d2da0f89c8a2e9771f186a112ebd79e34b2d589e1c"} Mar 11 10:27:40 crc kubenswrapper[4840]: I0311 10:27:40.793549 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=1.793527275 podStartE2EDuration="1.793527275s" podCreationTimestamp="2026-03-11 10:27:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 10:27:40.78625405 +0000 UTC m=+5459.451923865" watchObservedRunningTime="2026-03-11 10:27:40.793527275 +0000 UTC m=+5459.459197090" Mar 11 10:27:57 crc kubenswrapper[4840]: I0311 10:27:57.445670 4840 patch_prober.go:28] interesting pod/machine-config-daemon-brtht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 11 10:27:57 crc kubenswrapper[4840]: I0311 10:27:57.446463 4840 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brtht" 
podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 11 10:28:00 crc kubenswrapper[4840]: I0311 10:28:00.147043 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29553748-4hcgd"] Mar 11 10:28:00 crc kubenswrapper[4840]: I0311 10:28:00.148639 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553748-4hcgd" Mar 11 10:28:00 crc kubenswrapper[4840]: I0311 10:28:00.151316 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 11 10:28:00 crc kubenswrapper[4840]: I0311 10:28:00.153836 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-q6lwc" Mar 11 10:28:00 crc kubenswrapper[4840]: I0311 10:28:00.153866 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 11 10:28:00 crc kubenswrapper[4840]: I0311 10:28:00.160887 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553748-4hcgd"] Mar 11 10:28:00 crc kubenswrapper[4840]: I0311 10:28:00.308129 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jztgk\" (UniqueName: \"kubernetes.io/projected/04916aa0-c92b-4b8e-8375-179fc4d96ba0-kube-api-access-jztgk\") pod \"auto-csr-approver-29553748-4hcgd\" (UID: \"04916aa0-c92b-4b8e-8375-179fc4d96ba0\") " pod="openshift-infra/auto-csr-approver-29553748-4hcgd" Mar 11 10:28:00 crc kubenswrapper[4840]: I0311 10:28:00.410274 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jztgk\" (UniqueName: \"kubernetes.io/projected/04916aa0-c92b-4b8e-8375-179fc4d96ba0-kube-api-access-jztgk\") pod 
\"auto-csr-approver-29553748-4hcgd\" (UID: \"04916aa0-c92b-4b8e-8375-179fc4d96ba0\") " pod="openshift-infra/auto-csr-approver-29553748-4hcgd" Mar 11 10:28:00 crc kubenswrapper[4840]: I0311 10:28:00.447764 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jztgk\" (UniqueName: \"kubernetes.io/projected/04916aa0-c92b-4b8e-8375-179fc4d96ba0-kube-api-access-jztgk\") pod \"auto-csr-approver-29553748-4hcgd\" (UID: \"04916aa0-c92b-4b8e-8375-179fc4d96ba0\") " pod="openshift-infra/auto-csr-approver-29553748-4hcgd" Mar 11 10:28:00 crc kubenswrapper[4840]: I0311 10:28:00.474669 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553748-4hcgd" Mar 11 10:28:01 crc kubenswrapper[4840]: W0311 10:28:01.005563 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod04916aa0_c92b_4b8e_8375_179fc4d96ba0.slice/crio-1e003c7301e6d5b4e818a3bb0f01fe8708cb4623fc8cf92eb393324f321d255b WatchSource:0}: Error finding container 1e003c7301e6d5b4e818a3bb0f01fe8708cb4623fc8cf92eb393324f321d255b: Status 404 returned error can't find the container with id 1e003c7301e6d5b4e818a3bb0f01fe8708cb4623fc8cf92eb393324f321d255b Mar 11 10:28:01 crc kubenswrapper[4840]: I0311 10:28:01.006230 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553748-4hcgd"] Mar 11 10:28:01 crc kubenswrapper[4840]: I0311 10:28:01.989570 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553748-4hcgd" event={"ID":"04916aa0-c92b-4b8e-8375-179fc4d96ba0","Type":"ContainerStarted","Data":"1e003c7301e6d5b4e818a3bb0f01fe8708cb4623fc8cf92eb393324f321d255b"} Mar 11 10:28:03 crc kubenswrapper[4840]: I0311 10:28:03.015999 4840 generic.go:334] "Generic (PLEG): container finished" podID="04916aa0-c92b-4b8e-8375-179fc4d96ba0" 
containerID="f9e7d522490fb11974615937edf1f31d3bd8272b383ee62d00846ae450599438" exitCode=0 Mar 11 10:28:03 crc kubenswrapper[4840]: I0311 10:28:03.016077 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553748-4hcgd" event={"ID":"04916aa0-c92b-4b8e-8375-179fc4d96ba0","Type":"ContainerDied","Data":"f9e7d522490fb11974615937edf1f31d3bd8272b383ee62d00846ae450599438"} Mar 11 10:28:04 crc kubenswrapper[4840]: I0311 10:28:04.400526 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553748-4hcgd" Mar 11 10:28:04 crc kubenswrapper[4840]: I0311 10:28:04.512152 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jztgk\" (UniqueName: \"kubernetes.io/projected/04916aa0-c92b-4b8e-8375-179fc4d96ba0-kube-api-access-jztgk\") pod \"04916aa0-c92b-4b8e-8375-179fc4d96ba0\" (UID: \"04916aa0-c92b-4b8e-8375-179fc4d96ba0\") " Mar 11 10:28:04 crc kubenswrapper[4840]: I0311 10:28:04.521315 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04916aa0-c92b-4b8e-8375-179fc4d96ba0-kube-api-access-jztgk" (OuterVolumeSpecName: "kube-api-access-jztgk") pod "04916aa0-c92b-4b8e-8375-179fc4d96ba0" (UID: "04916aa0-c92b-4b8e-8375-179fc4d96ba0"). InnerVolumeSpecName "kube-api-access-jztgk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:28:04 crc kubenswrapper[4840]: I0311 10:28:04.614743 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jztgk\" (UniqueName: \"kubernetes.io/projected/04916aa0-c92b-4b8e-8375-179fc4d96ba0-kube-api-access-jztgk\") on node \"crc\" DevicePath \"\"" Mar 11 10:28:05 crc kubenswrapper[4840]: I0311 10:28:05.035732 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553748-4hcgd" event={"ID":"04916aa0-c92b-4b8e-8375-179fc4d96ba0","Type":"ContainerDied","Data":"1e003c7301e6d5b4e818a3bb0f01fe8708cb4623fc8cf92eb393324f321d255b"} Mar 11 10:28:05 crc kubenswrapper[4840]: I0311 10:28:05.036131 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1e003c7301e6d5b4e818a3bb0f01fe8708cb4623fc8cf92eb393324f321d255b" Mar 11 10:28:05 crc kubenswrapper[4840]: I0311 10:28:05.035781 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553748-4hcgd" Mar 11 10:28:05 crc kubenswrapper[4840]: I0311 10:28:05.488578 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29553742-dx2mq"] Mar 11 10:28:05 crc kubenswrapper[4840]: I0311 10:28:05.514203 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29553742-dx2mq"] Mar 11 10:28:06 crc kubenswrapper[4840]: I0311 10:28:06.072377 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12b5624a-876b-4f92-8019-6d096072ffbf" path="/var/lib/kubelet/pods/12b5624a-876b-4f92-8019-6d096072ffbf/volumes" Mar 11 10:28:12 crc kubenswrapper[4840]: I0311 10:28:12.710726 4840 scope.go:117] "RemoveContainer" containerID="825406a9ebb76948474ff5d1f1c62bd07d50dc438d94522761491cd8a3abf229" Mar 11 10:28:16 crc kubenswrapper[4840]: E0311 10:28:16.015559 4840 upgradeaware.go:427] Error proxying data from client to backend: readfrom 
tcp 38.102.83.30:50692->38.102.83.30:45639: write tcp 38.102.83.30:50692->38.102.83.30:45639: write: broken pipe Mar 11 10:28:27 crc kubenswrapper[4840]: I0311 10:28:27.446359 4840 patch_prober.go:28] interesting pod/machine-config-daemon-brtht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 11 10:28:27 crc kubenswrapper[4840]: I0311 10:28:27.447059 4840 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 11 10:28:34 crc kubenswrapper[4840]: I0311 10:28:34.406606 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-gqqt7"] Mar 11 10:28:34 crc kubenswrapper[4840]: E0311 10:28:34.407582 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04916aa0-c92b-4b8e-8375-179fc4d96ba0" containerName="oc" Mar 11 10:28:34 crc kubenswrapper[4840]: I0311 10:28:34.407596 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="04916aa0-c92b-4b8e-8375-179fc4d96ba0" containerName="oc" Mar 11 10:28:34 crc kubenswrapper[4840]: I0311 10:28:34.407754 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="04916aa0-c92b-4b8e-8375-179fc4d96ba0" containerName="oc" Mar 11 10:28:34 crc kubenswrapper[4840]: I0311 10:28:34.409045 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gqqt7" Mar 11 10:28:34 crc kubenswrapper[4840]: I0311 10:28:34.428605 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gqqt7"] Mar 11 10:28:34 crc kubenswrapper[4840]: I0311 10:28:34.508844 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd33baba-4296-4984-80d5-07c7cb72b401-utilities\") pod \"redhat-operators-gqqt7\" (UID: \"bd33baba-4296-4984-80d5-07c7cb72b401\") " pod="openshift-marketplace/redhat-operators-gqqt7" Mar 11 10:28:34 crc kubenswrapper[4840]: I0311 10:28:34.509508 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkh7q\" (UniqueName: \"kubernetes.io/projected/bd33baba-4296-4984-80d5-07c7cb72b401-kube-api-access-kkh7q\") pod \"redhat-operators-gqqt7\" (UID: \"bd33baba-4296-4984-80d5-07c7cb72b401\") " pod="openshift-marketplace/redhat-operators-gqqt7" Mar 11 10:28:34 crc kubenswrapper[4840]: I0311 10:28:34.509754 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd33baba-4296-4984-80d5-07c7cb72b401-catalog-content\") pod \"redhat-operators-gqqt7\" (UID: \"bd33baba-4296-4984-80d5-07c7cb72b401\") " pod="openshift-marketplace/redhat-operators-gqqt7" Mar 11 10:28:34 crc kubenswrapper[4840]: I0311 10:28:34.611258 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd33baba-4296-4984-80d5-07c7cb72b401-utilities\") pod \"redhat-operators-gqqt7\" (UID: \"bd33baba-4296-4984-80d5-07c7cb72b401\") " pod="openshift-marketplace/redhat-operators-gqqt7" Mar 11 10:28:34 crc kubenswrapper[4840]: I0311 10:28:34.611388 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-kkh7q\" (UniqueName: \"kubernetes.io/projected/bd33baba-4296-4984-80d5-07c7cb72b401-kube-api-access-kkh7q\") pod \"redhat-operators-gqqt7\" (UID: \"bd33baba-4296-4984-80d5-07c7cb72b401\") " pod="openshift-marketplace/redhat-operators-gqqt7" Mar 11 10:28:34 crc kubenswrapper[4840]: I0311 10:28:34.611459 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd33baba-4296-4984-80d5-07c7cb72b401-catalog-content\") pod \"redhat-operators-gqqt7\" (UID: \"bd33baba-4296-4984-80d5-07c7cb72b401\") " pod="openshift-marketplace/redhat-operators-gqqt7" Mar 11 10:28:34 crc kubenswrapper[4840]: I0311 10:28:34.611986 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd33baba-4296-4984-80d5-07c7cb72b401-utilities\") pod \"redhat-operators-gqqt7\" (UID: \"bd33baba-4296-4984-80d5-07c7cb72b401\") " pod="openshift-marketplace/redhat-operators-gqqt7" Mar 11 10:28:34 crc kubenswrapper[4840]: I0311 10:28:34.612000 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd33baba-4296-4984-80d5-07c7cb72b401-catalog-content\") pod \"redhat-operators-gqqt7\" (UID: \"bd33baba-4296-4984-80d5-07c7cb72b401\") " pod="openshift-marketplace/redhat-operators-gqqt7" Mar 11 10:28:34 crc kubenswrapper[4840]: I0311 10:28:34.640274 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kkh7q\" (UniqueName: \"kubernetes.io/projected/bd33baba-4296-4984-80d5-07c7cb72b401-kube-api-access-kkh7q\") pod \"redhat-operators-gqqt7\" (UID: \"bd33baba-4296-4984-80d5-07c7cb72b401\") " pod="openshift-marketplace/redhat-operators-gqqt7" Mar 11 10:28:34 crc kubenswrapper[4840]: I0311 10:28:34.730178 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gqqt7" Mar 11 10:28:35 crc kubenswrapper[4840]: I0311 10:28:35.201760 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gqqt7"] Mar 11 10:28:35 crc kubenswrapper[4840]: W0311 10:28:35.210590 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbd33baba_4296_4984_80d5_07c7cb72b401.slice/crio-5ef49565e4682e6afcd39763e1fe2337fce47e3d5f024bc3a027c79b56f72ea2 WatchSource:0}: Error finding container 5ef49565e4682e6afcd39763e1fe2337fce47e3d5f024bc3a027c79b56f72ea2: Status 404 returned error can't find the container with id 5ef49565e4682e6afcd39763e1fe2337fce47e3d5f024bc3a027c79b56f72ea2 Mar 11 10:28:35 crc kubenswrapper[4840]: I0311 10:28:35.312433 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gqqt7" event={"ID":"bd33baba-4296-4984-80d5-07c7cb72b401","Type":"ContainerStarted","Data":"5ef49565e4682e6afcd39763e1fe2337fce47e3d5f024bc3a027c79b56f72ea2"} Mar 11 10:28:36 crc kubenswrapper[4840]: I0311 10:28:36.320821 4840 generic.go:334] "Generic (PLEG): container finished" podID="bd33baba-4296-4984-80d5-07c7cb72b401" containerID="aac954934d65f15ce642dd4529a68035ef3aac34660c8640e29ed44597701677" exitCode=0 Mar 11 10:28:36 crc kubenswrapper[4840]: I0311 10:28:36.320890 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gqqt7" event={"ID":"bd33baba-4296-4984-80d5-07c7cb72b401","Type":"ContainerDied","Data":"aac954934d65f15ce642dd4529a68035ef3aac34660c8640e29ed44597701677"} Mar 11 10:28:37 crc kubenswrapper[4840]: I0311 10:28:37.330230 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gqqt7" 
event={"ID":"bd33baba-4296-4984-80d5-07c7cb72b401","Type":"ContainerStarted","Data":"88672eeba44436ad954bdb9933ed4cd59c0c2975b05652cfef25d6f2804f3780"} Mar 11 10:28:38 crc kubenswrapper[4840]: I0311 10:28:38.366007 4840 generic.go:334] "Generic (PLEG): container finished" podID="bd33baba-4296-4984-80d5-07c7cb72b401" containerID="88672eeba44436ad954bdb9933ed4cd59c0c2975b05652cfef25d6f2804f3780" exitCode=0 Mar 11 10:28:38 crc kubenswrapper[4840]: I0311 10:28:38.366223 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gqqt7" event={"ID":"bd33baba-4296-4984-80d5-07c7cb72b401","Type":"ContainerDied","Data":"88672eeba44436ad954bdb9933ed4cd59c0c2975b05652cfef25d6f2804f3780"} Mar 11 10:28:39 crc kubenswrapper[4840]: I0311 10:28:39.377769 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gqqt7" event={"ID":"bd33baba-4296-4984-80d5-07c7cb72b401","Type":"ContainerStarted","Data":"f07ee09de061608d4935a4cde890d6befc8e3aad33dc514068211ad184ff3e2b"} Mar 11 10:28:39 crc kubenswrapper[4840]: I0311 10:28:39.401372 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-gqqt7" podStartSLOduration=2.917113768 podStartE2EDuration="5.401355121s" podCreationTimestamp="2026-03-11 10:28:34 +0000 UTC" firstStartedPulling="2026-03-11 10:28:36.32324363 +0000 UTC m=+5514.988913445" lastFinishedPulling="2026-03-11 10:28:38.807484973 +0000 UTC m=+5517.473154798" observedRunningTime="2026-03-11 10:28:39.399569576 +0000 UTC m=+5518.065239401" watchObservedRunningTime="2026-03-11 10:28:39.401355121 +0000 UTC m=+5518.067024936" Mar 11 10:28:44 crc kubenswrapper[4840]: I0311 10:28:44.731736 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-gqqt7" Mar 11 10:28:44 crc kubenswrapper[4840]: I0311 10:28:44.732457 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-operators-gqqt7" Mar 11 10:28:45 crc kubenswrapper[4840]: I0311 10:28:45.775750 4840 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gqqt7" podUID="bd33baba-4296-4984-80d5-07c7cb72b401" containerName="registry-server" probeResult="failure" output=< Mar 11 10:28:45 crc kubenswrapper[4840]: timeout: failed to connect service ":50051" within 1s Mar 11 10:28:45 crc kubenswrapper[4840]: > Mar 11 10:28:54 crc kubenswrapper[4840]: I0311 10:28:54.800686 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-gqqt7" Mar 11 10:28:54 crc kubenswrapper[4840]: I0311 10:28:54.864379 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-gqqt7" Mar 11 10:28:55 crc kubenswrapper[4840]: I0311 10:28:55.060257 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gqqt7"] Mar 11 10:28:56 crc kubenswrapper[4840]: I0311 10:28:56.528686 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-gqqt7" podUID="bd33baba-4296-4984-80d5-07c7cb72b401" containerName="registry-server" containerID="cri-o://f07ee09de061608d4935a4cde890d6befc8e3aad33dc514068211ad184ff3e2b" gracePeriod=2 Mar 11 10:28:57 crc kubenswrapper[4840]: I0311 10:28:57.039214 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gqqt7" Mar 11 10:28:57 crc kubenswrapper[4840]: I0311 10:28:57.199862 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd33baba-4296-4984-80d5-07c7cb72b401-utilities\") pod \"bd33baba-4296-4984-80d5-07c7cb72b401\" (UID: \"bd33baba-4296-4984-80d5-07c7cb72b401\") " Mar 11 10:28:57 crc kubenswrapper[4840]: I0311 10:28:57.199964 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kkh7q\" (UniqueName: \"kubernetes.io/projected/bd33baba-4296-4984-80d5-07c7cb72b401-kube-api-access-kkh7q\") pod \"bd33baba-4296-4984-80d5-07c7cb72b401\" (UID: \"bd33baba-4296-4984-80d5-07c7cb72b401\") " Mar 11 10:28:57 crc kubenswrapper[4840]: I0311 10:28:57.199997 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd33baba-4296-4984-80d5-07c7cb72b401-catalog-content\") pod \"bd33baba-4296-4984-80d5-07c7cb72b401\" (UID: \"bd33baba-4296-4984-80d5-07c7cb72b401\") " Mar 11 10:28:57 crc kubenswrapper[4840]: I0311 10:28:57.201213 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd33baba-4296-4984-80d5-07c7cb72b401-utilities" (OuterVolumeSpecName: "utilities") pod "bd33baba-4296-4984-80d5-07c7cb72b401" (UID: "bd33baba-4296-4984-80d5-07c7cb72b401"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 10:28:57 crc kubenswrapper[4840]: I0311 10:28:57.207665 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd33baba-4296-4984-80d5-07c7cb72b401-kube-api-access-kkh7q" (OuterVolumeSpecName: "kube-api-access-kkh7q") pod "bd33baba-4296-4984-80d5-07c7cb72b401" (UID: "bd33baba-4296-4984-80d5-07c7cb72b401"). InnerVolumeSpecName "kube-api-access-kkh7q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:28:57 crc kubenswrapper[4840]: I0311 10:28:57.305263 4840 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd33baba-4296-4984-80d5-07c7cb72b401-utilities\") on node \"crc\" DevicePath \"\"" Mar 11 10:28:57 crc kubenswrapper[4840]: I0311 10:28:57.305313 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kkh7q\" (UniqueName: \"kubernetes.io/projected/bd33baba-4296-4984-80d5-07c7cb72b401-kube-api-access-kkh7q\") on node \"crc\" DevicePath \"\"" Mar 11 10:28:57 crc kubenswrapper[4840]: I0311 10:28:57.358495 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd33baba-4296-4984-80d5-07c7cb72b401-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bd33baba-4296-4984-80d5-07c7cb72b401" (UID: "bd33baba-4296-4984-80d5-07c7cb72b401"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 10:28:57 crc kubenswrapper[4840]: I0311 10:28:57.407528 4840 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd33baba-4296-4984-80d5-07c7cb72b401-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 11 10:28:57 crc kubenswrapper[4840]: I0311 10:28:57.446184 4840 patch_prober.go:28] interesting pod/machine-config-daemon-brtht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 11 10:28:57 crc kubenswrapper[4840]: I0311 10:28:57.446542 4840 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 11 10:28:57 crc kubenswrapper[4840]: I0311 10:28:57.446727 4840 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-brtht" Mar 11 10:28:57 crc kubenswrapper[4840]: I0311 10:28:57.541787 4840 generic.go:334] "Generic (PLEG): container finished" podID="bd33baba-4296-4984-80d5-07c7cb72b401" containerID="f07ee09de061608d4935a4cde890d6befc8e3aad33dc514068211ad184ff3e2b" exitCode=0 Mar 11 10:28:57 crc kubenswrapper[4840]: I0311 10:28:57.542834 4840 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8ecd5e8792daa32307a0f073aedb0fb34456f86b6adb94e207aa2934b241391a"} pod="openshift-machine-config-operator/machine-config-daemon-brtht" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 11 10:28:57 crc kubenswrapper[4840]: I0311 10:28:57.542943 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" containerName="machine-config-daemon" containerID="cri-o://8ecd5e8792daa32307a0f073aedb0fb34456f86b6adb94e207aa2934b241391a" gracePeriod=600 Mar 11 10:28:57 crc kubenswrapper[4840]: I0311 10:28:57.543339 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gqqt7" Mar 11 10:28:57 crc kubenswrapper[4840]: I0311 10:28:57.545601 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gqqt7" event={"ID":"bd33baba-4296-4984-80d5-07c7cb72b401","Type":"ContainerDied","Data":"f07ee09de061608d4935a4cde890d6befc8e3aad33dc514068211ad184ff3e2b"} Mar 11 10:28:57 crc kubenswrapper[4840]: I0311 10:28:57.545652 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gqqt7" event={"ID":"bd33baba-4296-4984-80d5-07c7cb72b401","Type":"ContainerDied","Data":"5ef49565e4682e6afcd39763e1fe2337fce47e3d5f024bc3a027c79b56f72ea2"} Mar 11 10:28:57 crc kubenswrapper[4840]: I0311 10:28:57.545697 4840 scope.go:117] "RemoveContainer" containerID="f07ee09de061608d4935a4cde890d6befc8e3aad33dc514068211ad184ff3e2b" Mar 11 10:28:57 crc kubenswrapper[4840]: I0311 10:28:57.574890 4840 scope.go:117] "RemoveContainer" containerID="88672eeba44436ad954bdb9933ed4cd59c0c2975b05652cfef25d6f2804f3780" Mar 11 10:28:57 crc kubenswrapper[4840]: I0311 10:28:57.597999 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gqqt7"] Mar 11 10:28:57 crc kubenswrapper[4840]: I0311 10:28:57.604260 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-gqqt7"] Mar 11 10:28:57 crc kubenswrapper[4840]: I0311 10:28:57.621350 4840 scope.go:117] "RemoveContainer" containerID="aac954934d65f15ce642dd4529a68035ef3aac34660c8640e29ed44597701677" Mar 11 10:28:57 crc kubenswrapper[4840]: I0311 10:28:57.661632 4840 scope.go:117] "RemoveContainer" containerID="f07ee09de061608d4935a4cde890d6befc8e3aad33dc514068211ad184ff3e2b" Mar 11 10:28:57 crc kubenswrapper[4840]: E0311 10:28:57.662142 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"f07ee09de061608d4935a4cde890d6befc8e3aad33dc514068211ad184ff3e2b\": container with ID starting with f07ee09de061608d4935a4cde890d6befc8e3aad33dc514068211ad184ff3e2b not found: ID does not exist" containerID="f07ee09de061608d4935a4cde890d6befc8e3aad33dc514068211ad184ff3e2b" Mar 11 10:28:57 crc kubenswrapper[4840]: I0311 10:28:57.662189 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f07ee09de061608d4935a4cde890d6befc8e3aad33dc514068211ad184ff3e2b"} err="failed to get container status \"f07ee09de061608d4935a4cde890d6befc8e3aad33dc514068211ad184ff3e2b\": rpc error: code = NotFound desc = could not find container \"f07ee09de061608d4935a4cde890d6befc8e3aad33dc514068211ad184ff3e2b\": container with ID starting with f07ee09de061608d4935a4cde890d6befc8e3aad33dc514068211ad184ff3e2b not found: ID does not exist" Mar 11 10:28:57 crc kubenswrapper[4840]: I0311 10:28:57.662219 4840 scope.go:117] "RemoveContainer" containerID="88672eeba44436ad954bdb9933ed4cd59c0c2975b05652cfef25d6f2804f3780" Mar 11 10:28:57 crc kubenswrapper[4840]: E0311 10:28:57.662657 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88672eeba44436ad954bdb9933ed4cd59c0c2975b05652cfef25d6f2804f3780\": container with ID starting with 88672eeba44436ad954bdb9933ed4cd59c0c2975b05652cfef25d6f2804f3780 not found: ID does not exist" containerID="88672eeba44436ad954bdb9933ed4cd59c0c2975b05652cfef25d6f2804f3780" Mar 11 10:28:57 crc kubenswrapper[4840]: I0311 10:28:57.662711 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88672eeba44436ad954bdb9933ed4cd59c0c2975b05652cfef25d6f2804f3780"} err="failed to get container status \"88672eeba44436ad954bdb9933ed4cd59c0c2975b05652cfef25d6f2804f3780\": rpc error: code = NotFound desc = could not find container \"88672eeba44436ad954bdb9933ed4cd59c0c2975b05652cfef25d6f2804f3780\": container with ID 
starting with 88672eeba44436ad954bdb9933ed4cd59c0c2975b05652cfef25d6f2804f3780 not found: ID does not exist" Mar 11 10:28:57 crc kubenswrapper[4840]: I0311 10:28:57.662749 4840 scope.go:117] "RemoveContainer" containerID="aac954934d65f15ce642dd4529a68035ef3aac34660c8640e29ed44597701677" Mar 11 10:28:57 crc kubenswrapper[4840]: E0311 10:28:57.663191 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aac954934d65f15ce642dd4529a68035ef3aac34660c8640e29ed44597701677\": container with ID starting with aac954934d65f15ce642dd4529a68035ef3aac34660c8640e29ed44597701677 not found: ID does not exist" containerID="aac954934d65f15ce642dd4529a68035ef3aac34660c8640e29ed44597701677" Mar 11 10:28:57 crc kubenswrapper[4840]: I0311 10:28:57.663248 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aac954934d65f15ce642dd4529a68035ef3aac34660c8640e29ed44597701677"} err="failed to get container status \"aac954934d65f15ce642dd4529a68035ef3aac34660c8640e29ed44597701677\": rpc error: code = NotFound desc = could not find container \"aac954934d65f15ce642dd4529a68035ef3aac34660c8640e29ed44597701677\": container with ID starting with aac954934d65f15ce642dd4529a68035ef3aac34660c8640e29ed44597701677 not found: ID does not exist" Mar 11 10:28:58 crc kubenswrapper[4840]: I0311 10:28:58.074709 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd33baba-4296-4984-80d5-07c7cb72b401" path="/var/lib/kubelet/pods/bd33baba-4296-4984-80d5-07c7cb72b401/volumes" Mar 11 10:28:58 crc kubenswrapper[4840]: I0311 10:28:58.556488 4840 generic.go:334] "Generic (PLEG): container finished" podID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" containerID="8ecd5e8792daa32307a0f073aedb0fb34456f86b6adb94e207aa2934b241391a" exitCode=0 Mar 11 10:28:58 crc kubenswrapper[4840]: I0311 10:28:58.556534 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-brtht" event={"ID":"8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d","Type":"ContainerDied","Data":"8ecd5e8792daa32307a0f073aedb0fb34456f86b6adb94e207aa2934b241391a"} Mar 11 10:28:58 crc kubenswrapper[4840]: I0311 10:28:58.556560 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-brtht" event={"ID":"8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d","Type":"ContainerStarted","Data":"0fa02edfa90522d9bb5a7465f18679bbe488a4c47686d2e7bed64f7d3b2e1873"} Mar 11 10:28:58 crc kubenswrapper[4840]: I0311 10:28:58.556579 4840 scope.go:117] "RemoveContainer" containerID="27425edf605a65072c6ef9c1e0b8cea49f613bc8e8eb94b258e15f23bc4a814b" Mar 11 10:29:07 crc kubenswrapper[4840]: E0311 10:29:07.356663 4840 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.30:45062->38.102.83.30:45639: write tcp 38.102.83.30:45062->38.102.83.30:45639: write: broken pipe Mar 11 10:29:17 crc kubenswrapper[4840]: I0311 10:29:17.583942 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xjckn"] Mar 11 10:29:17 crc kubenswrapper[4840]: E0311 10:29:17.584869 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd33baba-4296-4984-80d5-07c7cb72b401" containerName="registry-server" Mar 11 10:29:17 crc kubenswrapper[4840]: I0311 10:29:17.584887 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd33baba-4296-4984-80d5-07c7cb72b401" containerName="registry-server" Mar 11 10:29:17 crc kubenswrapper[4840]: E0311 10:29:17.584911 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd33baba-4296-4984-80d5-07c7cb72b401" containerName="extract-content" Mar 11 10:29:17 crc kubenswrapper[4840]: I0311 10:29:17.584918 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd33baba-4296-4984-80d5-07c7cb72b401" containerName="extract-content" Mar 11 10:29:17 crc kubenswrapper[4840]: E0311 
10:29:17.584936 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd33baba-4296-4984-80d5-07c7cb72b401" containerName="extract-utilities" Mar 11 10:29:17 crc kubenswrapper[4840]: I0311 10:29:17.584943 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd33baba-4296-4984-80d5-07c7cb72b401" containerName="extract-utilities" Mar 11 10:29:17 crc kubenswrapper[4840]: I0311 10:29:17.585095 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd33baba-4296-4984-80d5-07c7cb72b401" containerName="registry-server" Mar 11 10:29:17 crc kubenswrapper[4840]: I0311 10:29:17.586359 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xjckn" Mar 11 10:29:17 crc kubenswrapper[4840]: I0311 10:29:17.600159 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xjckn"] Mar 11 10:29:17 crc kubenswrapper[4840]: I0311 10:29:17.729628 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07f84ba7-10ef-49ff-9e8e-9424f27d721c-catalog-content\") pod \"certified-operators-xjckn\" (UID: \"07f84ba7-10ef-49ff-9e8e-9424f27d721c\") " pod="openshift-marketplace/certified-operators-xjckn" Mar 11 10:29:17 crc kubenswrapper[4840]: I0311 10:29:17.729853 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07f84ba7-10ef-49ff-9e8e-9424f27d721c-utilities\") pod \"certified-operators-xjckn\" (UID: \"07f84ba7-10ef-49ff-9e8e-9424f27d721c\") " pod="openshift-marketplace/certified-operators-xjckn" Mar 11 10:29:17 crc kubenswrapper[4840]: I0311 10:29:17.729924 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lf7q\" (UniqueName: 
\"kubernetes.io/projected/07f84ba7-10ef-49ff-9e8e-9424f27d721c-kube-api-access-6lf7q\") pod \"certified-operators-xjckn\" (UID: \"07f84ba7-10ef-49ff-9e8e-9424f27d721c\") " pod="openshift-marketplace/certified-operators-xjckn" Mar 11 10:29:17 crc kubenswrapper[4840]: I0311 10:29:17.831492 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07f84ba7-10ef-49ff-9e8e-9424f27d721c-utilities\") pod \"certified-operators-xjckn\" (UID: \"07f84ba7-10ef-49ff-9e8e-9424f27d721c\") " pod="openshift-marketplace/certified-operators-xjckn" Mar 11 10:29:17 crc kubenswrapper[4840]: I0311 10:29:17.831763 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6lf7q\" (UniqueName: \"kubernetes.io/projected/07f84ba7-10ef-49ff-9e8e-9424f27d721c-kube-api-access-6lf7q\") pod \"certified-operators-xjckn\" (UID: \"07f84ba7-10ef-49ff-9e8e-9424f27d721c\") " pod="openshift-marketplace/certified-operators-xjckn" Mar 11 10:29:17 crc kubenswrapper[4840]: I0311 10:29:17.831904 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07f84ba7-10ef-49ff-9e8e-9424f27d721c-catalog-content\") pod \"certified-operators-xjckn\" (UID: \"07f84ba7-10ef-49ff-9e8e-9424f27d721c\") " pod="openshift-marketplace/certified-operators-xjckn" Mar 11 10:29:17 crc kubenswrapper[4840]: I0311 10:29:17.832072 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07f84ba7-10ef-49ff-9e8e-9424f27d721c-utilities\") pod \"certified-operators-xjckn\" (UID: \"07f84ba7-10ef-49ff-9e8e-9424f27d721c\") " pod="openshift-marketplace/certified-operators-xjckn" Mar 11 10:29:17 crc kubenswrapper[4840]: I0311 10:29:17.832488 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/07f84ba7-10ef-49ff-9e8e-9424f27d721c-catalog-content\") pod \"certified-operators-xjckn\" (UID: \"07f84ba7-10ef-49ff-9e8e-9424f27d721c\") " pod="openshift-marketplace/certified-operators-xjckn" Mar 11 10:29:17 crc kubenswrapper[4840]: I0311 10:29:17.858247 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lf7q\" (UniqueName: \"kubernetes.io/projected/07f84ba7-10ef-49ff-9e8e-9424f27d721c-kube-api-access-6lf7q\") pod \"certified-operators-xjckn\" (UID: \"07f84ba7-10ef-49ff-9e8e-9424f27d721c\") " pod="openshift-marketplace/certified-operators-xjckn" Mar 11 10:29:17 crc kubenswrapper[4840]: I0311 10:29:17.910060 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xjckn" Mar 11 10:29:18 crc kubenswrapper[4840]: I0311 10:29:18.425940 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xjckn"] Mar 11 10:29:18 crc kubenswrapper[4840]: I0311 10:29:18.748812 4840 generic.go:334] "Generic (PLEG): container finished" podID="07f84ba7-10ef-49ff-9e8e-9424f27d721c" containerID="97a39acb9a28defbb99304d6ab2b8a53d8e669940abe70087176877ed74f730b" exitCode=0 Mar 11 10:29:18 crc kubenswrapper[4840]: I0311 10:29:18.748986 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xjckn" event={"ID":"07f84ba7-10ef-49ff-9e8e-9424f27d721c","Type":"ContainerDied","Data":"97a39acb9a28defbb99304d6ab2b8a53d8e669940abe70087176877ed74f730b"} Mar 11 10:29:18 crc kubenswrapper[4840]: I0311 10:29:18.749089 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xjckn" event={"ID":"07f84ba7-10ef-49ff-9e8e-9424f27d721c","Type":"ContainerStarted","Data":"734f2a969d16d60f28d0576ecb9277b2f470e92781844847b9bd5cca36b7941a"} Mar 11 10:29:20 crc kubenswrapper[4840]: I0311 10:29:20.772787 4840 generic.go:334] "Generic (PLEG): container 
finished" podID="07f84ba7-10ef-49ff-9e8e-9424f27d721c" containerID="a1beaea319d5b5b5c86337b00f8944271b905432be7499c29470a267a9de5aff" exitCode=0 Mar 11 10:29:20 crc kubenswrapper[4840]: I0311 10:29:20.772872 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xjckn" event={"ID":"07f84ba7-10ef-49ff-9e8e-9424f27d721c","Type":"ContainerDied","Data":"a1beaea319d5b5b5c86337b00f8944271b905432be7499c29470a267a9de5aff"} Mar 11 10:29:21 crc kubenswrapper[4840]: I0311 10:29:21.785969 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xjckn" event={"ID":"07f84ba7-10ef-49ff-9e8e-9424f27d721c","Type":"ContainerStarted","Data":"beb0f9ff2cb0697e72a95d68d39423fb832c7a38f115ddc8f952477482a57f9b"} Mar 11 10:29:21 crc kubenswrapper[4840]: I0311 10:29:21.811738 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xjckn" podStartSLOduration=2.363511232 podStartE2EDuration="4.811718889s" podCreationTimestamp="2026-03-11 10:29:17 +0000 UTC" firstStartedPulling="2026-03-11 10:29:18.750291692 +0000 UTC m=+5557.415961507" lastFinishedPulling="2026-03-11 10:29:21.198499349 +0000 UTC m=+5559.864169164" observedRunningTime="2026-03-11 10:29:21.80744674 +0000 UTC m=+5560.473116565" watchObservedRunningTime="2026-03-11 10:29:21.811718889 +0000 UTC m=+5560.477388704" Mar 11 10:29:27 crc kubenswrapper[4840]: I0311 10:29:27.911213 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xjckn" Mar 11 10:29:27 crc kubenswrapper[4840]: I0311 10:29:27.913375 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xjckn" Mar 11 10:29:27 crc kubenswrapper[4840]: I0311 10:29:27.988890 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xjckn" Mar 
11 10:29:28 crc kubenswrapper[4840]: I0311 10:29:28.915758 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xjckn" Mar 11 10:29:28 crc kubenswrapper[4840]: I0311 10:29:28.967088 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xjckn"] Mar 11 10:29:30 crc kubenswrapper[4840]: I0311 10:29:30.864366 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-xjckn" podUID="07f84ba7-10ef-49ff-9e8e-9424f27d721c" containerName="registry-server" containerID="cri-o://beb0f9ff2cb0697e72a95d68d39423fb832c7a38f115ddc8f952477482a57f9b" gracePeriod=2 Mar 11 10:29:31 crc kubenswrapper[4840]: I0311 10:29:31.309117 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xjckn" Mar 11 10:29:31 crc kubenswrapper[4840]: I0311 10:29:31.409863 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07f84ba7-10ef-49ff-9e8e-9424f27d721c-utilities\") pod \"07f84ba7-10ef-49ff-9e8e-9424f27d721c\" (UID: \"07f84ba7-10ef-49ff-9e8e-9424f27d721c\") " Mar 11 10:29:31 crc kubenswrapper[4840]: I0311 10:29:31.409924 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07f84ba7-10ef-49ff-9e8e-9424f27d721c-catalog-content\") pod \"07f84ba7-10ef-49ff-9e8e-9424f27d721c\" (UID: \"07f84ba7-10ef-49ff-9e8e-9424f27d721c\") " Mar 11 10:29:31 crc kubenswrapper[4840]: I0311 10:29:31.410056 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6lf7q\" (UniqueName: \"kubernetes.io/projected/07f84ba7-10ef-49ff-9e8e-9424f27d721c-kube-api-access-6lf7q\") pod \"07f84ba7-10ef-49ff-9e8e-9424f27d721c\" (UID: \"07f84ba7-10ef-49ff-9e8e-9424f27d721c\") " 
Mar 11 10:29:31 crc kubenswrapper[4840]: I0311 10:29:31.410634 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07f84ba7-10ef-49ff-9e8e-9424f27d721c-utilities" (OuterVolumeSpecName: "utilities") pod "07f84ba7-10ef-49ff-9e8e-9424f27d721c" (UID: "07f84ba7-10ef-49ff-9e8e-9424f27d721c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 10:29:31 crc kubenswrapper[4840]: I0311 10:29:31.415917 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07f84ba7-10ef-49ff-9e8e-9424f27d721c-kube-api-access-6lf7q" (OuterVolumeSpecName: "kube-api-access-6lf7q") pod "07f84ba7-10ef-49ff-9e8e-9424f27d721c" (UID: "07f84ba7-10ef-49ff-9e8e-9424f27d721c"). InnerVolumeSpecName "kube-api-access-6lf7q". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:29:31 crc kubenswrapper[4840]: I0311 10:29:31.478636 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07f84ba7-10ef-49ff-9e8e-9424f27d721c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "07f84ba7-10ef-49ff-9e8e-9424f27d721c" (UID: "07f84ba7-10ef-49ff-9e8e-9424f27d721c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 10:29:31 crc kubenswrapper[4840]: I0311 10:29:31.511964 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6lf7q\" (UniqueName: \"kubernetes.io/projected/07f84ba7-10ef-49ff-9e8e-9424f27d721c-kube-api-access-6lf7q\") on node \"crc\" DevicePath \"\"" Mar 11 10:29:31 crc kubenswrapper[4840]: I0311 10:29:31.511997 4840 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07f84ba7-10ef-49ff-9e8e-9424f27d721c-utilities\") on node \"crc\" DevicePath \"\"" Mar 11 10:29:31 crc kubenswrapper[4840]: I0311 10:29:31.512006 4840 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07f84ba7-10ef-49ff-9e8e-9424f27d721c-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 11 10:29:31 crc kubenswrapper[4840]: I0311 10:29:31.878549 4840 generic.go:334] "Generic (PLEG): container finished" podID="07f84ba7-10ef-49ff-9e8e-9424f27d721c" containerID="beb0f9ff2cb0697e72a95d68d39423fb832c7a38f115ddc8f952477482a57f9b" exitCode=0 Mar 11 10:29:31 crc kubenswrapper[4840]: I0311 10:29:31.878659 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xjckn" event={"ID":"07f84ba7-10ef-49ff-9e8e-9424f27d721c","Type":"ContainerDied","Data":"beb0f9ff2cb0697e72a95d68d39423fb832c7a38f115ddc8f952477482a57f9b"} Mar 11 10:29:31 crc kubenswrapper[4840]: I0311 10:29:31.878932 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xjckn" event={"ID":"07f84ba7-10ef-49ff-9e8e-9424f27d721c","Type":"ContainerDied","Data":"734f2a969d16d60f28d0576ecb9277b2f470e92781844847b9bd5cca36b7941a"} Mar 11 10:29:31 crc kubenswrapper[4840]: I0311 10:29:31.878968 4840 scope.go:117] "RemoveContainer" containerID="beb0f9ff2cb0697e72a95d68d39423fb832c7a38f115ddc8f952477482a57f9b" Mar 11 10:29:31 crc kubenswrapper[4840]: I0311 
10:29:31.878679 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xjckn" Mar 11 10:29:31 crc kubenswrapper[4840]: I0311 10:29:31.897458 4840 scope.go:117] "RemoveContainer" containerID="a1beaea319d5b5b5c86337b00f8944271b905432be7499c29470a267a9de5aff" Mar 11 10:29:31 crc kubenswrapper[4840]: I0311 10:29:31.923056 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xjckn"] Mar 11 10:29:31 crc kubenswrapper[4840]: I0311 10:29:31.925886 4840 scope.go:117] "RemoveContainer" containerID="97a39acb9a28defbb99304d6ab2b8a53d8e669940abe70087176877ed74f730b" Mar 11 10:29:31 crc kubenswrapper[4840]: I0311 10:29:31.928857 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-xjckn"] Mar 11 10:29:31 crc kubenswrapper[4840]: I0311 10:29:31.960557 4840 scope.go:117] "RemoveContainer" containerID="beb0f9ff2cb0697e72a95d68d39423fb832c7a38f115ddc8f952477482a57f9b" Mar 11 10:29:31 crc kubenswrapper[4840]: E0311 10:29:31.961344 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"beb0f9ff2cb0697e72a95d68d39423fb832c7a38f115ddc8f952477482a57f9b\": container with ID starting with beb0f9ff2cb0697e72a95d68d39423fb832c7a38f115ddc8f952477482a57f9b not found: ID does not exist" containerID="beb0f9ff2cb0697e72a95d68d39423fb832c7a38f115ddc8f952477482a57f9b" Mar 11 10:29:31 crc kubenswrapper[4840]: I0311 10:29:31.961415 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"beb0f9ff2cb0697e72a95d68d39423fb832c7a38f115ddc8f952477482a57f9b"} err="failed to get container status \"beb0f9ff2cb0697e72a95d68d39423fb832c7a38f115ddc8f952477482a57f9b\": rpc error: code = NotFound desc = could not find container \"beb0f9ff2cb0697e72a95d68d39423fb832c7a38f115ddc8f952477482a57f9b\": container with ID starting with 
beb0f9ff2cb0697e72a95d68d39423fb832c7a38f115ddc8f952477482a57f9b not found: ID does not exist" Mar 11 10:29:31 crc kubenswrapper[4840]: I0311 10:29:31.961446 4840 scope.go:117] "RemoveContainer" containerID="a1beaea319d5b5b5c86337b00f8944271b905432be7499c29470a267a9de5aff" Mar 11 10:29:31 crc kubenswrapper[4840]: E0311 10:29:31.962019 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a1beaea319d5b5b5c86337b00f8944271b905432be7499c29470a267a9de5aff\": container with ID starting with a1beaea319d5b5b5c86337b00f8944271b905432be7499c29470a267a9de5aff not found: ID does not exist" containerID="a1beaea319d5b5b5c86337b00f8944271b905432be7499c29470a267a9de5aff" Mar 11 10:29:31 crc kubenswrapper[4840]: I0311 10:29:31.962076 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1beaea319d5b5b5c86337b00f8944271b905432be7499c29470a267a9de5aff"} err="failed to get container status \"a1beaea319d5b5b5c86337b00f8944271b905432be7499c29470a267a9de5aff\": rpc error: code = NotFound desc = could not find container \"a1beaea319d5b5b5c86337b00f8944271b905432be7499c29470a267a9de5aff\": container with ID starting with a1beaea319d5b5b5c86337b00f8944271b905432be7499c29470a267a9de5aff not found: ID does not exist" Mar 11 10:29:31 crc kubenswrapper[4840]: I0311 10:29:31.962115 4840 scope.go:117] "RemoveContainer" containerID="97a39acb9a28defbb99304d6ab2b8a53d8e669940abe70087176877ed74f730b" Mar 11 10:29:31 crc kubenswrapper[4840]: E0311 10:29:31.962423 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"97a39acb9a28defbb99304d6ab2b8a53d8e669940abe70087176877ed74f730b\": container with ID starting with 97a39acb9a28defbb99304d6ab2b8a53d8e669940abe70087176877ed74f730b not found: ID does not exist" containerID="97a39acb9a28defbb99304d6ab2b8a53d8e669940abe70087176877ed74f730b" Mar 11 10:29:31 crc 
kubenswrapper[4840]: I0311 10:29:31.962510 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97a39acb9a28defbb99304d6ab2b8a53d8e669940abe70087176877ed74f730b"} err="failed to get container status \"97a39acb9a28defbb99304d6ab2b8a53d8e669940abe70087176877ed74f730b\": rpc error: code = NotFound desc = could not find container \"97a39acb9a28defbb99304d6ab2b8a53d8e669940abe70087176877ed74f730b\": container with ID starting with 97a39acb9a28defbb99304d6ab2b8a53d8e669940abe70087176877ed74f730b not found: ID does not exist" Mar 11 10:29:32 crc kubenswrapper[4840]: I0311 10:29:32.072342 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07f84ba7-10ef-49ff-9e8e-9424f27d721c" path="/var/lib/kubelet/pods/07f84ba7-10ef-49ff-9e8e-9424f27d721c/volumes" Mar 11 10:30:00 crc kubenswrapper[4840]: I0311 10:30:00.164253 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29553750-zg7qd"] Mar 11 10:30:00 crc kubenswrapper[4840]: E0311 10:30:00.165515 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07f84ba7-10ef-49ff-9e8e-9424f27d721c" containerName="extract-content" Mar 11 10:30:00 crc kubenswrapper[4840]: I0311 10:30:00.165544 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="07f84ba7-10ef-49ff-9e8e-9424f27d721c" containerName="extract-content" Mar 11 10:30:00 crc kubenswrapper[4840]: E0311 10:30:00.165572 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07f84ba7-10ef-49ff-9e8e-9424f27d721c" containerName="extract-utilities" Mar 11 10:30:00 crc kubenswrapper[4840]: I0311 10:30:00.165589 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="07f84ba7-10ef-49ff-9e8e-9424f27d721c" containerName="extract-utilities" Mar 11 10:30:00 crc kubenswrapper[4840]: E0311 10:30:00.165620 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07f84ba7-10ef-49ff-9e8e-9424f27d721c" containerName="registry-server" 
Mar 11 10:30:00 crc kubenswrapper[4840]: I0311 10:30:00.165639 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="07f84ba7-10ef-49ff-9e8e-9424f27d721c" containerName="registry-server" Mar 11 10:30:00 crc kubenswrapper[4840]: I0311 10:30:00.166050 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="07f84ba7-10ef-49ff-9e8e-9424f27d721c" containerName="registry-server" Mar 11 10:30:00 crc kubenswrapper[4840]: I0311 10:30:00.166973 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553750-zg7qd" Mar 11 10:30:00 crc kubenswrapper[4840]: I0311 10:30:00.175184 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 11 10:30:00 crc kubenswrapper[4840]: I0311 10:30:00.175370 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 11 10:30:00 crc kubenswrapper[4840]: I0311 10:30:00.176512 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-q6lwc" Mar 11 10:30:00 crc kubenswrapper[4840]: I0311 10:30:00.181050 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29553750-d6v6g"] Mar 11 10:30:00 crc kubenswrapper[4840]: I0311 10:30:00.182392 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29553750-d6v6g" Mar 11 10:30:00 crc kubenswrapper[4840]: I0311 10:30:00.185892 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 11 10:30:00 crc kubenswrapper[4840]: I0311 10:30:00.190725 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 11 10:30:00 crc kubenswrapper[4840]: I0311 10:30:00.195088 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553750-zg7qd"] Mar 11 10:30:00 crc kubenswrapper[4840]: I0311 10:30:00.201255 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29553750-d6v6g"] Mar 11 10:30:00 crc kubenswrapper[4840]: I0311 10:30:00.354814 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drkkt\" (UniqueName: \"kubernetes.io/projected/4ad286ee-6683-4c48-b195-935c29b048b7-kube-api-access-drkkt\") pod \"auto-csr-approver-29553750-zg7qd\" (UID: \"4ad286ee-6683-4c48-b195-935c29b048b7\") " pod="openshift-infra/auto-csr-approver-29553750-zg7qd" Mar 11 10:30:00 crc kubenswrapper[4840]: I0311 10:30:00.355628 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6126bfef-f4d7-45f0-928d-060a4f90665c-config-volume\") pod \"collect-profiles-29553750-d6v6g\" (UID: \"6126bfef-f4d7-45f0-928d-060a4f90665c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553750-d6v6g" Mar 11 10:30:00 crc kubenswrapper[4840]: I0311 10:30:00.356006 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/6126bfef-f4d7-45f0-928d-060a4f90665c-secret-volume\") pod \"collect-profiles-29553750-d6v6g\" (UID: \"6126bfef-f4d7-45f0-928d-060a4f90665c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553750-d6v6g" Mar 11 10:30:00 crc kubenswrapper[4840]: I0311 10:30:00.356069 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpfgs\" (UniqueName: \"kubernetes.io/projected/6126bfef-f4d7-45f0-928d-060a4f90665c-kube-api-access-jpfgs\") pod \"collect-profiles-29553750-d6v6g\" (UID: \"6126bfef-f4d7-45f0-928d-060a4f90665c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553750-d6v6g" Mar 11 10:30:00 crc kubenswrapper[4840]: I0311 10:30:00.457539 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6126bfef-f4d7-45f0-928d-060a4f90665c-secret-volume\") pod \"collect-profiles-29553750-d6v6g\" (UID: \"6126bfef-f4d7-45f0-928d-060a4f90665c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553750-d6v6g" Mar 11 10:30:00 crc kubenswrapper[4840]: I0311 10:30:00.457611 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jpfgs\" (UniqueName: \"kubernetes.io/projected/6126bfef-f4d7-45f0-928d-060a4f90665c-kube-api-access-jpfgs\") pod \"collect-profiles-29553750-d6v6g\" (UID: \"6126bfef-f4d7-45f0-928d-060a4f90665c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553750-d6v6g" Mar 11 10:30:00 crc kubenswrapper[4840]: I0311 10:30:00.457716 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drkkt\" (UniqueName: \"kubernetes.io/projected/4ad286ee-6683-4c48-b195-935c29b048b7-kube-api-access-drkkt\") pod \"auto-csr-approver-29553750-zg7qd\" (UID: \"4ad286ee-6683-4c48-b195-935c29b048b7\") " pod="openshift-infra/auto-csr-approver-29553750-zg7qd" Mar 11 10:30:00 crc 
kubenswrapper[4840]: I0311 10:30:00.457799 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6126bfef-f4d7-45f0-928d-060a4f90665c-config-volume\") pod \"collect-profiles-29553750-d6v6g\" (UID: \"6126bfef-f4d7-45f0-928d-060a4f90665c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553750-d6v6g" Mar 11 10:30:00 crc kubenswrapper[4840]: I0311 10:30:00.459056 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6126bfef-f4d7-45f0-928d-060a4f90665c-config-volume\") pod \"collect-profiles-29553750-d6v6g\" (UID: \"6126bfef-f4d7-45f0-928d-060a4f90665c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553750-d6v6g" Mar 11 10:30:00 crc kubenswrapper[4840]: I0311 10:30:00.465994 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6126bfef-f4d7-45f0-928d-060a4f90665c-secret-volume\") pod \"collect-profiles-29553750-d6v6g\" (UID: \"6126bfef-f4d7-45f0-928d-060a4f90665c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553750-d6v6g" Mar 11 10:30:00 crc kubenswrapper[4840]: I0311 10:30:00.477960 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpfgs\" (UniqueName: \"kubernetes.io/projected/6126bfef-f4d7-45f0-928d-060a4f90665c-kube-api-access-jpfgs\") pod \"collect-profiles-29553750-d6v6g\" (UID: \"6126bfef-f4d7-45f0-928d-060a4f90665c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553750-d6v6g" Mar 11 10:30:00 crc kubenswrapper[4840]: I0311 10:30:00.480305 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drkkt\" (UniqueName: \"kubernetes.io/projected/4ad286ee-6683-4c48-b195-935c29b048b7-kube-api-access-drkkt\") pod \"auto-csr-approver-29553750-zg7qd\" (UID: 
\"4ad286ee-6683-4c48-b195-935c29b048b7\") " pod="openshift-infra/auto-csr-approver-29553750-zg7qd" Mar 11 10:30:00 crc kubenswrapper[4840]: I0311 10:30:00.509524 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553750-zg7qd" Mar 11 10:30:00 crc kubenswrapper[4840]: I0311 10:30:00.528630 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29553750-d6v6g" Mar 11 10:30:00 crc kubenswrapper[4840]: I0311 10:30:00.948568 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553750-zg7qd"] Mar 11 10:30:01 crc kubenswrapper[4840]: I0311 10:30:01.021501 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29553750-d6v6g"] Mar 11 10:30:01 crc kubenswrapper[4840]: W0311 10:30:01.032347 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6126bfef_f4d7_45f0_928d_060a4f90665c.slice/crio-9668158016c43a0108bdeadc6f3047cc9bcd4739f4c9a6e254498f003ebcc1cc WatchSource:0}: Error finding container 9668158016c43a0108bdeadc6f3047cc9bcd4739f4c9a6e254498f003ebcc1cc: Status 404 returned error can't find the container with id 9668158016c43a0108bdeadc6f3047cc9bcd4739f4c9a6e254498f003ebcc1cc Mar 11 10:30:01 crc kubenswrapper[4840]: I0311 10:30:01.203670 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29553750-d6v6g" event={"ID":"6126bfef-f4d7-45f0-928d-060a4f90665c","Type":"ContainerStarted","Data":"318f7930321d56af2c0cd5664e13c22425a6bf6185ce2844ef9d89675ebbdcd2"} Mar 11 10:30:01 crc kubenswrapper[4840]: I0311 10:30:01.203713 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29553750-d6v6g" 
event={"ID":"6126bfef-f4d7-45f0-928d-060a4f90665c","Type":"ContainerStarted","Data":"9668158016c43a0108bdeadc6f3047cc9bcd4739f4c9a6e254498f003ebcc1cc"} Mar 11 10:30:01 crc kubenswrapper[4840]: I0311 10:30:01.206329 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553750-zg7qd" event={"ID":"4ad286ee-6683-4c48-b195-935c29b048b7","Type":"ContainerStarted","Data":"b8ea632e870c9bf265ec6e8b6290a37a2b482a64db90960cb77a8fedc6282d6f"} Mar 11 10:30:01 crc kubenswrapper[4840]: I0311 10:30:01.224972 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29553750-d6v6g" podStartSLOduration=1.224948642 podStartE2EDuration="1.224948642s" podCreationTimestamp="2026-03-11 10:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-11 10:30:01.222585642 +0000 UTC m=+5599.888255467" watchObservedRunningTime="2026-03-11 10:30:01.224948642 +0000 UTC m=+5599.890618497" Mar 11 10:30:02 crc kubenswrapper[4840]: I0311 10:30:02.107440 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-6lfc2"] Mar 11 10:30:02 crc kubenswrapper[4840]: I0311 10:30:02.107545 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-6lfc2"] Mar 11 10:30:02 crc kubenswrapper[4840]: I0311 10:30:02.220709 4840 generic.go:334] "Generic (PLEG): container finished" podID="6126bfef-f4d7-45f0-928d-060a4f90665c" containerID="318f7930321d56af2c0cd5664e13c22425a6bf6185ce2844ef9d89675ebbdcd2" exitCode=0 Mar 11 10:30:02 crc kubenswrapper[4840]: I0311 10:30:02.220759 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29553750-d6v6g" 
event={"ID":"6126bfef-f4d7-45f0-928d-060a4f90665c","Type":"ContainerDied","Data":"318f7930321d56af2c0cd5664e13c22425a6bf6185ce2844ef9d89675ebbdcd2"} Mar 11 10:30:03 crc kubenswrapper[4840]: I0311 10:30:03.619180 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29553750-d6v6g" Mar 11 10:30:03 crc kubenswrapper[4840]: I0311 10:30:03.641206 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6126bfef-f4d7-45f0-928d-060a4f90665c-secret-volume\") pod \"6126bfef-f4d7-45f0-928d-060a4f90665c\" (UID: \"6126bfef-f4d7-45f0-928d-060a4f90665c\") " Mar 11 10:30:03 crc kubenswrapper[4840]: I0311 10:30:03.641311 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jpfgs\" (UniqueName: \"kubernetes.io/projected/6126bfef-f4d7-45f0-928d-060a4f90665c-kube-api-access-jpfgs\") pod \"6126bfef-f4d7-45f0-928d-060a4f90665c\" (UID: \"6126bfef-f4d7-45f0-928d-060a4f90665c\") " Mar 11 10:30:03 crc kubenswrapper[4840]: I0311 10:30:03.641372 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6126bfef-f4d7-45f0-928d-060a4f90665c-config-volume\") pod \"6126bfef-f4d7-45f0-928d-060a4f90665c\" (UID: \"6126bfef-f4d7-45f0-928d-060a4f90665c\") " Mar 11 10:30:03 crc kubenswrapper[4840]: I0311 10:30:03.641926 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6126bfef-f4d7-45f0-928d-060a4f90665c-config-volume" (OuterVolumeSpecName: "config-volume") pod "6126bfef-f4d7-45f0-928d-060a4f90665c" (UID: "6126bfef-f4d7-45f0-928d-060a4f90665c"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 10:30:03 crc kubenswrapper[4840]: I0311 10:30:03.642124 4840 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6126bfef-f4d7-45f0-928d-060a4f90665c-config-volume\") on node \"crc\" DevicePath \"\"" Mar 11 10:30:03 crc kubenswrapper[4840]: I0311 10:30:03.648283 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6126bfef-f4d7-45f0-928d-060a4f90665c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "6126bfef-f4d7-45f0-928d-060a4f90665c" (UID: "6126bfef-f4d7-45f0-928d-060a4f90665c"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 10:30:03 crc kubenswrapper[4840]: I0311 10:30:03.663427 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6126bfef-f4d7-45f0-928d-060a4f90665c-kube-api-access-jpfgs" (OuterVolumeSpecName: "kube-api-access-jpfgs") pod "6126bfef-f4d7-45f0-928d-060a4f90665c" (UID: "6126bfef-f4d7-45f0-928d-060a4f90665c"). InnerVolumeSpecName "kube-api-access-jpfgs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:30:03 crc kubenswrapper[4840]: I0311 10:30:03.743357 4840 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6126bfef-f4d7-45f0-928d-060a4f90665c-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 11 10:30:03 crc kubenswrapper[4840]: I0311 10:30:03.743725 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jpfgs\" (UniqueName: \"kubernetes.io/projected/6126bfef-f4d7-45f0-928d-060a4f90665c-kube-api-access-jpfgs\") on node \"crc\" DevicePath \"\"" Mar 11 10:30:04 crc kubenswrapper[4840]: I0311 10:30:04.075454 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e22efbed-3cf1-49ef-8e84-16d5e3b9b066" path="/var/lib/kubelet/pods/e22efbed-3cf1-49ef-8e84-16d5e3b9b066/volumes" Mar 11 10:30:04 crc kubenswrapper[4840]: I0311 10:30:04.246184 4840 generic.go:334] "Generic (PLEG): container finished" podID="4ad286ee-6683-4c48-b195-935c29b048b7" containerID="f85135619e8d7cb197c10cfe395fb2fec9b5ecd26a1b4bdedc0f3bc8263a48dd" exitCode=0 Mar 11 10:30:04 crc kubenswrapper[4840]: I0311 10:30:04.246377 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553750-zg7qd" event={"ID":"4ad286ee-6683-4c48-b195-935c29b048b7","Type":"ContainerDied","Data":"f85135619e8d7cb197c10cfe395fb2fec9b5ecd26a1b4bdedc0f3bc8263a48dd"} Mar 11 10:30:04 crc kubenswrapper[4840]: I0311 10:30:04.252462 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29553750-d6v6g" event={"ID":"6126bfef-f4d7-45f0-928d-060a4f90665c","Type":"ContainerDied","Data":"9668158016c43a0108bdeadc6f3047cc9bcd4739f4c9a6e254498f003ebcc1cc"} Mar 11 10:30:04 crc kubenswrapper[4840]: I0311 10:30:04.252548 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9668158016c43a0108bdeadc6f3047cc9bcd4739f4c9a6e254498f003ebcc1cc" Mar 11 
10:30:04 crc kubenswrapper[4840]: I0311 10:30:04.252787 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29553750-d6v6g" Mar 11 10:30:04 crc kubenswrapper[4840]: I0311 10:30:04.315909 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29553705-mqh9q"] Mar 11 10:30:04 crc kubenswrapper[4840]: I0311 10:30:04.323978 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29553705-mqh9q"] Mar 11 10:30:05 crc kubenswrapper[4840]: I0311 10:30:05.710507 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553750-zg7qd" Mar 11 10:30:05 crc kubenswrapper[4840]: I0311 10:30:05.778446 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-drkkt\" (UniqueName: \"kubernetes.io/projected/4ad286ee-6683-4c48-b195-935c29b048b7-kube-api-access-drkkt\") pod \"4ad286ee-6683-4c48-b195-935c29b048b7\" (UID: \"4ad286ee-6683-4c48-b195-935c29b048b7\") " Mar 11 10:30:05 crc kubenswrapper[4840]: I0311 10:30:05.809736 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ad286ee-6683-4c48-b195-935c29b048b7-kube-api-access-drkkt" (OuterVolumeSpecName: "kube-api-access-drkkt") pod "4ad286ee-6683-4c48-b195-935c29b048b7" (UID: "4ad286ee-6683-4c48-b195-935c29b048b7"). InnerVolumeSpecName "kube-api-access-drkkt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:30:05 crc kubenswrapper[4840]: I0311 10:30:05.891513 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-drkkt\" (UniqueName: \"kubernetes.io/projected/4ad286ee-6683-4c48-b195-935c29b048b7-kube-api-access-drkkt\") on node \"crc\" DevicePath \"\"" Mar 11 10:30:06 crc kubenswrapper[4840]: I0311 10:30:06.071846 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5663935d-2c68-4f3e-872c-c950bd096bd8" path="/var/lib/kubelet/pods/5663935d-2c68-4f3e-872c-c950bd096bd8/volumes" Mar 11 10:30:06 crc kubenswrapper[4840]: I0311 10:30:06.275870 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553750-zg7qd" event={"ID":"4ad286ee-6683-4c48-b195-935c29b048b7","Type":"ContainerDied","Data":"b8ea632e870c9bf265ec6e8b6290a37a2b482a64db90960cb77a8fedc6282d6f"} Mar 11 10:30:06 crc kubenswrapper[4840]: I0311 10:30:06.275924 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b8ea632e870c9bf265ec6e8b6290a37a2b482a64db90960cb77a8fedc6282d6f" Mar 11 10:30:06 crc kubenswrapper[4840]: I0311 10:30:06.275982 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553750-zg7qd" Mar 11 10:30:06 crc kubenswrapper[4840]: I0311 10:30:06.787276 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29553744-lkgm2"] Mar 11 10:30:06 crc kubenswrapper[4840]: I0311 10:30:06.802369 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29553744-lkgm2"] Mar 11 10:30:08 crc kubenswrapper[4840]: I0311 10:30:08.074389 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="069f80b6-5230-4478-bef5-17a562bf0cad" path="/var/lib/kubelet/pods/069f80b6-5230-4478-bef5-17a562bf0cad/volumes" Mar 11 10:30:12 crc kubenswrapper[4840]: I0311 10:30:12.846261 4840 scope.go:117] "RemoveContainer" containerID="457633fffec461d81f61871a7542a4ad6c93ced02061c56490b90986f8a1067b" Mar 11 10:30:12 crc kubenswrapper[4840]: I0311 10:30:12.900010 4840 scope.go:117] "RemoveContainer" containerID="9ff08548cc5d4779be39cae9261d6d7416e2fdc4a83eebd34617a094df78b036" Mar 11 10:30:12 crc kubenswrapper[4840]: I0311 10:30:12.929798 4840 scope.go:117] "RemoveContainer" containerID="e8f9384659d429f1e7cdc8f5b27878c4d32e7138a700eb63bd36002b9815fcb0" Mar 11 10:30:50 crc kubenswrapper[4840]: I0311 10:30:50.733749 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-lj5nj"] Mar 11 10:30:50 crc kubenswrapper[4840]: E0311 10:30:50.734539 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6126bfef-f4d7-45f0-928d-060a4f90665c" containerName="collect-profiles" Mar 11 10:30:50 crc kubenswrapper[4840]: I0311 10:30:50.734553 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="6126bfef-f4d7-45f0-928d-060a4f90665c" containerName="collect-profiles" Mar 11 10:30:50 crc kubenswrapper[4840]: E0311 10:30:50.734565 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ad286ee-6683-4c48-b195-935c29b048b7" containerName="oc" Mar 11 10:30:50 crc 
kubenswrapper[4840]: I0311 10:30:50.734571 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ad286ee-6683-4c48-b195-935c29b048b7" containerName="oc" Mar 11 10:30:50 crc kubenswrapper[4840]: I0311 10:30:50.734711 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="6126bfef-f4d7-45f0-928d-060a4f90665c" containerName="collect-profiles" Mar 11 10:30:50 crc kubenswrapper[4840]: I0311 10:30:50.734723 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ad286ee-6683-4c48-b195-935c29b048b7" containerName="oc" Mar 11 10:30:50 crc kubenswrapper[4840]: I0311 10:30:50.735843 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lj5nj" Mar 11 10:30:50 crc kubenswrapper[4840]: I0311 10:30:50.747138 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lj5nj"] Mar 11 10:30:50 crc kubenswrapper[4840]: I0311 10:30:50.808346 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a25b6f47-db18-4c86-9dae-74dc8c8db68f-utilities\") pod \"redhat-marketplace-lj5nj\" (UID: \"a25b6f47-db18-4c86-9dae-74dc8c8db68f\") " pod="openshift-marketplace/redhat-marketplace-lj5nj" Mar 11 10:30:50 crc kubenswrapper[4840]: I0311 10:30:50.808614 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58vxs\" (UniqueName: \"kubernetes.io/projected/a25b6f47-db18-4c86-9dae-74dc8c8db68f-kube-api-access-58vxs\") pod \"redhat-marketplace-lj5nj\" (UID: \"a25b6f47-db18-4c86-9dae-74dc8c8db68f\") " pod="openshift-marketplace/redhat-marketplace-lj5nj" Mar 11 10:30:50 crc kubenswrapper[4840]: I0311 10:30:50.808717 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/a25b6f47-db18-4c86-9dae-74dc8c8db68f-catalog-content\") pod \"redhat-marketplace-lj5nj\" (UID: \"a25b6f47-db18-4c86-9dae-74dc8c8db68f\") " pod="openshift-marketplace/redhat-marketplace-lj5nj" Mar 11 10:30:50 crc kubenswrapper[4840]: I0311 10:30:50.910216 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a25b6f47-db18-4c86-9dae-74dc8c8db68f-catalog-content\") pod \"redhat-marketplace-lj5nj\" (UID: \"a25b6f47-db18-4c86-9dae-74dc8c8db68f\") " pod="openshift-marketplace/redhat-marketplace-lj5nj" Mar 11 10:30:50 crc kubenswrapper[4840]: I0311 10:30:50.910286 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a25b6f47-db18-4c86-9dae-74dc8c8db68f-utilities\") pod \"redhat-marketplace-lj5nj\" (UID: \"a25b6f47-db18-4c86-9dae-74dc8c8db68f\") " pod="openshift-marketplace/redhat-marketplace-lj5nj" Mar 11 10:30:50 crc kubenswrapper[4840]: I0311 10:30:50.910315 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58vxs\" (UniqueName: \"kubernetes.io/projected/a25b6f47-db18-4c86-9dae-74dc8c8db68f-kube-api-access-58vxs\") pod \"redhat-marketplace-lj5nj\" (UID: \"a25b6f47-db18-4c86-9dae-74dc8c8db68f\") " pod="openshift-marketplace/redhat-marketplace-lj5nj" Mar 11 10:30:50 crc kubenswrapper[4840]: I0311 10:30:50.910825 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a25b6f47-db18-4c86-9dae-74dc8c8db68f-catalog-content\") pod \"redhat-marketplace-lj5nj\" (UID: \"a25b6f47-db18-4c86-9dae-74dc8c8db68f\") " pod="openshift-marketplace/redhat-marketplace-lj5nj" Mar 11 10:30:50 crc kubenswrapper[4840]: I0311 10:30:50.910858 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/a25b6f47-db18-4c86-9dae-74dc8c8db68f-utilities\") pod \"redhat-marketplace-lj5nj\" (UID: \"a25b6f47-db18-4c86-9dae-74dc8c8db68f\") " pod="openshift-marketplace/redhat-marketplace-lj5nj" Mar 11 10:30:50 crc kubenswrapper[4840]: I0311 10:30:50.929399 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58vxs\" (UniqueName: \"kubernetes.io/projected/a25b6f47-db18-4c86-9dae-74dc8c8db68f-kube-api-access-58vxs\") pod \"redhat-marketplace-lj5nj\" (UID: \"a25b6f47-db18-4c86-9dae-74dc8c8db68f\") " pod="openshift-marketplace/redhat-marketplace-lj5nj" Mar 11 10:30:51 crc kubenswrapper[4840]: I0311 10:30:51.054317 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lj5nj" Mar 11 10:30:51 crc kubenswrapper[4840]: I0311 10:30:51.556127 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lj5nj"] Mar 11 10:30:51 crc kubenswrapper[4840]: I0311 10:30:51.694834 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lj5nj" event={"ID":"a25b6f47-db18-4c86-9dae-74dc8c8db68f","Type":"ContainerStarted","Data":"5c671933caf7425a14d1cd147ee65a9aa9d0713ce1a7ff7be3cf1ae4c2208edd"} Mar 11 10:30:52 crc kubenswrapper[4840]: I0311 10:30:52.707917 4840 generic.go:334] "Generic (PLEG): container finished" podID="a25b6f47-db18-4c86-9dae-74dc8c8db68f" containerID="5db34966d5f3d3fc13607fc8ca69d777b3344ace4e59198541ed3ac542995c64" exitCode=0 Mar 11 10:30:52 crc kubenswrapper[4840]: I0311 10:30:52.707987 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lj5nj" event={"ID":"a25b6f47-db18-4c86-9dae-74dc8c8db68f","Type":"ContainerDied","Data":"5db34966d5f3d3fc13607fc8ca69d777b3344ace4e59198541ed3ac542995c64"} Mar 11 10:30:53 crc kubenswrapper[4840]: I0311 10:30:53.720699 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-lj5nj" event={"ID":"a25b6f47-db18-4c86-9dae-74dc8c8db68f","Type":"ContainerStarted","Data":"2749afddbda31a9a80a9b69f8260b1e533a9e6728f14d6090653f1757e6e1eb0"} Mar 11 10:30:54 crc kubenswrapper[4840]: I0311 10:30:54.733860 4840 generic.go:334] "Generic (PLEG): container finished" podID="a25b6f47-db18-4c86-9dae-74dc8c8db68f" containerID="2749afddbda31a9a80a9b69f8260b1e533a9e6728f14d6090653f1757e6e1eb0" exitCode=0 Mar 11 10:30:54 crc kubenswrapper[4840]: I0311 10:30:54.733917 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lj5nj" event={"ID":"a25b6f47-db18-4c86-9dae-74dc8c8db68f","Type":"ContainerDied","Data":"2749afddbda31a9a80a9b69f8260b1e533a9e6728f14d6090653f1757e6e1eb0"} Mar 11 10:30:54 crc kubenswrapper[4840]: I0311 10:30:54.734285 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lj5nj" event={"ID":"a25b6f47-db18-4c86-9dae-74dc8c8db68f","Type":"ContainerStarted","Data":"9413250cdf8ac466a4261a92050262c5a1e7bf5319e988f22cc9da55abc9e60c"} Mar 11 10:30:54 crc kubenswrapper[4840]: I0311 10:30:54.761811 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-lj5nj" podStartSLOduration=3.345253685 podStartE2EDuration="4.7617817s" podCreationTimestamp="2026-03-11 10:30:50 +0000 UTC" firstStartedPulling="2026-03-11 10:30:52.713862135 +0000 UTC m=+5651.379531990" lastFinishedPulling="2026-03-11 10:30:54.13039015 +0000 UTC m=+5652.796060005" observedRunningTime="2026-03-11 10:30:54.754178147 +0000 UTC m=+5653.419848062" watchObservedRunningTime="2026-03-11 10:30:54.7617817 +0000 UTC m=+5653.427451555" Mar 11 10:30:57 crc kubenswrapper[4840]: I0311 10:30:57.445541 4840 patch_prober.go:28] interesting pod/machine-config-daemon-brtht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 11 10:30:57 crc kubenswrapper[4840]: I0311 10:30:57.445927 4840 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 11 10:31:01 crc kubenswrapper[4840]: I0311 10:31:01.054697 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-lj5nj" Mar 11 10:31:01 crc kubenswrapper[4840]: I0311 10:31:01.054985 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-lj5nj" Mar 11 10:31:01 crc kubenswrapper[4840]: I0311 10:31:01.129710 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-lj5nj" Mar 11 10:31:01 crc kubenswrapper[4840]: I0311 10:31:01.860183 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-lj5nj" Mar 11 10:31:01 crc kubenswrapper[4840]: I0311 10:31:01.923652 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lj5nj"] Mar 11 10:31:03 crc kubenswrapper[4840]: I0311 10:31:03.813704 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-lj5nj" podUID="a25b6f47-db18-4c86-9dae-74dc8c8db68f" containerName="registry-server" containerID="cri-o://9413250cdf8ac466a4261a92050262c5a1e7bf5319e988f22cc9da55abc9e60c" gracePeriod=2 Mar 11 10:31:04 crc kubenswrapper[4840]: I0311 10:31:04.320902 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lj5nj" Mar 11 10:31:04 crc kubenswrapper[4840]: I0311 10:31:04.404034 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a25b6f47-db18-4c86-9dae-74dc8c8db68f-catalog-content\") pod \"a25b6f47-db18-4c86-9dae-74dc8c8db68f\" (UID: \"a25b6f47-db18-4c86-9dae-74dc8c8db68f\") " Mar 11 10:31:04 crc kubenswrapper[4840]: I0311 10:31:04.404081 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a25b6f47-db18-4c86-9dae-74dc8c8db68f-utilities\") pod \"a25b6f47-db18-4c86-9dae-74dc8c8db68f\" (UID: \"a25b6f47-db18-4c86-9dae-74dc8c8db68f\") " Mar 11 10:31:04 crc kubenswrapper[4840]: I0311 10:31:04.404138 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-58vxs\" (UniqueName: \"kubernetes.io/projected/a25b6f47-db18-4c86-9dae-74dc8c8db68f-kube-api-access-58vxs\") pod \"a25b6f47-db18-4c86-9dae-74dc8c8db68f\" (UID: \"a25b6f47-db18-4c86-9dae-74dc8c8db68f\") " Mar 11 10:31:04 crc kubenswrapper[4840]: I0311 10:31:04.405441 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a25b6f47-db18-4c86-9dae-74dc8c8db68f-utilities" (OuterVolumeSpecName: "utilities") pod "a25b6f47-db18-4c86-9dae-74dc8c8db68f" (UID: "a25b6f47-db18-4c86-9dae-74dc8c8db68f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 10:31:04 crc kubenswrapper[4840]: I0311 10:31:04.406320 4840 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a25b6f47-db18-4c86-9dae-74dc8c8db68f-utilities\") on node \"crc\" DevicePath \"\"" Mar 11 10:31:04 crc kubenswrapper[4840]: I0311 10:31:04.413674 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a25b6f47-db18-4c86-9dae-74dc8c8db68f-kube-api-access-58vxs" (OuterVolumeSpecName: "kube-api-access-58vxs") pod "a25b6f47-db18-4c86-9dae-74dc8c8db68f" (UID: "a25b6f47-db18-4c86-9dae-74dc8c8db68f"). InnerVolumeSpecName "kube-api-access-58vxs". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:31:04 crc kubenswrapper[4840]: I0311 10:31:04.451828 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a25b6f47-db18-4c86-9dae-74dc8c8db68f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a25b6f47-db18-4c86-9dae-74dc8c8db68f" (UID: "a25b6f47-db18-4c86-9dae-74dc8c8db68f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 10:31:04 crc kubenswrapper[4840]: I0311 10:31:04.507535 4840 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a25b6f47-db18-4c86-9dae-74dc8c8db68f-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 11 10:31:04 crc kubenswrapper[4840]: I0311 10:31:04.507564 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-58vxs\" (UniqueName: \"kubernetes.io/projected/a25b6f47-db18-4c86-9dae-74dc8c8db68f-kube-api-access-58vxs\") on node \"crc\" DevicePath \"\"" Mar 11 10:31:04 crc kubenswrapper[4840]: I0311 10:31:04.824818 4840 generic.go:334] "Generic (PLEG): container finished" podID="a25b6f47-db18-4c86-9dae-74dc8c8db68f" containerID="9413250cdf8ac466a4261a92050262c5a1e7bf5319e988f22cc9da55abc9e60c" exitCode=0 Mar 11 10:31:04 crc kubenswrapper[4840]: I0311 10:31:04.824867 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lj5nj" event={"ID":"a25b6f47-db18-4c86-9dae-74dc8c8db68f","Type":"ContainerDied","Data":"9413250cdf8ac466a4261a92050262c5a1e7bf5319e988f22cc9da55abc9e60c"} Mar 11 10:31:04 crc kubenswrapper[4840]: I0311 10:31:04.824893 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lj5nj" event={"ID":"a25b6f47-db18-4c86-9dae-74dc8c8db68f","Type":"ContainerDied","Data":"5c671933caf7425a14d1cd147ee65a9aa9d0713ce1a7ff7be3cf1ae4c2208edd"} Mar 11 10:31:04 crc kubenswrapper[4840]: I0311 10:31:04.824912 4840 scope.go:117] "RemoveContainer" containerID="9413250cdf8ac466a4261a92050262c5a1e7bf5319e988f22cc9da55abc9e60c" Mar 11 10:31:04 crc kubenswrapper[4840]: I0311 10:31:04.824912 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lj5nj" Mar 11 10:31:04 crc kubenswrapper[4840]: I0311 10:31:04.843661 4840 scope.go:117] "RemoveContainer" containerID="2749afddbda31a9a80a9b69f8260b1e533a9e6728f14d6090653f1757e6e1eb0" Mar 11 10:31:04 crc kubenswrapper[4840]: I0311 10:31:04.866543 4840 scope.go:117] "RemoveContainer" containerID="5db34966d5f3d3fc13607fc8ca69d777b3344ace4e59198541ed3ac542995c64" Mar 11 10:31:04 crc kubenswrapper[4840]: I0311 10:31:04.943291 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lj5nj"] Mar 11 10:31:04 crc kubenswrapper[4840]: I0311 10:31:04.967323 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-lj5nj"] Mar 11 10:31:04 crc kubenswrapper[4840]: I0311 10:31:04.969999 4840 scope.go:117] "RemoveContainer" containerID="9413250cdf8ac466a4261a92050262c5a1e7bf5319e988f22cc9da55abc9e60c" Mar 11 10:31:04 crc kubenswrapper[4840]: E0311 10:31:04.970511 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9413250cdf8ac466a4261a92050262c5a1e7bf5319e988f22cc9da55abc9e60c\": container with ID starting with 9413250cdf8ac466a4261a92050262c5a1e7bf5319e988f22cc9da55abc9e60c not found: ID does not exist" containerID="9413250cdf8ac466a4261a92050262c5a1e7bf5319e988f22cc9da55abc9e60c" Mar 11 10:31:04 crc kubenswrapper[4840]: I0311 10:31:04.970561 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9413250cdf8ac466a4261a92050262c5a1e7bf5319e988f22cc9da55abc9e60c"} err="failed to get container status \"9413250cdf8ac466a4261a92050262c5a1e7bf5319e988f22cc9da55abc9e60c\": rpc error: code = NotFound desc = could not find container \"9413250cdf8ac466a4261a92050262c5a1e7bf5319e988f22cc9da55abc9e60c\": container with ID starting with 9413250cdf8ac466a4261a92050262c5a1e7bf5319e988f22cc9da55abc9e60c not found: 
ID does not exist" Mar 11 10:31:04 crc kubenswrapper[4840]: I0311 10:31:04.970602 4840 scope.go:117] "RemoveContainer" containerID="2749afddbda31a9a80a9b69f8260b1e533a9e6728f14d6090653f1757e6e1eb0" Mar 11 10:31:04 crc kubenswrapper[4840]: E0311 10:31:04.971578 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2749afddbda31a9a80a9b69f8260b1e533a9e6728f14d6090653f1757e6e1eb0\": container with ID starting with 2749afddbda31a9a80a9b69f8260b1e533a9e6728f14d6090653f1757e6e1eb0 not found: ID does not exist" containerID="2749afddbda31a9a80a9b69f8260b1e533a9e6728f14d6090653f1757e6e1eb0" Mar 11 10:31:04 crc kubenswrapper[4840]: I0311 10:31:04.971602 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2749afddbda31a9a80a9b69f8260b1e533a9e6728f14d6090653f1757e6e1eb0"} err="failed to get container status \"2749afddbda31a9a80a9b69f8260b1e533a9e6728f14d6090653f1757e6e1eb0\": rpc error: code = NotFound desc = could not find container \"2749afddbda31a9a80a9b69f8260b1e533a9e6728f14d6090653f1757e6e1eb0\": container with ID starting with 2749afddbda31a9a80a9b69f8260b1e533a9e6728f14d6090653f1757e6e1eb0 not found: ID does not exist" Mar 11 10:31:04 crc kubenswrapper[4840]: I0311 10:31:04.971615 4840 scope.go:117] "RemoveContainer" containerID="5db34966d5f3d3fc13607fc8ca69d777b3344ace4e59198541ed3ac542995c64" Mar 11 10:31:04 crc kubenswrapper[4840]: E0311 10:31:04.971871 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5db34966d5f3d3fc13607fc8ca69d777b3344ace4e59198541ed3ac542995c64\": container with ID starting with 5db34966d5f3d3fc13607fc8ca69d777b3344ace4e59198541ed3ac542995c64 not found: ID does not exist" containerID="5db34966d5f3d3fc13607fc8ca69d777b3344ace4e59198541ed3ac542995c64" Mar 11 10:31:04 crc kubenswrapper[4840]: I0311 10:31:04.971911 4840 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5db34966d5f3d3fc13607fc8ca69d777b3344ace4e59198541ed3ac542995c64"} err="failed to get container status \"5db34966d5f3d3fc13607fc8ca69d777b3344ace4e59198541ed3ac542995c64\": rpc error: code = NotFound desc = could not find container \"5db34966d5f3d3fc13607fc8ca69d777b3344ace4e59198541ed3ac542995c64\": container with ID starting with 5db34966d5f3d3fc13607fc8ca69d777b3344ace4e59198541ed3ac542995c64 not found: ID does not exist" Mar 11 10:31:06 crc kubenswrapper[4840]: I0311 10:31:06.076912 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a25b6f47-db18-4c86-9dae-74dc8c8db68f" path="/var/lib/kubelet/pods/a25b6f47-db18-4c86-9dae-74dc8c8db68f/volumes" Mar 11 10:31:20 crc kubenswrapper[4840]: I0311 10:31:20.353240 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ztf8l"] Mar 11 10:31:20 crc kubenswrapper[4840]: E0311 10:31:20.354295 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a25b6f47-db18-4c86-9dae-74dc8c8db68f" containerName="registry-server" Mar 11 10:31:20 crc kubenswrapper[4840]: I0311 10:31:20.354319 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="a25b6f47-db18-4c86-9dae-74dc8c8db68f" containerName="registry-server" Mar 11 10:31:20 crc kubenswrapper[4840]: E0311 10:31:20.354346 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a25b6f47-db18-4c86-9dae-74dc8c8db68f" containerName="extract-content" Mar 11 10:31:20 crc kubenswrapper[4840]: I0311 10:31:20.354359 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="a25b6f47-db18-4c86-9dae-74dc8c8db68f" containerName="extract-content" Mar 11 10:31:20 crc kubenswrapper[4840]: E0311 10:31:20.354394 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a25b6f47-db18-4c86-9dae-74dc8c8db68f" containerName="extract-utilities" Mar 11 10:31:20 crc kubenswrapper[4840]: I0311 10:31:20.354407 4840 
state_mem.go:107] "Deleted CPUSet assignment" podUID="a25b6f47-db18-4c86-9dae-74dc8c8db68f" containerName="extract-utilities" Mar 11 10:31:20 crc kubenswrapper[4840]: I0311 10:31:20.354754 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="a25b6f47-db18-4c86-9dae-74dc8c8db68f" containerName="registry-server" Mar 11 10:31:20 crc kubenswrapper[4840]: I0311 10:31:20.356944 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ztf8l" Mar 11 10:31:20 crc kubenswrapper[4840]: I0311 10:31:20.389608 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ztf8l"] Mar 11 10:31:20 crc kubenswrapper[4840]: I0311 10:31:20.538925 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b583fafd-25bf-4a4b-86c0-1edf529c5251-utilities\") pod \"community-operators-ztf8l\" (UID: \"b583fafd-25bf-4a4b-86c0-1edf529c5251\") " pod="openshift-marketplace/community-operators-ztf8l" Mar 11 10:31:20 crc kubenswrapper[4840]: I0311 10:31:20.539017 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b583fafd-25bf-4a4b-86c0-1edf529c5251-catalog-content\") pod \"community-operators-ztf8l\" (UID: \"b583fafd-25bf-4a4b-86c0-1edf529c5251\") " pod="openshift-marketplace/community-operators-ztf8l" Mar 11 10:31:20 crc kubenswrapper[4840]: I0311 10:31:20.539085 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fd8fw\" (UniqueName: \"kubernetes.io/projected/b583fafd-25bf-4a4b-86c0-1edf529c5251-kube-api-access-fd8fw\") pod \"community-operators-ztf8l\" (UID: \"b583fafd-25bf-4a4b-86c0-1edf529c5251\") " pod="openshift-marketplace/community-operators-ztf8l" Mar 11 10:31:20 crc kubenswrapper[4840]: I0311 
10:31:20.640439 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b583fafd-25bf-4a4b-86c0-1edf529c5251-utilities\") pod \"community-operators-ztf8l\" (UID: \"b583fafd-25bf-4a4b-86c0-1edf529c5251\") " pod="openshift-marketplace/community-operators-ztf8l" Mar 11 10:31:20 crc kubenswrapper[4840]: I0311 10:31:20.640578 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b583fafd-25bf-4a4b-86c0-1edf529c5251-catalog-content\") pod \"community-operators-ztf8l\" (UID: \"b583fafd-25bf-4a4b-86c0-1edf529c5251\") " pod="openshift-marketplace/community-operators-ztf8l" Mar 11 10:31:20 crc kubenswrapper[4840]: I0311 10:31:20.640621 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fd8fw\" (UniqueName: \"kubernetes.io/projected/b583fafd-25bf-4a4b-86c0-1edf529c5251-kube-api-access-fd8fw\") pod \"community-operators-ztf8l\" (UID: \"b583fafd-25bf-4a4b-86c0-1edf529c5251\") " pod="openshift-marketplace/community-operators-ztf8l" Mar 11 10:31:20 crc kubenswrapper[4840]: I0311 10:31:20.641019 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b583fafd-25bf-4a4b-86c0-1edf529c5251-utilities\") pod \"community-operators-ztf8l\" (UID: \"b583fafd-25bf-4a4b-86c0-1edf529c5251\") " pod="openshift-marketplace/community-operators-ztf8l" Mar 11 10:31:20 crc kubenswrapper[4840]: I0311 10:31:20.641085 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b583fafd-25bf-4a4b-86c0-1edf529c5251-catalog-content\") pod \"community-operators-ztf8l\" (UID: \"b583fafd-25bf-4a4b-86c0-1edf529c5251\") " pod="openshift-marketplace/community-operators-ztf8l" Mar 11 10:31:20 crc kubenswrapper[4840]: I0311 10:31:20.660596 4840 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fd8fw\" (UniqueName: \"kubernetes.io/projected/b583fafd-25bf-4a4b-86c0-1edf529c5251-kube-api-access-fd8fw\") pod \"community-operators-ztf8l\" (UID: \"b583fafd-25bf-4a4b-86c0-1edf529c5251\") " pod="openshift-marketplace/community-operators-ztf8l" Mar 11 10:31:20 crc kubenswrapper[4840]: I0311 10:31:20.687618 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ztf8l" Mar 11 10:31:21 crc kubenswrapper[4840]: I0311 10:31:21.201653 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ztf8l"] Mar 11 10:31:22 crc kubenswrapper[4840]: I0311 10:31:22.007062 4840 generic.go:334] "Generic (PLEG): container finished" podID="b583fafd-25bf-4a4b-86c0-1edf529c5251" containerID="6ebe98e3681f87fe96ecf1a18486735703f7c29a1c2412280d3476079594a691" exitCode=0 Mar 11 10:31:22 crc kubenswrapper[4840]: I0311 10:31:22.007151 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ztf8l" event={"ID":"b583fafd-25bf-4a4b-86c0-1edf529c5251","Type":"ContainerDied","Data":"6ebe98e3681f87fe96ecf1a18486735703f7c29a1c2412280d3476079594a691"} Mar 11 10:31:22 crc kubenswrapper[4840]: I0311 10:31:22.007511 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ztf8l" event={"ID":"b583fafd-25bf-4a4b-86c0-1edf529c5251","Type":"ContainerStarted","Data":"613c8b321090cc824f9c58e6f5a31b209dec767d317bad3657349ef68244b69a"} Mar 11 10:31:23 crc kubenswrapper[4840]: I0311 10:31:23.019089 4840 generic.go:334] "Generic (PLEG): container finished" podID="b583fafd-25bf-4a4b-86c0-1edf529c5251" containerID="fadc2d2189d0b3a4c29ffbd7f0f237c3becd5da4d05b8f53a4014071ba11fb73" exitCode=0 Mar 11 10:31:23 crc kubenswrapper[4840]: I0311 10:31:23.019205 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-ztf8l" event={"ID":"b583fafd-25bf-4a4b-86c0-1edf529c5251","Type":"ContainerDied","Data":"fadc2d2189d0b3a4c29ffbd7f0f237c3becd5da4d05b8f53a4014071ba11fb73"} Mar 11 10:31:24 crc kubenswrapper[4840]: I0311 10:31:24.036094 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ztf8l" event={"ID":"b583fafd-25bf-4a4b-86c0-1edf529c5251","Type":"ContainerStarted","Data":"f0f01d792f69f0b265c8777afff7e2b626764c24967f5731fe98dfbf1630ec9f"} Mar 11 10:31:24 crc kubenswrapper[4840]: I0311 10:31:24.077257 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ztf8l" podStartSLOduration=2.443177113 podStartE2EDuration="4.077229131s" podCreationTimestamp="2026-03-11 10:31:20 +0000 UTC" firstStartedPulling="2026-03-11 10:31:22.009727278 +0000 UTC m=+5680.675397123" lastFinishedPulling="2026-03-11 10:31:23.643779286 +0000 UTC m=+5682.309449141" observedRunningTime="2026-03-11 10:31:24.068299404 +0000 UTC m=+5682.733969269" watchObservedRunningTime="2026-03-11 10:31:24.077229131 +0000 UTC m=+5682.742898976" Mar 11 10:31:27 crc kubenswrapper[4840]: I0311 10:31:27.446557 4840 patch_prober.go:28] interesting pod/machine-config-daemon-brtht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 11 10:31:27 crc kubenswrapper[4840]: I0311 10:31:27.447061 4840 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 11 10:31:30 crc kubenswrapper[4840]: I0311 10:31:30.688112 4840 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ztf8l" Mar 11 10:31:30 crc kubenswrapper[4840]: I0311 10:31:30.688888 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ztf8l" Mar 11 10:31:30 crc kubenswrapper[4840]: I0311 10:31:30.762125 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ztf8l" Mar 11 10:31:31 crc kubenswrapper[4840]: I0311 10:31:31.165291 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ztf8l" Mar 11 10:31:31 crc kubenswrapper[4840]: I0311 10:31:31.215322 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ztf8l"] Mar 11 10:31:33 crc kubenswrapper[4840]: I0311 10:31:33.114408 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ztf8l" podUID="b583fafd-25bf-4a4b-86c0-1edf529c5251" containerName="registry-server" containerID="cri-o://f0f01d792f69f0b265c8777afff7e2b626764c24967f5731fe98dfbf1630ec9f" gracePeriod=2 Mar 11 10:31:33 crc kubenswrapper[4840]: I0311 10:31:33.694395 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ztf8l" Mar 11 10:31:33 crc kubenswrapper[4840]: I0311 10:31:33.825926 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b583fafd-25bf-4a4b-86c0-1edf529c5251-utilities\") pod \"b583fafd-25bf-4a4b-86c0-1edf529c5251\" (UID: \"b583fafd-25bf-4a4b-86c0-1edf529c5251\") " Mar 11 10:31:33 crc kubenswrapper[4840]: I0311 10:31:33.826119 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fd8fw\" (UniqueName: \"kubernetes.io/projected/b583fafd-25bf-4a4b-86c0-1edf529c5251-kube-api-access-fd8fw\") pod \"b583fafd-25bf-4a4b-86c0-1edf529c5251\" (UID: \"b583fafd-25bf-4a4b-86c0-1edf529c5251\") " Mar 11 10:31:33 crc kubenswrapper[4840]: I0311 10:31:33.826295 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b583fafd-25bf-4a4b-86c0-1edf529c5251-catalog-content\") pod \"b583fafd-25bf-4a4b-86c0-1edf529c5251\" (UID: \"b583fafd-25bf-4a4b-86c0-1edf529c5251\") " Mar 11 10:31:33 crc kubenswrapper[4840]: I0311 10:31:33.827253 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b583fafd-25bf-4a4b-86c0-1edf529c5251-utilities" (OuterVolumeSpecName: "utilities") pod "b583fafd-25bf-4a4b-86c0-1edf529c5251" (UID: "b583fafd-25bf-4a4b-86c0-1edf529c5251"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 10:31:33 crc kubenswrapper[4840]: I0311 10:31:33.838054 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b583fafd-25bf-4a4b-86c0-1edf529c5251-kube-api-access-fd8fw" (OuterVolumeSpecName: "kube-api-access-fd8fw") pod "b583fafd-25bf-4a4b-86c0-1edf529c5251" (UID: "b583fafd-25bf-4a4b-86c0-1edf529c5251"). InnerVolumeSpecName "kube-api-access-fd8fw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:31:33 crc kubenswrapper[4840]: I0311 10:31:33.900271 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b583fafd-25bf-4a4b-86c0-1edf529c5251-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b583fafd-25bf-4a4b-86c0-1edf529c5251" (UID: "b583fafd-25bf-4a4b-86c0-1edf529c5251"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 10:31:33 crc kubenswrapper[4840]: I0311 10:31:33.929009 4840 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b583fafd-25bf-4a4b-86c0-1edf529c5251-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 11 10:31:33 crc kubenswrapper[4840]: I0311 10:31:33.929059 4840 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b583fafd-25bf-4a4b-86c0-1edf529c5251-utilities\") on node \"crc\" DevicePath \"\"" Mar 11 10:31:33 crc kubenswrapper[4840]: I0311 10:31:33.929076 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fd8fw\" (UniqueName: \"kubernetes.io/projected/b583fafd-25bf-4a4b-86c0-1edf529c5251-kube-api-access-fd8fw\") on node \"crc\" DevicePath \"\"" Mar 11 10:31:34 crc kubenswrapper[4840]: I0311 10:31:34.128578 4840 generic.go:334] "Generic (PLEG): container finished" podID="b583fafd-25bf-4a4b-86c0-1edf529c5251" containerID="f0f01d792f69f0b265c8777afff7e2b626764c24967f5731fe98dfbf1630ec9f" exitCode=0 Mar 11 10:31:34 crc kubenswrapper[4840]: I0311 10:31:34.128648 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ztf8l" event={"ID":"b583fafd-25bf-4a4b-86c0-1edf529c5251","Type":"ContainerDied","Data":"f0f01d792f69f0b265c8777afff7e2b626764c24967f5731fe98dfbf1630ec9f"} Mar 11 10:31:34 crc kubenswrapper[4840]: I0311 10:31:34.128689 4840 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-ztf8l" event={"ID":"b583fafd-25bf-4a4b-86c0-1edf529c5251","Type":"ContainerDied","Data":"613c8b321090cc824f9c58e6f5a31b209dec767d317bad3657349ef68244b69a"} Mar 11 10:31:34 crc kubenswrapper[4840]: I0311 10:31:34.128717 4840 scope.go:117] "RemoveContainer" containerID="f0f01d792f69f0b265c8777afff7e2b626764c24967f5731fe98dfbf1630ec9f" Mar 11 10:31:34 crc kubenswrapper[4840]: I0311 10:31:34.128901 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ztf8l" Mar 11 10:31:34 crc kubenswrapper[4840]: I0311 10:31:34.166852 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ztf8l"] Mar 11 10:31:34 crc kubenswrapper[4840]: I0311 10:31:34.169482 4840 scope.go:117] "RemoveContainer" containerID="fadc2d2189d0b3a4c29ffbd7f0f237c3becd5da4d05b8f53a4014071ba11fb73" Mar 11 10:31:34 crc kubenswrapper[4840]: I0311 10:31:34.177242 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ztf8l"] Mar 11 10:31:34 crc kubenswrapper[4840]: I0311 10:31:34.202958 4840 scope.go:117] "RemoveContainer" containerID="6ebe98e3681f87fe96ecf1a18486735703f7c29a1c2412280d3476079594a691" Mar 11 10:31:34 crc kubenswrapper[4840]: I0311 10:31:34.260475 4840 scope.go:117] "RemoveContainer" containerID="f0f01d792f69f0b265c8777afff7e2b626764c24967f5731fe98dfbf1630ec9f" Mar 11 10:31:34 crc kubenswrapper[4840]: E0311 10:31:34.261524 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0f01d792f69f0b265c8777afff7e2b626764c24967f5731fe98dfbf1630ec9f\": container with ID starting with f0f01d792f69f0b265c8777afff7e2b626764c24967f5731fe98dfbf1630ec9f not found: ID does not exist" containerID="f0f01d792f69f0b265c8777afff7e2b626764c24967f5731fe98dfbf1630ec9f" Mar 11 10:31:34 crc kubenswrapper[4840]: I0311 
10:31:34.261557 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0f01d792f69f0b265c8777afff7e2b626764c24967f5731fe98dfbf1630ec9f"} err="failed to get container status \"f0f01d792f69f0b265c8777afff7e2b626764c24967f5731fe98dfbf1630ec9f\": rpc error: code = NotFound desc = could not find container \"f0f01d792f69f0b265c8777afff7e2b626764c24967f5731fe98dfbf1630ec9f\": container with ID starting with f0f01d792f69f0b265c8777afff7e2b626764c24967f5731fe98dfbf1630ec9f not found: ID does not exist" Mar 11 10:31:34 crc kubenswrapper[4840]: I0311 10:31:34.261579 4840 scope.go:117] "RemoveContainer" containerID="fadc2d2189d0b3a4c29ffbd7f0f237c3becd5da4d05b8f53a4014071ba11fb73" Mar 11 10:31:34 crc kubenswrapper[4840]: E0311 10:31:34.262174 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fadc2d2189d0b3a4c29ffbd7f0f237c3becd5da4d05b8f53a4014071ba11fb73\": container with ID starting with fadc2d2189d0b3a4c29ffbd7f0f237c3becd5da4d05b8f53a4014071ba11fb73 not found: ID does not exist" containerID="fadc2d2189d0b3a4c29ffbd7f0f237c3becd5da4d05b8f53a4014071ba11fb73" Mar 11 10:31:34 crc kubenswrapper[4840]: I0311 10:31:34.262205 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fadc2d2189d0b3a4c29ffbd7f0f237c3becd5da4d05b8f53a4014071ba11fb73"} err="failed to get container status \"fadc2d2189d0b3a4c29ffbd7f0f237c3becd5da4d05b8f53a4014071ba11fb73\": rpc error: code = NotFound desc = could not find container \"fadc2d2189d0b3a4c29ffbd7f0f237c3becd5da4d05b8f53a4014071ba11fb73\": container with ID starting with fadc2d2189d0b3a4c29ffbd7f0f237c3becd5da4d05b8f53a4014071ba11fb73 not found: ID does not exist" Mar 11 10:31:34 crc kubenswrapper[4840]: I0311 10:31:34.262218 4840 scope.go:117] "RemoveContainer" containerID="6ebe98e3681f87fe96ecf1a18486735703f7c29a1c2412280d3476079594a691" Mar 11 10:31:34 crc 
kubenswrapper[4840]: E0311 10:31:34.263733 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ebe98e3681f87fe96ecf1a18486735703f7c29a1c2412280d3476079594a691\": container with ID starting with 6ebe98e3681f87fe96ecf1a18486735703f7c29a1c2412280d3476079594a691 not found: ID does not exist" containerID="6ebe98e3681f87fe96ecf1a18486735703f7c29a1c2412280d3476079594a691" Mar 11 10:31:34 crc kubenswrapper[4840]: I0311 10:31:34.263806 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ebe98e3681f87fe96ecf1a18486735703f7c29a1c2412280d3476079594a691"} err="failed to get container status \"6ebe98e3681f87fe96ecf1a18486735703f7c29a1c2412280d3476079594a691\": rpc error: code = NotFound desc = could not find container \"6ebe98e3681f87fe96ecf1a18486735703f7c29a1c2412280d3476079594a691\": container with ID starting with 6ebe98e3681f87fe96ecf1a18486735703f7c29a1c2412280d3476079594a691 not found: ID does not exist" Mar 11 10:31:36 crc kubenswrapper[4840]: I0311 10:31:36.079015 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b583fafd-25bf-4a4b-86c0-1edf529c5251" path="/var/lib/kubelet/pods/b583fafd-25bf-4a4b-86c0-1edf529c5251/volumes" Mar 11 10:31:57 crc kubenswrapper[4840]: I0311 10:31:57.445774 4840 patch_prober.go:28] interesting pod/machine-config-daemon-brtht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 11 10:31:57 crc kubenswrapper[4840]: I0311 10:31:57.446646 4840 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Mar 11 10:31:57 crc kubenswrapper[4840]: I0311 10:31:57.446719 4840 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-brtht" Mar 11 10:31:57 crc kubenswrapper[4840]: I0311 10:31:57.447969 4840 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0fa02edfa90522d9bb5a7465f18679bbe488a4c47686d2e7bed64f7d3b2e1873"} pod="openshift-machine-config-operator/machine-config-daemon-brtht" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 11 10:31:57 crc kubenswrapper[4840]: I0311 10:31:57.448104 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" containerName="machine-config-daemon" containerID="cri-o://0fa02edfa90522d9bb5a7465f18679bbe488a4c47686d2e7bed64f7d3b2e1873" gracePeriod=600 Mar 11 10:31:57 crc kubenswrapper[4840]: E0311 10:31:57.591377 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 10:31:58 crc kubenswrapper[4840]: I0311 10:31:58.371150 4840 generic.go:334] "Generic (PLEG): container finished" podID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" containerID="0fa02edfa90522d9bb5a7465f18679bbe488a4c47686d2e7bed64f7d3b2e1873" exitCode=0 Mar 11 10:31:58 crc kubenswrapper[4840]: I0311 10:31:58.371219 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-brtht" event={"ID":"8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d","Type":"ContainerDied","Data":"0fa02edfa90522d9bb5a7465f18679bbe488a4c47686d2e7bed64f7d3b2e1873"} Mar 11 10:31:58 crc kubenswrapper[4840]: I0311 10:31:58.371270 4840 scope.go:117] "RemoveContainer" containerID="8ecd5e8792daa32307a0f073aedb0fb34456f86b6adb94e207aa2934b241391a" Mar 11 10:31:58 crc kubenswrapper[4840]: I0311 10:31:58.372173 4840 scope.go:117] "RemoveContainer" containerID="0fa02edfa90522d9bb5a7465f18679bbe488a4c47686d2e7bed64f7d3b2e1873" Mar 11 10:31:58 crc kubenswrapper[4840]: E0311 10:31:58.372856 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 10:32:00 crc kubenswrapper[4840]: I0311 10:32:00.141190 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29553752-2gkm4"] Mar 11 10:32:00 crc kubenswrapper[4840]: E0311 10:32:00.141894 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b583fafd-25bf-4a4b-86c0-1edf529c5251" containerName="extract-content" Mar 11 10:32:00 crc kubenswrapper[4840]: I0311 10:32:00.141908 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="b583fafd-25bf-4a4b-86c0-1edf529c5251" containerName="extract-content" Mar 11 10:32:00 crc kubenswrapper[4840]: E0311 10:32:00.141928 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b583fafd-25bf-4a4b-86c0-1edf529c5251" containerName="registry-server" Mar 11 10:32:00 crc kubenswrapper[4840]: I0311 10:32:00.141934 4840 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="b583fafd-25bf-4a4b-86c0-1edf529c5251" containerName="registry-server" Mar 11 10:32:00 crc kubenswrapper[4840]: E0311 10:32:00.141949 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b583fafd-25bf-4a4b-86c0-1edf529c5251" containerName="extract-utilities" Mar 11 10:32:00 crc kubenswrapper[4840]: I0311 10:32:00.141954 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="b583fafd-25bf-4a4b-86c0-1edf529c5251" containerName="extract-utilities" Mar 11 10:32:00 crc kubenswrapper[4840]: I0311 10:32:00.142092 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="b583fafd-25bf-4a4b-86c0-1edf529c5251" containerName="registry-server" Mar 11 10:32:00 crc kubenswrapper[4840]: I0311 10:32:00.142626 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553752-2gkm4" Mar 11 10:32:00 crc kubenswrapper[4840]: I0311 10:32:00.146669 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 11 10:32:00 crc kubenswrapper[4840]: I0311 10:32:00.147045 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-q6lwc" Mar 11 10:32:00 crc kubenswrapper[4840]: I0311 10:32:00.147208 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 11 10:32:00 crc kubenswrapper[4840]: I0311 10:32:00.148599 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553752-2gkm4"] Mar 11 10:32:00 crc kubenswrapper[4840]: I0311 10:32:00.170854 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tztjf\" (UniqueName: \"kubernetes.io/projected/3797a739-a078-4a85-8d90-b5c91b870816-kube-api-access-tztjf\") pod \"auto-csr-approver-29553752-2gkm4\" (UID: \"3797a739-a078-4a85-8d90-b5c91b870816\") " 
pod="openshift-infra/auto-csr-approver-29553752-2gkm4" Mar 11 10:32:00 crc kubenswrapper[4840]: I0311 10:32:00.271493 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tztjf\" (UniqueName: \"kubernetes.io/projected/3797a739-a078-4a85-8d90-b5c91b870816-kube-api-access-tztjf\") pod \"auto-csr-approver-29553752-2gkm4\" (UID: \"3797a739-a078-4a85-8d90-b5c91b870816\") " pod="openshift-infra/auto-csr-approver-29553752-2gkm4" Mar 11 10:32:00 crc kubenswrapper[4840]: I0311 10:32:00.290914 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tztjf\" (UniqueName: \"kubernetes.io/projected/3797a739-a078-4a85-8d90-b5c91b870816-kube-api-access-tztjf\") pod \"auto-csr-approver-29553752-2gkm4\" (UID: \"3797a739-a078-4a85-8d90-b5c91b870816\") " pod="openshift-infra/auto-csr-approver-29553752-2gkm4" Mar 11 10:32:00 crc kubenswrapper[4840]: I0311 10:32:00.462396 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553752-2gkm4" Mar 11 10:32:00 crc kubenswrapper[4840]: I0311 10:32:00.725498 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553752-2gkm4"] Mar 11 10:32:00 crc kubenswrapper[4840]: I0311 10:32:00.738269 4840 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 11 10:32:01 crc kubenswrapper[4840]: I0311 10:32:01.422692 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553752-2gkm4" event={"ID":"3797a739-a078-4a85-8d90-b5c91b870816","Type":"ContainerStarted","Data":"67fd49ff75649e9331a8868a2903ae0c3fcbcc2b5a263099630a230b77e396ef"} Mar 11 10:32:02 crc kubenswrapper[4840]: I0311 10:32:02.434694 4840 generic.go:334] "Generic (PLEG): container finished" podID="3797a739-a078-4a85-8d90-b5c91b870816" containerID="39352bb0ccc7f7f09e913882dff99b43401e9a895ed462f426c1e66e5853f3f4" exitCode=0 Mar 
11 10:32:02 crc kubenswrapper[4840]: I0311 10:32:02.434775 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553752-2gkm4" event={"ID":"3797a739-a078-4a85-8d90-b5c91b870816","Type":"ContainerDied","Data":"39352bb0ccc7f7f09e913882dff99b43401e9a895ed462f426c1e66e5853f3f4"} Mar 11 10:32:03 crc kubenswrapper[4840]: I0311 10:32:03.862900 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553752-2gkm4" Mar 11 10:32:03 crc kubenswrapper[4840]: I0311 10:32:03.943435 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tztjf\" (UniqueName: \"kubernetes.io/projected/3797a739-a078-4a85-8d90-b5c91b870816-kube-api-access-tztjf\") pod \"3797a739-a078-4a85-8d90-b5c91b870816\" (UID: \"3797a739-a078-4a85-8d90-b5c91b870816\") " Mar 11 10:32:03 crc kubenswrapper[4840]: I0311 10:32:03.951937 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3797a739-a078-4a85-8d90-b5c91b870816-kube-api-access-tztjf" (OuterVolumeSpecName: "kube-api-access-tztjf") pod "3797a739-a078-4a85-8d90-b5c91b870816" (UID: "3797a739-a078-4a85-8d90-b5c91b870816"). InnerVolumeSpecName "kube-api-access-tztjf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:32:04 crc kubenswrapper[4840]: I0311 10:32:04.045313 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tztjf\" (UniqueName: \"kubernetes.io/projected/3797a739-a078-4a85-8d90-b5c91b870816-kube-api-access-tztjf\") on node \"crc\" DevicePath \"\"" Mar 11 10:32:04 crc kubenswrapper[4840]: I0311 10:32:04.455643 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553752-2gkm4" event={"ID":"3797a739-a078-4a85-8d90-b5c91b870816","Type":"ContainerDied","Data":"67fd49ff75649e9331a8868a2903ae0c3fcbcc2b5a263099630a230b77e396ef"} Mar 11 10:32:04 crc kubenswrapper[4840]: I0311 10:32:04.455972 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="67fd49ff75649e9331a8868a2903ae0c3fcbcc2b5a263099630a230b77e396ef" Mar 11 10:32:04 crc kubenswrapper[4840]: I0311 10:32:04.455693 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553752-2gkm4" Mar 11 10:32:04 crc kubenswrapper[4840]: I0311 10:32:04.948921 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29553746-hhwj8"] Mar 11 10:32:04 crc kubenswrapper[4840]: I0311 10:32:04.955851 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29553746-hhwj8"] Mar 11 10:32:06 crc kubenswrapper[4840]: I0311 10:32:06.079761 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="939d8d3a-89e2-438a-8c26-f54960a3b0b7" path="/var/lib/kubelet/pods/939d8d3a-89e2-438a-8c26-f54960a3b0b7/volumes" Mar 11 10:32:11 crc kubenswrapper[4840]: I0311 10:32:11.061316 4840 scope.go:117] "RemoveContainer" containerID="0fa02edfa90522d9bb5a7465f18679bbe488a4c47686d2e7bed64f7d3b2e1873" Mar 11 10:32:11 crc kubenswrapper[4840]: E0311 10:32:11.062748 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 10:32:13 crc kubenswrapper[4840]: I0311 10:32:13.070566 4840 scope.go:117] "RemoveContainer" containerID="58c50eafa84688b091e12d0027ebb087a753a472bc3d686484f0f31baa67ec96" Mar 11 10:32:13 crc kubenswrapper[4840]: I0311 10:32:13.137613 4840 scope.go:117] "RemoveContainer" containerID="513b5527e4ba800c455c6d9f966ecf1ed8db6ae4b1831555b71535ec9a9bd388" Mar 11 10:32:24 crc kubenswrapper[4840]: I0311 10:32:24.060814 4840 scope.go:117] "RemoveContainer" containerID="0fa02edfa90522d9bb5a7465f18679bbe488a4c47686d2e7bed64f7d3b2e1873" Mar 11 10:32:24 crc kubenswrapper[4840]: E0311 10:32:24.062071 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 10:32:37 crc kubenswrapper[4840]: I0311 10:32:37.064149 4840 scope.go:117] "RemoveContainer" containerID="0fa02edfa90522d9bb5a7465f18679bbe488a4c47686d2e7bed64f7d3b2e1873" Mar 11 10:32:37 crc kubenswrapper[4840]: E0311 10:32:37.065065 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 10:32:49 crc kubenswrapper[4840]: I0311 10:32:49.061448 4840 scope.go:117] "RemoveContainer" containerID="0fa02edfa90522d9bb5a7465f18679bbe488a4c47686d2e7bed64f7d3b2e1873" Mar 11 10:32:49 crc kubenswrapper[4840]: E0311 10:32:49.062233 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 10:33:04 crc kubenswrapper[4840]: I0311 10:33:04.062003 4840 scope.go:117] "RemoveContainer" containerID="0fa02edfa90522d9bb5a7465f18679bbe488a4c47686d2e7bed64f7d3b2e1873" Mar 11 10:33:04 crc kubenswrapper[4840]: E0311 10:33:04.064817 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 10:33:13 crc kubenswrapper[4840]: I0311 10:33:13.248280 4840 scope.go:117] "RemoveContainer" containerID="adbda499061563c301fc0914bd80ca5042a475f1f0f215cff5038698a761579c" Mar 11 10:33:18 crc kubenswrapper[4840]: I0311 10:33:18.060934 4840 scope.go:117] "RemoveContainer" containerID="0fa02edfa90522d9bb5a7465f18679bbe488a4c47686d2e7bed64f7d3b2e1873" Mar 11 10:33:18 crc kubenswrapper[4840]: E0311 10:33:18.062241 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 10:33:33 crc kubenswrapper[4840]: I0311 10:33:33.063068 4840 scope.go:117] "RemoveContainer" containerID="0fa02edfa90522d9bb5a7465f18679bbe488a4c47686d2e7bed64f7d3b2e1873" Mar 11 10:33:33 crc kubenswrapper[4840]: E0311 10:33:33.063816 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 10:33:44 crc kubenswrapper[4840]: I0311 10:33:44.061085 4840 scope.go:117] "RemoveContainer" containerID="0fa02edfa90522d9bb5a7465f18679bbe488a4c47686d2e7bed64f7d3b2e1873" Mar 11 10:33:44 crc kubenswrapper[4840]: E0311 10:33:44.062352 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 10:33:59 crc kubenswrapper[4840]: I0311 10:33:59.059844 4840 scope.go:117] "RemoveContainer" containerID="0fa02edfa90522d9bb5a7465f18679bbe488a4c47686d2e7bed64f7d3b2e1873" Mar 11 10:33:59 crc kubenswrapper[4840]: E0311 10:33:59.060807 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 10:34:00 crc kubenswrapper[4840]: I0311 10:34:00.164892 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29553754-xplpc"] Mar 11 10:34:00 crc kubenswrapper[4840]: E0311 10:34:00.165423 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3797a739-a078-4a85-8d90-b5c91b870816" containerName="oc" Mar 11 10:34:00 crc kubenswrapper[4840]: I0311 10:34:00.165446 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="3797a739-a078-4a85-8d90-b5c91b870816" containerName="oc" Mar 11 10:34:00 crc kubenswrapper[4840]: I0311 10:34:00.165873 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="3797a739-a078-4a85-8d90-b5c91b870816" containerName="oc" Mar 11 10:34:00 crc kubenswrapper[4840]: I0311 10:34:00.166759 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553754-xplpc" Mar 11 10:34:00 crc kubenswrapper[4840]: I0311 10:34:00.169492 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-q6lwc" Mar 11 10:34:00 crc kubenswrapper[4840]: I0311 10:34:00.170537 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 11 10:34:00 crc kubenswrapper[4840]: I0311 10:34:00.170651 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 11 10:34:00 crc kubenswrapper[4840]: I0311 10:34:00.176961 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553754-xplpc"] Mar 11 10:34:00 crc kubenswrapper[4840]: I0311 10:34:00.290123 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzshn\" (UniqueName: \"kubernetes.io/projected/71ef9adb-f0e7-4c7c-971e-894bd7e62ebf-kube-api-access-pzshn\") pod \"auto-csr-approver-29553754-xplpc\" (UID: \"71ef9adb-f0e7-4c7c-971e-894bd7e62ebf\") " pod="openshift-infra/auto-csr-approver-29553754-xplpc" Mar 11 10:34:00 crc kubenswrapper[4840]: I0311 10:34:00.391876 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzshn\" (UniqueName: \"kubernetes.io/projected/71ef9adb-f0e7-4c7c-971e-894bd7e62ebf-kube-api-access-pzshn\") pod \"auto-csr-approver-29553754-xplpc\" (UID: \"71ef9adb-f0e7-4c7c-971e-894bd7e62ebf\") " pod="openshift-infra/auto-csr-approver-29553754-xplpc" Mar 11 10:34:00 crc kubenswrapper[4840]: I0311 10:34:00.414964 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzshn\" (UniqueName: \"kubernetes.io/projected/71ef9adb-f0e7-4c7c-971e-894bd7e62ebf-kube-api-access-pzshn\") pod \"auto-csr-approver-29553754-xplpc\" (UID: \"71ef9adb-f0e7-4c7c-971e-894bd7e62ebf\") " 
pod="openshift-infra/auto-csr-approver-29553754-xplpc" Mar 11 10:34:00 crc kubenswrapper[4840]: I0311 10:34:00.495660 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553754-xplpc" Mar 11 10:34:00 crc kubenswrapper[4840]: I0311 10:34:00.983963 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553754-xplpc"] Mar 11 10:34:01 crc kubenswrapper[4840]: I0311 10:34:01.540234 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553754-xplpc" event={"ID":"71ef9adb-f0e7-4c7c-971e-894bd7e62ebf","Type":"ContainerStarted","Data":"9a982da56bfc9ff2950ecbe4adfdc356693e7fa11f93d532a8fc45d34c94abba"} Mar 11 10:34:02 crc kubenswrapper[4840]: I0311 10:34:02.551642 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553754-xplpc" event={"ID":"71ef9adb-f0e7-4c7c-971e-894bd7e62ebf","Type":"ContainerStarted","Data":"92edfe5ac4048e527ecccf00a58f89273d1a7c5fc41684a969927949e477e800"} Mar 11 10:34:02 crc kubenswrapper[4840]: I0311 10:34:02.570241 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29553754-xplpc" podStartSLOduration=1.363736176 podStartE2EDuration="2.570221162s" podCreationTimestamp="2026-03-11 10:34:00 +0000 UTC" firstStartedPulling="2026-03-11 10:34:01.003370238 +0000 UTC m=+5839.669040053" lastFinishedPulling="2026-03-11 10:34:02.209855234 +0000 UTC m=+5840.875525039" observedRunningTime="2026-03-11 10:34:02.568587481 +0000 UTC m=+5841.234257296" watchObservedRunningTime="2026-03-11 10:34:02.570221162 +0000 UTC m=+5841.235890977" Mar 11 10:34:03 crc kubenswrapper[4840]: I0311 10:34:03.568925 4840 generic.go:334] "Generic (PLEG): container finished" podID="71ef9adb-f0e7-4c7c-971e-894bd7e62ebf" containerID="92edfe5ac4048e527ecccf00a58f89273d1a7c5fc41684a969927949e477e800" exitCode=0 Mar 11 10:34:03 crc 
kubenswrapper[4840]: I0311 10:34:03.569094 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553754-xplpc" event={"ID":"71ef9adb-f0e7-4c7c-971e-894bd7e62ebf","Type":"ContainerDied","Data":"92edfe5ac4048e527ecccf00a58f89273d1a7c5fc41684a969927949e477e800"} Mar 11 10:34:04 crc kubenswrapper[4840]: I0311 10:34:04.967236 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553754-xplpc" Mar 11 10:34:05 crc kubenswrapper[4840]: I0311 10:34:05.087957 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pzshn\" (UniqueName: \"kubernetes.io/projected/71ef9adb-f0e7-4c7c-971e-894bd7e62ebf-kube-api-access-pzshn\") pod \"71ef9adb-f0e7-4c7c-971e-894bd7e62ebf\" (UID: \"71ef9adb-f0e7-4c7c-971e-894bd7e62ebf\") " Mar 11 10:34:05 crc kubenswrapper[4840]: I0311 10:34:05.094027 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71ef9adb-f0e7-4c7c-971e-894bd7e62ebf-kube-api-access-pzshn" (OuterVolumeSpecName: "kube-api-access-pzshn") pod "71ef9adb-f0e7-4c7c-971e-894bd7e62ebf" (UID: "71ef9adb-f0e7-4c7c-971e-894bd7e62ebf"). InnerVolumeSpecName "kube-api-access-pzshn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:34:05 crc kubenswrapper[4840]: I0311 10:34:05.159246 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29553748-4hcgd"] Mar 11 10:34:05 crc kubenswrapper[4840]: I0311 10:34:05.164581 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29553748-4hcgd"] Mar 11 10:34:05 crc kubenswrapper[4840]: I0311 10:34:05.189530 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pzshn\" (UniqueName: \"kubernetes.io/projected/71ef9adb-f0e7-4c7c-971e-894bd7e62ebf-kube-api-access-pzshn\") on node \"crc\" DevicePath \"\"" Mar 11 10:34:05 crc kubenswrapper[4840]: I0311 10:34:05.596174 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553754-xplpc" event={"ID":"71ef9adb-f0e7-4c7c-971e-894bd7e62ebf","Type":"ContainerDied","Data":"9a982da56bfc9ff2950ecbe4adfdc356693e7fa11f93d532a8fc45d34c94abba"} Mar 11 10:34:05 crc kubenswrapper[4840]: I0311 10:34:05.596226 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a982da56bfc9ff2950ecbe4adfdc356693e7fa11f93d532a8fc45d34c94abba" Mar 11 10:34:05 crc kubenswrapper[4840]: I0311 10:34:05.596287 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553754-xplpc" Mar 11 10:34:06 crc kubenswrapper[4840]: I0311 10:34:06.076242 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04916aa0-c92b-4b8e-8375-179fc4d96ba0" path="/var/lib/kubelet/pods/04916aa0-c92b-4b8e-8375-179fc4d96ba0/volumes" Mar 11 10:34:12 crc kubenswrapper[4840]: I0311 10:34:12.070638 4840 scope.go:117] "RemoveContainer" containerID="0fa02edfa90522d9bb5a7465f18679bbe488a4c47686d2e7bed64f7d3b2e1873" Mar 11 10:34:12 crc kubenswrapper[4840]: E0311 10:34:12.072043 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 10:34:13 crc kubenswrapper[4840]: I0311 10:34:13.339000 4840 scope.go:117] "RemoveContainer" containerID="f9e7d522490fb11974615937edf1f31d3bd8272b383ee62d00846ae450599438" Mar 11 10:34:23 crc kubenswrapper[4840]: I0311 10:34:23.060450 4840 scope.go:117] "RemoveContainer" containerID="0fa02edfa90522d9bb5a7465f18679bbe488a4c47686d2e7bed64f7d3b2e1873" Mar 11 10:34:23 crc kubenswrapper[4840]: E0311 10:34:23.061733 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 10:34:36 crc kubenswrapper[4840]: I0311 10:34:36.060982 4840 scope.go:117] "RemoveContainer" 
containerID="0fa02edfa90522d9bb5a7465f18679bbe488a4c47686d2e7bed64f7d3b2e1873" Mar 11 10:34:36 crc kubenswrapper[4840]: E0311 10:34:36.062235 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 10:34:48 crc kubenswrapper[4840]: I0311 10:34:48.060788 4840 scope.go:117] "RemoveContainer" containerID="0fa02edfa90522d9bb5a7465f18679bbe488a4c47686d2e7bed64f7d3b2e1873" Mar 11 10:34:48 crc kubenswrapper[4840]: E0311 10:34:48.062144 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 10:35:00 crc kubenswrapper[4840]: I0311 10:35:00.060853 4840 scope.go:117] "RemoveContainer" containerID="0fa02edfa90522d9bb5a7465f18679bbe488a4c47686d2e7bed64f7d3b2e1873" Mar 11 10:35:00 crc kubenswrapper[4840]: E0311 10:35:00.061951 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 10:35:11 crc kubenswrapper[4840]: I0311 10:35:11.060915 4840 scope.go:117] 
"RemoveContainer" containerID="0fa02edfa90522d9bb5a7465f18679bbe488a4c47686d2e7bed64f7d3b2e1873" Mar 11 10:35:11 crc kubenswrapper[4840]: E0311 10:35:11.062826 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 10:35:26 crc kubenswrapper[4840]: I0311 10:35:26.060904 4840 scope.go:117] "RemoveContainer" containerID="0fa02edfa90522d9bb5a7465f18679bbe488a4c47686d2e7bed64f7d3b2e1873" Mar 11 10:35:26 crc kubenswrapper[4840]: E0311 10:35:26.062007 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 10:35:41 crc kubenswrapper[4840]: I0311 10:35:41.061557 4840 scope.go:117] "RemoveContainer" containerID="0fa02edfa90522d9bb5a7465f18679bbe488a4c47686d2e7bed64f7d3b2e1873" Mar 11 10:35:41 crc kubenswrapper[4840]: E0311 10:35:41.063223 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 10:35:52 crc kubenswrapper[4840]: I0311 10:35:52.069611 
4840 scope.go:117] "RemoveContainer" containerID="0fa02edfa90522d9bb5a7465f18679bbe488a4c47686d2e7bed64f7d3b2e1873" Mar 11 10:35:52 crc kubenswrapper[4840]: E0311 10:35:52.070480 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 10:36:00 crc kubenswrapper[4840]: I0311 10:36:00.154223 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29553756-mw76m"] Mar 11 10:36:00 crc kubenswrapper[4840]: E0311 10:36:00.155463 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71ef9adb-f0e7-4c7c-971e-894bd7e62ebf" containerName="oc" Mar 11 10:36:00 crc kubenswrapper[4840]: I0311 10:36:00.155520 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="71ef9adb-f0e7-4c7c-971e-894bd7e62ebf" containerName="oc" Mar 11 10:36:00 crc kubenswrapper[4840]: I0311 10:36:00.155845 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="71ef9adb-f0e7-4c7c-971e-894bd7e62ebf" containerName="oc" Mar 11 10:36:00 crc kubenswrapper[4840]: I0311 10:36:00.156915 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553756-mw76m" Mar 11 10:36:00 crc kubenswrapper[4840]: I0311 10:36:00.161376 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-q6lwc" Mar 11 10:36:00 crc kubenswrapper[4840]: I0311 10:36:00.161800 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 11 10:36:00 crc kubenswrapper[4840]: I0311 10:36:00.162879 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 11 10:36:00 crc kubenswrapper[4840]: I0311 10:36:00.165879 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553756-mw76m"] Mar 11 10:36:00 crc kubenswrapper[4840]: I0311 10:36:00.333810 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9zft\" (UniqueName: \"kubernetes.io/projected/eb0b4780-16bd-472a-b302-fec7877e8b3e-kube-api-access-j9zft\") pod \"auto-csr-approver-29553756-mw76m\" (UID: \"eb0b4780-16bd-472a-b302-fec7877e8b3e\") " pod="openshift-infra/auto-csr-approver-29553756-mw76m" Mar 11 10:36:00 crc kubenswrapper[4840]: I0311 10:36:00.435296 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9zft\" (UniqueName: \"kubernetes.io/projected/eb0b4780-16bd-472a-b302-fec7877e8b3e-kube-api-access-j9zft\") pod \"auto-csr-approver-29553756-mw76m\" (UID: \"eb0b4780-16bd-472a-b302-fec7877e8b3e\") " pod="openshift-infra/auto-csr-approver-29553756-mw76m" Mar 11 10:36:00 crc kubenswrapper[4840]: I0311 10:36:00.459977 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9zft\" (UniqueName: \"kubernetes.io/projected/eb0b4780-16bd-472a-b302-fec7877e8b3e-kube-api-access-j9zft\") pod \"auto-csr-approver-29553756-mw76m\" (UID: \"eb0b4780-16bd-472a-b302-fec7877e8b3e\") " 
pod="openshift-infra/auto-csr-approver-29553756-mw76m" Mar 11 10:36:00 crc kubenswrapper[4840]: I0311 10:36:00.478103 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553756-mw76m" Mar 11 10:36:00 crc kubenswrapper[4840]: I0311 10:36:00.904484 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553756-mw76m"] Mar 11 10:36:01 crc kubenswrapper[4840]: I0311 10:36:01.813603 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553756-mw76m" event={"ID":"eb0b4780-16bd-472a-b302-fec7877e8b3e","Type":"ContainerStarted","Data":"6f7031bc0946acd190f1b8412b46331dc2bfa006175c5862dee9b1eb46368101"} Mar 11 10:36:02 crc kubenswrapper[4840]: I0311 10:36:02.828600 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553756-mw76m" event={"ID":"eb0b4780-16bd-472a-b302-fec7877e8b3e","Type":"ContainerStarted","Data":"b552704211b6882cfafd45b5574c95bd174bafa296d69e20cb64a71e0bc1d91c"} Mar 11 10:36:03 crc kubenswrapper[4840]: I0311 10:36:03.845300 4840 generic.go:334] "Generic (PLEG): container finished" podID="eb0b4780-16bd-472a-b302-fec7877e8b3e" containerID="b552704211b6882cfafd45b5574c95bd174bafa296d69e20cb64a71e0bc1d91c" exitCode=0 Mar 11 10:36:03 crc kubenswrapper[4840]: I0311 10:36:03.845385 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553756-mw76m" event={"ID":"eb0b4780-16bd-472a-b302-fec7877e8b3e","Type":"ContainerDied","Data":"b552704211b6882cfafd45b5574c95bd174bafa296d69e20cb64a71e0bc1d91c"} Mar 11 10:36:04 crc kubenswrapper[4840]: I0311 10:36:04.178260 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553756-mw76m" Mar 11 10:36:04 crc kubenswrapper[4840]: I0311 10:36:04.238062 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j9zft\" (UniqueName: \"kubernetes.io/projected/eb0b4780-16bd-472a-b302-fec7877e8b3e-kube-api-access-j9zft\") pod \"eb0b4780-16bd-472a-b302-fec7877e8b3e\" (UID: \"eb0b4780-16bd-472a-b302-fec7877e8b3e\") " Mar 11 10:36:04 crc kubenswrapper[4840]: I0311 10:36:04.249286 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb0b4780-16bd-472a-b302-fec7877e8b3e-kube-api-access-j9zft" (OuterVolumeSpecName: "kube-api-access-j9zft") pod "eb0b4780-16bd-472a-b302-fec7877e8b3e" (UID: "eb0b4780-16bd-472a-b302-fec7877e8b3e"). InnerVolumeSpecName "kube-api-access-j9zft". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:36:04 crc kubenswrapper[4840]: I0311 10:36:04.341368 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j9zft\" (UniqueName: \"kubernetes.io/projected/eb0b4780-16bd-472a-b302-fec7877e8b3e-kube-api-access-j9zft\") on node \"crc\" DevicePath \"\"" Mar 11 10:36:04 crc kubenswrapper[4840]: I0311 10:36:04.866198 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553756-mw76m" event={"ID":"eb0b4780-16bd-472a-b302-fec7877e8b3e","Type":"ContainerDied","Data":"6f7031bc0946acd190f1b8412b46331dc2bfa006175c5862dee9b1eb46368101"} Mar 11 10:36:04 crc kubenswrapper[4840]: I0311 10:36:04.866269 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f7031bc0946acd190f1b8412b46331dc2bfa006175c5862dee9b1eb46368101" Mar 11 10:36:04 crc kubenswrapper[4840]: I0311 10:36:04.866328 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553756-mw76m" Mar 11 10:36:05 crc kubenswrapper[4840]: I0311 10:36:05.260079 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29553750-zg7qd"] Mar 11 10:36:05 crc kubenswrapper[4840]: I0311 10:36:05.269421 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29553750-zg7qd"] Mar 11 10:36:06 crc kubenswrapper[4840]: I0311 10:36:06.076739 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ad286ee-6683-4c48-b195-935c29b048b7" path="/var/lib/kubelet/pods/4ad286ee-6683-4c48-b195-935c29b048b7/volumes" Mar 11 10:36:07 crc kubenswrapper[4840]: I0311 10:36:07.060611 4840 scope.go:117] "RemoveContainer" containerID="0fa02edfa90522d9bb5a7465f18679bbe488a4c47686d2e7bed64f7d3b2e1873" Mar 11 10:36:07 crc kubenswrapper[4840]: E0311 10:36:07.061042 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 10:36:13 crc kubenswrapper[4840]: I0311 10:36:13.450806 4840 scope.go:117] "RemoveContainer" containerID="f85135619e8d7cb197c10cfe395fb2fec9b5ecd26a1b4bdedc0f3bc8263a48dd" Mar 11 10:36:20 crc kubenswrapper[4840]: I0311 10:36:20.060293 4840 scope.go:117] "RemoveContainer" containerID="0fa02edfa90522d9bb5a7465f18679bbe488a4c47686d2e7bed64f7d3b2e1873" Mar 11 10:36:20 crc kubenswrapper[4840]: E0311 10:36:20.061349 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 10:36:35 crc kubenswrapper[4840]: I0311 10:36:35.060567 4840 scope.go:117] "RemoveContainer" containerID="0fa02edfa90522d9bb5a7465f18679bbe488a4c47686d2e7bed64f7d3b2e1873" Mar 11 10:36:35 crc kubenswrapper[4840]: E0311 10:36:35.062342 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 10:36:43 crc kubenswrapper[4840]: I0311 10:36:43.044815 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-699f-account-create-update-ncdx2"] Mar 11 10:36:43 crc kubenswrapper[4840]: I0311 10:36:43.056655 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-699f-account-create-update-ncdx2"] Mar 11 10:36:43 crc kubenswrapper[4840]: I0311 10:36:43.068427 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-qbnnf"] Mar 11 10:36:43 crc kubenswrapper[4840]: I0311 10:36:43.077103 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-qbnnf"] Mar 11 10:36:44 crc kubenswrapper[4840]: I0311 10:36:44.071478 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3fc84158-c763-463c-8d98-317545c9f29b" path="/var/lib/kubelet/pods/3fc84158-c763-463c-8d98-317545c9f29b/volumes" Mar 11 10:36:44 crc kubenswrapper[4840]: I0311 10:36:44.072642 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c951d98c-4b85-4bb6-adcb-ddd8e8159304" 
path="/var/lib/kubelet/pods/c951d98c-4b85-4bb6-adcb-ddd8e8159304/volumes" Mar 11 10:36:50 crc kubenswrapper[4840]: I0311 10:36:50.064865 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-4wd8s"] Mar 11 10:36:50 crc kubenswrapper[4840]: I0311 10:36:50.088735 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-4wd8s"] Mar 11 10:36:50 crc kubenswrapper[4840]: I0311 10:36:50.089451 4840 scope.go:117] "RemoveContainer" containerID="0fa02edfa90522d9bb5a7465f18679bbe488a4c47686d2e7bed64f7d3b2e1873" Mar 11 10:36:50 crc kubenswrapper[4840]: E0311 10:36:50.089756 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 10:36:52 crc kubenswrapper[4840]: I0311 10:36:52.075490 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5752daf0-741c-47a3-a28d-0997d6c400f1" path="/var/lib/kubelet/pods/5752daf0-741c-47a3-a28d-0997d6c400f1/volumes" Mar 11 10:37:04 crc kubenswrapper[4840]: I0311 10:37:04.038383 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-mmzff"] Mar 11 10:37:04 crc kubenswrapper[4840]: I0311 10:37:04.050541 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-mmzff"] Mar 11 10:37:04 crc kubenswrapper[4840]: I0311 10:37:04.078695 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a00cf890-ffdb-45d6-9d2a-dfaa6efc0b68" path="/var/lib/kubelet/pods/a00cf890-ffdb-45d6-9d2a-dfaa6efc0b68/volumes" Mar 11 10:37:05 crc kubenswrapper[4840]: I0311 10:37:05.060987 4840 scope.go:117] "RemoveContainer" 
containerID="0fa02edfa90522d9bb5a7465f18679bbe488a4c47686d2e7bed64f7d3b2e1873" Mar 11 10:37:05 crc kubenswrapper[4840]: I0311 10:37:05.437508 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-brtht" event={"ID":"8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d","Type":"ContainerStarted","Data":"0b1dc94236fd74074941a3853c81f7449b469e1e2e185c24f80ca9a36ca3b1f5"} Mar 11 10:37:13 crc kubenswrapper[4840]: I0311 10:37:13.581408 4840 scope.go:117] "RemoveContainer" containerID="f6604405be5d3d01272e1c73ef7e5372f5edce5bd15205b9ea41f1de14273938" Mar 11 10:37:13 crc kubenswrapper[4840]: I0311 10:37:13.630476 4840 scope.go:117] "RemoveContainer" containerID="ad2751d7126f9b67f4929c11bc060b50e402baada7ca7612ac500a76bd29e02f" Mar 11 10:37:13 crc kubenswrapper[4840]: I0311 10:37:13.656002 4840 scope.go:117] "RemoveContainer" containerID="96bf277b1f3889e50d403c29447ca50d41f6154676a73247e7537804f9a8a703" Mar 11 10:37:13 crc kubenswrapper[4840]: I0311 10:37:13.712728 4840 scope.go:117] "RemoveContainer" containerID="fb5b8bf1ef92e6a289d0ec6deea20a306b2bbf422b714bea6aeec3e817b44140" Mar 11 10:38:00 crc kubenswrapper[4840]: I0311 10:38:00.204103 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29553758-mdqls"] Mar 11 10:38:00 crc kubenswrapper[4840]: E0311 10:38:00.205327 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb0b4780-16bd-472a-b302-fec7877e8b3e" containerName="oc" Mar 11 10:38:00 crc kubenswrapper[4840]: I0311 10:38:00.205353 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb0b4780-16bd-472a-b302-fec7877e8b3e" containerName="oc" Mar 11 10:38:00 crc kubenswrapper[4840]: I0311 10:38:00.205764 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb0b4780-16bd-472a-b302-fec7877e8b3e" containerName="oc" Mar 11 10:38:00 crc kubenswrapper[4840]: I0311 10:38:00.206711 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553758-mdqls" Mar 11 10:38:00 crc kubenswrapper[4840]: I0311 10:38:00.208707 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-q6lwc" Mar 11 10:38:00 crc kubenswrapper[4840]: I0311 10:38:00.210535 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 11 10:38:00 crc kubenswrapper[4840]: I0311 10:38:00.210644 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 11 10:38:00 crc kubenswrapper[4840]: I0311 10:38:00.230273 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553758-mdqls"] Mar 11 10:38:00 crc kubenswrapper[4840]: I0311 10:38:00.329205 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spmjx\" (UniqueName: \"kubernetes.io/projected/e7cad627-01a5-42a2-a7d1-87f925677e96-kube-api-access-spmjx\") pod \"auto-csr-approver-29553758-mdqls\" (UID: \"e7cad627-01a5-42a2-a7d1-87f925677e96\") " pod="openshift-infra/auto-csr-approver-29553758-mdqls" Mar 11 10:38:00 crc kubenswrapper[4840]: I0311 10:38:00.431220 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-spmjx\" (UniqueName: \"kubernetes.io/projected/e7cad627-01a5-42a2-a7d1-87f925677e96-kube-api-access-spmjx\") pod \"auto-csr-approver-29553758-mdqls\" (UID: \"e7cad627-01a5-42a2-a7d1-87f925677e96\") " pod="openshift-infra/auto-csr-approver-29553758-mdqls" Mar 11 10:38:00 crc kubenswrapper[4840]: I0311 10:38:00.460049 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-spmjx\" (UniqueName: \"kubernetes.io/projected/e7cad627-01a5-42a2-a7d1-87f925677e96-kube-api-access-spmjx\") pod \"auto-csr-approver-29553758-mdqls\" (UID: \"e7cad627-01a5-42a2-a7d1-87f925677e96\") " 
pod="openshift-infra/auto-csr-approver-29553758-mdqls" Mar 11 10:38:00 crc kubenswrapper[4840]: I0311 10:38:00.530097 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553758-mdqls" Mar 11 10:38:00 crc kubenswrapper[4840]: I0311 10:38:00.957363 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553758-mdqls"] Mar 11 10:38:00 crc kubenswrapper[4840]: I0311 10:38:00.969038 4840 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 11 10:38:00 crc kubenswrapper[4840]: I0311 10:38:00.986136 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553758-mdqls" event={"ID":"e7cad627-01a5-42a2-a7d1-87f925677e96","Type":"ContainerStarted","Data":"791cbb6373f7b7dbf99adda7de411716318bba23b8c39e29dadd5b5d64492bd7"} Mar 11 10:38:03 crc kubenswrapper[4840]: I0311 10:38:03.004114 4840 generic.go:334] "Generic (PLEG): container finished" podID="e7cad627-01a5-42a2-a7d1-87f925677e96" containerID="8e3e6e271475ff5d9af70ee9fc19d373686fe89a1a354789882212c1aad15704" exitCode=0 Mar 11 10:38:03 crc kubenswrapper[4840]: I0311 10:38:03.004155 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553758-mdqls" event={"ID":"e7cad627-01a5-42a2-a7d1-87f925677e96","Type":"ContainerDied","Data":"8e3e6e271475ff5d9af70ee9fc19d373686fe89a1a354789882212c1aad15704"} Mar 11 10:38:04 crc kubenswrapper[4840]: I0311 10:38:04.374261 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553758-mdqls" Mar 11 10:38:04 crc kubenswrapper[4840]: I0311 10:38:04.547712 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-spmjx\" (UniqueName: \"kubernetes.io/projected/e7cad627-01a5-42a2-a7d1-87f925677e96-kube-api-access-spmjx\") pod \"e7cad627-01a5-42a2-a7d1-87f925677e96\" (UID: \"e7cad627-01a5-42a2-a7d1-87f925677e96\") " Mar 11 10:38:04 crc kubenswrapper[4840]: I0311 10:38:04.555799 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7cad627-01a5-42a2-a7d1-87f925677e96-kube-api-access-spmjx" (OuterVolumeSpecName: "kube-api-access-spmjx") pod "e7cad627-01a5-42a2-a7d1-87f925677e96" (UID: "e7cad627-01a5-42a2-a7d1-87f925677e96"). InnerVolumeSpecName "kube-api-access-spmjx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:38:04 crc kubenswrapper[4840]: I0311 10:38:04.649768 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-spmjx\" (UniqueName: \"kubernetes.io/projected/e7cad627-01a5-42a2-a7d1-87f925677e96-kube-api-access-spmjx\") on node \"crc\" DevicePath \"\"" Mar 11 10:38:05 crc kubenswrapper[4840]: I0311 10:38:05.022887 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553758-mdqls" event={"ID":"e7cad627-01a5-42a2-a7d1-87f925677e96","Type":"ContainerDied","Data":"791cbb6373f7b7dbf99adda7de411716318bba23b8c39e29dadd5b5d64492bd7"} Mar 11 10:38:05 crc kubenswrapper[4840]: I0311 10:38:05.023141 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="791cbb6373f7b7dbf99adda7de411716318bba23b8c39e29dadd5b5d64492bd7" Mar 11 10:38:05 crc kubenswrapper[4840]: I0311 10:38:05.022966 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553758-mdqls" Mar 11 10:38:05 crc kubenswrapper[4840]: I0311 10:38:05.471460 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29553752-2gkm4"] Mar 11 10:38:05 crc kubenswrapper[4840]: I0311 10:38:05.480397 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29553752-2gkm4"] Mar 11 10:38:06 crc kubenswrapper[4840]: I0311 10:38:06.072395 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3797a739-a078-4a85-8d90-b5c91b870816" path="/var/lib/kubelet/pods/3797a739-a078-4a85-8d90-b5c91b870816/volumes" Mar 11 10:38:13 crc kubenswrapper[4840]: I0311 10:38:13.830198 4840 scope.go:117] "RemoveContainer" containerID="39352bb0ccc7f7f09e913882dff99b43401e9a895ed462f426c1e66e5853f3f4" Mar 11 10:38:54 crc kubenswrapper[4840]: I0311 10:38:54.570049 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8s8wd"] Mar 11 10:38:54 crc kubenswrapper[4840]: E0311 10:38:54.571868 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7cad627-01a5-42a2-a7d1-87f925677e96" containerName="oc" Mar 11 10:38:54 crc kubenswrapper[4840]: I0311 10:38:54.571897 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7cad627-01a5-42a2-a7d1-87f925677e96" containerName="oc" Mar 11 10:38:54 crc kubenswrapper[4840]: I0311 10:38:54.574697 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7cad627-01a5-42a2-a7d1-87f925677e96" containerName="oc" Mar 11 10:38:54 crc kubenswrapper[4840]: I0311 10:38:54.576610 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8s8wd" Mar 11 10:38:54 crc kubenswrapper[4840]: I0311 10:38:54.589945 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8s8wd"] Mar 11 10:38:54 crc kubenswrapper[4840]: I0311 10:38:54.663259 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aaf78b42-004a-4377-aacc-653befa6e2a5-utilities\") pod \"redhat-operators-8s8wd\" (UID: \"aaf78b42-004a-4377-aacc-653befa6e2a5\") " pod="openshift-marketplace/redhat-operators-8s8wd" Mar 11 10:38:54 crc kubenswrapper[4840]: I0311 10:38:54.663394 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aaf78b42-004a-4377-aacc-653befa6e2a5-catalog-content\") pod \"redhat-operators-8s8wd\" (UID: \"aaf78b42-004a-4377-aacc-653befa6e2a5\") " pod="openshift-marketplace/redhat-operators-8s8wd" Mar 11 10:38:54 crc kubenswrapper[4840]: I0311 10:38:54.663719 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7l2tb\" (UniqueName: \"kubernetes.io/projected/aaf78b42-004a-4377-aacc-653befa6e2a5-kube-api-access-7l2tb\") pod \"redhat-operators-8s8wd\" (UID: \"aaf78b42-004a-4377-aacc-653befa6e2a5\") " pod="openshift-marketplace/redhat-operators-8s8wd" Mar 11 10:38:54 crc kubenswrapper[4840]: I0311 10:38:54.765277 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aaf78b42-004a-4377-aacc-653befa6e2a5-catalog-content\") pod \"redhat-operators-8s8wd\" (UID: \"aaf78b42-004a-4377-aacc-653befa6e2a5\") " pod="openshift-marketplace/redhat-operators-8s8wd" Mar 11 10:38:54 crc kubenswrapper[4840]: I0311 10:38:54.765399 4840 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-7l2tb\" (UniqueName: \"kubernetes.io/projected/aaf78b42-004a-4377-aacc-653befa6e2a5-kube-api-access-7l2tb\") pod \"redhat-operators-8s8wd\" (UID: \"aaf78b42-004a-4377-aacc-653befa6e2a5\") " pod="openshift-marketplace/redhat-operators-8s8wd" Mar 11 10:38:54 crc kubenswrapper[4840]: I0311 10:38:54.765459 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aaf78b42-004a-4377-aacc-653befa6e2a5-utilities\") pod \"redhat-operators-8s8wd\" (UID: \"aaf78b42-004a-4377-aacc-653befa6e2a5\") " pod="openshift-marketplace/redhat-operators-8s8wd" Mar 11 10:38:54 crc kubenswrapper[4840]: I0311 10:38:54.766094 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aaf78b42-004a-4377-aacc-653befa6e2a5-utilities\") pod \"redhat-operators-8s8wd\" (UID: \"aaf78b42-004a-4377-aacc-653befa6e2a5\") " pod="openshift-marketplace/redhat-operators-8s8wd" Mar 11 10:38:54 crc kubenswrapper[4840]: I0311 10:38:54.766159 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aaf78b42-004a-4377-aacc-653befa6e2a5-catalog-content\") pod \"redhat-operators-8s8wd\" (UID: \"aaf78b42-004a-4377-aacc-653befa6e2a5\") " pod="openshift-marketplace/redhat-operators-8s8wd" Mar 11 10:38:54 crc kubenswrapper[4840]: I0311 10:38:54.791528 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7l2tb\" (UniqueName: \"kubernetes.io/projected/aaf78b42-004a-4377-aacc-653befa6e2a5-kube-api-access-7l2tb\") pod \"redhat-operators-8s8wd\" (UID: \"aaf78b42-004a-4377-aacc-653befa6e2a5\") " pod="openshift-marketplace/redhat-operators-8s8wd" Mar 11 10:38:54 crc kubenswrapper[4840]: I0311 10:38:54.923433 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8s8wd" Mar 11 10:38:55 crc kubenswrapper[4840]: I0311 10:38:55.375597 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8s8wd"] Mar 11 10:38:55 crc kubenswrapper[4840]: I0311 10:38:55.487200 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8s8wd" event={"ID":"aaf78b42-004a-4377-aacc-653befa6e2a5","Type":"ContainerStarted","Data":"6cfa3d9f9f88aff5d75d2d57e62038b127f5be335f3c7e0fb5f5a1b469e49df9"} Mar 11 10:38:56 crc kubenswrapper[4840]: I0311 10:38:56.501565 4840 generic.go:334] "Generic (PLEG): container finished" podID="aaf78b42-004a-4377-aacc-653befa6e2a5" containerID="cd09b4274aeeb8d6fb2411cd3b198107c0411d82f778c7462ee263197f1146fe" exitCode=0 Mar 11 10:38:56 crc kubenswrapper[4840]: I0311 10:38:56.501987 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8s8wd" event={"ID":"aaf78b42-004a-4377-aacc-653befa6e2a5","Type":"ContainerDied","Data":"cd09b4274aeeb8d6fb2411cd3b198107c0411d82f778c7462ee263197f1146fe"} Mar 11 10:38:58 crc kubenswrapper[4840]: I0311 10:38:58.524434 4840 generic.go:334] "Generic (PLEG): container finished" podID="aaf78b42-004a-4377-aacc-653befa6e2a5" containerID="490faa5eb743b9abdf9357c24c6b725af950bf92d208126d6e6b3d399505e85e" exitCode=0 Mar 11 10:38:58 crc kubenswrapper[4840]: I0311 10:38:58.525049 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8s8wd" event={"ID":"aaf78b42-004a-4377-aacc-653befa6e2a5","Type":"ContainerDied","Data":"490faa5eb743b9abdf9357c24c6b725af950bf92d208126d6e6b3d399505e85e"} Mar 11 10:38:59 crc kubenswrapper[4840]: I0311 10:38:59.539752 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8s8wd" 
event={"ID":"aaf78b42-004a-4377-aacc-653befa6e2a5","Type":"ContainerStarted","Data":"a14409f2e838183e47d50ecd591064541031d84697428f03968c2e29cb656419"} Mar 11 10:38:59 crc kubenswrapper[4840]: I0311 10:38:59.568017 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8s8wd" podStartSLOduration=3.102295682 podStartE2EDuration="5.567991511s" podCreationTimestamp="2026-03-11 10:38:54 +0000 UTC" firstStartedPulling="2026-03-11 10:38:56.506546089 +0000 UTC m=+6135.172215944" lastFinishedPulling="2026-03-11 10:38:58.972241928 +0000 UTC m=+6137.637911773" observedRunningTime="2026-03-11 10:38:59.564535254 +0000 UTC m=+6138.230205099" watchObservedRunningTime="2026-03-11 10:38:59.567991511 +0000 UTC m=+6138.233661346" Mar 11 10:39:04 crc kubenswrapper[4840]: I0311 10:39:04.924690 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8s8wd" Mar 11 10:39:04 crc kubenswrapper[4840]: I0311 10:39:04.926897 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8s8wd" Mar 11 10:39:05 crc kubenswrapper[4840]: I0311 10:39:05.996317 4840 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8s8wd" podUID="aaf78b42-004a-4377-aacc-653befa6e2a5" containerName="registry-server" probeResult="failure" output=< Mar 11 10:39:05 crc kubenswrapper[4840]: timeout: failed to connect service ":50051" within 1s Mar 11 10:39:05 crc kubenswrapper[4840]: > Mar 11 10:39:15 crc kubenswrapper[4840]: I0311 10:39:15.003748 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8s8wd" Mar 11 10:39:15 crc kubenswrapper[4840]: I0311 10:39:15.072874 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8s8wd" Mar 11 10:39:15 crc kubenswrapper[4840]: I0311 
10:39:15.266401 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8s8wd"] Mar 11 10:39:16 crc kubenswrapper[4840]: I0311 10:39:16.696160 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8s8wd" podUID="aaf78b42-004a-4377-aacc-653befa6e2a5" containerName="registry-server" containerID="cri-o://a14409f2e838183e47d50ecd591064541031d84697428f03968c2e29cb656419" gracePeriod=2 Mar 11 10:39:17 crc kubenswrapper[4840]: I0311 10:39:17.136767 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8s8wd" Mar 11 10:39:17 crc kubenswrapper[4840]: I0311 10:39:17.274238 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aaf78b42-004a-4377-aacc-653befa6e2a5-catalog-content\") pod \"aaf78b42-004a-4377-aacc-653befa6e2a5\" (UID: \"aaf78b42-004a-4377-aacc-653befa6e2a5\") " Mar 11 10:39:17 crc kubenswrapper[4840]: I0311 10:39:17.274611 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7l2tb\" (UniqueName: \"kubernetes.io/projected/aaf78b42-004a-4377-aacc-653befa6e2a5-kube-api-access-7l2tb\") pod \"aaf78b42-004a-4377-aacc-653befa6e2a5\" (UID: \"aaf78b42-004a-4377-aacc-653befa6e2a5\") " Mar 11 10:39:17 crc kubenswrapper[4840]: I0311 10:39:17.274890 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aaf78b42-004a-4377-aacc-653befa6e2a5-utilities\") pod \"aaf78b42-004a-4377-aacc-653befa6e2a5\" (UID: \"aaf78b42-004a-4377-aacc-653befa6e2a5\") " Mar 11 10:39:17 crc kubenswrapper[4840]: I0311 10:39:17.276052 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aaf78b42-004a-4377-aacc-653befa6e2a5-utilities" (OuterVolumeSpecName: 
"utilities") pod "aaf78b42-004a-4377-aacc-653befa6e2a5" (UID: "aaf78b42-004a-4377-aacc-653befa6e2a5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 10:39:17 crc kubenswrapper[4840]: I0311 10:39:17.292980 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aaf78b42-004a-4377-aacc-653befa6e2a5-kube-api-access-7l2tb" (OuterVolumeSpecName: "kube-api-access-7l2tb") pod "aaf78b42-004a-4377-aacc-653befa6e2a5" (UID: "aaf78b42-004a-4377-aacc-653befa6e2a5"). InnerVolumeSpecName "kube-api-access-7l2tb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:39:17 crc kubenswrapper[4840]: I0311 10:39:17.376652 4840 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aaf78b42-004a-4377-aacc-653befa6e2a5-utilities\") on node \"crc\" DevicePath \"\"" Mar 11 10:39:17 crc kubenswrapper[4840]: I0311 10:39:17.376683 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7l2tb\" (UniqueName: \"kubernetes.io/projected/aaf78b42-004a-4377-aacc-653befa6e2a5-kube-api-access-7l2tb\") on node \"crc\" DevicePath \"\"" Mar 11 10:39:17 crc kubenswrapper[4840]: I0311 10:39:17.416351 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aaf78b42-004a-4377-aacc-653befa6e2a5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "aaf78b42-004a-4377-aacc-653befa6e2a5" (UID: "aaf78b42-004a-4377-aacc-653befa6e2a5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 10:39:17 crc kubenswrapper[4840]: I0311 10:39:17.478117 4840 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aaf78b42-004a-4377-aacc-653befa6e2a5-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 11 10:39:17 crc kubenswrapper[4840]: I0311 10:39:17.705379 4840 generic.go:334] "Generic (PLEG): container finished" podID="aaf78b42-004a-4377-aacc-653befa6e2a5" containerID="a14409f2e838183e47d50ecd591064541031d84697428f03968c2e29cb656419" exitCode=0 Mar 11 10:39:17 crc kubenswrapper[4840]: I0311 10:39:17.705463 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8s8wd" event={"ID":"aaf78b42-004a-4377-aacc-653befa6e2a5","Type":"ContainerDied","Data":"a14409f2e838183e47d50ecd591064541031d84697428f03968c2e29cb656419"} Mar 11 10:39:17 crc kubenswrapper[4840]: I0311 10:39:17.705534 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8s8wd" event={"ID":"aaf78b42-004a-4377-aacc-653befa6e2a5","Type":"ContainerDied","Data":"6cfa3d9f9f88aff5d75d2d57e62038b127f5be335f3c7e0fb5f5a1b469e49df9"} Mar 11 10:39:17 crc kubenswrapper[4840]: I0311 10:39:17.705557 4840 scope.go:117] "RemoveContainer" containerID="a14409f2e838183e47d50ecd591064541031d84697428f03968c2e29cb656419" Mar 11 10:39:17 crc kubenswrapper[4840]: I0311 10:39:17.706522 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8s8wd" Mar 11 10:39:17 crc kubenswrapper[4840]: I0311 10:39:17.724853 4840 scope.go:117] "RemoveContainer" containerID="490faa5eb743b9abdf9357c24c6b725af950bf92d208126d6e6b3d399505e85e" Mar 11 10:39:17 crc kubenswrapper[4840]: I0311 10:39:17.743280 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8s8wd"] Mar 11 10:39:17 crc kubenswrapper[4840]: I0311 10:39:17.752047 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8s8wd"] Mar 11 10:39:17 crc kubenswrapper[4840]: I0311 10:39:17.769259 4840 scope.go:117] "RemoveContainer" containerID="cd09b4274aeeb8d6fb2411cd3b198107c0411d82f778c7462ee263197f1146fe" Mar 11 10:39:17 crc kubenswrapper[4840]: I0311 10:39:17.786754 4840 scope.go:117] "RemoveContainer" containerID="a14409f2e838183e47d50ecd591064541031d84697428f03968c2e29cb656419" Mar 11 10:39:17 crc kubenswrapper[4840]: E0311 10:39:17.787064 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a14409f2e838183e47d50ecd591064541031d84697428f03968c2e29cb656419\": container with ID starting with a14409f2e838183e47d50ecd591064541031d84697428f03968c2e29cb656419 not found: ID does not exist" containerID="a14409f2e838183e47d50ecd591064541031d84697428f03968c2e29cb656419" Mar 11 10:39:17 crc kubenswrapper[4840]: I0311 10:39:17.787117 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a14409f2e838183e47d50ecd591064541031d84697428f03968c2e29cb656419"} err="failed to get container status \"a14409f2e838183e47d50ecd591064541031d84697428f03968c2e29cb656419\": rpc error: code = NotFound desc = could not find container \"a14409f2e838183e47d50ecd591064541031d84697428f03968c2e29cb656419\": container with ID starting with a14409f2e838183e47d50ecd591064541031d84697428f03968c2e29cb656419 not found: ID does 
not exist" Mar 11 10:39:17 crc kubenswrapper[4840]: I0311 10:39:17.787152 4840 scope.go:117] "RemoveContainer" containerID="490faa5eb743b9abdf9357c24c6b725af950bf92d208126d6e6b3d399505e85e" Mar 11 10:39:17 crc kubenswrapper[4840]: E0311 10:39:17.787422 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"490faa5eb743b9abdf9357c24c6b725af950bf92d208126d6e6b3d399505e85e\": container with ID starting with 490faa5eb743b9abdf9357c24c6b725af950bf92d208126d6e6b3d399505e85e not found: ID does not exist" containerID="490faa5eb743b9abdf9357c24c6b725af950bf92d208126d6e6b3d399505e85e" Mar 11 10:39:17 crc kubenswrapper[4840]: I0311 10:39:17.787524 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"490faa5eb743b9abdf9357c24c6b725af950bf92d208126d6e6b3d399505e85e"} err="failed to get container status \"490faa5eb743b9abdf9357c24c6b725af950bf92d208126d6e6b3d399505e85e\": rpc error: code = NotFound desc = could not find container \"490faa5eb743b9abdf9357c24c6b725af950bf92d208126d6e6b3d399505e85e\": container with ID starting with 490faa5eb743b9abdf9357c24c6b725af950bf92d208126d6e6b3d399505e85e not found: ID does not exist" Mar 11 10:39:17 crc kubenswrapper[4840]: I0311 10:39:17.787575 4840 scope.go:117] "RemoveContainer" containerID="cd09b4274aeeb8d6fb2411cd3b198107c0411d82f778c7462ee263197f1146fe" Mar 11 10:39:17 crc kubenswrapper[4840]: E0311 10:39:17.788004 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd09b4274aeeb8d6fb2411cd3b198107c0411d82f778c7462ee263197f1146fe\": container with ID starting with cd09b4274aeeb8d6fb2411cd3b198107c0411d82f778c7462ee263197f1146fe not found: ID does not exist" containerID="cd09b4274aeeb8d6fb2411cd3b198107c0411d82f778c7462ee263197f1146fe" Mar 11 10:39:17 crc kubenswrapper[4840]: I0311 10:39:17.788031 4840 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd09b4274aeeb8d6fb2411cd3b198107c0411d82f778c7462ee263197f1146fe"} err="failed to get container status \"cd09b4274aeeb8d6fb2411cd3b198107c0411d82f778c7462ee263197f1146fe\": rpc error: code = NotFound desc = could not find container \"cd09b4274aeeb8d6fb2411cd3b198107c0411d82f778c7462ee263197f1146fe\": container with ID starting with cd09b4274aeeb8d6fb2411cd3b198107c0411d82f778c7462ee263197f1146fe not found: ID does not exist" Mar 11 10:39:18 crc kubenswrapper[4840]: I0311 10:39:18.078549 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aaf78b42-004a-4377-aacc-653befa6e2a5" path="/var/lib/kubelet/pods/aaf78b42-004a-4377-aacc-653befa6e2a5/volumes" Mar 11 10:39:23 crc kubenswrapper[4840]: I0311 10:39:23.429790 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8n7pg"] Mar 11 10:39:23 crc kubenswrapper[4840]: E0311 10:39:23.430670 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aaf78b42-004a-4377-aacc-653befa6e2a5" containerName="extract-utilities" Mar 11 10:39:23 crc kubenswrapper[4840]: I0311 10:39:23.430693 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="aaf78b42-004a-4377-aacc-653befa6e2a5" containerName="extract-utilities" Mar 11 10:39:23 crc kubenswrapper[4840]: E0311 10:39:23.430723 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aaf78b42-004a-4377-aacc-653befa6e2a5" containerName="extract-content" Mar 11 10:39:23 crc kubenswrapper[4840]: I0311 10:39:23.430736 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="aaf78b42-004a-4377-aacc-653befa6e2a5" containerName="extract-content" Mar 11 10:39:23 crc kubenswrapper[4840]: E0311 10:39:23.430764 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aaf78b42-004a-4377-aacc-653befa6e2a5" containerName="registry-server" Mar 11 10:39:23 crc kubenswrapper[4840]: I0311 10:39:23.430777 4840 
state_mem.go:107] "Deleted CPUSet assignment" podUID="aaf78b42-004a-4377-aacc-653befa6e2a5" containerName="registry-server" Mar 11 10:39:23 crc kubenswrapper[4840]: I0311 10:39:23.431090 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="aaf78b42-004a-4377-aacc-653befa6e2a5" containerName="registry-server" Mar 11 10:39:23 crc kubenswrapper[4840]: I0311 10:39:23.433090 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8n7pg" Mar 11 10:39:23 crc kubenswrapper[4840]: I0311 10:39:23.442256 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8n7pg"] Mar 11 10:39:23 crc kubenswrapper[4840]: I0311 10:39:23.593766 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gn6zl\" (UniqueName: \"kubernetes.io/projected/3ff01086-f616-4040-9c59-d70be96d2740-kube-api-access-gn6zl\") pod \"certified-operators-8n7pg\" (UID: \"3ff01086-f616-4040-9c59-d70be96d2740\") " pod="openshift-marketplace/certified-operators-8n7pg" Mar 11 10:39:23 crc kubenswrapper[4840]: I0311 10:39:23.593910 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ff01086-f616-4040-9c59-d70be96d2740-catalog-content\") pod \"certified-operators-8n7pg\" (UID: \"3ff01086-f616-4040-9c59-d70be96d2740\") " pod="openshift-marketplace/certified-operators-8n7pg" Mar 11 10:39:23 crc kubenswrapper[4840]: I0311 10:39:23.593945 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ff01086-f616-4040-9c59-d70be96d2740-utilities\") pod \"certified-operators-8n7pg\" (UID: \"3ff01086-f616-4040-9c59-d70be96d2740\") " pod="openshift-marketplace/certified-operators-8n7pg" Mar 11 10:39:23 crc kubenswrapper[4840]: I0311 
10:39:23.696217 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gn6zl\" (UniqueName: \"kubernetes.io/projected/3ff01086-f616-4040-9c59-d70be96d2740-kube-api-access-gn6zl\") pod \"certified-operators-8n7pg\" (UID: \"3ff01086-f616-4040-9c59-d70be96d2740\") " pod="openshift-marketplace/certified-operators-8n7pg" Mar 11 10:39:23 crc kubenswrapper[4840]: I0311 10:39:23.696330 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ff01086-f616-4040-9c59-d70be96d2740-catalog-content\") pod \"certified-operators-8n7pg\" (UID: \"3ff01086-f616-4040-9c59-d70be96d2740\") " pod="openshift-marketplace/certified-operators-8n7pg" Mar 11 10:39:23 crc kubenswrapper[4840]: I0311 10:39:23.696360 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ff01086-f616-4040-9c59-d70be96d2740-utilities\") pod \"certified-operators-8n7pg\" (UID: \"3ff01086-f616-4040-9c59-d70be96d2740\") " pod="openshift-marketplace/certified-operators-8n7pg" Mar 11 10:39:23 crc kubenswrapper[4840]: I0311 10:39:23.696872 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ff01086-f616-4040-9c59-d70be96d2740-utilities\") pod \"certified-operators-8n7pg\" (UID: \"3ff01086-f616-4040-9c59-d70be96d2740\") " pod="openshift-marketplace/certified-operators-8n7pg" Mar 11 10:39:23 crc kubenswrapper[4840]: I0311 10:39:23.697084 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ff01086-f616-4040-9c59-d70be96d2740-catalog-content\") pod \"certified-operators-8n7pg\" (UID: \"3ff01086-f616-4040-9c59-d70be96d2740\") " pod="openshift-marketplace/certified-operators-8n7pg" Mar 11 10:39:23 crc kubenswrapper[4840]: I0311 10:39:23.721085 4840 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gn6zl\" (UniqueName: \"kubernetes.io/projected/3ff01086-f616-4040-9c59-d70be96d2740-kube-api-access-gn6zl\") pod \"certified-operators-8n7pg\" (UID: \"3ff01086-f616-4040-9c59-d70be96d2740\") " pod="openshift-marketplace/certified-operators-8n7pg" Mar 11 10:39:23 crc kubenswrapper[4840]: I0311 10:39:23.768729 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8n7pg" Mar 11 10:39:24 crc kubenswrapper[4840]: I0311 10:39:24.096658 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8n7pg"] Mar 11 10:39:24 crc kubenswrapper[4840]: I0311 10:39:24.773916 4840 generic.go:334] "Generic (PLEG): container finished" podID="3ff01086-f616-4040-9c59-d70be96d2740" containerID="d61f0db7e430ca3ae2378f725153d8c5a6d22fe07b13653eeebe265660e0ab8d" exitCode=0 Mar 11 10:39:24 crc kubenswrapper[4840]: I0311 10:39:24.773958 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8n7pg" event={"ID":"3ff01086-f616-4040-9c59-d70be96d2740","Type":"ContainerDied","Data":"d61f0db7e430ca3ae2378f725153d8c5a6d22fe07b13653eeebe265660e0ab8d"} Mar 11 10:39:24 crc kubenswrapper[4840]: I0311 10:39:24.773983 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8n7pg" event={"ID":"3ff01086-f616-4040-9c59-d70be96d2740","Type":"ContainerStarted","Data":"95fd7f68a78c16fd50a244dd0f9d2034780d72c227d67b9f1f11c009b8358798"} Mar 11 10:39:26 crc kubenswrapper[4840]: I0311 10:39:26.797267 4840 generic.go:334] "Generic (PLEG): container finished" podID="3ff01086-f616-4040-9c59-d70be96d2740" containerID="f3c261ea0061706f9408dd27670d579f304a884f79be652e13109dbfc0726075" exitCode=0 Mar 11 10:39:26 crc kubenswrapper[4840]: I0311 10:39:26.797402 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-8n7pg" event={"ID":"3ff01086-f616-4040-9c59-d70be96d2740","Type":"ContainerDied","Data":"f3c261ea0061706f9408dd27670d579f304a884f79be652e13109dbfc0726075"} Mar 11 10:39:27 crc kubenswrapper[4840]: I0311 10:39:27.446250 4840 patch_prober.go:28] interesting pod/machine-config-daemon-brtht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 11 10:39:27 crc kubenswrapper[4840]: I0311 10:39:27.446594 4840 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 11 10:39:27 crc kubenswrapper[4840]: I0311 10:39:27.816866 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8n7pg" event={"ID":"3ff01086-f616-4040-9c59-d70be96d2740","Type":"ContainerStarted","Data":"08d7e9c757bbfbba45ced36338508ab27280e366dd4c436e8cc659c40629b990"} Mar 11 10:39:27 crc kubenswrapper[4840]: I0311 10:39:27.860061 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8n7pg" podStartSLOduration=2.42126763 podStartE2EDuration="4.860033166s" podCreationTimestamp="2026-03-11 10:39:23 +0000 UTC" firstStartedPulling="2026-03-11 10:39:24.776294481 +0000 UTC m=+6163.441964306" lastFinishedPulling="2026-03-11 10:39:27.215060017 +0000 UTC m=+6165.880729842" observedRunningTime="2026-03-11 10:39:27.846305689 +0000 UTC m=+6166.511975544" watchObservedRunningTime="2026-03-11 10:39:27.860033166 +0000 UTC m=+6166.525703021" Mar 11 10:39:33 crc kubenswrapper[4840]: I0311 10:39:33.769967 4840 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8n7pg" Mar 11 10:39:33 crc kubenswrapper[4840]: I0311 10:39:33.771422 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8n7pg" Mar 11 10:39:33 crc kubenswrapper[4840]: I0311 10:39:33.863756 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8n7pg" Mar 11 10:39:33 crc kubenswrapper[4840]: I0311 10:39:33.943363 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8n7pg" Mar 11 10:39:34 crc kubenswrapper[4840]: I0311 10:39:34.107441 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8n7pg"] Mar 11 10:39:35 crc kubenswrapper[4840]: I0311 10:39:35.907371 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8n7pg" podUID="3ff01086-f616-4040-9c59-d70be96d2740" containerName="registry-server" containerID="cri-o://08d7e9c757bbfbba45ced36338508ab27280e366dd4c436e8cc659c40629b990" gracePeriod=2 Mar 11 10:39:36 crc kubenswrapper[4840]: I0311 10:39:36.408617 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8n7pg" Mar 11 10:39:36 crc kubenswrapper[4840]: I0311 10:39:36.536595 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ff01086-f616-4040-9c59-d70be96d2740-utilities\") pod \"3ff01086-f616-4040-9c59-d70be96d2740\" (UID: \"3ff01086-f616-4040-9c59-d70be96d2740\") " Mar 11 10:39:36 crc kubenswrapper[4840]: I0311 10:39:36.536730 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gn6zl\" (UniqueName: \"kubernetes.io/projected/3ff01086-f616-4040-9c59-d70be96d2740-kube-api-access-gn6zl\") pod \"3ff01086-f616-4040-9c59-d70be96d2740\" (UID: \"3ff01086-f616-4040-9c59-d70be96d2740\") " Mar 11 10:39:36 crc kubenswrapper[4840]: I0311 10:39:36.536797 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ff01086-f616-4040-9c59-d70be96d2740-catalog-content\") pod \"3ff01086-f616-4040-9c59-d70be96d2740\" (UID: \"3ff01086-f616-4040-9c59-d70be96d2740\") " Mar 11 10:39:36 crc kubenswrapper[4840]: I0311 10:39:36.537795 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ff01086-f616-4040-9c59-d70be96d2740-utilities" (OuterVolumeSpecName: "utilities") pod "3ff01086-f616-4040-9c59-d70be96d2740" (UID: "3ff01086-f616-4040-9c59-d70be96d2740"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 10:39:36 crc kubenswrapper[4840]: I0311 10:39:36.548638 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ff01086-f616-4040-9c59-d70be96d2740-kube-api-access-gn6zl" (OuterVolumeSpecName: "kube-api-access-gn6zl") pod "3ff01086-f616-4040-9c59-d70be96d2740" (UID: "3ff01086-f616-4040-9c59-d70be96d2740"). InnerVolumeSpecName "kube-api-access-gn6zl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:39:36 crc kubenswrapper[4840]: I0311 10:39:36.638243 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gn6zl\" (UniqueName: \"kubernetes.io/projected/3ff01086-f616-4040-9c59-d70be96d2740-kube-api-access-gn6zl\") on node \"crc\" DevicePath \"\"" Mar 11 10:39:36 crc kubenswrapper[4840]: I0311 10:39:36.638300 4840 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ff01086-f616-4040-9c59-d70be96d2740-utilities\") on node \"crc\" DevicePath \"\"" Mar 11 10:39:36 crc kubenswrapper[4840]: I0311 10:39:36.643684 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ff01086-f616-4040-9c59-d70be96d2740-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3ff01086-f616-4040-9c59-d70be96d2740" (UID: "3ff01086-f616-4040-9c59-d70be96d2740"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 10:39:36 crc kubenswrapper[4840]: I0311 10:39:36.740850 4840 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ff01086-f616-4040-9c59-d70be96d2740-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 11 10:39:36 crc kubenswrapper[4840]: I0311 10:39:36.919769 4840 generic.go:334] "Generic (PLEG): container finished" podID="3ff01086-f616-4040-9c59-d70be96d2740" containerID="08d7e9c757bbfbba45ced36338508ab27280e366dd4c436e8cc659c40629b990" exitCode=0 Mar 11 10:39:36 crc kubenswrapper[4840]: I0311 10:39:36.919808 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8n7pg" event={"ID":"3ff01086-f616-4040-9c59-d70be96d2740","Type":"ContainerDied","Data":"08d7e9c757bbfbba45ced36338508ab27280e366dd4c436e8cc659c40629b990"} Mar 11 10:39:36 crc kubenswrapper[4840]: I0311 10:39:36.919833 4840 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-8n7pg" event={"ID":"3ff01086-f616-4040-9c59-d70be96d2740","Type":"ContainerDied","Data":"95fd7f68a78c16fd50a244dd0f9d2034780d72c227d67b9f1f11c009b8358798"} Mar 11 10:39:36 crc kubenswrapper[4840]: I0311 10:39:36.919851 4840 scope.go:117] "RemoveContainer" containerID="08d7e9c757bbfbba45ced36338508ab27280e366dd4c436e8cc659c40629b990" Mar 11 10:39:36 crc kubenswrapper[4840]: I0311 10:39:36.919859 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8n7pg" Mar 11 10:39:36 crc kubenswrapper[4840]: I0311 10:39:36.957720 4840 scope.go:117] "RemoveContainer" containerID="f3c261ea0061706f9408dd27670d579f304a884f79be652e13109dbfc0726075" Mar 11 10:39:36 crc kubenswrapper[4840]: I0311 10:39:36.962440 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8n7pg"] Mar 11 10:39:36 crc kubenswrapper[4840]: I0311 10:39:36.968304 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8n7pg"] Mar 11 10:39:36 crc kubenswrapper[4840]: I0311 10:39:36.992568 4840 scope.go:117] "RemoveContainer" containerID="d61f0db7e430ca3ae2378f725153d8c5a6d22fe07b13653eeebe265660e0ab8d" Mar 11 10:39:37 crc kubenswrapper[4840]: I0311 10:39:37.049213 4840 scope.go:117] "RemoveContainer" containerID="08d7e9c757bbfbba45ced36338508ab27280e366dd4c436e8cc659c40629b990" Mar 11 10:39:37 crc kubenswrapper[4840]: E0311 10:39:37.049612 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08d7e9c757bbfbba45ced36338508ab27280e366dd4c436e8cc659c40629b990\": container with ID starting with 08d7e9c757bbfbba45ced36338508ab27280e366dd4c436e8cc659c40629b990 not found: ID does not exist" containerID="08d7e9c757bbfbba45ced36338508ab27280e366dd4c436e8cc659c40629b990" Mar 11 10:39:37 crc kubenswrapper[4840]: I0311 
10:39:37.049649 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08d7e9c757bbfbba45ced36338508ab27280e366dd4c436e8cc659c40629b990"} err="failed to get container status \"08d7e9c757bbfbba45ced36338508ab27280e366dd4c436e8cc659c40629b990\": rpc error: code = NotFound desc = could not find container \"08d7e9c757bbfbba45ced36338508ab27280e366dd4c436e8cc659c40629b990\": container with ID starting with 08d7e9c757bbfbba45ced36338508ab27280e366dd4c436e8cc659c40629b990 not found: ID does not exist" Mar 11 10:39:37 crc kubenswrapper[4840]: I0311 10:39:37.049671 4840 scope.go:117] "RemoveContainer" containerID="f3c261ea0061706f9408dd27670d579f304a884f79be652e13109dbfc0726075" Mar 11 10:39:37 crc kubenswrapper[4840]: E0311 10:39:37.049993 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3c261ea0061706f9408dd27670d579f304a884f79be652e13109dbfc0726075\": container with ID starting with f3c261ea0061706f9408dd27670d579f304a884f79be652e13109dbfc0726075 not found: ID does not exist" containerID="f3c261ea0061706f9408dd27670d579f304a884f79be652e13109dbfc0726075" Mar 11 10:39:37 crc kubenswrapper[4840]: I0311 10:39:37.050023 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3c261ea0061706f9408dd27670d579f304a884f79be652e13109dbfc0726075"} err="failed to get container status \"f3c261ea0061706f9408dd27670d579f304a884f79be652e13109dbfc0726075\": rpc error: code = NotFound desc = could not find container \"f3c261ea0061706f9408dd27670d579f304a884f79be652e13109dbfc0726075\": container with ID starting with f3c261ea0061706f9408dd27670d579f304a884f79be652e13109dbfc0726075 not found: ID does not exist" Mar 11 10:39:37 crc kubenswrapper[4840]: I0311 10:39:37.050040 4840 scope.go:117] "RemoveContainer" containerID="d61f0db7e430ca3ae2378f725153d8c5a6d22fe07b13653eeebe265660e0ab8d" Mar 11 10:39:37 crc 
kubenswrapper[4840]: E0311 10:39:37.050402 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d61f0db7e430ca3ae2378f725153d8c5a6d22fe07b13653eeebe265660e0ab8d\": container with ID starting with d61f0db7e430ca3ae2378f725153d8c5a6d22fe07b13653eeebe265660e0ab8d not found: ID does not exist" containerID="d61f0db7e430ca3ae2378f725153d8c5a6d22fe07b13653eeebe265660e0ab8d" Mar 11 10:39:37 crc kubenswrapper[4840]: I0311 10:39:37.050485 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d61f0db7e430ca3ae2378f725153d8c5a6d22fe07b13653eeebe265660e0ab8d"} err="failed to get container status \"d61f0db7e430ca3ae2378f725153d8c5a6d22fe07b13653eeebe265660e0ab8d\": rpc error: code = NotFound desc = could not find container \"d61f0db7e430ca3ae2378f725153d8c5a6d22fe07b13653eeebe265660e0ab8d\": container with ID starting with d61f0db7e430ca3ae2378f725153d8c5a6d22fe07b13653eeebe265660e0ab8d not found: ID does not exist" Mar 11 10:39:38 crc kubenswrapper[4840]: I0311 10:39:38.089600 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ff01086-f616-4040-9c59-d70be96d2740" path="/var/lib/kubelet/pods/3ff01086-f616-4040-9c59-d70be96d2740/volumes" Mar 11 10:39:57 crc kubenswrapper[4840]: I0311 10:39:57.445645 4840 patch_prober.go:28] interesting pod/machine-config-daemon-brtht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 11 10:39:57 crc kubenswrapper[4840]: I0311 10:39:57.446531 4840 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Mar 11 10:40:00 crc kubenswrapper[4840]: I0311 10:40:00.160136 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29553760-l2x2j"] Mar 11 10:40:00 crc kubenswrapper[4840]: E0311 10:40:00.160537 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ff01086-f616-4040-9c59-d70be96d2740" containerName="extract-utilities" Mar 11 10:40:00 crc kubenswrapper[4840]: I0311 10:40:00.160557 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ff01086-f616-4040-9c59-d70be96d2740" containerName="extract-utilities" Mar 11 10:40:00 crc kubenswrapper[4840]: E0311 10:40:00.160584 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ff01086-f616-4040-9c59-d70be96d2740" containerName="extract-content" Mar 11 10:40:00 crc kubenswrapper[4840]: I0311 10:40:00.160594 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ff01086-f616-4040-9c59-d70be96d2740" containerName="extract-content" Mar 11 10:40:00 crc kubenswrapper[4840]: E0311 10:40:00.160627 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ff01086-f616-4040-9c59-d70be96d2740" containerName="registry-server" Mar 11 10:40:00 crc kubenswrapper[4840]: I0311 10:40:00.160638 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ff01086-f616-4040-9c59-d70be96d2740" containerName="registry-server" Mar 11 10:40:00 crc kubenswrapper[4840]: I0311 10:40:00.160934 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ff01086-f616-4040-9c59-d70be96d2740" containerName="registry-server" Mar 11 10:40:00 crc kubenswrapper[4840]: I0311 10:40:00.161751 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553760-l2x2j" Mar 11 10:40:00 crc kubenswrapper[4840]: I0311 10:40:00.164733 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 11 10:40:00 crc kubenswrapper[4840]: I0311 10:40:00.165266 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-q6lwc" Mar 11 10:40:00 crc kubenswrapper[4840]: I0311 10:40:00.165805 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 11 10:40:00 crc kubenswrapper[4840]: I0311 10:40:00.174536 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553760-l2x2j"] Mar 11 10:40:00 crc kubenswrapper[4840]: I0311 10:40:00.289092 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49f46\" (UniqueName: \"kubernetes.io/projected/c9fe9239-81ae-4487-a404-52763ee9ba7c-kube-api-access-49f46\") pod \"auto-csr-approver-29553760-l2x2j\" (UID: \"c9fe9239-81ae-4487-a404-52763ee9ba7c\") " pod="openshift-infra/auto-csr-approver-29553760-l2x2j" Mar 11 10:40:00 crc kubenswrapper[4840]: I0311 10:40:00.391083 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49f46\" (UniqueName: \"kubernetes.io/projected/c9fe9239-81ae-4487-a404-52763ee9ba7c-kube-api-access-49f46\") pod \"auto-csr-approver-29553760-l2x2j\" (UID: \"c9fe9239-81ae-4487-a404-52763ee9ba7c\") " pod="openshift-infra/auto-csr-approver-29553760-l2x2j" Mar 11 10:40:00 crc kubenswrapper[4840]: I0311 10:40:00.426419 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49f46\" (UniqueName: \"kubernetes.io/projected/c9fe9239-81ae-4487-a404-52763ee9ba7c-kube-api-access-49f46\") pod \"auto-csr-approver-29553760-l2x2j\" (UID: \"c9fe9239-81ae-4487-a404-52763ee9ba7c\") " 
pod="openshift-infra/auto-csr-approver-29553760-l2x2j" Mar 11 10:40:00 crc kubenswrapper[4840]: I0311 10:40:00.508036 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553760-l2x2j" Mar 11 10:40:00 crc kubenswrapper[4840]: I0311 10:40:00.971139 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553760-l2x2j"] Mar 11 10:40:01 crc kubenswrapper[4840]: I0311 10:40:01.167744 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553760-l2x2j" event={"ID":"c9fe9239-81ae-4487-a404-52763ee9ba7c","Type":"ContainerStarted","Data":"7663c2e450453920be4cacad96924d3d69522831ffdce3f19f8d8f4d7ee5ec6d"} Mar 11 10:40:03 crc kubenswrapper[4840]: I0311 10:40:03.183138 4840 generic.go:334] "Generic (PLEG): container finished" podID="c9fe9239-81ae-4487-a404-52763ee9ba7c" containerID="77f12dd759df4a83cd7b5f851b51e942e2379b4d50c3bbae53496f58b08e73ad" exitCode=0 Mar 11 10:40:03 crc kubenswrapper[4840]: I0311 10:40:03.183196 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553760-l2x2j" event={"ID":"c9fe9239-81ae-4487-a404-52763ee9ba7c","Type":"ContainerDied","Data":"77f12dd759df4a83cd7b5f851b51e942e2379b4d50c3bbae53496f58b08e73ad"} Mar 11 10:40:04 crc kubenswrapper[4840]: I0311 10:40:04.553093 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553760-l2x2j" Mar 11 10:40:04 crc kubenswrapper[4840]: I0311 10:40:04.573867 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-49f46\" (UniqueName: \"kubernetes.io/projected/c9fe9239-81ae-4487-a404-52763ee9ba7c-kube-api-access-49f46\") pod \"c9fe9239-81ae-4487-a404-52763ee9ba7c\" (UID: \"c9fe9239-81ae-4487-a404-52763ee9ba7c\") " Mar 11 10:40:04 crc kubenswrapper[4840]: I0311 10:40:04.608170 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9fe9239-81ae-4487-a404-52763ee9ba7c-kube-api-access-49f46" (OuterVolumeSpecName: "kube-api-access-49f46") pod "c9fe9239-81ae-4487-a404-52763ee9ba7c" (UID: "c9fe9239-81ae-4487-a404-52763ee9ba7c"). InnerVolumeSpecName "kube-api-access-49f46". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:40:04 crc kubenswrapper[4840]: I0311 10:40:04.676668 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-49f46\" (UniqueName: \"kubernetes.io/projected/c9fe9239-81ae-4487-a404-52763ee9ba7c-kube-api-access-49f46\") on node \"crc\" DevicePath \"\"" Mar 11 10:40:05 crc kubenswrapper[4840]: I0311 10:40:05.210259 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553760-l2x2j" event={"ID":"c9fe9239-81ae-4487-a404-52763ee9ba7c","Type":"ContainerDied","Data":"7663c2e450453920be4cacad96924d3d69522831ffdce3f19f8d8f4d7ee5ec6d"} Mar 11 10:40:05 crc kubenswrapper[4840]: I0311 10:40:05.210334 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7663c2e450453920be4cacad96924d3d69522831ffdce3f19f8d8f4d7ee5ec6d" Mar 11 10:40:05 crc kubenswrapper[4840]: I0311 10:40:05.210294 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553760-l2x2j" Mar 11 10:40:05 crc kubenswrapper[4840]: I0311 10:40:05.619942 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29553754-xplpc"] Mar 11 10:40:05 crc kubenswrapper[4840]: I0311 10:40:05.625629 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29553754-xplpc"] Mar 11 10:40:06 crc kubenswrapper[4840]: I0311 10:40:06.075735 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71ef9adb-f0e7-4c7c-971e-894bd7e62ebf" path="/var/lib/kubelet/pods/71ef9adb-f0e7-4c7c-971e-894bd7e62ebf/volumes" Mar 11 10:40:13 crc kubenswrapper[4840]: I0311 10:40:13.932097 4840 scope.go:117] "RemoveContainer" containerID="92edfe5ac4048e527ecccf00a58f89273d1a7c5fc41684a969927949e477e800" Mar 11 10:40:27 crc kubenswrapper[4840]: I0311 10:40:27.445482 4840 patch_prober.go:28] interesting pod/machine-config-daemon-brtht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 11 10:40:27 crc kubenswrapper[4840]: I0311 10:40:27.447330 4840 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 11 10:40:27 crc kubenswrapper[4840]: I0311 10:40:27.447454 4840 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-brtht" Mar 11 10:40:27 crc kubenswrapper[4840]: I0311 10:40:27.448249 4840 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"0b1dc94236fd74074941a3853c81f7449b469e1e2e185c24f80ca9a36ca3b1f5"} pod="openshift-machine-config-operator/machine-config-daemon-brtht" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 11 10:40:27 crc kubenswrapper[4840]: I0311 10:40:27.448409 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" containerName="machine-config-daemon" containerID="cri-o://0b1dc94236fd74074941a3853c81f7449b469e1e2e185c24f80ca9a36ca3b1f5" gracePeriod=600 Mar 11 10:40:28 crc kubenswrapper[4840]: I0311 10:40:28.424183 4840 generic.go:334] "Generic (PLEG): container finished" podID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" containerID="0b1dc94236fd74074941a3853c81f7449b469e1e2e185c24f80ca9a36ca3b1f5" exitCode=0 Mar 11 10:40:28 crc kubenswrapper[4840]: I0311 10:40:28.424251 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-brtht" event={"ID":"8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d","Type":"ContainerDied","Data":"0b1dc94236fd74074941a3853c81f7449b469e1e2e185c24f80ca9a36ca3b1f5"} Mar 11 10:40:28 crc kubenswrapper[4840]: I0311 10:40:28.424903 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-brtht" event={"ID":"8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d","Type":"ContainerStarted","Data":"180f0de32e524184a80d118f575e43d9d75889633af9d6d017760d5bf414cc50"} Mar 11 10:40:28 crc kubenswrapper[4840]: I0311 10:40:28.424944 4840 scope.go:117] "RemoveContainer" containerID="0fa02edfa90522d9bb5a7465f18679bbe488a4c47686d2e7bed64f7d3b2e1873" Mar 11 10:41:49 crc kubenswrapper[4840]: I0311 10:41:49.162676 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-hq8r6"] Mar 11 10:41:49 crc kubenswrapper[4840]: E0311 10:41:49.164272 
4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9fe9239-81ae-4487-a404-52763ee9ba7c" containerName="oc" Mar 11 10:41:49 crc kubenswrapper[4840]: I0311 10:41:49.164300 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9fe9239-81ae-4487-a404-52763ee9ba7c" containerName="oc" Mar 11 10:41:49 crc kubenswrapper[4840]: I0311 10:41:49.164651 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9fe9239-81ae-4487-a404-52763ee9ba7c" containerName="oc" Mar 11 10:41:49 crc kubenswrapper[4840]: I0311 10:41:49.165854 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hq8r6" Mar 11 10:41:49 crc kubenswrapper[4840]: I0311 10:41:49.180848 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hq8r6"] Mar 11 10:41:49 crc kubenswrapper[4840]: I0311 10:41:49.342804 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ddfca1fb-6df3-45c1-8fed-e9500351006a-catalog-content\") pod \"redhat-marketplace-hq8r6\" (UID: \"ddfca1fb-6df3-45c1-8fed-e9500351006a\") " pod="openshift-marketplace/redhat-marketplace-hq8r6" Mar 11 10:41:49 crc kubenswrapper[4840]: I0311 10:41:49.343173 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ddfca1fb-6df3-45c1-8fed-e9500351006a-utilities\") pod \"redhat-marketplace-hq8r6\" (UID: \"ddfca1fb-6df3-45c1-8fed-e9500351006a\") " pod="openshift-marketplace/redhat-marketplace-hq8r6" Mar 11 10:41:49 crc kubenswrapper[4840]: I0311 10:41:49.343202 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqqfr\" (UniqueName: \"kubernetes.io/projected/ddfca1fb-6df3-45c1-8fed-e9500351006a-kube-api-access-zqqfr\") pod 
\"redhat-marketplace-hq8r6\" (UID: \"ddfca1fb-6df3-45c1-8fed-e9500351006a\") " pod="openshift-marketplace/redhat-marketplace-hq8r6" Mar 11 10:41:49 crc kubenswrapper[4840]: I0311 10:41:49.444133 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ddfca1fb-6df3-45c1-8fed-e9500351006a-catalog-content\") pod \"redhat-marketplace-hq8r6\" (UID: \"ddfca1fb-6df3-45c1-8fed-e9500351006a\") " pod="openshift-marketplace/redhat-marketplace-hq8r6" Mar 11 10:41:49 crc kubenswrapper[4840]: I0311 10:41:49.444196 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ddfca1fb-6df3-45c1-8fed-e9500351006a-utilities\") pod \"redhat-marketplace-hq8r6\" (UID: \"ddfca1fb-6df3-45c1-8fed-e9500351006a\") " pod="openshift-marketplace/redhat-marketplace-hq8r6" Mar 11 10:41:49 crc kubenswrapper[4840]: I0311 10:41:49.444221 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqqfr\" (UniqueName: \"kubernetes.io/projected/ddfca1fb-6df3-45c1-8fed-e9500351006a-kube-api-access-zqqfr\") pod \"redhat-marketplace-hq8r6\" (UID: \"ddfca1fb-6df3-45c1-8fed-e9500351006a\") " pod="openshift-marketplace/redhat-marketplace-hq8r6" Mar 11 10:41:49 crc kubenswrapper[4840]: I0311 10:41:49.444866 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ddfca1fb-6df3-45c1-8fed-e9500351006a-catalog-content\") pod \"redhat-marketplace-hq8r6\" (UID: \"ddfca1fb-6df3-45c1-8fed-e9500351006a\") " pod="openshift-marketplace/redhat-marketplace-hq8r6" Mar 11 10:41:49 crc kubenswrapper[4840]: I0311 10:41:49.445144 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ddfca1fb-6df3-45c1-8fed-e9500351006a-utilities\") pod \"redhat-marketplace-hq8r6\" (UID: 
\"ddfca1fb-6df3-45c1-8fed-e9500351006a\") " pod="openshift-marketplace/redhat-marketplace-hq8r6" Mar 11 10:41:49 crc kubenswrapper[4840]: I0311 10:41:49.480856 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqqfr\" (UniqueName: \"kubernetes.io/projected/ddfca1fb-6df3-45c1-8fed-e9500351006a-kube-api-access-zqqfr\") pod \"redhat-marketplace-hq8r6\" (UID: \"ddfca1fb-6df3-45c1-8fed-e9500351006a\") " pod="openshift-marketplace/redhat-marketplace-hq8r6" Mar 11 10:41:49 crc kubenswrapper[4840]: I0311 10:41:49.486211 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hq8r6" Mar 11 10:41:50 crc kubenswrapper[4840]: I0311 10:41:50.020742 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hq8r6"] Mar 11 10:41:50 crc kubenswrapper[4840]: I0311 10:41:50.114207 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hq8r6" event={"ID":"ddfca1fb-6df3-45c1-8fed-e9500351006a","Type":"ContainerStarted","Data":"a3354e67c646d58e792c4a2af5dccd098317584270962021f31ccf0fc18cf6a7"} Mar 11 10:41:51 crc kubenswrapper[4840]: I0311 10:41:51.126451 4840 generic.go:334] "Generic (PLEG): container finished" podID="ddfca1fb-6df3-45c1-8fed-e9500351006a" containerID="60f11372a9101d890a4222ad0d7acf4ad4ff18ff2372cf41deca40606d688f8c" exitCode=0 Mar 11 10:41:51 crc kubenswrapper[4840]: I0311 10:41:51.126572 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hq8r6" event={"ID":"ddfca1fb-6df3-45c1-8fed-e9500351006a","Type":"ContainerDied","Data":"60f11372a9101d890a4222ad0d7acf4ad4ff18ff2372cf41deca40606d688f8c"} Mar 11 10:41:52 crc kubenswrapper[4840]: I0311 10:41:52.144405 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hq8r6" 
event={"ID":"ddfca1fb-6df3-45c1-8fed-e9500351006a","Type":"ContainerStarted","Data":"3baa5b0075ac63177e8b79f4f8630948763af95b51d79c9daf57c363509c196f"} Mar 11 10:41:53 crc kubenswrapper[4840]: I0311 10:41:53.154565 4840 generic.go:334] "Generic (PLEG): container finished" podID="ddfca1fb-6df3-45c1-8fed-e9500351006a" containerID="3baa5b0075ac63177e8b79f4f8630948763af95b51d79c9daf57c363509c196f" exitCode=0 Mar 11 10:41:53 crc kubenswrapper[4840]: I0311 10:41:53.154740 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hq8r6" event={"ID":"ddfca1fb-6df3-45c1-8fed-e9500351006a","Type":"ContainerDied","Data":"3baa5b0075ac63177e8b79f4f8630948763af95b51d79c9daf57c363509c196f"} Mar 11 10:41:54 crc kubenswrapper[4840]: I0311 10:41:54.166071 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hq8r6" event={"ID":"ddfca1fb-6df3-45c1-8fed-e9500351006a","Type":"ContainerStarted","Data":"fac88cc37f3f1a6c569e77bb2225350360356711bfe664d598db7228b45b7de4"} Mar 11 10:41:54 crc kubenswrapper[4840]: I0311 10:41:54.203283 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hq8r6" podStartSLOduration=2.7381277859999997 podStartE2EDuration="5.20325464s" podCreationTimestamp="2026-03-11 10:41:49 +0000 UTC" firstStartedPulling="2026-03-11 10:41:51.128655385 +0000 UTC m=+6309.794325200" lastFinishedPulling="2026-03-11 10:41:53.593782219 +0000 UTC m=+6312.259452054" observedRunningTime="2026-03-11 10:41:54.196318154 +0000 UTC m=+6312.861987999" watchObservedRunningTime="2026-03-11 10:41:54.20325464 +0000 UTC m=+6312.868924495" Mar 11 10:41:59 crc kubenswrapper[4840]: I0311 10:41:59.486424 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hq8r6" Mar 11 10:41:59 crc kubenswrapper[4840]: I0311 10:41:59.486892 4840 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-hq8r6" Mar 11 10:41:59 crc kubenswrapper[4840]: I0311 10:41:59.568875 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hq8r6" Mar 11 10:42:00 crc kubenswrapper[4840]: I0311 10:42:00.160602 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29553762-tx9wr"] Mar 11 10:42:00 crc kubenswrapper[4840]: I0311 10:42:00.161839 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553762-tx9wr" Mar 11 10:42:00 crc kubenswrapper[4840]: I0311 10:42:00.166283 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-q6lwc" Mar 11 10:42:00 crc kubenswrapper[4840]: I0311 10:42:00.167574 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 11 10:42:00 crc kubenswrapper[4840]: I0311 10:42:00.168816 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 11 10:42:00 crc kubenswrapper[4840]: I0311 10:42:00.172057 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553762-tx9wr"] Mar 11 10:42:00 crc kubenswrapper[4840]: I0311 10:42:00.250998 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkx8l\" (UniqueName: \"kubernetes.io/projected/4d1f94e0-0ee6-41ec-8031-c2346d1d694d-kube-api-access-pkx8l\") pod \"auto-csr-approver-29553762-tx9wr\" (UID: \"4d1f94e0-0ee6-41ec-8031-c2346d1d694d\") " pod="openshift-infra/auto-csr-approver-29553762-tx9wr" Mar 11 10:42:00 crc kubenswrapper[4840]: I0311 10:42:00.262371 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hq8r6" Mar 11 10:42:00 crc 
kubenswrapper[4840]: I0311 10:42:00.308415 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hq8r6"] Mar 11 10:42:00 crc kubenswrapper[4840]: I0311 10:42:00.352912 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkx8l\" (UniqueName: \"kubernetes.io/projected/4d1f94e0-0ee6-41ec-8031-c2346d1d694d-kube-api-access-pkx8l\") pod \"auto-csr-approver-29553762-tx9wr\" (UID: \"4d1f94e0-0ee6-41ec-8031-c2346d1d694d\") " pod="openshift-infra/auto-csr-approver-29553762-tx9wr" Mar 11 10:42:00 crc kubenswrapper[4840]: I0311 10:42:00.372697 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkx8l\" (UniqueName: \"kubernetes.io/projected/4d1f94e0-0ee6-41ec-8031-c2346d1d694d-kube-api-access-pkx8l\") pod \"auto-csr-approver-29553762-tx9wr\" (UID: \"4d1f94e0-0ee6-41ec-8031-c2346d1d694d\") " pod="openshift-infra/auto-csr-approver-29553762-tx9wr" Mar 11 10:42:00 crc kubenswrapper[4840]: I0311 10:42:00.485859 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553762-tx9wr" Mar 11 10:42:00 crc kubenswrapper[4840]: I0311 10:42:00.949453 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553762-tx9wr"] Mar 11 10:42:00 crc kubenswrapper[4840]: W0311 10:42:00.955903 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4d1f94e0_0ee6_41ec_8031_c2346d1d694d.slice/crio-fd83f94917bd384af3aff4bfa6f92d81146edfaf7d9c300593e3bb36c79f1254 WatchSource:0}: Error finding container fd83f94917bd384af3aff4bfa6f92d81146edfaf7d9c300593e3bb36c79f1254: Status 404 returned error can't find the container with id fd83f94917bd384af3aff4bfa6f92d81146edfaf7d9c300593e3bb36c79f1254 Mar 11 10:42:01 crc kubenswrapper[4840]: I0311 10:42:01.219184 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553762-tx9wr" event={"ID":"4d1f94e0-0ee6-41ec-8031-c2346d1d694d","Type":"ContainerStarted","Data":"fd83f94917bd384af3aff4bfa6f92d81146edfaf7d9c300593e3bb36c79f1254"} Mar 11 10:42:02 crc kubenswrapper[4840]: I0311 10:42:02.228186 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-hq8r6" podUID="ddfca1fb-6df3-45c1-8fed-e9500351006a" containerName="registry-server" containerID="cri-o://fac88cc37f3f1a6c569e77bb2225350360356711bfe664d598db7228b45b7de4" gracePeriod=2 Mar 11 10:42:02 crc kubenswrapper[4840]: I0311 10:42:02.674139 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hq8r6" Mar 11 10:42:02 crc kubenswrapper[4840]: I0311 10:42:02.793736 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zqqfr\" (UniqueName: \"kubernetes.io/projected/ddfca1fb-6df3-45c1-8fed-e9500351006a-kube-api-access-zqqfr\") pod \"ddfca1fb-6df3-45c1-8fed-e9500351006a\" (UID: \"ddfca1fb-6df3-45c1-8fed-e9500351006a\") " Mar 11 10:42:02 crc kubenswrapper[4840]: I0311 10:42:02.794252 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ddfca1fb-6df3-45c1-8fed-e9500351006a-catalog-content\") pod \"ddfca1fb-6df3-45c1-8fed-e9500351006a\" (UID: \"ddfca1fb-6df3-45c1-8fed-e9500351006a\") " Mar 11 10:42:02 crc kubenswrapper[4840]: I0311 10:42:02.794384 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ddfca1fb-6df3-45c1-8fed-e9500351006a-utilities\") pod \"ddfca1fb-6df3-45c1-8fed-e9500351006a\" (UID: \"ddfca1fb-6df3-45c1-8fed-e9500351006a\") " Mar 11 10:42:02 crc kubenswrapper[4840]: I0311 10:42:02.795087 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ddfca1fb-6df3-45c1-8fed-e9500351006a-utilities" (OuterVolumeSpecName: "utilities") pod "ddfca1fb-6df3-45c1-8fed-e9500351006a" (UID: "ddfca1fb-6df3-45c1-8fed-e9500351006a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 10:42:02 crc kubenswrapper[4840]: I0311 10:42:02.800611 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ddfca1fb-6df3-45c1-8fed-e9500351006a-kube-api-access-zqqfr" (OuterVolumeSpecName: "kube-api-access-zqqfr") pod "ddfca1fb-6df3-45c1-8fed-e9500351006a" (UID: "ddfca1fb-6df3-45c1-8fed-e9500351006a"). InnerVolumeSpecName "kube-api-access-zqqfr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:42:02 crc kubenswrapper[4840]: I0311 10:42:02.818069 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ddfca1fb-6df3-45c1-8fed-e9500351006a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ddfca1fb-6df3-45c1-8fed-e9500351006a" (UID: "ddfca1fb-6df3-45c1-8fed-e9500351006a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 10:42:02 crc kubenswrapper[4840]: I0311 10:42:02.896928 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zqqfr\" (UniqueName: \"kubernetes.io/projected/ddfca1fb-6df3-45c1-8fed-e9500351006a-kube-api-access-zqqfr\") on node \"crc\" DevicePath \"\"" Mar 11 10:42:02 crc kubenswrapper[4840]: I0311 10:42:02.896968 4840 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ddfca1fb-6df3-45c1-8fed-e9500351006a-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 11 10:42:02 crc kubenswrapper[4840]: I0311 10:42:02.896976 4840 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ddfca1fb-6df3-45c1-8fed-e9500351006a-utilities\") on node \"crc\" DevicePath \"\"" Mar 11 10:42:03 crc kubenswrapper[4840]: I0311 10:42:03.239528 4840 generic.go:334] "Generic (PLEG): container finished" podID="ddfca1fb-6df3-45c1-8fed-e9500351006a" containerID="fac88cc37f3f1a6c569e77bb2225350360356711bfe664d598db7228b45b7de4" exitCode=0 Mar 11 10:42:03 crc kubenswrapper[4840]: I0311 10:42:03.239564 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hq8r6" Mar 11 10:42:03 crc kubenswrapper[4840]: I0311 10:42:03.239589 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hq8r6" event={"ID":"ddfca1fb-6df3-45c1-8fed-e9500351006a","Type":"ContainerDied","Data":"fac88cc37f3f1a6c569e77bb2225350360356711bfe664d598db7228b45b7de4"} Mar 11 10:42:03 crc kubenswrapper[4840]: I0311 10:42:03.239651 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hq8r6" event={"ID":"ddfca1fb-6df3-45c1-8fed-e9500351006a","Type":"ContainerDied","Data":"a3354e67c646d58e792c4a2af5dccd098317584270962021f31ccf0fc18cf6a7"} Mar 11 10:42:03 crc kubenswrapper[4840]: I0311 10:42:03.239676 4840 scope.go:117] "RemoveContainer" containerID="fac88cc37f3f1a6c569e77bb2225350360356711bfe664d598db7228b45b7de4" Mar 11 10:42:03 crc kubenswrapper[4840]: I0311 10:42:03.243016 4840 generic.go:334] "Generic (PLEG): container finished" podID="4d1f94e0-0ee6-41ec-8031-c2346d1d694d" containerID="938fc6a4cceb5d3c87b6aec79702708a208582e0c8a080508f6d7aed9e88b2ad" exitCode=0 Mar 11 10:42:03 crc kubenswrapper[4840]: I0311 10:42:03.243053 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553762-tx9wr" event={"ID":"4d1f94e0-0ee6-41ec-8031-c2346d1d694d","Type":"ContainerDied","Data":"938fc6a4cceb5d3c87b6aec79702708a208582e0c8a080508f6d7aed9e88b2ad"} Mar 11 10:42:03 crc kubenswrapper[4840]: I0311 10:42:03.258679 4840 scope.go:117] "RemoveContainer" containerID="3baa5b0075ac63177e8b79f4f8630948763af95b51d79c9daf57c363509c196f" Mar 11 10:42:03 crc kubenswrapper[4840]: I0311 10:42:03.293867 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hq8r6"] Mar 11 10:42:03 crc kubenswrapper[4840]: I0311 10:42:03.302907 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-hq8r6"] 
Mar 11 10:42:03 crc kubenswrapper[4840]: I0311 10:42:03.304305 4840 scope.go:117] "RemoveContainer" containerID="60f11372a9101d890a4222ad0d7acf4ad4ff18ff2372cf41deca40606d688f8c" Mar 11 10:42:03 crc kubenswrapper[4840]: I0311 10:42:03.323706 4840 scope.go:117] "RemoveContainer" containerID="fac88cc37f3f1a6c569e77bb2225350360356711bfe664d598db7228b45b7de4" Mar 11 10:42:03 crc kubenswrapper[4840]: E0311 10:42:03.324123 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fac88cc37f3f1a6c569e77bb2225350360356711bfe664d598db7228b45b7de4\": container with ID starting with fac88cc37f3f1a6c569e77bb2225350360356711bfe664d598db7228b45b7de4 not found: ID does not exist" containerID="fac88cc37f3f1a6c569e77bb2225350360356711bfe664d598db7228b45b7de4" Mar 11 10:42:03 crc kubenswrapper[4840]: I0311 10:42:03.324162 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fac88cc37f3f1a6c569e77bb2225350360356711bfe664d598db7228b45b7de4"} err="failed to get container status \"fac88cc37f3f1a6c569e77bb2225350360356711bfe664d598db7228b45b7de4\": rpc error: code = NotFound desc = could not find container \"fac88cc37f3f1a6c569e77bb2225350360356711bfe664d598db7228b45b7de4\": container with ID starting with fac88cc37f3f1a6c569e77bb2225350360356711bfe664d598db7228b45b7de4 not found: ID does not exist" Mar 11 10:42:03 crc kubenswrapper[4840]: I0311 10:42:03.324188 4840 scope.go:117] "RemoveContainer" containerID="3baa5b0075ac63177e8b79f4f8630948763af95b51d79c9daf57c363509c196f" Mar 11 10:42:03 crc kubenswrapper[4840]: E0311 10:42:03.324534 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3baa5b0075ac63177e8b79f4f8630948763af95b51d79c9daf57c363509c196f\": container with ID starting with 3baa5b0075ac63177e8b79f4f8630948763af95b51d79c9daf57c363509c196f not found: ID does not exist" 
containerID="3baa5b0075ac63177e8b79f4f8630948763af95b51d79c9daf57c363509c196f" Mar 11 10:42:03 crc kubenswrapper[4840]: I0311 10:42:03.324577 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3baa5b0075ac63177e8b79f4f8630948763af95b51d79c9daf57c363509c196f"} err="failed to get container status \"3baa5b0075ac63177e8b79f4f8630948763af95b51d79c9daf57c363509c196f\": rpc error: code = NotFound desc = could not find container \"3baa5b0075ac63177e8b79f4f8630948763af95b51d79c9daf57c363509c196f\": container with ID starting with 3baa5b0075ac63177e8b79f4f8630948763af95b51d79c9daf57c363509c196f not found: ID does not exist" Mar 11 10:42:03 crc kubenswrapper[4840]: I0311 10:42:03.324603 4840 scope.go:117] "RemoveContainer" containerID="60f11372a9101d890a4222ad0d7acf4ad4ff18ff2372cf41deca40606d688f8c" Mar 11 10:42:03 crc kubenswrapper[4840]: E0311 10:42:03.324958 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"60f11372a9101d890a4222ad0d7acf4ad4ff18ff2372cf41deca40606d688f8c\": container with ID starting with 60f11372a9101d890a4222ad0d7acf4ad4ff18ff2372cf41deca40606d688f8c not found: ID does not exist" containerID="60f11372a9101d890a4222ad0d7acf4ad4ff18ff2372cf41deca40606d688f8c" Mar 11 10:42:03 crc kubenswrapper[4840]: I0311 10:42:03.324984 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60f11372a9101d890a4222ad0d7acf4ad4ff18ff2372cf41deca40606d688f8c"} err="failed to get container status \"60f11372a9101d890a4222ad0d7acf4ad4ff18ff2372cf41deca40606d688f8c\": rpc error: code = NotFound desc = could not find container \"60f11372a9101d890a4222ad0d7acf4ad4ff18ff2372cf41deca40606d688f8c\": container with ID starting with 60f11372a9101d890a4222ad0d7acf4ad4ff18ff2372cf41deca40606d688f8c not found: ID does not exist" Mar 11 10:42:04 crc kubenswrapper[4840]: I0311 10:42:04.071751 4840 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ddfca1fb-6df3-45c1-8fed-e9500351006a" path="/var/lib/kubelet/pods/ddfca1fb-6df3-45c1-8fed-e9500351006a/volumes" Mar 11 10:42:04 crc kubenswrapper[4840]: I0311 10:42:04.670038 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553762-tx9wr" Mar 11 10:42:04 crc kubenswrapper[4840]: I0311 10:42:04.749750 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pkx8l\" (UniqueName: \"kubernetes.io/projected/4d1f94e0-0ee6-41ec-8031-c2346d1d694d-kube-api-access-pkx8l\") pod \"4d1f94e0-0ee6-41ec-8031-c2346d1d694d\" (UID: \"4d1f94e0-0ee6-41ec-8031-c2346d1d694d\") " Mar 11 10:42:04 crc kubenswrapper[4840]: I0311 10:42:04.756943 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d1f94e0-0ee6-41ec-8031-c2346d1d694d-kube-api-access-pkx8l" (OuterVolumeSpecName: "kube-api-access-pkx8l") pod "4d1f94e0-0ee6-41ec-8031-c2346d1d694d" (UID: "4d1f94e0-0ee6-41ec-8031-c2346d1d694d"). InnerVolumeSpecName "kube-api-access-pkx8l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:42:04 crc kubenswrapper[4840]: I0311 10:42:04.851915 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pkx8l\" (UniqueName: \"kubernetes.io/projected/4d1f94e0-0ee6-41ec-8031-c2346d1d694d-kube-api-access-pkx8l\") on node \"crc\" DevicePath \"\"" Mar 11 10:42:05 crc kubenswrapper[4840]: I0311 10:42:05.260674 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553762-tx9wr" event={"ID":"4d1f94e0-0ee6-41ec-8031-c2346d1d694d","Type":"ContainerDied","Data":"fd83f94917bd384af3aff4bfa6f92d81146edfaf7d9c300593e3bb36c79f1254"} Mar 11 10:42:05 crc kubenswrapper[4840]: I0311 10:42:05.260963 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd83f94917bd384af3aff4bfa6f92d81146edfaf7d9c300593e3bb36c79f1254" Mar 11 10:42:05 crc kubenswrapper[4840]: I0311 10:42:05.260777 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553762-tx9wr" Mar 11 10:42:05 crc kubenswrapper[4840]: I0311 10:42:05.747337 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29553756-mw76m"] Mar 11 10:42:05 crc kubenswrapper[4840]: I0311 10:42:05.754197 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29553756-mw76m"] Mar 11 10:42:06 crc kubenswrapper[4840]: I0311 10:42:06.073417 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb0b4780-16bd-472a-b302-fec7877e8b3e" path="/var/lib/kubelet/pods/eb0b4780-16bd-472a-b302-fec7877e8b3e/volumes" Mar 11 10:42:14 crc kubenswrapper[4840]: I0311 10:42:14.055114 4840 scope.go:117] "RemoveContainer" containerID="b552704211b6882cfafd45b5574c95bd174bafa296d69e20cb64a71e0bc1d91c" Mar 11 10:42:27 crc kubenswrapper[4840]: I0311 10:42:27.445492 4840 patch_prober.go:28] interesting pod/machine-config-daemon-brtht 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 11 10:42:27 crc kubenswrapper[4840]: I0311 10:42:27.446180 4840 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 11 10:42:34 crc kubenswrapper[4840]: I0311 10:42:34.621682 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-nks92"] Mar 11 10:42:34 crc kubenswrapper[4840]: E0311 10:42:34.622734 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d1f94e0-0ee6-41ec-8031-c2346d1d694d" containerName="oc" Mar 11 10:42:34 crc kubenswrapper[4840]: I0311 10:42:34.622747 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d1f94e0-0ee6-41ec-8031-c2346d1d694d" containerName="oc" Mar 11 10:42:34 crc kubenswrapper[4840]: E0311 10:42:34.622759 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddfca1fb-6df3-45c1-8fed-e9500351006a" containerName="registry-server" Mar 11 10:42:34 crc kubenswrapper[4840]: I0311 10:42:34.622768 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddfca1fb-6df3-45c1-8fed-e9500351006a" containerName="registry-server" Mar 11 10:42:34 crc kubenswrapper[4840]: E0311 10:42:34.622782 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddfca1fb-6df3-45c1-8fed-e9500351006a" containerName="extract-content" Mar 11 10:42:34 crc kubenswrapper[4840]: I0311 10:42:34.622788 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddfca1fb-6df3-45c1-8fed-e9500351006a" containerName="extract-content" Mar 11 10:42:34 crc kubenswrapper[4840]: E0311 
10:42:34.622797 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddfca1fb-6df3-45c1-8fed-e9500351006a" containerName="extract-utilities" Mar 11 10:42:34 crc kubenswrapper[4840]: I0311 10:42:34.622803 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddfca1fb-6df3-45c1-8fed-e9500351006a" containerName="extract-utilities" Mar 11 10:42:34 crc kubenswrapper[4840]: I0311 10:42:34.622964 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d1f94e0-0ee6-41ec-8031-c2346d1d694d" containerName="oc" Mar 11 10:42:34 crc kubenswrapper[4840]: I0311 10:42:34.622987 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="ddfca1fb-6df3-45c1-8fed-e9500351006a" containerName="registry-server" Mar 11 10:42:34 crc kubenswrapper[4840]: I0311 10:42:34.624155 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nks92" Mar 11 10:42:34 crc kubenswrapper[4840]: I0311 10:42:34.634659 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nks92"] Mar 11 10:42:34 crc kubenswrapper[4840]: I0311 10:42:34.645102 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38811405-674e-4a9e-ba6f-e3e934764449-utilities\") pod \"community-operators-nks92\" (UID: \"38811405-674e-4a9e-ba6f-e3e934764449\") " pod="openshift-marketplace/community-operators-nks92" Mar 11 10:42:34 crc kubenswrapper[4840]: I0311 10:42:34.645155 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnhb9\" (UniqueName: \"kubernetes.io/projected/38811405-674e-4a9e-ba6f-e3e934764449-kube-api-access-wnhb9\") pod \"community-operators-nks92\" (UID: \"38811405-674e-4a9e-ba6f-e3e934764449\") " pod="openshift-marketplace/community-operators-nks92" Mar 11 10:42:34 crc kubenswrapper[4840]: I0311 
10:42:34.645176 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38811405-674e-4a9e-ba6f-e3e934764449-catalog-content\") pod \"community-operators-nks92\" (UID: \"38811405-674e-4a9e-ba6f-e3e934764449\") " pod="openshift-marketplace/community-operators-nks92" Mar 11 10:42:34 crc kubenswrapper[4840]: I0311 10:42:34.746717 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38811405-674e-4a9e-ba6f-e3e934764449-utilities\") pod \"community-operators-nks92\" (UID: \"38811405-674e-4a9e-ba6f-e3e934764449\") " pod="openshift-marketplace/community-operators-nks92" Mar 11 10:42:34 crc kubenswrapper[4840]: I0311 10:42:34.746784 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wnhb9\" (UniqueName: \"kubernetes.io/projected/38811405-674e-4a9e-ba6f-e3e934764449-kube-api-access-wnhb9\") pod \"community-operators-nks92\" (UID: \"38811405-674e-4a9e-ba6f-e3e934764449\") " pod="openshift-marketplace/community-operators-nks92" Mar 11 10:42:34 crc kubenswrapper[4840]: I0311 10:42:34.746803 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38811405-674e-4a9e-ba6f-e3e934764449-catalog-content\") pod \"community-operators-nks92\" (UID: \"38811405-674e-4a9e-ba6f-e3e934764449\") " pod="openshift-marketplace/community-operators-nks92" Mar 11 10:42:34 crc kubenswrapper[4840]: I0311 10:42:34.747754 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38811405-674e-4a9e-ba6f-e3e934764449-utilities\") pod \"community-operators-nks92\" (UID: \"38811405-674e-4a9e-ba6f-e3e934764449\") " pod="openshift-marketplace/community-operators-nks92" Mar 11 10:42:34 crc kubenswrapper[4840]: I0311 10:42:34.747776 
4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38811405-674e-4a9e-ba6f-e3e934764449-catalog-content\") pod \"community-operators-nks92\" (UID: \"38811405-674e-4a9e-ba6f-e3e934764449\") " pod="openshift-marketplace/community-operators-nks92" Mar 11 10:42:34 crc kubenswrapper[4840]: I0311 10:42:34.774403 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wnhb9\" (UniqueName: \"kubernetes.io/projected/38811405-674e-4a9e-ba6f-e3e934764449-kube-api-access-wnhb9\") pod \"community-operators-nks92\" (UID: \"38811405-674e-4a9e-ba6f-e3e934764449\") " pod="openshift-marketplace/community-operators-nks92" Mar 11 10:42:34 crc kubenswrapper[4840]: I0311 10:42:34.947219 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nks92" Mar 11 10:42:35 crc kubenswrapper[4840]: I0311 10:42:35.448075 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nks92"] Mar 11 10:42:35 crc kubenswrapper[4840]: I0311 10:42:35.533487 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nks92" event={"ID":"38811405-674e-4a9e-ba6f-e3e934764449","Type":"ContainerStarted","Data":"e4702c01344ca8d26c070b8b8e7cf754a13fc259e6394225b57f26fa6b1a6816"} Mar 11 10:42:36 crc kubenswrapper[4840]: I0311 10:42:36.541433 4840 generic.go:334] "Generic (PLEG): container finished" podID="38811405-674e-4a9e-ba6f-e3e934764449" containerID="4f1347ceb70f3b612bbeb2b2f4fc70103a9f709a19156e9ece4975a313c15bf0" exitCode=0 Mar 11 10:42:36 crc kubenswrapper[4840]: I0311 10:42:36.541543 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nks92" event={"ID":"38811405-674e-4a9e-ba6f-e3e934764449","Type":"ContainerDied","Data":"4f1347ceb70f3b612bbeb2b2f4fc70103a9f709a19156e9ece4975a313c15bf0"} Mar 
11 10:42:39 crc kubenswrapper[4840]: I0311 10:42:39.564746 4840 generic.go:334] "Generic (PLEG): container finished" podID="38811405-674e-4a9e-ba6f-e3e934764449" containerID="80e3b41be5d364a12b8ad0e7c689c6ea251409719cedc79e1a835bacc57e56b0" exitCode=0 Mar 11 10:42:39 crc kubenswrapper[4840]: I0311 10:42:39.564792 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nks92" event={"ID":"38811405-674e-4a9e-ba6f-e3e934764449","Type":"ContainerDied","Data":"80e3b41be5d364a12b8ad0e7c689c6ea251409719cedc79e1a835bacc57e56b0"} Mar 11 10:42:40 crc kubenswrapper[4840]: I0311 10:42:40.574153 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nks92" event={"ID":"38811405-674e-4a9e-ba6f-e3e934764449","Type":"ContainerStarted","Data":"80e94ff9929e9da5f8907b25f704852122e1ba2ad23e82607e053b099ee7601f"} Mar 11 10:42:44 crc kubenswrapper[4840]: I0311 10:42:44.949102 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-nks92" Mar 11 10:42:44 crc kubenswrapper[4840]: I0311 10:42:44.949555 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-nks92" Mar 11 10:42:44 crc kubenswrapper[4840]: I0311 10:42:44.991602 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-nks92" Mar 11 10:42:45 crc kubenswrapper[4840]: I0311 10:42:45.028646 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-nks92" podStartSLOduration=7.6075225159999995 podStartE2EDuration="11.028626447s" podCreationTimestamp="2026-03-11 10:42:34 +0000 UTC" firstStartedPulling="2026-03-11 10:42:36.54369509 +0000 UTC m=+6355.209364905" lastFinishedPulling="2026-03-11 10:42:39.964799021 +0000 UTC m=+6358.630468836" observedRunningTime="2026-03-11 10:42:40.593806047 
+0000 UTC m=+6359.259475882" watchObservedRunningTime="2026-03-11 10:42:45.028626447 +0000 UTC m=+6363.694296322" Mar 11 10:42:45 crc kubenswrapper[4840]: I0311 10:42:45.659820 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-nks92" Mar 11 10:42:45 crc kubenswrapper[4840]: I0311 10:42:45.702436 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nks92"] Mar 11 10:42:47 crc kubenswrapper[4840]: I0311 10:42:47.623919 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-nks92" podUID="38811405-674e-4a9e-ba6f-e3e934764449" containerName="registry-server" containerID="cri-o://80e94ff9929e9da5f8907b25f704852122e1ba2ad23e82607e053b099ee7601f" gracePeriod=2 Mar 11 10:42:48 crc kubenswrapper[4840]: I0311 10:42:48.075412 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nks92" Mar 11 10:42:48 crc kubenswrapper[4840]: I0311 10:42:48.083405 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wnhb9\" (UniqueName: \"kubernetes.io/projected/38811405-674e-4a9e-ba6f-e3e934764449-kube-api-access-wnhb9\") pod \"38811405-674e-4a9e-ba6f-e3e934764449\" (UID: \"38811405-674e-4a9e-ba6f-e3e934764449\") " Mar 11 10:42:48 crc kubenswrapper[4840]: I0311 10:42:48.083496 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38811405-674e-4a9e-ba6f-e3e934764449-catalog-content\") pod \"38811405-674e-4a9e-ba6f-e3e934764449\" (UID: \"38811405-674e-4a9e-ba6f-e3e934764449\") " Mar 11 10:42:48 crc kubenswrapper[4840]: I0311 10:42:48.083530 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/38811405-674e-4a9e-ba6f-e3e934764449-utilities\") pod \"38811405-674e-4a9e-ba6f-e3e934764449\" (UID: \"38811405-674e-4a9e-ba6f-e3e934764449\") " Mar 11 10:42:48 crc kubenswrapper[4840]: I0311 10:42:48.084976 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/38811405-674e-4a9e-ba6f-e3e934764449-utilities" (OuterVolumeSpecName: "utilities") pod "38811405-674e-4a9e-ba6f-e3e934764449" (UID: "38811405-674e-4a9e-ba6f-e3e934764449"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 10:42:48 crc kubenswrapper[4840]: I0311 10:42:48.085929 4840 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38811405-674e-4a9e-ba6f-e3e934764449-utilities\") on node \"crc\" DevicePath \"\"" Mar 11 10:42:48 crc kubenswrapper[4840]: I0311 10:42:48.090680 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38811405-674e-4a9e-ba6f-e3e934764449-kube-api-access-wnhb9" (OuterVolumeSpecName: "kube-api-access-wnhb9") pod "38811405-674e-4a9e-ba6f-e3e934764449" (UID: "38811405-674e-4a9e-ba6f-e3e934764449"). InnerVolumeSpecName "kube-api-access-wnhb9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:42:48 crc kubenswrapper[4840]: I0311 10:42:48.140416 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/38811405-674e-4a9e-ba6f-e3e934764449-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "38811405-674e-4a9e-ba6f-e3e934764449" (UID: "38811405-674e-4a9e-ba6f-e3e934764449"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 10:42:48 crc kubenswrapper[4840]: I0311 10:42:48.187955 4840 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38811405-674e-4a9e-ba6f-e3e934764449-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 11 10:42:48 crc kubenswrapper[4840]: I0311 10:42:48.187986 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wnhb9\" (UniqueName: \"kubernetes.io/projected/38811405-674e-4a9e-ba6f-e3e934764449-kube-api-access-wnhb9\") on node \"crc\" DevicePath \"\"" Mar 11 10:42:48 crc kubenswrapper[4840]: I0311 10:42:48.632582 4840 generic.go:334] "Generic (PLEG): container finished" podID="38811405-674e-4a9e-ba6f-e3e934764449" containerID="80e94ff9929e9da5f8907b25f704852122e1ba2ad23e82607e053b099ee7601f" exitCode=0 Mar 11 10:42:48 crc kubenswrapper[4840]: I0311 10:42:48.632624 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nks92" event={"ID":"38811405-674e-4a9e-ba6f-e3e934764449","Type":"ContainerDied","Data":"80e94ff9929e9da5f8907b25f704852122e1ba2ad23e82607e053b099ee7601f"} Mar 11 10:42:48 crc kubenswrapper[4840]: I0311 10:42:48.632656 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nks92" event={"ID":"38811405-674e-4a9e-ba6f-e3e934764449","Type":"ContainerDied","Data":"e4702c01344ca8d26c070b8b8e7cf754a13fc259e6394225b57f26fa6b1a6816"} Mar 11 10:42:48 crc kubenswrapper[4840]: I0311 10:42:48.632689 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nks92" Mar 11 10:42:48 crc kubenswrapper[4840]: I0311 10:42:48.632705 4840 scope.go:117] "RemoveContainer" containerID="80e94ff9929e9da5f8907b25f704852122e1ba2ad23e82607e053b099ee7601f" Mar 11 10:42:48 crc kubenswrapper[4840]: I0311 10:42:48.651440 4840 scope.go:117] "RemoveContainer" containerID="80e3b41be5d364a12b8ad0e7c689c6ea251409719cedc79e1a835bacc57e56b0" Mar 11 10:42:48 crc kubenswrapper[4840]: I0311 10:42:48.669206 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nks92"] Mar 11 10:42:48 crc kubenswrapper[4840]: I0311 10:42:48.676782 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-nks92"] Mar 11 10:42:48 crc kubenswrapper[4840]: I0311 10:42:48.684273 4840 scope.go:117] "RemoveContainer" containerID="4f1347ceb70f3b612bbeb2b2f4fc70103a9f709a19156e9ece4975a313c15bf0" Mar 11 10:42:48 crc kubenswrapper[4840]: I0311 10:42:48.711709 4840 scope.go:117] "RemoveContainer" containerID="80e94ff9929e9da5f8907b25f704852122e1ba2ad23e82607e053b099ee7601f" Mar 11 10:42:48 crc kubenswrapper[4840]: E0311 10:42:48.712153 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80e94ff9929e9da5f8907b25f704852122e1ba2ad23e82607e053b099ee7601f\": container with ID starting with 80e94ff9929e9da5f8907b25f704852122e1ba2ad23e82607e053b099ee7601f not found: ID does not exist" containerID="80e94ff9929e9da5f8907b25f704852122e1ba2ad23e82607e053b099ee7601f" Mar 11 10:42:48 crc kubenswrapper[4840]: I0311 10:42:48.712192 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80e94ff9929e9da5f8907b25f704852122e1ba2ad23e82607e053b099ee7601f"} err="failed to get container status \"80e94ff9929e9da5f8907b25f704852122e1ba2ad23e82607e053b099ee7601f\": rpc error: code = NotFound desc = could not find 
container \"80e94ff9929e9da5f8907b25f704852122e1ba2ad23e82607e053b099ee7601f\": container with ID starting with 80e94ff9929e9da5f8907b25f704852122e1ba2ad23e82607e053b099ee7601f not found: ID does not exist" Mar 11 10:42:48 crc kubenswrapper[4840]: I0311 10:42:48.712218 4840 scope.go:117] "RemoveContainer" containerID="80e3b41be5d364a12b8ad0e7c689c6ea251409719cedc79e1a835bacc57e56b0" Mar 11 10:42:48 crc kubenswrapper[4840]: E0311 10:42:48.712616 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80e3b41be5d364a12b8ad0e7c689c6ea251409719cedc79e1a835bacc57e56b0\": container with ID starting with 80e3b41be5d364a12b8ad0e7c689c6ea251409719cedc79e1a835bacc57e56b0 not found: ID does not exist" containerID="80e3b41be5d364a12b8ad0e7c689c6ea251409719cedc79e1a835bacc57e56b0" Mar 11 10:42:48 crc kubenswrapper[4840]: I0311 10:42:48.712651 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80e3b41be5d364a12b8ad0e7c689c6ea251409719cedc79e1a835bacc57e56b0"} err="failed to get container status \"80e3b41be5d364a12b8ad0e7c689c6ea251409719cedc79e1a835bacc57e56b0\": rpc error: code = NotFound desc = could not find container \"80e3b41be5d364a12b8ad0e7c689c6ea251409719cedc79e1a835bacc57e56b0\": container with ID starting with 80e3b41be5d364a12b8ad0e7c689c6ea251409719cedc79e1a835bacc57e56b0 not found: ID does not exist" Mar 11 10:42:48 crc kubenswrapper[4840]: I0311 10:42:48.712672 4840 scope.go:117] "RemoveContainer" containerID="4f1347ceb70f3b612bbeb2b2f4fc70103a9f709a19156e9ece4975a313c15bf0" Mar 11 10:42:48 crc kubenswrapper[4840]: E0311 10:42:48.712904 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f1347ceb70f3b612bbeb2b2f4fc70103a9f709a19156e9ece4975a313c15bf0\": container with ID starting with 4f1347ceb70f3b612bbeb2b2f4fc70103a9f709a19156e9ece4975a313c15bf0 not found: ID does 
not exist" containerID="4f1347ceb70f3b612bbeb2b2f4fc70103a9f709a19156e9ece4975a313c15bf0" Mar 11 10:42:48 crc kubenswrapper[4840]: I0311 10:42:48.712943 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f1347ceb70f3b612bbeb2b2f4fc70103a9f709a19156e9ece4975a313c15bf0"} err="failed to get container status \"4f1347ceb70f3b612bbeb2b2f4fc70103a9f709a19156e9ece4975a313c15bf0\": rpc error: code = NotFound desc = could not find container \"4f1347ceb70f3b612bbeb2b2f4fc70103a9f709a19156e9ece4975a313c15bf0\": container with ID starting with 4f1347ceb70f3b612bbeb2b2f4fc70103a9f709a19156e9ece4975a313c15bf0 not found: ID does not exist" Mar 11 10:42:50 crc kubenswrapper[4840]: I0311 10:42:50.069859 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38811405-674e-4a9e-ba6f-e3e934764449" path="/var/lib/kubelet/pods/38811405-674e-4a9e-ba6f-e3e934764449/volumes" Mar 11 10:42:57 crc kubenswrapper[4840]: I0311 10:42:57.445672 4840 patch_prober.go:28] interesting pod/machine-config-daemon-brtht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 11 10:42:57 crc kubenswrapper[4840]: I0311 10:42:57.446273 4840 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 11 10:43:27 crc kubenswrapper[4840]: I0311 10:43:27.446209 4840 patch_prober.go:28] interesting pod/machine-config-daemon-brtht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Mar 11 10:43:27 crc kubenswrapper[4840]: I0311 10:43:27.447030 4840 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 11 10:43:27 crc kubenswrapper[4840]: I0311 10:43:27.447106 4840 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-brtht" Mar 11 10:43:27 crc kubenswrapper[4840]: I0311 10:43:27.448219 4840 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"180f0de32e524184a80d118f575e43d9d75889633af9d6d017760d5bf414cc50"} pod="openshift-machine-config-operator/machine-config-daemon-brtht" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 11 10:43:27 crc kubenswrapper[4840]: I0311 10:43:27.448325 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" containerName="machine-config-daemon" containerID="cri-o://180f0de32e524184a80d118f575e43d9d75889633af9d6d017760d5bf414cc50" gracePeriod=600 Mar 11 10:43:27 crc kubenswrapper[4840]: E0311 10:43:27.570316 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 10:43:28 crc kubenswrapper[4840]: 
I0311 10:43:28.397007 4840 generic.go:334] "Generic (PLEG): container finished" podID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" containerID="180f0de32e524184a80d118f575e43d9d75889633af9d6d017760d5bf414cc50" exitCode=0 Mar 11 10:43:28 crc kubenswrapper[4840]: I0311 10:43:28.397085 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-brtht" event={"ID":"8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d","Type":"ContainerDied","Data":"180f0de32e524184a80d118f575e43d9d75889633af9d6d017760d5bf414cc50"} Mar 11 10:43:28 crc kubenswrapper[4840]: I0311 10:43:28.397505 4840 scope.go:117] "RemoveContainer" containerID="0b1dc94236fd74074941a3853c81f7449b469e1e2e185c24f80ca9a36ca3b1f5" Mar 11 10:43:28 crc kubenswrapper[4840]: I0311 10:43:28.398196 4840 scope.go:117] "RemoveContainer" containerID="180f0de32e524184a80d118f575e43d9d75889633af9d6d017760d5bf414cc50" Mar 11 10:43:28 crc kubenswrapper[4840]: E0311 10:43:28.398636 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 10:43:42 crc kubenswrapper[4840]: I0311 10:43:42.065233 4840 scope.go:117] "RemoveContainer" containerID="180f0de32e524184a80d118f575e43d9d75889633af9d6d017760d5bf414cc50" Mar 11 10:43:42 crc kubenswrapper[4840]: E0311 10:43:42.067341 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 10:43:57 crc kubenswrapper[4840]: I0311 10:43:57.588352 4840 scope.go:117] "RemoveContainer" containerID="180f0de32e524184a80d118f575e43d9d75889633af9d6d017760d5bf414cc50" Mar 11 10:43:57 crc kubenswrapper[4840]: E0311 10:43:57.590279 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 10:44:00 crc kubenswrapper[4840]: I0311 10:44:00.163119 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29553764-qqvnx"] Mar 11 10:44:00 crc kubenswrapper[4840]: E0311 10:44:00.163845 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38811405-674e-4a9e-ba6f-e3e934764449" containerName="extract-utilities" Mar 11 10:44:00 crc kubenswrapper[4840]: I0311 10:44:00.163861 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="38811405-674e-4a9e-ba6f-e3e934764449" containerName="extract-utilities" Mar 11 10:44:00 crc kubenswrapper[4840]: E0311 10:44:00.163876 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38811405-674e-4a9e-ba6f-e3e934764449" containerName="registry-server" Mar 11 10:44:00 crc kubenswrapper[4840]: I0311 10:44:00.163882 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="38811405-674e-4a9e-ba6f-e3e934764449" containerName="registry-server" Mar 11 10:44:00 crc kubenswrapper[4840]: E0311 10:44:00.163906 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38811405-674e-4a9e-ba6f-e3e934764449" containerName="extract-content" Mar 11 10:44:00 crc 
kubenswrapper[4840]: I0311 10:44:00.163914 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="38811405-674e-4a9e-ba6f-e3e934764449" containerName="extract-content" Mar 11 10:44:00 crc kubenswrapper[4840]: I0311 10:44:00.164066 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="38811405-674e-4a9e-ba6f-e3e934764449" containerName="registry-server" Mar 11 10:44:00 crc kubenswrapper[4840]: I0311 10:44:00.164629 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553764-qqvnx" Mar 11 10:44:00 crc kubenswrapper[4840]: I0311 10:44:00.167636 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 11 10:44:00 crc kubenswrapper[4840]: I0311 10:44:00.167803 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 11 10:44:00 crc kubenswrapper[4840]: I0311 10:44:00.167904 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-q6lwc" Mar 11 10:44:00 crc kubenswrapper[4840]: I0311 10:44:00.172162 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553764-qqvnx"] Mar 11 10:44:00 crc kubenswrapper[4840]: I0311 10:44:00.344847 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qb44\" (UniqueName: \"kubernetes.io/projected/69ab84bf-dcac-4db5-8d05-ad02af254a80-kube-api-access-5qb44\") pod \"auto-csr-approver-29553764-qqvnx\" (UID: \"69ab84bf-dcac-4db5-8d05-ad02af254a80\") " pod="openshift-infra/auto-csr-approver-29553764-qqvnx" Mar 11 10:44:00 crc kubenswrapper[4840]: I0311 10:44:00.446855 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qb44\" (UniqueName: \"kubernetes.io/projected/69ab84bf-dcac-4db5-8d05-ad02af254a80-kube-api-access-5qb44\") pod 
\"auto-csr-approver-29553764-qqvnx\" (UID: \"69ab84bf-dcac-4db5-8d05-ad02af254a80\") " pod="openshift-infra/auto-csr-approver-29553764-qqvnx" Mar 11 10:44:00 crc kubenswrapper[4840]: I0311 10:44:00.470952 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qb44\" (UniqueName: \"kubernetes.io/projected/69ab84bf-dcac-4db5-8d05-ad02af254a80-kube-api-access-5qb44\") pod \"auto-csr-approver-29553764-qqvnx\" (UID: \"69ab84bf-dcac-4db5-8d05-ad02af254a80\") " pod="openshift-infra/auto-csr-approver-29553764-qqvnx" Mar 11 10:44:00 crc kubenswrapper[4840]: I0311 10:44:00.487284 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553764-qqvnx" Mar 11 10:44:00 crc kubenswrapper[4840]: I0311 10:44:00.911213 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553764-qqvnx"] Mar 11 10:44:00 crc kubenswrapper[4840]: W0311 10:44:00.917876 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod69ab84bf_dcac_4db5_8d05_ad02af254a80.slice/crio-182f139420db8006db2dc7f0047aed1e8e1f23c27793e636b743b4ba149c14a4 WatchSource:0}: Error finding container 182f139420db8006db2dc7f0047aed1e8e1f23c27793e636b743b4ba149c14a4: Status 404 returned error can't find the container with id 182f139420db8006db2dc7f0047aed1e8e1f23c27793e636b743b4ba149c14a4 Mar 11 10:44:00 crc kubenswrapper[4840]: I0311 10:44:00.921160 4840 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 11 10:44:01 crc kubenswrapper[4840]: I0311 10:44:01.810589 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553764-qqvnx" event={"ID":"69ab84bf-dcac-4db5-8d05-ad02af254a80","Type":"ContainerStarted","Data":"182f139420db8006db2dc7f0047aed1e8e1f23c27793e636b743b4ba149c14a4"} Mar 11 10:44:04 crc kubenswrapper[4840]: I0311 
10:44:04.839011 4840 generic.go:334] "Generic (PLEG): container finished" podID="69ab84bf-dcac-4db5-8d05-ad02af254a80" containerID="4f2f82fa09199228c4f8f00a2f11696ce09951710a63d8a3411d2d441c697adb" exitCode=0 Mar 11 10:44:04 crc kubenswrapper[4840]: I0311 10:44:04.839078 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553764-qqvnx" event={"ID":"69ab84bf-dcac-4db5-8d05-ad02af254a80","Type":"ContainerDied","Data":"4f2f82fa09199228c4f8f00a2f11696ce09951710a63d8a3411d2d441c697adb"} Mar 11 10:44:06 crc kubenswrapper[4840]: I0311 10:44:06.137517 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553764-qqvnx" Mar 11 10:44:06 crc kubenswrapper[4840]: I0311 10:44:06.233554 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5qb44\" (UniqueName: \"kubernetes.io/projected/69ab84bf-dcac-4db5-8d05-ad02af254a80-kube-api-access-5qb44\") pod \"69ab84bf-dcac-4db5-8d05-ad02af254a80\" (UID: \"69ab84bf-dcac-4db5-8d05-ad02af254a80\") " Mar 11 10:44:06 crc kubenswrapper[4840]: I0311 10:44:06.239931 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69ab84bf-dcac-4db5-8d05-ad02af254a80-kube-api-access-5qb44" (OuterVolumeSpecName: "kube-api-access-5qb44") pod "69ab84bf-dcac-4db5-8d05-ad02af254a80" (UID: "69ab84bf-dcac-4db5-8d05-ad02af254a80"). InnerVolumeSpecName "kube-api-access-5qb44". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:44:06 crc kubenswrapper[4840]: I0311 10:44:06.335594 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5qb44\" (UniqueName: \"kubernetes.io/projected/69ab84bf-dcac-4db5-8d05-ad02af254a80-kube-api-access-5qb44\") on node \"crc\" DevicePath \"\"" Mar 11 10:44:06 crc kubenswrapper[4840]: I0311 10:44:06.871456 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553764-qqvnx" event={"ID":"69ab84bf-dcac-4db5-8d05-ad02af254a80","Type":"ContainerDied","Data":"182f139420db8006db2dc7f0047aed1e8e1f23c27793e636b743b4ba149c14a4"} Mar 11 10:44:06 crc kubenswrapper[4840]: I0311 10:44:06.871742 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="182f139420db8006db2dc7f0047aed1e8e1f23c27793e636b743b4ba149c14a4" Mar 11 10:44:06 crc kubenswrapper[4840]: I0311 10:44:06.871497 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553764-qqvnx" Mar 11 10:44:07 crc kubenswrapper[4840]: I0311 10:44:07.206677 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29553758-mdqls"] Mar 11 10:44:07 crc kubenswrapper[4840]: I0311 10:44:07.213007 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29553758-mdqls"] Mar 11 10:44:08 crc kubenswrapper[4840]: I0311 10:44:08.059922 4840 scope.go:117] "RemoveContainer" containerID="180f0de32e524184a80d118f575e43d9d75889633af9d6d017760d5bf414cc50" Mar 11 10:44:08 crc kubenswrapper[4840]: E0311 10:44:08.060221 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 10:44:08 crc kubenswrapper[4840]: I0311 10:44:08.069623 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7cad627-01a5-42a2-a7d1-87f925677e96" path="/var/lib/kubelet/pods/e7cad627-01a5-42a2-a7d1-87f925677e96/volumes" Mar 11 10:44:14 crc kubenswrapper[4840]: I0311 10:44:14.198226 4840 scope.go:117] "RemoveContainer" containerID="8e3e6e271475ff5d9af70ee9fc19d373686fe89a1a354789882212c1aad15704" Mar 11 10:44:20 crc kubenswrapper[4840]: I0311 10:44:20.061956 4840 scope.go:117] "RemoveContainer" containerID="180f0de32e524184a80d118f575e43d9d75889633af9d6d017760d5bf414cc50" Mar 11 10:44:20 crc kubenswrapper[4840]: E0311 10:44:20.066826 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 10:44:32 crc kubenswrapper[4840]: I0311 10:44:32.065716 4840 scope.go:117] "RemoveContainer" containerID="180f0de32e524184a80d118f575e43d9d75889633af9d6d017760d5bf414cc50" Mar 11 10:44:32 crc kubenswrapper[4840]: E0311 10:44:32.066568 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 10:44:44 crc kubenswrapper[4840]: I0311 10:44:44.060824 4840 scope.go:117] "RemoveContainer" 
containerID="180f0de32e524184a80d118f575e43d9d75889633af9d6d017760d5bf414cc50" Mar 11 10:44:44 crc kubenswrapper[4840]: E0311 10:44:44.061763 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 10:44:59 crc kubenswrapper[4840]: I0311 10:44:59.061522 4840 scope.go:117] "RemoveContainer" containerID="180f0de32e524184a80d118f575e43d9d75889633af9d6d017760d5bf414cc50" Mar 11 10:44:59 crc kubenswrapper[4840]: E0311 10:44:59.064203 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 10:45:00 crc kubenswrapper[4840]: I0311 10:45:00.146320 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29553765-8hmrb"] Mar 11 10:45:00 crc kubenswrapper[4840]: E0311 10:45:00.146979 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69ab84bf-dcac-4db5-8d05-ad02af254a80" containerName="oc" Mar 11 10:45:00 crc kubenswrapper[4840]: I0311 10:45:00.146992 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="69ab84bf-dcac-4db5-8d05-ad02af254a80" containerName="oc" Mar 11 10:45:00 crc kubenswrapper[4840]: I0311 10:45:00.147145 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="69ab84bf-dcac-4db5-8d05-ad02af254a80" containerName="oc" Mar 
11 10:45:00 crc kubenswrapper[4840]: I0311 10:45:00.147801 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29553765-8hmrb" Mar 11 10:45:00 crc kubenswrapper[4840]: I0311 10:45:00.150188 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 11 10:45:00 crc kubenswrapper[4840]: I0311 10:45:00.150535 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 11 10:45:00 crc kubenswrapper[4840]: I0311 10:45:00.160746 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29553765-8hmrb"] Mar 11 10:45:00 crc kubenswrapper[4840]: I0311 10:45:00.288050 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b1c5e2f9-361b-471c-9583-c15b632c808e-secret-volume\") pod \"collect-profiles-29553765-8hmrb\" (UID: \"b1c5e2f9-361b-471c-9583-c15b632c808e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553765-8hmrb" Mar 11 10:45:00 crc kubenswrapper[4840]: I0311 10:45:00.288692 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b1c5e2f9-361b-471c-9583-c15b632c808e-config-volume\") pod \"collect-profiles-29553765-8hmrb\" (UID: \"b1c5e2f9-361b-471c-9583-c15b632c808e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553765-8hmrb" Mar 11 10:45:00 crc kubenswrapper[4840]: I0311 10:45:00.288815 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsjvm\" (UniqueName: \"kubernetes.io/projected/b1c5e2f9-361b-471c-9583-c15b632c808e-kube-api-access-wsjvm\") pod 
\"collect-profiles-29553765-8hmrb\" (UID: \"b1c5e2f9-361b-471c-9583-c15b632c808e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553765-8hmrb" Mar 11 10:45:00 crc kubenswrapper[4840]: I0311 10:45:00.390610 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b1c5e2f9-361b-471c-9583-c15b632c808e-config-volume\") pod \"collect-profiles-29553765-8hmrb\" (UID: \"b1c5e2f9-361b-471c-9583-c15b632c808e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553765-8hmrb" Mar 11 10:45:00 crc kubenswrapper[4840]: I0311 10:45:00.390670 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wsjvm\" (UniqueName: \"kubernetes.io/projected/b1c5e2f9-361b-471c-9583-c15b632c808e-kube-api-access-wsjvm\") pod \"collect-profiles-29553765-8hmrb\" (UID: \"b1c5e2f9-361b-471c-9583-c15b632c808e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553765-8hmrb" Mar 11 10:45:00 crc kubenswrapper[4840]: I0311 10:45:00.390751 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b1c5e2f9-361b-471c-9583-c15b632c808e-secret-volume\") pod \"collect-profiles-29553765-8hmrb\" (UID: \"b1c5e2f9-361b-471c-9583-c15b632c808e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553765-8hmrb" Mar 11 10:45:00 crc kubenswrapper[4840]: I0311 10:45:00.392392 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b1c5e2f9-361b-471c-9583-c15b632c808e-config-volume\") pod \"collect-profiles-29553765-8hmrb\" (UID: \"b1c5e2f9-361b-471c-9583-c15b632c808e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553765-8hmrb" Mar 11 10:45:00 crc kubenswrapper[4840]: I0311 10:45:00.397055 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" 
(UniqueName: \"kubernetes.io/secret/b1c5e2f9-361b-471c-9583-c15b632c808e-secret-volume\") pod \"collect-profiles-29553765-8hmrb\" (UID: \"b1c5e2f9-361b-471c-9583-c15b632c808e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553765-8hmrb" Mar 11 10:45:00 crc kubenswrapper[4840]: I0311 10:45:00.413299 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wsjvm\" (UniqueName: \"kubernetes.io/projected/b1c5e2f9-361b-471c-9583-c15b632c808e-kube-api-access-wsjvm\") pod \"collect-profiles-29553765-8hmrb\" (UID: \"b1c5e2f9-361b-471c-9583-c15b632c808e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29553765-8hmrb" Mar 11 10:45:00 crc kubenswrapper[4840]: I0311 10:45:00.471410 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29553765-8hmrb" Mar 11 10:45:00 crc kubenswrapper[4840]: I0311 10:45:00.897368 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29553765-8hmrb"] Mar 11 10:45:01 crc kubenswrapper[4840]: I0311 10:45:01.222545 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29553765-8hmrb" event={"ID":"b1c5e2f9-361b-471c-9583-c15b632c808e","Type":"ContainerStarted","Data":"a4a386f1330c14ad5cd196b8f8e343df4fd89e653d0ed1c871dfa0c572cef82c"} Mar 11 10:45:01 crc kubenswrapper[4840]: I0311 10:45:01.222899 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29553765-8hmrb" event={"ID":"b1c5e2f9-361b-471c-9583-c15b632c808e","Type":"ContainerStarted","Data":"4894250838bbbf20e55a53f9982be16ffd3216f4a1db75536ab0a4ef9684eeee"} Mar 11 10:45:02 crc kubenswrapper[4840]: I0311 10:45:02.234000 4840 generic.go:334] "Generic (PLEG): container finished" podID="b1c5e2f9-361b-471c-9583-c15b632c808e" 
containerID="a4a386f1330c14ad5cd196b8f8e343df4fd89e653d0ed1c871dfa0c572cef82c" exitCode=0 Mar 11 10:45:02 crc kubenswrapper[4840]: I0311 10:45:02.234134 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29553765-8hmrb" event={"ID":"b1c5e2f9-361b-471c-9583-c15b632c808e","Type":"ContainerDied","Data":"a4a386f1330c14ad5cd196b8f8e343df4fd89e653d0ed1c871dfa0c572cef82c"} Mar 11 10:45:02 crc kubenswrapper[4840]: I0311 10:45:02.500829 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29553765-8hmrb" Mar 11 10:45:02 crc kubenswrapper[4840]: I0311 10:45:02.635356 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b1c5e2f9-361b-471c-9583-c15b632c808e-config-volume\") pod \"b1c5e2f9-361b-471c-9583-c15b632c808e\" (UID: \"b1c5e2f9-361b-471c-9583-c15b632c808e\") " Mar 11 10:45:02 crc kubenswrapper[4840]: I0311 10:45:02.635497 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b1c5e2f9-361b-471c-9583-c15b632c808e-secret-volume\") pod \"b1c5e2f9-361b-471c-9583-c15b632c808e\" (UID: \"b1c5e2f9-361b-471c-9583-c15b632c808e\") " Mar 11 10:45:02 crc kubenswrapper[4840]: I0311 10:45:02.635548 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wsjvm\" (UniqueName: \"kubernetes.io/projected/b1c5e2f9-361b-471c-9583-c15b632c808e-kube-api-access-wsjvm\") pod \"b1c5e2f9-361b-471c-9583-c15b632c808e\" (UID: \"b1c5e2f9-361b-471c-9583-c15b632c808e\") " Mar 11 10:45:02 crc kubenswrapper[4840]: I0311 10:45:02.636422 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1c5e2f9-361b-471c-9583-c15b632c808e-config-volume" (OuterVolumeSpecName: "config-volume") pod 
"b1c5e2f9-361b-471c-9583-c15b632c808e" (UID: "b1c5e2f9-361b-471c-9583-c15b632c808e"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 11 10:45:02 crc kubenswrapper[4840]: I0311 10:45:02.648821 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1c5e2f9-361b-471c-9583-c15b632c808e-kube-api-access-wsjvm" (OuterVolumeSpecName: "kube-api-access-wsjvm") pod "b1c5e2f9-361b-471c-9583-c15b632c808e" (UID: "b1c5e2f9-361b-471c-9583-c15b632c808e"). InnerVolumeSpecName "kube-api-access-wsjvm". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:45:02 crc kubenswrapper[4840]: I0311 10:45:02.649047 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1c5e2f9-361b-471c-9583-c15b632c808e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b1c5e2f9-361b-471c-9583-c15b632c808e" (UID: "b1c5e2f9-361b-471c-9583-c15b632c808e"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 11 10:45:02 crc kubenswrapper[4840]: I0311 10:45:02.737066 4840 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b1c5e2f9-361b-471c-9583-c15b632c808e-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 11 10:45:02 crc kubenswrapper[4840]: I0311 10:45:02.737130 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wsjvm\" (UniqueName: \"kubernetes.io/projected/b1c5e2f9-361b-471c-9583-c15b632c808e-kube-api-access-wsjvm\") on node \"crc\" DevicePath \"\"" Mar 11 10:45:02 crc kubenswrapper[4840]: I0311 10:45:02.737150 4840 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b1c5e2f9-361b-471c-9583-c15b632c808e-config-volume\") on node \"crc\" DevicePath \"\"" Mar 11 10:45:03 crc kubenswrapper[4840]: I0311 10:45:03.246951 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29553765-8hmrb" event={"ID":"b1c5e2f9-361b-471c-9583-c15b632c808e","Type":"ContainerDied","Data":"4894250838bbbf20e55a53f9982be16ffd3216f4a1db75536ab0a4ef9684eeee"} Mar 11 10:45:03 crc kubenswrapper[4840]: I0311 10:45:03.246990 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4894250838bbbf20e55a53f9982be16ffd3216f4a1db75536ab0a4ef9684eeee" Mar 11 10:45:03 crc kubenswrapper[4840]: I0311 10:45:03.247043 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29553765-8hmrb" Mar 11 10:45:03 crc kubenswrapper[4840]: I0311 10:45:03.578384 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29553720-jgjh5"] Mar 11 10:45:03 crc kubenswrapper[4840]: I0311 10:45:03.588382 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29553720-jgjh5"] Mar 11 10:45:04 crc kubenswrapper[4840]: I0311 10:45:04.076707 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e41004e-87d6-4023-9792-55b2930786ba" path="/var/lib/kubelet/pods/2e41004e-87d6-4023-9792-55b2930786ba/volumes" Mar 11 10:45:14 crc kubenswrapper[4840]: I0311 10:45:14.061081 4840 scope.go:117] "RemoveContainer" containerID="180f0de32e524184a80d118f575e43d9d75889633af9d6d017760d5bf414cc50" Mar 11 10:45:14 crc kubenswrapper[4840]: E0311 10:45:14.062359 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 10:45:14 crc kubenswrapper[4840]: I0311 10:45:14.265727 4840 scope.go:117] "RemoveContainer" containerID="73119154aca4d25902e267de2bf6322ac8fb45f466719e06567b17b9524e87e5" Mar 11 10:45:29 crc kubenswrapper[4840]: I0311 10:45:29.061193 4840 scope.go:117] "RemoveContainer" containerID="180f0de32e524184a80d118f575e43d9d75889633af9d6d017760d5bf414cc50" Mar 11 10:45:29 crc kubenswrapper[4840]: E0311 10:45:29.062196 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 10:45:44 crc kubenswrapper[4840]: I0311 10:45:44.060958 4840 scope.go:117] "RemoveContainer" containerID="180f0de32e524184a80d118f575e43d9d75889633af9d6d017760d5bf414cc50" Mar 11 10:45:44 crc kubenswrapper[4840]: E0311 10:45:44.062675 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 10:45:57 crc kubenswrapper[4840]: I0311 10:45:57.060171 4840 scope.go:117] "RemoveContainer" containerID="180f0de32e524184a80d118f575e43d9d75889633af9d6d017760d5bf414cc50" Mar 11 10:45:57 crc kubenswrapper[4840]: E0311 10:45:57.060972 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 10:46:00 crc kubenswrapper[4840]: I0311 10:46:00.141458 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29553766-qkd44"] Mar 11 10:46:00 crc kubenswrapper[4840]: E0311 10:46:00.142216 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1c5e2f9-361b-471c-9583-c15b632c808e" containerName="collect-profiles" Mar 11 
10:46:00 crc kubenswrapper[4840]: I0311 10:46:00.142233 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1c5e2f9-361b-471c-9583-c15b632c808e" containerName="collect-profiles" Mar 11 10:46:00 crc kubenswrapper[4840]: I0311 10:46:00.142472 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1c5e2f9-361b-471c-9583-c15b632c808e" containerName="collect-profiles" Mar 11 10:46:00 crc kubenswrapper[4840]: I0311 10:46:00.143262 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553766-qkd44" Mar 11 10:46:00 crc kubenswrapper[4840]: I0311 10:46:00.145710 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 11 10:46:00 crc kubenswrapper[4840]: I0311 10:46:00.145798 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-q6lwc" Mar 11 10:46:00 crc kubenswrapper[4840]: I0311 10:46:00.146957 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 11 10:46:00 crc kubenswrapper[4840]: I0311 10:46:00.149382 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553766-qkd44"] Mar 11 10:46:00 crc kubenswrapper[4840]: I0311 10:46:00.300411 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bc7sn\" (UniqueName: \"kubernetes.io/projected/6a6422e4-95b1-45e6-b90c-8e72fb37b64f-kube-api-access-bc7sn\") pod \"auto-csr-approver-29553766-qkd44\" (UID: \"6a6422e4-95b1-45e6-b90c-8e72fb37b64f\") " pod="openshift-infra/auto-csr-approver-29553766-qkd44" Mar 11 10:46:00 crc kubenswrapper[4840]: I0311 10:46:00.401667 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bc7sn\" (UniqueName: \"kubernetes.io/projected/6a6422e4-95b1-45e6-b90c-8e72fb37b64f-kube-api-access-bc7sn\") pod 
\"auto-csr-approver-29553766-qkd44\" (UID: \"6a6422e4-95b1-45e6-b90c-8e72fb37b64f\") " pod="openshift-infra/auto-csr-approver-29553766-qkd44" Mar 11 10:46:00 crc kubenswrapper[4840]: I0311 10:46:00.421254 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bc7sn\" (UniqueName: \"kubernetes.io/projected/6a6422e4-95b1-45e6-b90c-8e72fb37b64f-kube-api-access-bc7sn\") pod \"auto-csr-approver-29553766-qkd44\" (UID: \"6a6422e4-95b1-45e6-b90c-8e72fb37b64f\") " pod="openshift-infra/auto-csr-approver-29553766-qkd44" Mar 11 10:46:00 crc kubenswrapper[4840]: I0311 10:46:00.465388 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553766-qkd44" Mar 11 10:46:00 crc kubenswrapper[4840]: I0311 10:46:00.877971 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553766-qkd44"] Mar 11 10:46:00 crc kubenswrapper[4840]: W0311 10:46:00.885958 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6a6422e4_95b1_45e6_b90c_8e72fb37b64f.slice/crio-2ff2b9aea55a13b55cdde2d58cee6e8b57ef5d0926cd96f5e3c773d8fd638984 WatchSource:0}: Error finding container 2ff2b9aea55a13b55cdde2d58cee6e8b57ef5d0926cd96f5e3c773d8fd638984: Status 404 returned error can't find the container with id 2ff2b9aea55a13b55cdde2d58cee6e8b57ef5d0926cd96f5e3c773d8fd638984 Mar 11 10:46:01 crc kubenswrapper[4840]: I0311 10:46:01.756342 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553766-qkd44" event={"ID":"6a6422e4-95b1-45e6-b90c-8e72fb37b64f","Type":"ContainerStarted","Data":"2ff2b9aea55a13b55cdde2d58cee6e8b57ef5d0926cd96f5e3c773d8fd638984"} Mar 11 10:46:02 crc kubenswrapper[4840]: I0311 10:46:02.764680 4840 generic.go:334] "Generic (PLEG): container finished" podID="6a6422e4-95b1-45e6-b90c-8e72fb37b64f" 
containerID="d6f9bc474aaa77fdba07a4734a7cae2521afd4b9d2a0b4a471f48652a79e8643" exitCode=0 Mar 11 10:46:02 crc kubenswrapper[4840]: I0311 10:46:02.764732 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553766-qkd44" event={"ID":"6a6422e4-95b1-45e6-b90c-8e72fb37b64f","Type":"ContainerDied","Data":"d6f9bc474aaa77fdba07a4734a7cae2521afd4b9d2a0b4a471f48652a79e8643"} Mar 11 10:46:04 crc kubenswrapper[4840]: I0311 10:46:04.070289 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553766-qkd44" Mar 11 10:46:04 crc kubenswrapper[4840]: I0311 10:46:04.271649 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bc7sn\" (UniqueName: \"kubernetes.io/projected/6a6422e4-95b1-45e6-b90c-8e72fb37b64f-kube-api-access-bc7sn\") pod \"6a6422e4-95b1-45e6-b90c-8e72fb37b64f\" (UID: \"6a6422e4-95b1-45e6-b90c-8e72fb37b64f\") " Mar 11 10:46:04 crc kubenswrapper[4840]: I0311 10:46:04.277467 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a6422e4-95b1-45e6-b90c-8e72fb37b64f-kube-api-access-bc7sn" (OuterVolumeSpecName: "kube-api-access-bc7sn") pod "6a6422e4-95b1-45e6-b90c-8e72fb37b64f" (UID: "6a6422e4-95b1-45e6-b90c-8e72fb37b64f"). InnerVolumeSpecName "kube-api-access-bc7sn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:46:04 crc kubenswrapper[4840]: I0311 10:46:04.373078 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bc7sn\" (UniqueName: \"kubernetes.io/projected/6a6422e4-95b1-45e6-b90c-8e72fb37b64f-kube-api-access-bc7sn\") on node \"crc\" DevicePath \"\"" Mar 11 10:46:04 crc kubenswrapper[4840]: I0311 10:46:04.780170 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553766-qkd44" event={"ID":"6a6422e4-95b1-45e6-b90c-8e72fb37b64f","Type":"ContainerDied","Data":"2ff2b9aea55a13b55cdde2d58cee6e8b57ef5d0926cd96f5e3c773d8fd638984"} Mar 11 10:46:04 crc kubenswrapper[4840]: I0311 10:46:04.780206 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ff2b9aea55a13b55cdde2d58cee6e8b57ef5d0926cd96f5e3c773d8fd638984" Mar 11 10:46:04 crc kubenswrapper[4840]: I0311 10:46:04.780276 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553766-qkd44" Mar 11 10:46:05 crc kubenswrapper[4840]: I0311 10:46:05.141849 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29553760-l2x2j"] Mar 11 10:46:05 crc kubenswrapper[4840]: I0311 10:46:05.148511 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29553760-l2x2j"] Mar 11 10:46:06 crc kubenswrapper[4840]: I0311 10:46:06.076436 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9fe9239-81ae-4487-a404-52763ee9ba7c" path="/var/lib/kubelet/pods/c9fe9239-81ae-4487-a404-52763ee9ba7c/volumes" Mar 11 10:46:12 crc kubenswrapper[4840]: I0311 10:46:12.060700 4840 scope.go:117] "RemoveContainer" containerID="180f0de32e524184a80d118f575e43d9d75889633af9d6d017760d5bf414cc50" Mar 11 10:46:12 crc kubenswrapper[4840]: E0311 10:46:12.061659 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 10:46:13 crc kubenswrapper[4840]: I0311 10:46:13.415169 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-h772l/must-gather-qtsc5"] Mar 11 10:46:13 crc kubenswrapper[4840]: E0311 10:46:13.415724 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a6422e4-95b1-45e6-b90c-8e72fb37b64f" containerName="oc" Mar 11 10:46:13 crc kubenswrapper[4840]: I0311 10:46:13.415744 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a6422e4-95b1-45e6-b90c-8e72fb37b64f" containerName="oc" Mar 11 10:46:13 crc kubenswrapper[4840]: I0311 10:46:13.415970 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a6422e4-95b1-45e6-b90c-8e72fb37b64f" containerName="oc" Mar 11 10:46:13 crc kubenswrapper[4840]: I0311 10:46:13.417072 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-h772l/must-gather-qtsc5" Mar 11 10:46:13 crc kubenswrapper[4840]: I0311 10:46:13.418974 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-h772l"/"default-dockercfg-nkrmh" Mar 11 10:46:13 crc kubenswrapper[4840]: I0311 10:46:13.419092 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-h772l"/"kube-root-ca.crt" Mar 11 10:46:13 crc kubenswrapper[4840]: I0311 10:46:13.419808 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-h772l"/"openshift-service-ca.crt" Mar 11 10:46:13 crc kubenswrapper[4840]: I0311 10:46:13.422680 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-h772l/must-gather-qtsc5"] Mar 11 10:46:13 crc kubenswrapper[4840]: I0311 10:46:13.473210 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/8873911f-dd11-461e-a6c5-a612cd94f39b-must-gather-output\") pod \"must-gather-qtsc5\" (UID: \"8873911f-dd11-461e-a6c5-a612cd94f39b\") " pod="openshift-must-gather-h772l/must-gather-qtsc5" Mar 11 10:46:13 crc kubenswrapper[4840]: I0311 10:46:13.473377 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8sml\" (UniqueName: \"kubernetes.io/projected/8873911f-dd11-461e-a6c5-a612cd94f39b-kube-api-access-r8sml\") pod \"must-gather-qtsc5\" (UID: \"8873911f-dd11-461e-a6c5-a612cd94f39b\") " pod="openshift-must-gather-h772l/must-gather-qtsc5" Mar 11 10:46:13 crc kubenswrapper[4840]: I0311 10:46:13.574982 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/8873911f-dd11-461e-a6c5-a612cd94f39b-must-gather-output\") pod \"must-gather-qtsc5\" (UID: \"8873911f-dd11-461e-a6c5-a612cd94f39b\") " 
pod="openshift-must-gather-h772l/must-gather-qtsc5" Mar 11 10:46:13 crc kubenswrapper[4840]: I0311 10:46:13.575306 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8sml\" (UniqueName: \"kubernetes.io/projected/8873911f-dd11-461e-a6c5-a612cd94f39b-kube-api-access-r8sml\") pod \"must-gather-qtsc5\" (UID: \"8873911f-dd11-461e-a6c5-a612cd94f39b\") " pod="openshift-must-gather-h772l/must-gather-qtsc5" Mar 11 10:46:13 crc kubenswrapper[4840]: I0311 10:46:13.575414 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/8873911f-dd11-461e-a6c5-a612cd94f39b-must-gather-output\") pod \"must-gather-qtsc5\" (UID: \"8873911f-dd11-461e-a6c5-a612cd94f39b\") " pod="openshift-must-gather-h772l/must-gather-qtsc5" Mar 11 10:46:13 crc kubenswrapper[4840]: I0311 10:46:13.599295 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8sml\" (UniqueName: \"kubernetes.io/projected/8873911f-dd11-461e-a6c5-a612cd94f39b-kube-api-access-r8sml\") pod \"must-gather-qtsc5\" (UID: \"8873911f-dd11-461e-a6c5-a612cd94f39b\") " pod="openshift-must-gather-h772l/must-gather-qtsc5" Mar 11 10:46:13 crc kubenswrapper[4840]: I0311 10:46:13.786665 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-h772l/must-gather-qtsc5" Mar 11 10:46:14 crc kubenswrapper[4840]: I0311 10:46:14.250946 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-h772l/must-gather-qtsc5"] Mar 11 10:46:14 crc kubenswrapper[4840]: I0311 10:46:14.343235 4840 scope.go:117] "RemoveContainer" containerID="77f12dd759df4a83cd7b5f851b51e942e2379b4d50c3bbae53496f58b08e73ad" Mar 11 10:46:14 crc kubenswrapper[4840]: I0311 10:46:14.896745 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-h772l/must-gather-qtsc5" event={"ID":"8873911f-dd11-461e-a6c5-a612cd94f39b","Type":"ContainerStarted","Data":"a50be3ded15214966e66d93a0cf690c05a7f7e247d014caf438a426ad9639c4e"} Mar 11 10:46:22 crc kubenswrapper[4840]: I0311 10:46:22.964678 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-h772l/must-gather-qtsc5" event={"ID":"8873911f-dd11-461e-a6c5-a612cd94f39b","Type":"ContainerStarted","Data":"76aaaa6c4b47e015508b26f519ba636cc6dfe6b5000cb4623c0e9949ad6bfb31"} Mar 11 10:46:22 crc kubenswrapper[4840]: I0311 10:46:22.965242 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-h772l/must-gather-qtsc5" event={"ID":"8873911f-dd11-461e-a6c5-a612cd94f39b","Type":"ContainerStarted","Data":"e2ad27deeb07e13af6854a4f583bccabc64422d067f5d46a46fa17b0b0af3d72"} Mar 11 10:46:22 crc kubenswrapper[4840]: I0311 10:46:22.981805 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-h772l/must-gather-qtsc5" podStartSLOduration=1.9304246489999999 podStartE2EDuration="9.981780626s" podCreationTimestamp="2026-03-11 10:46:13 +0000 UTC" firstStartedPulling="2026-03-11 10:46:14.260647943 +0000 UTC m=+6572.926317758" lastFinishedPulling="2026-03-11 10:46:22.31200392 +0000 UTC m=+6580.977673735" observedRunningTime="2026-03-11 10:46:22.97915338 +0000 UTC m=+6581.644823195" watchObservedRunningTime="2026-03-11 
10:46:22.981780626 +0000 UTC m=+6581.647450451" Mar 11 10:46:25 crc kubenswrapper[4840]: I0311 10:46:25.062678 4840 scope.go:117] "RemoveContainer" containerID="180f0de32e524184a80d118f575e43d9d75889633af9d6d017760d5bf414cc50" Mar 11 10:46:25 crc kubenswrapper[4840]: E0311 10:46:25.063647 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 10:46:25 crc kubenswrapper[4840]: I0311 10:46:25.174746 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-h772l/crc-debug-bzpgv"] Mar 11 10:46:25 crc kubenswrapper[4840]: I0311 10:46:25.175703 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-h772l/crc-debug-bzpgv" Mar 11 10:46:25 crc kubenswrapper[4840]: I0311 10:46:25.214625 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/744062ca-32ad-40b2-897c-83702c0dfadc-host\") pod \"crc-debug-bzpgv\" (UID: \"744062ca-32ad-40b2-897c-83702c0dfadc\") " pod="openshift-must-gather-h772l/crc-debug-bzpgv" Mar 11 10:46:25 crc kubenswrapper[4840]: I0311 10:46:25.214678 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbxzm\" (UniqueName: \"kubernetes.io/projected/744062ca-32ad-40b2-897c-83702c0dfadc-kube-api-access-rbxzm\") pod \"crc-debug-bzpgv\" (UID: \"744062ca-32ad-40b2-897c-83702c0dfadc\") " pod="openshift-must-gather-h772l/crc-debug-bzpgv" Mar 11 10:46:25 crc kubenswrapper[4840]: I0311 10:46:25.316229 4840 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/744062ca-32ad-40b2-897c-83702c0dfadc-host\") pod \"crc-debug-bzpgv\" (UID: \"744062ca-32ad-40b2-897c-83702c0dfadc\") " pod="openshift-must-gather-h772l/crc-debug-bzpgv" Mar 11 10:46:25 crc kubenswrapper[4840]: I0311 10:46:25.316295 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbxzm\" (UniqueName: \"kubernetes.io/projected/744062ca-32ad-40b2-897c-83702c0dfadc-kube-api-access-rbxzm\") pod \"crc-debug-bzpgv\" (UID: \"744062ca-32ad-40b2-897c-83702c0dfadc\") " pod="openshift-must-gather-h772l/crc-debug-bzpgv" Mar 11 10:46:25 crc kubenswrapper[4840]: I0311 10:46:25.316386 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/744062ca-32ad-40b2-897c-83702c0dfadc-host\") pod \"crc-debug-bzpgv\" (UID: \"744062ca-32ad-40b2-897c-83702c0dfadc\") " pod="openshift-must-gather-h772l/crc-debug-bzpgv" Mar 11 10:46:25 crc kubenswrapper[4840]: I0311 10:46:25.341211 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbxzm\" (UniqueName: \"kubernetes.io/projected/744062ca-32ad-40b2-897c-83702c0dfadc-kube-api-access-rbxzm\") pod \"crc-debug-bzpgv\" (UID: \"744062ca-32ad-40b2-897c-83702c0dfadc\") " pod="openshift-must-gather-h772l/crc-debug-bzpgv" Mar 11 10:46:25 crc kubenswrapper[4840]: I0311 10:46:25.495219 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-h772l/crc-debug-bzpgv" Mar 11 10:46:25 crc kubenswrapper[4840]: I0311 10:46:25.990434 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-h772l/crc-debug-bzpgv" event={"ID":"744062ca-32ad-40b2-897c-83702c0dfadc","Type":"ContainerStarted","Data":"c13cd502fd2eeb875903c7ac3b8e13319f7115c85790c13fa2a47be9df9a609d"} Mar 11 10:46:37 crc kubenswrapper[4840]: I0311 10:46:37.081089 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-h772l/crc-debug-bzpgv" event={"ID":"744062ca-32ad-40b2-897c-83702c0dfadc","Type":"ContainerStarted","Data":"5a7cf04e27cd4399d49d363fd1c27111895b080f969d3386cddef2e0f1965b30"} Mar 11 10:46:37 crc kubenswrapper[4840]: I0311 10:46:37.101780 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-h772l/crc-debug-bzpgv" podStartSLOduration=1.26734353 podStartE2EDuration="12.101757725s" podCreationTimestamp="2026-03-11 10:46:25 +0000 UTC" firstStartedPulling="2026-03-11 10:46:25.528655249 +0000 UTC m=+6584.194325064" lastFinishedPulling="2026-03-11 10:46:36.363069444 +0000 UTC m=+6595.028739259" observedRunningTime="2026-03-11 10:46:37.095287281 +0000 UTC m=+6595.760957106" watchObservedRunningTime="2026-03-11 10:46:37.101757725 +0000 UTC m=+6595.767427540" Mar 11 10:46:39 crc kubenswrapper[4840]: I0311 10:46:39.059690 4840 scope.go:117] "RemoveContainer" containerID="180f0de32e524184a80d118f575e43d9d75889633af9d6d017760d5bf414cc50" Mar 11 10:46:39 crc kubenswrapper[4840]: E0311 10:46:39.060285 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" 
podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 10:46:54 crc kubenswrapper[4840]: I0311 10:46:54.060188 4840 scope.go:117] "RemoveContainer" containerID="180f0de32e524184a80d118f575e43d9d75889633af9d6d017760d5bf414cc50" Mar 11 10:46:54 crc kubenswrapper[4840]: E0311 10:46:54.060970 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 10:46:55 crc kubenswrapper[4840]: I0311 10:46:55.212633 4840 generic.go:334] "Generic (PLEG): container finished" podID="744062ca-32ad-40b2-897c-83702c0dfadc" containerID="5a7cf04e27cd4399d49d363fd1c27111895b080f969d3386cddef2e0f1965b30" exitCode=0 Mar 11 10:46:55 crc kubenswrapper[4840]: I0311 10:46:55.212993 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-h772l/crc-debug-bzpgv" event={"ID":"744062ca-32ad-40b2-897c-83702c0dfadc","Type":"ContainerDied","Data":"5a7cf04e27cd4399d49d363fd1c27111895b080f969d3386cddef2e0f1965b30"} Mar 11 10:46:56 crc kubenswrapper[4840]: I0311 10:46:56.309748 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-h772l/crc-debug-bzpgv" Mar 11 10:46:56 crc kubenswrapper[4840]: I0311 10:46:56.349984 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-h772l/crc-debug-bzpgv"] Mar 11 10:46:56 crc kubenswrapper[4840]: I0311 10:46:56.355329 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-h772l/crc-debug-bzpgv"] Mar 11 10:46:56 crc kubenswrapper[4840]: I0311 10:46:56.423338 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/744062ca-32ad-40b2-897c-83702c0dfadc-host\") pod \"744062ca-32ad-40b2-897c-83702c0dfadc\" (UID: \"744062ca-32ad-40b2-897c-83702c0dfadc\") " Mar 11 10:46:56 crc kubenswrapper[4840]: I0311 10:46:56.423392 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rbxzm\" (UniqueName: \"kubernetes.io/projected/744062ca-32ad-40b2-897c-83702c0dfadc-kube-api-access-rbxzm\") pod \"744062ca-32ad-40b2-897c-83702c0dfadc\" (UID: \"744062ca-32ad-40b2-897c-83702c0dfadc\") " Mar 11 10:46:56 crc kubenswrapper[4840]: I0311 10:46:56.424401 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/744062ca-32ad-40b2-897c-83702c0dfadc-host" (OuterVolumeSpecName: "host") pod "744062ca-32ad-40b2-897c-83702c0dfadc" (UID: "744062ca-32ad-40b2-897c-83702c0dfadc"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 11 10:46:56 crc kubenswrapper[4840]: I0311 10:46:56.466876 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/744062ca-32ad-40b2-897c-83702c0dfadc-kube-api-access-rbxzm" (OuterVolumeSpecName: "kube-api-access-rbxzm") pod "744062ca-32ad-40b2-897c-83702c0dfadc" (UID: "744062ca-32ad-40b2-897c-83702c0dfadc"). InnerVolumeSpecName "kube-api-access-rbxzm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:46:56 crc kubenswrapper[4840]: I0311 10:46:56.525240 4840 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/744062ca-32ad-40b2-897c-83702c0dfadc-host\") on node \"crc\" DevicePath \"\"" Mar 11 10:46:56 crc kubenswrapper[4840]: I0311 10:46:56.525282 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rbxzm\" (UniqueName: \"kubernetes.io/projected/744062ca-32ad-40b2-897c-83702c0dfadc-kube-api-access-rbxzm\") on node \"crc\" DevicePath \"\"" Mar 11 10:46:57 crc kubenswrapper[4840]: I0311 10:46:57.228565 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c13cd502fd2eeb875903c7ac3b8e13319f7115c85790c13fa2a47be9df9a609d" Mar 11 10:46:57 crc kubenswrapper[4840]: I0311 10:46:57.228632 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-h772l/crc-debug-bzpgv" Mar 11 10:46:57 crc kubenswrapper[4840]: I0311 10:46:57.575662 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-h772l/crc-debug-vnzwl"] Mar 11 10:46:57 crc kubenswrapper[4840]: E0311 10:46:57.576061 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="744062ca-32ad-40b2-897c-83702c0dfadc" containerName="container-00" Mar 11 10:46:57 crc kubenswrapper[4840]: I0311 10:46:57.576076 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="744062ca-32ad-40b2-897c-83702c0dfadc" containerName="container-00" Mar 11 10:46:57 crc kubenswrapper[4840]: I0311 10:46:57.576318 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="744062ca-32ad-40b2-897c-83702c0dfadc" containerName="container-00" Mar 11 10:46:57 crc kubenswrapper[4840]: I0311 10:46:57.577036 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-h772l/crc-debug-vnzwl" Mar 11 10:46:57 crc kubenswrapper[4840]: I0311 10:46:57.743962 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnnhz\" (UniqueName: \"kubernetes.io/projected/770db204-4909-4c76-aa66-dde6e3be6d14-kube-api-access-nnnhz\") pod \"crc-debug-vnzwl\" (UID: \"770db204-4909-4c76-aa66-dde6e3be6d14\") " pod="openshift-must-gather-h772l/crc-debug-vnzwl" Mar 11 10:46:57 crc kubenswrapper[4840]: I0311 10:46:57.744255 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/770db204-4909-4c76-aa66-dde6e3be6d14-host\") pod \"crc-debug-vnzwl\" (UID: \"770db204-4909-4c76-aa66-dde6e3be6d14\") " pod="openshift-must-gather-h772l/crc-debug-vnzwl" Mar 11 10:46:57 crc kubenswrapper[4840]: I0311 10:46:57.845384 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/770db204-4909-4c76-aa66-dde6e3be6d14-host\") pod \"crc-debug-vnzwl\" (UID: \"770db204-4909-4c76-aa66-dde6e3be6d14\") " pod="openshift-must-gather-h772l/crc-debug-vnzwl" Mar 11 10:46:57 crc kubenswrapper[4840]: I0311 10:46:57.845464 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nnnhz\" (UniqueName: \"kubernetes.io/projected/770db204-4909-4c76-aa66-dde6e3be6d14-kube-api-access-nnnhz\") pod \"crc-debug-vnzwl\" (UID: \"770db204-4909-4c76-aa66-dde6e3be6d14\") " pod="openshift-must-gather-h772l/crc-debug-vnzwl" Mar 11 10:46:57 crc kubenswrapper[4840]: I0311 10:46:57.845563 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/770db204-4909-4c76-aa66-dde6e3be6d14-host\") pod \"crc-debug-vnzwl\" (UID: \"770db204-4909-4c76-aa66-dde6e3be6d14\") " pod="openshift-must-gather-h772l/crc-debug-vnzwl" Mar 11 10:46:57 crc 
kubenswrapper[4840]: I0311 10:46:57.862403 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nnnhz\" (UniqueName: \"kubernetes.io/projected/770db204-4909-4c76-aa66-dde6e3be6d14-kube-api-access-nnnhz\") pod \"crc-debug-vnzwl\" (UID: \"770db204-4909-4c76-aa66-dde6e3be6d14\") " pod="openshift-must-gather-h772l/crc-debug-vnzwl" Mar 11 10:46:57 crc kubenswrapper[4840]: I0311 10:46:57.891469 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-h772l/crc-debug-vnzwl" Mar 11 10:46:58 crc kubenswrapper[4840]: I0311 10:46:58.070050 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="744062ca-32ad-40b2-897c-83702c0dfadc" path="/var/lib/kubelet/pods/744062ca-32ad-40b2-897c-83702c0dfadc/volumes" Mar 11 10:46:58 crc kubenswrapper[4840]: I0311 10:46:58.236405 4840 generic.go:334] "Generic (PLEG): container finished" podID="770db204-4909-4c76-aa66-dde6e3be6d14" containerID="63f5b7c391d50ae87f5b2f7cbcd7ae7b3d7b5f5a08a1a993c908a557bf8904cf" exitCode=1 Mar 11 10:46:58 crc kubenswrapper[4840]: I0311 10:46:58.236448 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-h772l/crc-debug-vnzwl" event={"ID":"770db204-4909-4c76-aa66-dde6e3be6d14","Type":"ContainerDied","Data":"63f5b7c391d50ae87f5b2f7cbcd7ae7b3d7b5f5a08a1a993c908a557bf8904cf"} Mar 11 10:46:58 crc kubenswrapper[4840]: I0311 10:46:58.236496 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-h772l/crc-debug-vnzwl" event={"ID":"770db204-4909-4c76-aa66-dde6e3be6d14","Type":"ContainerStarted","Data":"2fb751f77465881c84a4cd1ac17bb6b633fc20f77473e2e45bf49ee85b0a24ba"} Mar 11 10:46:58 crc kubenswrapper[4840]: I0311 10:46:58.271395 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-h772l/crc-debug-vnzwl"] Mar 11 10:46:58 crc kubenswrapper[4840]: I0311 10:46:58.279792 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-must-gather-h772l/crc-debug-vnzwl"] Mar 11 10:46:59 crc kubenswrapper[4840]: I0311 10:46:59.364623 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-h772l/crc-debug-vnzwl" Mar 11 10:46:59 crc kubenswrapper[4840]: I0311 10:46:59.469970 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/770db204-4909-4c76-aa66-dde6e3be6d14-host\") pod \"770db204-4909-4c76-aa66-dde6e3be6d14\" (UID: \"770db204-4909-4c76-aa66-dde6e3be6d14\") " Mar 11 10:46:59 crc kubenswrapper[4840]: I0311 10:46:59.470121 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/770db204-4909-4c76-aa66-dde6e3be6d14-host" (OuterVolumeSpecName: "host") pod "770db204-4909-4c76-aa66-dde6e3be6d14" (UID: "770db204-4909-4c76-aa66-dde6e3be6d14"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 11 10:46:59 crc kubenswrapper[4840]: I0311 10:46:59.470169 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nnnhz\" (UniqueName: \"kubernetes.io/projected/770db204-4909-4c76-aa66-dde6e3be6d14-kube-api-access-nnnhz\") pod \"770db204-4909-4c76-aa66-dde6e3be6d14\" (UID: \"770db204-4909-4c76-aa66-dde6e3be6d14\") " Mar 11 10:46:59 crc kubenswrapper[4840]: I0311 10:46:59.470847 4840 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/770db204-4909-4c76-aa66-dde6e3be6d14-host\") on node \"crc\" DevicePath \"\"" Mar 11 10:46:59 crc kubenswrapper[4840]: I0311 10:46:59.476615 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/770db204-4909-4c76-aa66-dde6e3be6d14-kube-api-access-nnnhz" (OuterVolumeSpecName: "kube-api-access-nnnhz") pod "770db204-4909-4c76-aa66-dde6e3be6d14" (UID: "770db204-4909-4c76-aa66-dde6e3be6d14"). 
InnerVolumeSpecName "kube-api-access-nnnhz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:46:59 crc kubenswrapper[4840]: I0311 10:46:59.572603 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nnnhz\" (UniqueName: \"kubernetes.io/projected/770db204-4909-4c76-aa66-dde6e3be6d14-kube-api-access-nnnhz\") on node \"crc\" DevicePath \"\"" Mar 11 10:47:00 crc kubenswrapper[4840]: I0311 10:47:00.070271 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="770db204-4909-4c76-aa66-dde6e3be6d14" path="/var/lib/kubelet/pods/770db204-4909-4c76-aa66-dde6e3be6d14/volumes" Mar 11 10:47:00 crc kubenswrapper[4840]: I0311 10:47:00.252527 4840 scope.go:117] "RemoveContainer" containerID="63f5b7c391d50ae87f5b2f7cbcd7ae7b3d7b5f5a08a1a993c908a557bf8904cf" Mar 11 10:47:00 crc kubenswrapper[4840]: I0311 10:47:00.252575 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-h772l/crc-debug-vnzwl" Mar 11 10:47:08 crc kubenswrapper[4840]: I0311 10:47:08.061126 4840 scope.go:117] "RemoveContainer" containerID="180f0de32e524184a80d118f575e43d9d75889633af9d6d017760d5bf414cc50" Mar 11 10:47:08 crc kubenswrapper[4840]: E0311 10:47:08.062312 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 10:47:16 crc kubenswrapper[4840]: I0311 10:47:16.275129 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-cbc5d9c5f-w6q8t_0ba44e89-3431-47ff-a0e0-b8daf1cf7ce1/init/0.log" Mar 11 10:47:16 crc kubenswrapper[4840]: I0311 10:47:16.491902 4840 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_dnsmasq-dns-cbc5d9c5f-w6q8t_0ba44e89-3431-47ff-a0e0-b8daf1cf7ce1/init/0.log" Mar 11 10:47:16 crc kubenswrapper[4840]: I0311 10:47:16.534058 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-cbc5d9c5f-w6q8t_0ba44e89-3431-47ff-a0e0-b8daf1cf7ce1/dnsmasq-dns/0.log" Mar 11 10:47:16 crc kubenswrapper[4840]: I0311 10:47:16.725608 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-76548565b-rrmpc_8225c1e8-6b6f-4d6b-a744-29b0fdc802df/keystone-api/0.log" Mar 11 10:47:16 crc kubenswrapper[4840]: I0311 10:47:16.825412 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-copy-data_455e7fd5-57d3-417d-888b-f5f16756b1a9/adoption/0.log" Mar 11 10:47:17 crc kubenswrapper[4840]: I0311 10:47:17.025859 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_6a2dcec5-bf72-4b97-b2cf-0af92dae1cbe/mysql-bootstrap/0.log" Mar 11 10:47:17 crc kubenswrapper[4840]: I0311 10:47:17.561645 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_6a2dcec5-bf72-4b97-b2cf-0af92dae1cbe/mysql-bootstrap/0.log" Mar 11 10:47:17 crc kubenswrapper[4840]: I0311 10:47:17.645530 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_6a2dcec5-bf72-4b97-b2cf-0af92dae1cbe/galera/0.log" Mar 11 10:47:17 crc kubenswrapper[4840]: I0311 10:47:17.819102 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_2c0400ce-cf58-461a-946f-8f341c40fcce/mysql-bootstrap/0.log" Mar 11 10:47:18 crc kubenswrapper[4840]: I0311 10:47:18.031762 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_2c0400ce-cf58-461a-946f-8f341c40fcce/mysql-bootstrap/0.log" Mar 11 10:47:18 crc kubenswrapper[4840]: I0311 10:47:18.107179 4840 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstack-galera-0_2c0400ce-cf58-461a-946f-8f341c40fcce/galera/0.log" Mar 11 10:47:18 crc kubenswrapper[4840]: I0311 10:47:18.215253 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_9cd09283-efe1-471c-9723-3e2114c910f7/memcached/0.log" Mar 11 10:47:18 crc kubenswrapper[4840]: I0311 10:47:18.224302 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_366976d6-0ec3-444e-864b-0e721ec24799/openstackclient/0.log" Mar 11 10:47:18 crc kubenswrapper[4840]: I0311 10:47:18.311264 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-copy-data_e8c95ae1-037a-453e-8910-774fc6c665cb/adoption/0.log" Mar 11 10:47:18 crc kubenswrapper[4840]: I0311 10:47:18.461015 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_a2d77cda-f495-462c-941d-90edf6abb3ca/openstack-network-exporter/0.log" Mar 11 10:47:18 crc kubenswrapper[4840]: I0311 10:47:18.481042 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_a2d77cda-f495-462c-941d-90edf6abb3ca/ovn-northd/0.log" Mar 11 10:47:18 crc kubenswrapper[4840]: I0311 10:47:18.665677 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_efbbc77c-f1d4-4756-9e19-6f53b03c275b/ovsdbserver-nb/0.log" Mar 11 10:47:18 crc kubenswrapper[4840]: I0311 10:47:18.675833 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_efbbc77c-f1d4-4756-9e19-6f53b03c275b/openstack-network-exporter/0.log" Mar 11 10:47:18 crc kubenswrapper[4840]: I0311 10:47:18.840395 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-1_80b7e9d7-53ff-47ca-8b87-6d03cb1abe2a/openstack-network-exporter/0.log" Mar 11 10:47:18 crc kubenswrapper[4840]: I0311 10:47:18.880730 4840 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovsdbserver-nb-1_80b7e9d7-53ff-47ca-8b87-6d03cb1abe2a/ovsdbserver-nb/0.log" Mar 11 10:47:18 crc kubenswrapper[4840]: I0311 10:47:18.969369 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-2_000957b4-d3c7-4076-a0d3-21a679bfe061/openstack-network-exporter/0.log" Mar 11 10:47:19 crc kubenswrapper[4840]: I0311 10:47:19.034924 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-2_000957b4-d3c7-4076-a0d3-21a679bfe061/ovsdbserver-nb/0.log" Mar 11 10:47:19 crc kubenswrapper[4840]: I0311 10:47:19.081294 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_69010a76-692d-46c4-bf9b-db918d487b4c/openstack-network-exporter/0.log" Mar 11 10:47:19 crc kubenswrapper[4840]: I0311 10:47:19.154549 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_69010a76-692d-46c4-bf9b-db918d487b4c/ovsdbserver-sb/0.log" Mar 11 10:47:19 crc kubenswrapper[4840]: I0311 10:47:19.260705 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-1_c7ad5868-055d-4e77-bc60-a7a2bab42b89/openstack-network-exporter/0.log" Mar 11 10:47:19 crc kubenswrapper[4840]: I0311 10:47:19.283836 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-1_c7ad5868-055d-4e77-bc60-a7a2bab42b89/ovsdbserver-sb/0.log" Mar 11 10:47:19 crc kubenswrapper[4840]: I0311 10:47:19.426684 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-2_86a8de5d-3566-4a15-a21e-59379813ed0d/openstack-network-exporter/0.log" Mar 11 10:47:19 crc kubenswrapper[4840]: I0311 10:47:19.494821 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-2_86a8de5d-3566-4a15-a21e-59379813ed0d/ovsdbserver-sb/0.log" Mar 11 10:47:19 crc kubenswrapper[4840]: I0311 10:47:19.581114 4840 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_rabbitmq-cell1-server-0_71c42351-b9e3-43cb-b97a-0c28c37b5416/setup-container/0.log" Mar 11 10:47:19 crc kubenswrapper[4840]: I0311 10:47:19.902017 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_71c42351-b9e3-43cb-b97a-0c28c37b5416/rabbitmq/0.log" Mar 11 10:47:19 crc kubenswrapper[4840]: I0311 10:47:19.906347 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_71c42351-b9e3-43cb-b97a-0c28c37b5416/setup-container/0.log" Mar 11 10:47:19 crc kubenswrapper[4840]: I0311 10:47:19.959376 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_0e29f879-7226-4e32-bdac-aa71b0755af5/setup-container/0.log" Mar 11 10:47:20 crc kubenswrapper[4840]: I0311 10:47:20.125568 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_0e29f879-7226-4e32-bdac-aa71b0755af5/setup-container/0.log" Mar 11 10:47:20 crc kubenswrapper[4840]: I0311 10:47:20.131904 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_0e29f879-7226-4e32-bdac-aa71b0755af5/rabbitmq/0.log" Mar 11 10:47:21 crc kubenswrapper[4840]: I0311 10:47:21.060490 4840 scope.go:117] "RemoveContainer" containerID="180f0de32e524184a80d118f575e43d9d75889633af9d6d017760d5bf414cc50" Mar 11 10:47:21 crc kubenswrapper[4840]: E0311 10:47:21.060704 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 10:47:33 crc kubenswrapper[4840]: I0311 10:47:33.060394 4840 scope.go:117] "RemoveContainer" 
containerID="180f0de32e524184a80d118f575e43d9d75889633af9d6d017760d5bf414cc50" Mar 11 10:47:33 crc kubenswrapper[4840]: E0311 10:47:33.061118 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 10:47:34 crc kubenswrapper[4840]: I0311 10:47:34.829717 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_951b5e6f67e727348308cf03f2b463d3e2bd27386b453f6699e03a49ba5nhq6_76328eae-3dfe-4246-8633-2b53684e8312/util/0.log" Mar 11 10:47:35 crc kubenswrapper[4840]: I0311 10:47:35.014477 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_951b5e6f67e727348308cf03f2b463d3e2bd27386b453f6699e03a49ba5nhq6_76328eae-3dfe-4246-8633-2b53684e8312/util/0.log" Mar 11 10:47:35 crc kubenswrapper[4840]: I0311 10:47:35.044253 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_951b5e6f67e727348308cf03f2b463d3e2bd27386b453f6699e03a49ba5nhq6_76328eae-3dfe-4246-8633-2b53684e8312/pull/0.log" Mar 11 10:47:35 crc kubenswrapper[4840]: I0311 10:47:35.045082 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_951b5e6f67e727348308cf03f2b463d3e2bd27386b453f6699e03a49ba5nhq6_76328eae-3dfe-4246-8633-2b53684e8312/pull/0.log" Mar 11 10:47:35 crc kubenswrapper[4840]: I0311 10:47:35.205522 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_951b5e6f67e727348308cf03f2b463d3e2bd27386b453f6699e03a49ba5nhq6_76328eae-3dfe-4246-8633-2b53684e8312/extract/0.log" Mar 11 10:47:35 crc kubenswrapper[4840]: I0311 10:47:35.215280 4840 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_951b5e6f67e727348308cf03f2b463d3e2bd27386b453f6699e03a49ba5nhq6_76328eae-3dfe-4246-8633-2b53684e8312/util/0.log" Mar 11 10:47:35 crc kubenswrapper[4840]: I0311 10:47:35.216324 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_951b5e6f67e727348308cf03f2b463d3e2bd27386b453f6699e03a49ba5nhq6_76328eae-3dfe-4246-8633-2b53684e8312/pull/0.log" Mar 11 10:47:35 crc kubenswrapper[4840]: I0311 10:47:35.652385 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-66d56f6ff4-tjvvr_9ba35a68-fbec-4de0-a84a-8f879b9906e5/manager/0.log" Mar 11 10:47:35 crc kubenswrapper[4840]: I0311 10:47:35.968018 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-5964f64c48-qbmr9_7f7b0431-153a-48e0-8523-1db25d309919/manager/0.log" Mar 11 10:47:36 crc kubenswrapper[4840]: I0311 10:47:36.051813 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-77b6666d85-6gn84_32013686-938e-476d-b215-0bb597f780da/manager/0.log" Mar 11 10:47:36 crc kubenswrapper[4840]: I0311 10:47:36.292659 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-6d9d6b584d-k7b9q_93e6c54e-ac8e-4cec-a872-6e5204f0afdb/manager/0.log" Mar 11 10:47:36 crc kubenswrapper[4840]: I0311 10:47:36.856998 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6bbb499bbc-9dffx_b3f01dca-eeb5-40bf-bddb-2fe256ee64f8/manager/0.log" Mar 11 10:47:37 crc kubenswrapper[4840]: I0311 10:47:37.109556 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-5995f4446f-xldht_ed2a6fa8-2915-4d07-b54d-b274a742c5a7/manager/0.log" Mar 11 10:47:37 crc kubenswrapper[4840]: I0311 10:47:37.339376 4840 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-684f77d66d-bzvzx_086eae6a-0cbe-4a9a-884b-272239b8d302/manager/0.log" Mar 11 10:47:37 crc kubenswrapper[4840]: I0311 10:47:37.567066 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-68f45f9d9f-pqcxj_6d12563d-1416-4fd9-b38d-40bdadc53b40/manager/0.log" Mar 11 10:47:37 crc kubenswrapper[4840]: I0311 10:47:37.864828 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-658d4cdd5-gg7rz_d5f70a0c-43ba-4cb0-b66b-a24b3e861b56/manager/0.log" Mar 11 10:47:38 crc kubenswrapper[4840]: I0311 10:47:38.096988 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-776c5696bf-pgbkn_9170b899-2f0e-498c-893d-fd8b64eb96c6/manager/0.log" Mar 11 10:47:38 crc kubenswrapper[4840]: I0311 10:47:38.187962 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-984cd4dcf-mngxm_2332f92a-b46b-4f63-83f9-f48ea29492b9/manager/0.log" Mar 11 10:47:38 crc kubenswrapper[4840]: I0311 10:47:38.387420 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-569cc54c5-q7hvp_b14fee95-3d63-402a-ae0a-d3f74415f59b/manager/0.log" Mar 11 10:47:38 crc kubenswrapper[4840]: I0311 10:47:38.397607 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-5f4f55cb5c-wbkbm_4ed5f2e7-087f-466d-85b3-9088ab43b410/manager/0.log" Mar 11 10:47:38 crc kubenswrapper[4840]: I0311 10:47:38.658046 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6647d7885fb7jnm_97bcd81b-f45e-4a98-9079-000fdf4cc50f/manager/0.log" Mar 11 10:47:38 crc kubenswrapper[4840]: I0311 
10:47:38.863435 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-6cf8df7788-wznnz_a4fa1ab4-5b35-460c-a350-ba40ed046fe5/operator/0.log" Mar 11 10:47:39 crc kubenswrapper[4840]: I0311 10:47:39.254011 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-f5l8j_35846f03-87ef-4a54-9386-2080ff604a86/registry-server/0.log" Mar 11 10:47:39 crc kubenswrapper[4840]: I0311 10:47:39.312039 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-bbc5b68f9-stv5z_506cf57c-03be-4949-9037-2e806f8b3896/manager/0.log" Mar 11 10:47:39 crc kubenswrapper[4840]: I0311 10:47:39.449812 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-574d45c66c-5dxn9_d58a257e-f4f2-48cd-8c89-e0034e37092c/manager/0.log" Mar 11 10:47:39 crc kubenswrapper[4840]: I0311 10:47:39.537682 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-2jbdn_3d457e03-0abd-42cf-83ed-b3e6113781ac/operator/0.log" Mar 11 10:47:39 crc kubenswrapper[4840]: I0311 10:47:39.693415 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-677c674df7-6jvcx_fd5bb41b-837d-473d-9718-56f2247fadcb/manager/0.log" Mar 11 10:47:39 crc kubenswrapper[4840]: I0311 10:47:39.934398 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-6cd66dbd4b-j6nng_1fb68385-1f54-4612-8bfc-a4bb2e535600/manager/0.log" Mar 11 10:47:39 crc kubenswrapper[4840]: I0311 10:47:39.987780 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5c5cb9c4d7-vxcc4_ec409cfe-4999-48d4-93f0-cbf22595667e/manager/0.log" Mar 11 10:47:40 crc kubenswrapper[4840]: I0311 
10:47:40.128305 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-6dd88c6f67-dsd85_03e30bf6-186b-4ec3-965b-c24f4e8af21b/manager/0.log" Mar 11 10:47:40 crc kubenswrapper[4840]: I0311 10:47:40.238507 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-6679ddfdc7-h8cln_a121ac36-d9c4-4837-b075-57588b36c8ec/manager/0.log" Mar 11 10:47:47 crc kubenswrapper[4840]: I0311 10:47:47.012330 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-677bd678f7-kkplv_91aae815-00f1-46d8-8709-f212ab049fdf/manager/0.log" Mar 11 10:47:48 crc kubenswrapper[4840]: I0311 10:47:48.060884 4840 scope.go:117] "RemoveContainer" containerID="180f0de32e524184a80d118f575e43d9d75889633af9d6d017760d5bf414cc50" Mar 11 10:47:48 crc kubenswrapper[4840]: E0311 10:47:48.061254 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 10:48:00 crc kubenswrapper[4840]: I0311 10:48:00.143681 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29553768-pwf4b"] Mar 11 10:48:00 crc kubenswrapper[4840]: E0311 10:48:00.144894 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="770db204-4909-4c76-aa66-dde6e3be6d14" containerName="container-00" Mar 11 10:48:00 crc kubenswrapper[4840]: I0311 10:48:00.144911 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="770db204-4909-4c76-aa66-dde6e3be6d14" containerName="container-00" Mar 11 10:48:00 crc kubenswrapper[4840]: I0311 
10:48:00.145102 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="770db204-4909-4c76-aa66-dde6e3be6d14" containerName="container-00" Mar 11 10:48:00 crc kubenswrapper[4840]: I0311 10:48:00.146333 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553768-pwf4b" Mar 11 10:48:00 crc kubenswrapper[4840]: I0311 10:48:00.148858 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-q6lwc" Mar 11 10:48:00 crc kubenswrapper[4840]: I0311 10:48:00.152409 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553768-pwf4b"] Mar 11 10:48:00 crc kubenswrapper[4840]: I0311 10:48:00.153366 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 11 10:48:00 crc kubenswrapper[4840]: I0311 10:48:00.155174 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 11 10:48:00 crc kubenswrapper[4840]: I0311 10:48:00.239619 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8jrw\" (UniqueName: \"kubernetes.io/projected/6d246b21-4e35-4b47-9a2b-fd99865ae488-kube-api-access-x8jrw\") pod \"auto-csr-approver-29553768-pwf4b\" (UID: \"6d246b21-4e35-4b47-9a2b-fd99865ae488\") " pod="openshift-infra/auto-csr-approver-29553768-pwf4b" Mar 11 10:48:00 crc kubenswrapper[4840]: I0311 10:48:00.340710 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8jrw\" (UniqueName: \"kubernetes.io/projected/6d246b21-4e35-4b47-9a2b-fd99865ae488-kube-api-access-x8jrw\") pod \"auto-csr-approver-29553768-pwf4b\" (UID: \"6d246b21-4e35-4b47-9a2b-fd99865ae488\") " pod="openshift-infra/auto-csr-approver-29553768-pwf4b" Mar 11 10:48:00 crc kubenswrapper[4840]: I0311 10:48:00.358267 4840 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-x8jrw\" (UniqueName: \"kubernetes.io/projected/6d246b21-4e35-4b47-9a2b-fd99865ae488-kube-api-access-x8jrw\") pod \"auto-csr-approver-29553768-pwf4b\" (UID: \"6d246b21-4e35-4b47-9a2b-fd99865ae488\") " pod="openshift-infra/auto-csr-approver-29553768-pwf4b" Mar 11 10:48:00 crc kubenswrapper[4840]: I0311 10:48:00.472668 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553768-pwf4b" Mar 11 10:48:00 crc kubenswrapper[4840]: I0311 10:48:00.918239 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553768-pwf4b"] Mar 11 10:48:01 crc kubenswrapper[4840]: I0311 10:48:01.789281 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553768-pwf4b" event={"ID":"6d246b21-4e35-4b47-9a2b-fd99865ae488","Type":"ContainerStarted","Data":"3e2f36d7dad633d12ba5867ed3f8106ae61e771cb332cad535d6c0bd60b29599"} Mar 11 10:48:02 crc kubenswrapper[4840]: I0311 10:48:02.306781 4840 scope.go:117] "RemoveContainer" containerID="180f0de32e524184a80d118f575e43d9d75889633af9d6d017760d5bf414cc50" Mar 11 10:48:02 crc kubenswrapper[4840]: E0311 10:48:02.307243 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 10:48:02 crc kubenswrapper[4840]: I0311 10:48:02.797889 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553768-pwf4b" 
event={"ID":"6d246b21-4e35-4b47-9a2b-fd99865ae488","Type":"ContainerStarted","Data":"1d8215a0aa4e1fc5342fcd98d97e0831e8160e9aa4f716202750ede45ddac94c"} Mar 11 10:48:02 crc kubenswrapper[4840]: I0311 10:48:02.812412 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29553768-pwf4b" podStartSLOduration=1.23461737 podStartE2EDuration="2.812393472s" podCreationTimestamp="2026-03-11 10:48:00 +0000 UTC" firstStartedPulling="2026-03-11 10:48:00.928402472 +0000 UTC m=+6679.594072287" lastFinishedPulling="2026-03-11 10:48:02.506178534 +0000 UTC m=+6681.171848389" observedRunningTime="2026-03-11 10:48:02.808181695 +0000 UTC m=+6681.473851530" watchObservedRunningTime="2026-03-11 10:48:02.812393472 +0000 UTC m=+6681.478063287" Mar 11 10:48:03 crc kubenswrapper[4840]: I0311 10:48:03.810593 4840 generic.go:334] "Generic (PLEG): container finished" podID="6d246b21-4e35-4b47-9a2b-fd99865ae488" containerID="1d8215a0aa4e1fc5342fcd98d97e0831e8160e9aa4f716202750ede45ddac94c" exitCode=0 Mar 11 10:48:03 crc kubenswrapper[4840]: I0311 10:48:03.810645 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553768-pwf4b" event={"ID":"6d246b21-4e35-4b47-9a2b-fd99865ae488","Type":"ContainerDied","Data":"1d8215a0aa4e1fc5342fcd98d97e0831e8160e9aa4f716202750ede45ddac94c"} Mar 11 10:48:04 crc kubenswrapper[4840]: I0311 10:48:04.935310 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-79btk_26477d16-ea37-4c74-b5d0-b537147ab754/control-plane-machine-set-operator/0.log" Mar 11 10:48:05 crc kubenswrapper[4840]: I0311 10:48:05.077426 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-hgxgb_b94670ce-123d-4562-b9ae-7a7fe898bff7/kube-rbac-proxy/0.log" Mar 11 10:48:05 crc kubenswrapper[4840]: I0311 10:48:05.146905 4840 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553768-pwf4b" Mar 11 10:48:05 crc kubenswrapper[4840]: I0311 10:48:05.200007 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-hgxgb_b94670ce-123d-4562-b9ae-7a7fe898bff7/machine-api-operator/0.log" Mar 11 10:48:05 crc kubenswrapper[4840]: I0311 10:48:05.222211 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x8jrw\" (UniqueName: \"kubernetes.io/projected/6d246b21-4e35-4b47-9a2b-fd99865ae488-kube-api-access-x8jrw\") pod \"6d246b21-4e35-4b47-9a2b-fd99865ae488\" (UID: \"6d246b21-4e35-4b47-9a2b-fd99865ae488\") " Mar 11 10:48:05 crc kubenswrapper[4840]: I0311 10:48:05.227831 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d246b21-4e35-4b47-9a2b-fd99865ae488-kube-api-access-x8jrw" (OuterVolumeSpecName: "kube-api-access-x8jrw") pod "6d246b21-4e35-4b47-9a2b-fd99865ae488" (UID: "6d246b21-4e35-4b47-9a2b-fd99865ae488"). InnerVolumeSpecName "kube-api-access-x8jrw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:48:05 crc kubenswrapper[4840]: I0311 10:48:05.324257 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x8jrw\" (UniqueName: \"kubernetes.io/projected/6d246b21-4e35-4b47-9a2b-fd99865ae488-kube-api-access-x8jrw\") on node \"crc\" DevicePath \"\"" Mar 11 10:48:05 crc kubenswrapper[4840]: I0311 10:48:05.395832 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29553762-tx9wr"] Mar 11 10:48:05 crc kubenswrapper[4840]: I0311 10:48:05.403184 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29553762-tx9wr"] Mar 11 10:48:05 crc kubenswrapper[4840]: I0311 10:48:05.825770 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553768-pwf4b" event={"ID":"6d246b21-4e35-4b47-9a2b-fd99865ae488","Type":"ContainerDied","Data":"3e2f36d7dad633d12ba5867ed3f8106ae61e771cb332cad535d6c0bd60b29599"} Mar 11 10:48:05 crc kubenswrapper[4840]: I0311 10:48:05.826169 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e2f36d7dad633d12ba5867ed3f8106ae61e771cb332cad535d6c0bd60b29599" Mar 11 10:48:05 crc kubenswrapper[4840]: I0311 10:48:05.826240 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29553768-pwf4b" Mar 11 10:48:06 crc kubenswrapper[4840]: I0311 10:48:06.072654 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d1f94e0-0ee6-41ec-8031-c2346d1d694d" path="/var/lib/kubelet/pods/4d1f94e0-0ee6-41ec-8031-c2346d1d694d/volumes" Mar 11 10:48:14 crc kubenswrapper[4840]: I0311 10:48:14.060340 4840 scope.go:117] "RemoveContainer" containerID="180f0de32e524184a80d118f575e43d9d75889633af9d6d017760d5bf414cc50" Mar 11 10:48:14 crc kubenswrapper[4840]: E0311 10:48:14.061197 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 10:48:14 crc kubenswrapper[4840]: I0311 10:48:14.464828 4840 scope.go:117] "RemoveContainer" containerID="938fc6a4cceb5d3c87b6aec79702708a208582e0c8a080508f6d7aed9e88b2ad" Mar 11 10:48:16 crc kubenswrapper[4840]: I0311 10:48:16.069298 4840 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-2kbzn container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.217.0.34:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 11 10:48:16 crc kubenswrapper[4840]: I0311 10:48:16.069584 4840 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-2kbzn" podUID="e3d07b65-1690-44b1-a232-5a9d4187e89d" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.34:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while 
awaiting headers)" Mar 11 10:48:16 crc kubenswrapper[4840]: I0311 10:48:16.121903 4840 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-vxcc4" podUID="ec409cfe-4999-48d4-93f0-cbf22595667e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.95:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 11 10:48:16 crc kubenswrapper[4840]: I0311 10:48:16.122025 4840 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-vxcc4" podUID="ec409cfe-4999-48d4-93f0-cbf22595667e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.95:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 11 10:48:20 crc kubenswrapper[4840]: I0311 10:48:20.021903 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-545d4d4674-mm7xm_e30eac2b-bf79-4cae-8551-d114587a58bc/cert-manager-controller/0.log" Mar 11 10:48:20 crc kubenswrapper[4840]: I0311 10:48:20.153788 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-5545bd876-6gktm_494dd1d0-27c8-4680-9012-2e598d171b99/cert-manager-cainjector/0.log" Mar 11 10:48:20 crc kubenswrapper[4840]: I0311 10:48:20.239967 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-6888856db4-hmd2q_7d51cb99-73ad-47da-88f0-906a4fb160c4/cert-manager-webhook/0.log" Mar 11 10:48:27 crc kubenswrapper[4840]: I0311 10:48:27.060040 4840 scope.go:117] "RemoveContainer" containerID="180f0de32e524184a80d118f575e43d9d75889633af9d6d017760d5bf414cc50" Mar 11 10:48:27 crc kubenswrapper[4840]: E0311 10:48:27.060821 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-brtht_openshift-machine-config-operator(8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d)\"" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" Mar 11 10:48:33 crc kubenswrapper[4840]: I0311 10:48:33.225845 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-86f58fcf4-9m2wq_52d4c8cc-931e-4994-a9b3-b8b93bc67084/nmstate-console-plugin/0.log" Mar 11 10:48:33 crc kubenswrapper[4840]: I0311 10:48:33.378708 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-9b8c8685d-wb5bs_f9d3793f-35dd-4f5f-8d7c-d10c5cede8c6/kube-rbac-proxy/0.log" Mar 11 10:48:33 crc kubenswrapper[4840]: I0311 10:48:33.440809 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-9cwqf_bdc2a628-53c9-40c3-b438-9e8e059a93bc/nmstate-handler/0.log" Mar 11 10:48:33 crc kubenswrapper[4840]: I0311 10:48:33.454939 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-9b8c8685d-wb5bs_f9d3793f-35dd-4f5f-8d7c-d10c5cede8c6/nmstate-metrics/0.log" Mar 11 10:48:33 crc kubenswrapper[4840]: I0311 10:48:33.581383 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-796d4cfff4-qzjs5_e5d81b10-00d4-4dde-81cd-8adcdb671e0f/nmstate-operator/0.log" Mar 11 10:48:33 crc kubenswrapper[4840]: I0311 10:48:33.654786 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-5f558f5558-gjg8z_efd9bdd7-8e04-433a-83d9-bdfcb094d74b/nmstate-webhook/0.log" Mar 11 10:48:41 crc kubenswrapper[4840]: I0311 10:48:41.060899 4840 scope.go:117] "RemoveContainer" containerID="180f0de32e524184a80d118f575e43d9d75889633af9d6d017760d5bf414cc50" Mar 11 10:48:42 crc kubenswrapper[4840]: I0311 10:48:42.331011 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-brtht" event={"ID":"8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d","Type":"ContainerStarted","Data":"282b582e15e705641c51e98813ed5073fe8b121ae821dee37bebd3923d2fd376"} Mar 11 10:49:00 crc kubenswrapper[4840]: I0311 10:49:00.964062 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-7bb4cc7c98-6fr82_8a61c288-882f-44a9-a206-2cefcfa66c5c/kube-rbac-proxy/0.log" Mar 11 10:49:01 crc kubenswrapper[4840]: I0311 10:49:01.171527 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-bcc4b6f68-qjvc2_8389fd54-9d72-452c-b3e3-df6a7d0808ab/frr-k8s-webhook-server/0.log" Mar 11 10:49:01 crc kubenswrapper[4840]: I0311 10:49:01.409679 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xp9gd_76a3df80-6e5e-4618-8dc4-2b697c4f73b8/cp-frr-files/0.log" Mar 11 10:49:01 crc kubenswrapper[4840]: I0311 10:49:01.461005 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-7bb4cc7c98-6fr82_8a61c288-882f-44a9-a206-2cefcfa66c5c/controller/0.log" Mar 11 10:49:01 crc kubenswrapper[4840]: I0311 10:49:01.596238 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xp9gd_76a3df80-6e5e-4618-8dc4-2b697c4f73b8/cp-metrics/0.log" Mar 11 10:49:01 crc kubenswrapper[4840]: I0311 10:49:01.605071 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xp9gd_76a3df80-6e5e-4618-8dc4-2b697c4f73b8/cp-frr-files/0.log" Mar 11 10:49:01 crc kubenswrapper[4840]: I0311 10:49:01.627760 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xp9gd_76a3df80-6e5e-4618-8dc4-2b697c4f73b8/cp-reloader/0.log" Mar 11 10:49:01 crc kubenswrapper[4840]: I0311 10:49:01.686262 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xp9gd_76a3df80-6e5e-4618-8dc4-2b697c4f73b8/cp-reloader/0.log" Mar 11 10:49:01 
crc kubenswrapper[4840]: I0311 10:49:01.837576 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xp9gd_76a3df80-6e5e-4618-8dc4-2b697c4f73b8/cp-reloader/0.log" Mar 11 10:49:01 crc kubenswrapper[4840]: I0311 10:49:01.861859 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xp9gd_76a3df80-6e5e-4618-8dc4-2b697c4f73b8/cp-frr-files/0.log" Mar 11 10:49:01 crc kubenswrapper[4840]: I0311 10:49:01.872417 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xp9gd_76a3df80-6e5e-4618-8dc4-2b697c4f73b8/cp-metrics/0.log" Mar 11 10:49:01 crc kubenswrapper[4840]: I0311 10:49:01.918613 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xp9gd_76a3df80-6e5e-4618-8dc4-2b697c4f73b8/cp-metrics/0.log" Mar 11 10:49:02 crc kubenswrapper[4840]: I0311 10:49:02.337069 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xp9gd_76a3df80-6e5e-4618-8dc4-2b697c4f73b8/cp-reloader/0.log" Mar 11 10:49:02 crc kubenswrapper[4840]: I0311 10:49:02.340710 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xp9gd_76a3df80-6e5e-4618-8dc4-2b697c4f73b8/cp-metrics/0.log" Mar 11 10:49:02 crc kubenswrapper[4840]: I0311 10:49:02.379114 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xp9gd_76a3df80-6e5e-4618-8dc4-2b697c4f73b8/cp-frr-files/0.log" Mar 11 10:49:02 crc kubenswrapper[4840]: I0311 10:49:02.420875 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xp9gd_76a3df80-6e5e-4618-8dc4-2b697c4f73b8/controller/0.log" Mar 11 10:49:02 crc kubenswrapper[4840]: I0311 10:49:02.578043 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xp9gd_76a3df80-6e5e-4618-8dc4-2b697c4f73b8/kube-rbac-proxy/0.log" Mar 11 10:49:02 crc kubenswrapper[4840]: I0311 10:49:02.608435 4840 log.go:25] "Finished parsing log 
file" path="/var/log/pods/metallb-system_frr-k8s-xp9gd_76a3df80-6e5e-4618-8dc4-2b697c4f73b8/frr-metrics/0.log" Mar 11 10:49:02 crc kubenswrapper[4840]: I0311 10:49:02.628658 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xp9gd_76a3df80-6e5e-4618-8dc4-2b697c4f73b8/kube-rbac-proxy-frr/0.log" Mar 11 10:49:02 crc kubenswrapper[4840]: I0311 10:49:02.841586 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xp9gd_76a3df80-6e5e-4618-8dc4-2b697c4f73b8/reloader/0.log" Mar 11 10:49:02 crc kubenswrapper[4840]: I0311 10:49:02.875679 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6c8c488f96-z6m7f_6fe83f11-7be3-4047-a551-4b1eb34a4345/manager/0.log" Mar 11 10:49:03 crc kubenswrapper[4840]: I0311 10:49:03.079414 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-dd6c58799-hd789_dbd13783-14a3-4a72-aff6-0367320d9baf/webhook-server/0.log" Mar 11 10:49:03 crc kubenswrapper[4840]: I0311 10:49:03.287928 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-sn7q8_6955be99-a9ae-4c67-a9c8-4e7fe6f909de/kube-rbac-proxy/0.log" Mar 11 10:49:04 crc kubenswrapper[4840]: I0311 10:49:04.005240 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-sn7q8_6955be99-a9ae-4c67-a9c8-4e7fe6f909de/speaker/0.log" Mar 11 10:49:05 crc kubenswrapper[4840]: I0311 10:49:05.003814 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-xp9gd_76a3df80-6e5e-4618-8dc4-2b697c4f73b8/frr/0.log" Mar 11 10:49:16 crc kubenswrapper[4840]: I0311 10:49:16.097436 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8748n48z_2fb836d4-fee3-4607-b48b-c2ea1d889ec5/util/0.log" Mar 11 10:49:16 crc kubenswrapper[4840]: I0311 10:49:16.224457 4840 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8748n48z_2fb836d4-fee3-4607-b48b-c2ea1d889ec5/pull/0.log" Mar 11 10:49:16 crc kubenswrapper[4840]: I0311 10:49:16.238389 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8748n48z_2fb836d4-fee3-4607-b48b-c2ea1d889ec5/util/0.log" Mar 11 10:49:16 crc kubenswrapper[4840]: I0311 10:49:16.263127 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8748n48z_2fb836d4-fee3-4607-b48b-c2ea1d889ec5/pull/0.log" Mar 11 10:49:16 crc kubenswrapper[4840]: I0311 10:49:16.440342 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8748n48z_2fb836d4-fee3-4607-b48b-c2ea1d889ec5/extract/0.log" Mar 11 10:49:16 crc kubenswrapper[4840]: I0311 10:49:16.446677 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8748n48z_2fb836d4-fee3-4607-b48b-c2ea1d889ec5/pull/0.log" Mar 11 10:49:16 crc kubenswrapper[4840]: I0311 10:49:16.487478 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8748n48z_2fb836d4-fee3-4607-b48b-c2ea1d889ec5/util/0.log" Mar 11 10:49:16 crc kubenswrapper[4840]: I0311 10:49:16.606712 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1d2bvf_31a664e3-80ea-4e78-a87f-3257129bc45a/util/0.log" Mar 11 10:49:16 crc kubenswrapper[4840]: I0311 10:49:16.788867 4840 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1d2bvf_31a664e3-80ea-4e78-a87f-3257129bc45a/pull/0.log" Mar 11 10:49:16 crc kubenswrapper[4840]: I0311 10:49:16.798566 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1d2bvf_31a664e3-80ea-4e78-a87f-3257129bc45a/util/0.log" Mar 11 10:49:16 crc kubenswrapper[4840]: I0311 10:49:16.826744 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1d2bvf_31a664e3-80ea-4e78-a87f-3257129bc45a/pull/0.log" Mar 11 10:49:16 crc kubenswrapper[4840]: I0311 10:49:16.970353 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1d2bvf_31a664e3-80ea-4e78-a87f-3257129bc45a/pull/0.log" Mar 11 10:49:17 crc kubenswrapper[4840]: I0311 10:49:17.005357 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1d2bvf_31a664e3-80ea-4e78-a87f-3257129bc45a/extract/0.log" Mar 11 10:49:17 crc kubenswrapper[4840]: I0311 10:49:17.024403 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1d2bvf_31a664e3-80ea-4e78-a87f-3257129bc45a/util/0.log" Mar 11 10:49:17 crc kubenswrapper[4840]: I0311 10:49:17.142572 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5v5ks9_fdc33192-4b69-4f48-95bd-8b1aa1cc7135/util/0.log" Mar 11 10:49:17 crc kubenswrapper[4840]: I0311 10:49:17.372556 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5v5ks9_fdc33192-4b69-4f48-95bd-8b1aa1cc7135/util/0.log" Mar 11 
10:49:17 crc kubenswrapper[4840]: I0311 10:49:17.420630 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5v5ks9_fdc33192-4b69-4f48-95bd-8b1aa1cc7135/pull/0.log" Mar 11 10:49:17 crc kubenswrapper[4840]: I0311 10:49:17.426082 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5v5ks9_fdc33192-4b69-4f48-95bd-8b1aa1cc7135/pull/0.log" Mar 11 10:49:17 crc kubenswrapper[4840]: I0311 10:49:17.567611 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5v5ks9_fdc33192-4b69-4f48-95bd-8b1aa1cc7135/pull/0.log" Mar 11 10:49:17 crc kubenswrapper[4840]: I0311 10:49:17.599049 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5v5ks9_fdc33192-4b69-4f48-95bd-8b1aa1cc7135/util/0.log" Mar 11 10:49:17 crc kubenswrapper[4840]: I0311 10:49:17.683350 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5v5ks9_fdc33192-4b69-4f48-95bd-8b1aa1cc7135/extract/0.log" Mar 11 10:49:17 crc kubenswrapper[4840]: I0311 10:49:17.741335 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-r6lcr_3c1769da-3e60-4c96-9f09-485ba5ab47ba/extract-utilities/0.log" Mar 11 10:49:17 crc kubenswrapper[4840]: I0311 10:49:17.896750 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-r6lcr_3c1769da-3e60-4c96-9f09-485ba5ab47ba/extract-utilities/0.log" Mar 11 10:49:17 crc kubenswrapper[4840]: I0311 10:49:17.908770 4840 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-r6lcr_3c1769da-3e60-4c96-9f09-485ba5ab47ba/extract-content/0.log" Mar 11 10:49:17 crc kubenswrapper[4840]: I0311 10:49:17.936814 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-r6lcr_3c1769da-3e60-4c96-9f09-485ba5ab47ba/extract-content/0.log" Mar 11 10:49:18 crc kubenswrapper[4840]: I0311 10:49:18.085984 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-r6lcr_3c1769da-3e60-4c96-9f09-485ba5ab47ba/extract-content/0.log" Mar 11 10:49:18 crc kubenswrapper[4840]: I0311 10:49:18.089038 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-r6lcr_3c1769da-3e60-4c96-9f09-485ba5ab47ba/extract-utilities/0.log" Mar 11 10:49:18 crc kubenswrapper[4840]: I0311 10:49:18.340436 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-ftglb_0011bbf4-2baf-40fd-a220-4ee6f6b7fea0/extract-utilities/0.log" Mar 11 10:49:18 crc kubenswrapper[4840]: I0311 10:49:18.534862 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-ftglb_0011bbf4-2baf-40fd-a220-4ee6f6b7fea0/extract-utilities/0.log" Mar 11 10:49:18 crc kubenswrapper[4840]: I0311 10:49:18.540174 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-ftglb_0011bbf4-2baf-40fd-a220-4ee6f6b7fea0/extract-content/0.log" Mar 11 10:49:18 crc kubenswrapper[4840]: I0311 10:49:18.605523 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-ftglb_0011bbf4-2baf-40fd-a220-4ee6f6b7fea0/extract-content/0.log" Mar 11 10:49:18 crc kubenswrapper[4840]: I0311 10:49:18.645643 4840 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-r6lcr_3c1769da-3e60-4c96-9f09-485ba5ab47ba/registry-server/0.log" Mar 11 10:49:18 crc kubenswrapper[4840]: I0311 10:49:18.736949 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-ftglb_0011bbf4-2baf-40fd-a220-4ee6f6b7fea0/extract-content/0.log" Mar 11 10:49:18 crc kubenswrapper[4840]: I0311 10:49:18.751599 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-ftglb_0011bbf4-2baf-40fd-a220-4ee6f6b7fea0/extract-utilities/0.log" Mar 11 10:49:18 crc kubenswrapper[4840]: I0311 10:49:18.941941 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-2xfdw_73479d9a-07ac-4487-b779-a59d095c8704/marketplace-operator/0.log" Mar 11 10:49:19 crc kubenswrapper[4840]: I0311 10:49:19.162516 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-45qqq_1eafe068-3690-459c-aa70-f9f494a2ca5c/extract-utilities/0.log" Mar 11 10:49:19 crc kubenswrapper[4840]: I0311 10:49:19.306647 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-45qqq_1eafe068-3690-459c-aa70-f9f494a2ca5c/extract-content/0.log" Mar 11 10:49:19 crc kubenswrapper[4840]: I0311 10:49:19.345503 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-45qqq_1eafe068-3690-459c-aa70-f9f494a2ca5c/extract-content/0.log" Mar 11 10:49:19 crc kubenswrapper[4840]: I0311 10:49:19.394875 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-45qqq_1eafe068-3690-459c-aa70-f9f494a2ca5c/extract-utilities/0.log" Mar 11 10:49:19 crc kubenswrapper[4840]: I0311 10:49:19.645748 4840 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-45qqq_1eafe068-3690-459c-aa70-f9f494a2ca5c/extract-utilities/0.log" Mar 11 10:49:19 crc kubenswrapper[4840]: I0311 10:49:19.666055 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-45qqq_1eafe068-3690-459c-aa70-f9f494a2ca5c/extract-content/0.log" Mar 11 10:49:19 crc kubenswrapper[4840]: I0311 10:49:19.956297 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-b8d9p_dc824dfc-233b-4613-a7ed-4cb6371a1404/extract-utilities/0.log" Mar 11 10:49:20 crc kubenswrapper[4840]: I0311 10:49:20.042086 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-45qqq_1eafe068-3690-459c-aa70-f9f494a2ca5c/registry-server/0.log" Mar 11 10:49:20 crc kubenswrapper[4840]: I0311 10:49:20.057487 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-b8d9p_dc824dfc-233b-4613-a7ed-4cb6371a1404/extract-utilities/0.log" Mar 11 10:49:20 crc kubenswrapper[4840]: I0311 10:49:20.076181 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-ftglb_0011bbf4-2baf-40fd-a220-4ee6f6b7fea0/registry-server/0.log" Mar 11 10:49:20 crc kubenswrapper[4840]: I0311 10:49:20.142022 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-b8d9p_dc824dfc-233b-4613-a7ed-4cb6371a1404/extract-content/0.log" Mar 11 10:49:20 crc kubenswrapper[4840]: I0311 10:49:20.177878 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-b8d9p_dc824dfc-233b-4613-a7ed-4cb6371a1404/extract-content/0.log" Mar 11 10:49:20 crc kubenswrapper[4840]: I0311 10:49:20.306613 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-b8d9p_dc824dfc-233b-4613-a7ed-4cb6371a1404/extract-utilities/0.log" 
Mar 11 10:49:20 crc kubenswrapper[4840]: I0311 10:49:20.312704 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-b8d9p_dc824dfc-233b-4613-a7ed-4cb6371a1404/extract-content/0.log" Mar 11 10:49:20 crc kubenswrapper[4840]: I0311 10:49:20.967083 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-b8d9p_dc824dfc-233b-4613-a7ed-4cb6371a1404/registry-server/0.log" Mar 11 10:49:32 crc kubenswrapper[4840]: I0311 10:49:32.359673 4840 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-nh5h5 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 11 10:49:32 crc kubenswrapper[4840]: I0311 10:49:32.370593 4840 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-nh5h5" podUID="b1d0c791-1d0c-4e11-91ce-bb352ce3fce1" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 11 10:49:33 crc kubenswrapper[4840]: I0311 10:49:33.788709 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-59m49"] Mar 11 10:49:33 crc kubenswrapper[4840]: E0311 10:49:33.789339 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d246b21-4e35-4b47-9a2b-fd99865ae488" containerName="oc" Mar 11 10:49:33 crc kubenswrapper[4840]: I0311 10:49:33.789352 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d246b21-4e35-4b47-9a2b-fd99865ae488" containerName="oc" Mar 11 10:49:33 crc kubenswrapper[4840]: I0311 10:49:33.789543 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d246b21-4e35-4b47-9a2b-fd99865ae488" 
containerName="oc" Mar 11 10:49:33 crc kubenswrapper[4840]: I0311 10:49:33.791209 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-59m49" Mar 11 10:49:33 crc kubenswrapper[4840]: I0311 10:49:33.818981 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-59m49"] Mar 11 10:49:33 crc kubenswrapper[4840]: I0311 10:49:33.911129 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2q97d\" (UniqueName: \"kubernetes.io/projected/b850507f-0588-408f-831a-8ecce1e701e7-kube-api-access-2q97d\") pod \"certified-operators-59m49\" (UID: \"b850507f-0588-408f-831a-8ecce1e701e7\") " pod="openshift-marketplace/certified-operators-59m49" Mar 11 10:49:33 crc kubenswrapper[4840]: I0311 10:49:33.911530 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b850507f-0588-408f-831a-8ecce1e701e7-catalog-content\") pod \"certified-operators-59m49\" (UID: \"b850507f-0588-408f-831a-8ecce1e701e7\") " pod="openshift-marketplace/certified-operators-59m49" Mar 11 10:49:33 crc kubenswrapper[4840]: I0311 10:49:33.911581 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b850507f-0588-408f-831a-8ecce1e701e7-utilities\") pod \"certified-operators-59m49\" (UID: \"b850507f-0588-408f-831a-8ecce1e701e7\") " pod="openshift-marketplace/certified-operators-59m49" Mar 11 10:49:34 crc kubenswrapper[4840]: I0311 10:49:34.012865 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b850507f-0588-408f-831a-8ecce1e701e7-utilities\") pod \"certified-operators-59m49\" (UID: \"b850507f-0588-408f-831a-8ecce1e701e7\") " 
pod="openshift-marketplace/certified-operators-59m49" Mar 11 10:49:34 crc kubenswrapper[4840]: I0311 10:49:34.013210 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2q97d\" (UniqueName: \"kubernetes.io/projected/b850507f-0588-408f-831a-8ecce1e701e7-kube-api-access-2q97d\") pod \"certified-operators-59m49\" (UID: \"b850507f-0588-408f-831a-8ecce1e701e7\") " pod="openshift-marketplace/certified-operators-59m49" Mar 11 10:49:34 crc kubenswrapper[4840]: I0311 10:49:34.013339 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b850507f-0588-408f-831a-8ecce1e701e7-catalog-content\") pod \"certified-operators-59m49\" (UID: \"b850507f-0588-408f-831a-8ecce1e701e7\") " pod="openshift-marketplace/certified-operators-59m49" Mar 11 10:49:34 crc kubenswrapper[4840]: I0311 10:49:34.013854 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b850507f-0588-408f-831a-8ecce1e701e7-catalog-content\") pod \"certified-operators-59m49\" (UID: \"b850507f-0588-408f-831a-8ecce1e701e7\") " pod="openshift-marketplace/certified-operators-59m49" Mar 11 10:49:34 crc kubenswrapper[4840]: I0311 10:49:34.014156 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b850507f-0588-408f-831a-8ecce1e701e7-utilities\") pod \"certified-operators-59m49\" (UID: \"b850507f-0588-408f-831a-8ecce1e701e7\") " pod="openshift-marketplace/certified-operators-59m49" Mar 11 10:49:34 crc kubenswrapper[4840]: I0311 10:49:34.032258 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2q97d\" (UniqueName: \"kubernetes.io/projected/b850507f-0588-408f-831a-8ecce1e701e7-kube-api-access-2q97d\") pod \"certified-operators-59m49\" (UID: \"b850507f-0588-408f-831a-8ecce1e701e7\") " 
pod="openshift-marketplace/certified-operators-59m49" Mar 11 10:49:34 crc kubenswrapper[4840]: I0311 10:49:34.499783 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-59m49" Mar 11 10:49:35 crc kubenswrapper[4840]: I0311 10:49:35.913090 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-59m49"] Mar 11 10:49:35 crc kubenswrapper[4840]: I0311 10:49:35.966897 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-59m49" event={"ID":"b850507f-0588-408f-831a-8ecce1e701e7","Type":"ContainerStarted","Data":"827f502d492e2bae0861f11c327978e756f51860cfe770923d513c30764fba4e"} Mar 11 10:49:37 crc kubenswrapper[4840]: I0311 10:49:37.983951 4840 generic.go:334] "Generic (PLEG): container finished" podID="b850507f-0588-408f-831a-8ecce1e701e7" containerID="1226df6871242b41cf865c72c9d0b5a7ad0c24cb6b76db5081297306fb2b7921" exitCode=0 Mar 11 10:49:37 crc kubenswrapper[4840]: I0311 10:49:37.984012 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-59m49" event={"ID":"b850507f-0588-408f-831a-8ecce1e701e7","Type":"ContainerDied","Data":"1226df6871242b41cf865c72c9d0b5a7ad0c24cb6b76db5081297306fb2b7921"} Mar 11 10:49:37 crc kubenswrapper[4840]: I0311 10:49:37.994806 4840 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 11 10:49:38 crc kubenswrapper[4840]: I0311 10:49:38.992188 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-59m49" event={"ID":"b850507f-0588-408f-831a-8ecce1e701e7","Type":"ContainerStarted","Data":"be6b1c108c3d3d0cfde13416a1d6222906f0abf2aa7f955f96b709ae88eb875e"} Mar 11 10:49:39 crc kubenswrapper[4840]: I0311 10:49:39.999758 4840 generic.go:334] "Generic (PLEG): container finished" podID="b850507f-0588-408f-831a-8ecce1e701e7" 
containerID="be6b1c108c3d3d0cfde13416a1d6222906f0abf2aa7f955f96b709ae88eb875e" exitCode=0 Mar 11 10:49:39 crc kubenswrapper[4840]: I0311 10:49:39.999822 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-59m49" event={"ID":"b850507f-0588-408f-831a-8ecce1e701e7","Type":"ContainerDied","Data":"be6b1c108c3d3d0cfde13416a1d6222906f0abf2aa7f955f96b709ae88eb875e"} Mar 11 10:49:42 crc kubenswrapper[4840]: I0311 10:49:42.013283 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-59m49" event={"ID":"b850507f-0588-408f-831a-8ecce1e701e7","Type":"ContainerStarted","Data":"490ff0c46bc98c97f74544d0e87a750513b6ae0f28e3d9f9263c74e681d3b823"} Mar 11 10:49:42 crc kubenswrapper[4840]: I0311 10:49:42.034236 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-59m49" podStartSLOduration=5.492727103 podStartE2EDuration="9.034219031s" podCreationTimestamp="2026-03-11 10:49:33 +0000 UTC" firstStartedPulling="2026-03-11 10:49:37.994486457 +0000 UTC m=+6776.660156272" lastFinishedPulling="2026-03-11 10:49:41.535978385 +0000 UTC m=+6780.201648200" observedRunningTime="2026-03-11 10:49:42.031855231 +0000 UTC m=+6780.697525046" watchObservedRunningTime="2026-03-11 10:49:42.034219031 +0000 UTC m=+6780.699888846" Mar 11 10:49:44 crc kubenswrapper[4840]: I0311 10:49:44.502664 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-59m49" Mar 11 10:49:44 crc kubenswrapper[4840]: I0311 10:49:44.503016 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-59m49" Mar 11 10:49:44 crc kubenswrapper[4840]: I0311 10:49:44.556579 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-59m49" Mar 11 10:49:54 crc kubenswrapper[4840]: I0311 
10:49:54.558018 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-59m49" Mar 11 10:49:54 crc kubenswrapper[4840]: I0311 10:49:54.614363 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-59m49"] Mar 11 10:49:55 crc kubenswrapper[4840]: I0311 10:49:55.255050 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-59m49" podUID="b850507f-0588-408f-831a-8ecce1e701e7" containerName="registry-server" containerID="cri-o://490ff0c46bc98c97f74544d0e87a750513b6ae0f28e3d9f9263c74e681d3b823" gracePeriod=2 Mar 11 10:49:56 crc kubenswrapper[4840]: I0311 10:49:56.262780 4840 generic.go:334] "Generic (PLEG): container finished" podID="b850507f-0588-408f-831a-8ecce1e701e7" containerID="490ff0c46bc98c97f74544d0e87a750513b6ae0f28e3d9f9263c74e681d3b823" exitCode=0 Mar 11 10:49:56 crc kubenswrapper[4840]: I0311 10:49:56.263259 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-59m49" event={"ID":"b850507f-0588-408f-831a-8ecce1e701e7","Type":"ContainerDied","Data":"490ff0c46bc98c97f74544d0e87a750513b6ae0f28e3d9f9263c74e681d3b823"} Mar 11 10:49:56 crc kubenswrapper[4840]: I0311 10:49:56.263285 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-59m49" event={"ID":"b850507f-0588-408f-831a-8ecce1e701e7","Type":"ContainerDied","Data":"827f502d492e2bae0861f11c327978e756f51860cfe770923d513c30764fba4e"} Mar 11 10:49:56 crc kubenswrapper[4840]: I0311 10:49:56.263296 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="827f502d492e2bae0861f11c327978e756f51860cfe770923d513c30764fba4e" Mar 11 10:49:56 crc kubenswrapper[4840]: I0311 10:49:56.298634 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-59m49" Mar 11 10:49:56 crc kubenswrapper[4840]: I0311 10:49:56.336355 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b850507f-0588-408f-831a-8ecce1e701e7-catalog-content\") pod \"b850507f-0588-408f-831a-8ecce1e701e7\" (UID: \"b850507f-0588-408f-831a-8ecce1e701e7\") " Mar 11 10:49:56 crc kubenswrapper[4840]: I0311 10:49:56.336464 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2q97d\" (UniqueName: \"kubernetes.io/projected/b850507f-0588-408f-831a-8ecce1e701e7-kube-api-access-2q97d\") pod \"b850507f-0588-408f-831a-8ecce1e701e7\" (UID: \"b850507f-0588-408f-831a-8ecce1e701e7\") " Mar 11 10:49:56 crc kubenswrapper[4840]: I0311 10:49:56.336593 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b850507f-0588-408f-831a-8ecce1e701e7-utilities\") pod \"b850507f-0588-408f-831a-8ecce1e701e7\" (UID: \"b850507f-0588-408f-831a-8ecce1e701e7\") " Mar 11 10:49:56 crc kubenswrapper[4840]: I0311 10:49:56.337504 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b850507f-0588-408f-831a-8ecce1e701e7-utilities" (OuterVolumeSpecName: "utilities") pod "b850507f-0588-408f-831a-8ecce1e701e7" (UID: "b850507f-0588-408f-831a-8ecce1e701e7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 10:49:56 crc kubenswrapper[4840]: I0311 10:49:56.342752 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b850507f-0588-408f-831a-8ecce1e701e7-kube-api-access-2q97d" (OuterVolumeSpecName: "kube-api-access-2q97d") pod "b850507f-0588-408f-831a-8ecce1e701e7" (UID: "b850507f-0588-408f-831a-8ecce1e701e7"). InnerVolumeSpecName "kube-api-access-2q97d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:49:56 crc kubenswrapper[4840]: I0311 10:49:56.403516 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b850507f-0588-408f-831a-8ecce1e701e7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b850507f-0588-408f-831a-8ecce1e701e7" (UID: "b850507f-0588-408f-831a-8ecce1e701e7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 11 10:49:56 crc kubenswrapper[4840]: I0311 10:49:56.439579 4840 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b850507f-0588-408f-831a-8ecce1e701e7-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 11 10:49:56 crc kubenswrapper[4840]: I0311 10:49:56.439619 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2q97d\" (UniqueName: \"kubernetes.io/projected/b850507f-0588-408f-831a-8ecce1e701e7-kube-api-access-2q97d\") on node \"crc\" DevicePath \"\"" Mar 11 10:49:56 crc kubenswrapper[4840]: I0311 10:49:56.439636 4840 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b850507f-0588-408f-831a-8ecce1e701e7-utilities\") on node \"crc\" DevicePath \"\"" Mar 11 10:49:57 crc kubenswrapper[4840]: I0311 10:49:57.272221 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-59m49" Mar 11 10:49:57 crc kubenswrapper[4840]: I0311 10:49:57.319265 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-59m49"] Mar 11 10:49:57 crc kubenswrapper[4840]: I0311 10:49:57.331149 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-59m49"] Mar 11 10:49:58 crc kubenswrapper[4840]: I0311 10:49:58.071125 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b850507f-0588-408f-831a-8ecce1e701e7" path="/var/lib/kubelet/pods/b850507f-0588-408f-831a-8ecce1e701e7/volumes" Mar 11 10:50:00 crc kubenswrapper[4840]: I0311 10:50:00.137038 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29553770-hblpz"] Mar 11 10:50:00 crc kubenswrapper[4840]: E0311 10:50:00.138212 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b850507f-0588-408f-831a-8ecce1e701e7" containerName="extract-utilities" Mar 11 10:50:00 crc kubenswrapper[4840]: I0311 10:50:00.138303 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="b850507f-0588-408f-831a-8ecce1e701e7" containerName="extract-utilities" Mar 11 10:50:00 crc kubenswrapper[4840]: E0311 10:50:00.138379 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b850507f-0588-408f-831a-8ecce1e701e7" containerName="registry-server" Mar 11 10:50:00 crc kubenswrapper[4840]: I0311 10:50:00.138439 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="b850507f-0588-408f-831a-8ecce1e701e7" containerName="registry-server" Mar 11 10:50:00 crc kubenswrapper[4840]: E0311 10:50:00.138536 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b850507f-0588-408f-831a-8ecce1e701e7" containerName="extract-content" Mar 11 10:50:00 crc kubenswrapper[4840]: I0311 10:50:00.138595 4840 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="b850507f-0588-408f-831a-8ecce1e701e7" containerName="extract-content" Mar 11 10:50:00 crc kubenswrapper[4840]: I0311 10:50:00.138808 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="b850507f-0588-408f-831a-8ecce1e701e7" containerName="registry-server" Mar 11 10:50:00 crc kubenswrapper[4840]: I0311 10:50:00.139444 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553770-hblpz" Mar 11 10:50:00 crc kubenswrapper[4840]: I0311 10:50:00.141363 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-q6lwc" Mar 11 10:50:00 crc kubenswrapper[4840]: I0311 10:50:00.142089 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 11 10:50:00 crc kubenswrapper[4840]: I0311 10:50:00.142961 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 11 10:50:00 crc kubenswrapper[4840]: I0311 10:50:00.146997 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553770-hblpz"] Mar 11 10:50:00 crc kubenswrapper[4840]: I0311 10:50:00.322524 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbsxw\" (UniqueName: \"kubernetes.io/projected/2f821964-984a-4381-9975-f480a20dbf03-kube-api-access-tbsxw\") pod \"auto-csr-approver-29553770-hblpz\" (UID: \"2f821964-984a-4381-9975-f480a20dbf03\") " pod="openshift-infra/auto-csr-approver-29553770-hblpz" Mar 11 10:50:00 crc kubenswrapper[4840]: I0311 10:50:00.424436 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbsxw\" (UniqueName: \"kubernetes.io/projected/2f821964-984a-4381-9975-f480a20dbf03-kube-api-access-tbsxw\") pod \"auto-csr-approver-29553770-hblpz\" (UID: \"2f821964-984a-4381-9975-f480a20dbf03\") " 
pod="openshift-infra/auto-csr-approver-29553770-hblpz" Mar 11 10:50:00 crc kubenswrapper[4840]: I0311 10:50:00.442696 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tbsxw\" (UniqueName: \"kubernetes.io/projected/2f821964-984a-4381-9975-f480a20dbf03-kube-api-access-tbsxw\") pod \"auto-csr-approver-29553770-hblpz\" (UID: \"2f821964-984a-4381-9975-f480a20dbf03\") " pod="openshift-infra/auto-csr-approver-29553770-hblpz" Mar 11 10:50:00 crc kubenswrapper[4840]: I0311 10:50:00.456189 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553770-hblpz" Mar 11 10:50:00 crc kubenswrapper[4840]: I0311 10:50:00.874174 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553770-hblpz"] Mar 11 10:50:01 crc kubenswrapper[4840]: I0311 10:50:01.304623 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553770-hblpz" event={"ID":"2f821964-984a-4381-9975-f480a20dbf03","Type":"ContainerStarted","Data":"8848dacc4c02f4a8de0fcae620fd8681351ac5d4c5ab32610b06b014f18c3b58"} Mar 11 10:50:03 crc kubenswrapper[4840]: I0311 10:50:03.319948 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553770-hblpz" event={"ID":"2f821964-984a-4381-9975-f480a20dbf03","Type":"ContainerStarted","Data":"213c2f2343fa96bc7b84e1138b68fefc4436775e3e52cd406e553a3a9e25a995"} Mar 11 10:50:03 crc kubenswrapper[4840]: I0311 10:50:03.333547 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29553770-hblpz" podStartSLOduration=1.424996312 podStartE2EDuration="3.333529242s" podCreationTimestamp="2026-03-11 10:50:00 +0000 UTC" firstStartedPulling="2026-03-11 10:50:00.900325087 +0000 UTC m=+6799.565994912" lastFinishedPulling="2026-03-11 10:50:02.808858037 +0000 UTC m=+6801.474527842" observedRunningTime="2026-03-11 
10:50:03.333218394 +0000 UTC m=+6801.998888249" watchObservedRunningTime="2026-03-11 10:50:03.333529242 +0000 UTC m=+6801.999199057" Mar 11 10:50:04 crc kubenswrapper[4840]: I0311 10:50:04.331883 4840 generic.go:334] "Generic (PLEG): container finished" podID="2f821964-984a-4381-9975-f480a20dbf03" containerID="213c2f2343fa96bc7b84e1138b68fefc4436775e3e52cd406e553a3a9e25a995" exitCode=0 Mar 11 10:50:04 crc kubenswrapper[4840]: I0311 10:50:04.331980 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553770-hblpz" event={"ID":"2f821964-984a-4381-9975-f480a20dbf03","Type":"ContainerDied","Data":"213c2f2343fa96bc7b84e1138b68fefc4436775e3e52cd406e553a3a9e25a995"} Mar 11 10:50:05 crc kubenswrapper[4840]: I0311 10:50:05.649026 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553770-hblpz" Mar 11 10:50:05 crc kubenswrapper[4840]: I0311 10:50:05.754447 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tbsxw\" (UniqueName: \"kubernetes.io/projected/2f821964-984a-4381-9975-f480a20dbf03-kube-api-access-tbsxw\") pod \"2f821964-984a-4381-9975-f480a20dbf03\" (UID: \"2f821964-984a-4381-9975-f480a20dbf03\") " Mar 11 10:50:05 crc kubenswrapper[4840]: I0311 10:50:05.759281 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f821964-984a-4381-9975-f480a20dbf03-kube-api-access-tbsxw" (OuterVolumeSpecName: "kube-api-access-tbsxw") pod "2f821964-984a-4381-9975-f480a20dbf03" (UID: "2f821964-984a-4381-9975-f480a20dbf03"). InnerVolumeSpecName "kube-api-access-tbsxw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 11 10:50:05 crc kubenswrapper[4840]: I0311 10:50:05.856387 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tbsxw\" (UniqueName: \"kubernetes.io/projected/2f821964-984a-4381-9975-f480a20dbf03-kube-api-access-tbsxw\") on node \"crc\" DevicePath \"\"" Mar 11 10:50:06 crc kubenswrapper[4840]: I0311 10:50:06.363867 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553770-hblpz" event={"ID":"2f821964-984a-4381-9975-f480a20dbf03","Type":"ContainerDied","Data":"8848dacc4c02f4a8de0fcae620fd8681351ac5d4c5ab32610b06b014f18c3b58"} Mar 11 10:50:06 crc kubenswrapper[4840]: I0311 10:50:06.364257 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8848dacc4c02f4a8de0fcae620fd8681351ac5d4c5ab32610b06b014f18c3b58" Mar 11 10:50:06 crc kubenswrapper[4840]: I0311 10:50:06.364005 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553770-hblpz" Mar 11 10:50:06 crc kubenswrapper[4840]: I0311 10:50:06.419193 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29553764-qqvnx"] Mar 11 10:50:06 crc kubenswrapper[4840]: I0311 10:50:06.434314 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29553764-qqvnx"] Mar 11 10:50:08 crc kubenswrapper[4840]: I0311 10:50:08.078003 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69ab84bf-dcac-4db5-8d05-ad02af254a80" path="/var/lib/kubelet/pods/69ab84bf-dcac-4db5-8d05-ad02af254a80/volumes" Mar 11 10:50:08 crc kubenswrapper[4840]: I0311 10:50:08.445605 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-mg4qg"] Mar 11 10:50:08 crc kubenswrapper[4840]: E0311 10:50:08.445992 4840 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="2f821964-984a-4381-9975-f480a20dbf03" containerName="oc" Mar 11 10:50:08 crc kubenswrapper[4840]: I0311 10:50:08.446009 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f821964-984a-4381-9975-f480a20dbf03" containerName="oc" Mar 11 10:50:08 crc kubenswrapper[4840]: I0311 10:50:08.446201 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f821964-984a-4381-9975-f480a20dbf03" containerName="oc" Mar 11 10:50:08 crc kubenswrapper[4840]: I0311 10:50:08.447576 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mg4qg" Mar 11 10:50:08 crc kubenswrapper[4840]: I0311 10:50:08.461135 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mg4qg"] Mar 11 10:50:08 crc kubenswrapper[4840]: I0311 10:50:08.604461 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e705c45-6819-483f-96d8-65b7cf9ae3e6-catalog-content\") pod \"redhat-operators-mg4qg\" (UID: \"7e705c45-6819-483f-96d8-65b7cf9ae3e6\") " pod="openshift-marketplace/redhat-operators-mg4qg" Mar 11 10:50:08 crc kubenswrapper[4840]: I0311 10:50:08.604570 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e705c45-6819-483f-96d8-65b7cf9ae3e6-utilities\") pod \"redhat-operators-mg4qg\" (UID: \"7e705c45-6819-483f-96d8-65b7cf9ae3e6\") " pod="openshift-marketplace/redhat-operators-mg4qg" Mar 11 10:50:08 crc kubenswrapper[4840]: I0311 10:50:08.604602 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwzsb\" (UniqueName: \"kubernetes.io/projected/7e705c45-6819-483f-96d8-65b7cf9ae3e6-kube-api-access-kwzsb\") pod \"redhat-operators-mg4qg\" (UID: \"7e705c45-6819-483f-96d8-65b7cf9ae3e6\") " 
pod="openshift-marketplace/redhat-operators-mg4qg" Mar 11 10:50:08 crc kubenswrapper[4840]: I0311 10:50:08.706975 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e705c45-6819-483f-96d8-65b7cf9ae3e6-catalog-content\") pod \"redhat-operators-mg4qg\" (UID: \"7e705c45-6819-483f-96d8-65b7cf9ae3e6\") " pod="openshift-marketplace/redhat-operators-mg4qg" Mar 11 10:50:08 crc kubenswrapper[4840]: I0311 10:50:08.707048 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e705c45-6819-483f-96d8-65b7cf9ae3e6-utilities\") pod \"redhat-operators-mg4qg\" (UID: \"7e705c45-6819-483f-96d8-65b7cf9ae3e6\") " pod="openshift-marketplace/redhat-operators-mg4qg" Mar 11 10:50:08 crc kubenswrapper[4840]: I0311 10:50:08.707076 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwzsb\" (UniqueName: \"kubernetes.io/projected/7e705c45-6819-483f-96d8-65b7cf9ae3e6-kube-api-access-kwzsb\") pod \"redhat-operators-mg4qg\" (UID: \"7e705c45-6819-483f-96d8-65b7cf9ae3e6\") " pod="openshift-marketplace/redhat-operators-mg4qg" Mar 11 10:50:08 crc kubenswrapper[4840]: I0311 10:50:08.707657 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e705c45-6819-483f-96d8-65b7cf9ae3e6-catalog-content\") pod \"redhat-operators-mg4qg\" (UID: \"7e705c45-6819-483f-96d8-65b7cf9ae3e6\") " pod="openshift-marketplace/redhat-operators-mg4qg" Mar 11 10:50:08 crc kubenswrapper[4840]: I0311 10:50:08.707976 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e705c45-6819-483f-96d8-65b7cf9ae3e6-utilities\") pod \"redhat-operators-mg4qg\" (UID: \"7e705c45-6819-483f-96d8-65b7cf9ae3e6\") " pod="openshift-marketplace/redhat-operators-mg4qg" Mar 11 10:50:08 crc 
kubenswrapper[4840]: I0311 10:50:08.741265 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwzsb\" (UniqueName: \"kubernetes.io/projected/7e705c45-6819-483f-96d8-65b7cf9ae3e6-kube-api-access-kwzsb\") pod \"redhat-operators-mg4qg\" (UID: \"7e705c45-6819-483f-96d8-65b7cf9ae3e6\") " pod="openshift-marketplace/redhat-operators-mg4qg" Mar 11 10:50:08 crc kubenswrapper[4840]: I0311 10:50:08.774925 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mg4qg" Mar 11 10:50:09 crc kubenswrapper[4840]: I0311 10:50:09.241842 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mg4qg"] Mar 11 10:50:09 crc kubenswrapper[4840]: I0311 10:50:09.396165 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mg4qg" event={"ID":"7e705c45-6819-483f-96d8-65b7cf9ae3e6","Type":"ContainerStarted","Data":"e5a0e2961491b498e7cb1e6fa874bf495af92228414a3e5b2f5fd9c8e9b40f00"} Mar 11 10:50:10 crc kubenswrapper[4840]: I0311 10:50:10.404792 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mg4qg" event={"ID":"7e705c45-6819-483f-96d8-65b7cf9ae3e6","Type":"ContainerDied","Data":"147e53030e7e07a42d510d8d8683ecd42578ee04dc0592292e19ff049c8a851c"} Mar 11 10:50:10 crc kubenswrapper[4840]: I0311 10:50:10.404699 4840 generic.go:334] "Generic (PLEG): container finished" podID="7e705c45-6819-483f-96d8-65b7cf9ae3e6" containerID="147e53030e7e07a42d510d8d8683ecd42578ee04dc0592292e19ff049c8a851c" exitCode=0 Mar 11 10:50:12 crc kubenswrapper[4840]: I0311 10:50:12.437658 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mg4qg" event={"ID":"7e705c45-6819-483f-96d8-65b7cf9ae3e6","Type":"ContainerStarted","Data":"2689d0312fa20759a2c591a7001744f007ef3df7446c975b2824e6f050abaaea"} Mar 11 10:50:13 crc kubenswrapper[4840]: I0311 
10:50:13.461979 4840 generic.go:334] "Generic (PLEG): container finished" podID="7e705c45-6819-483f-96d8-65b7cf9ae3e6" containerID="2689d0312fa20759a2c591a7001744f007ef3df7446c975b2824e6f050abaaea" exitCode=0 Mar 11 10:50:13 crc kubenswrapper[4840]: I0311 10:50:13.462013 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mg4qg" event={"ID":"7e705c45-6819-483f-96d8-65b7cf9ae3e6","Type":"ContainerDied","Data":"2689d0312fa20759a2c591a7001744f007ef3df7446c975b2824e6f050abaaea"} Mar 11 10:50:14 crc kubenswrapper[4840]: I0311 10:50:14.478560 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mg4qg" event={"ID":"7e705c45-6819-483f-96d8-65b7cf9ae3e6","Type":"ContainerStarted","Data":"e5cda37c52b3c65155c6a2ec0441807512982ad9750057129657d5738631e2f7"} Mar 11 10:50:14 crc kubenswrapper[4840]: I0311 10:50:14.510022 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-mg4qg" podStartSLOduration=2.686268124 podStartE2EDuration="6.509959872s" podCreationTimestamp="2026-03-11 10:50:08 +0000 UTC" firstStartedPulling="2026-03-11 10:50:10.406489145 +0000 UTC m=+6809.072158960" lastFinishedPulling="2026-03-11 10:50:14.230180873 +0000 UTC m=+6812.895850708" observedRunningTime="2026-03-11 10:50:14.498962644 +0000 UTC m=+6813.164632459" watchObservedRunningTime="2026-03-11 10:50:14.509959872 +0000 UTC m=+6813.175629687" Mar 11 10:50:14 crc kubenswrapper[4840]: I0311 10:50:14.582621 4840 scope.go:117] "RemoveContainer" containerID="4f2f82fa09199228c4f8f00a2f11696ce09951710a63d8a3411d2d441c697adb" Mar 11 10:50:18 crc kubenswrapper[4840]: I0311 10:50:18.775075 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-mg4qg" Mar 11 10:50:18 crc kubenswrapper[4840]: I0311 10:50:18.775534 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-operators-mg4qg" Mar 11 10:50:19 crc kubenswrapper[4840]: I0311 10:50:19.819193 4840 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mg4qg" podUID="7e705c45-6819-483f-96d8-65b7cf9ae3e6" containerName="registry-server" probeResult="failure" output=< Mar 11 10:50:19 crc kubenswrapper[4840]: timeout: failed to connect service ":50051" within 1s Mar 11 10:50:19 crc kubenswrapper[4840]: > Mar 11 10:50:28 crc kubenswrapper[4840]: I0311 10:50:28.836277 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-mg4qg" Mar 11 10:50:28 crc kubenswrapper[4840]: I0311 10:50:28.886704 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-mg4qg" Mar 11 10:50:29 crc kubenswrapper[4840]: I0311 10:50:29.108197 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mg4qg"] Mar 11 10:50:30 crc kubenswrapper[4840]: I0311 10:50:30.550445 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-mg4qg" podUID="7e705c45-6819-483f-96d8-65b7cf9ae3e6" containerName="registry-server" containerID="cri-o://e5cda37c52b3c65155c6a2ec0441807512982ad9750057129657d5738631e2f7" gracePeriod=2 Mar 11 10:50:33 crc kubenswrapper[4840]: I0311 10:50:33.590378 4840 generic.go:334] "Generic (PLEG): container finished" podID="7e705c45-6819-483f-96d8-65b7cf9ae3e6" containerID="e5cda37c52b3c65155c6a2ec0441807512982ad9750057129657d5738631e2f7" exitCode=0 Mar 11 10:50:33 crc kubenswrapper[4840]: I0311 10:50:33.591000 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mg4qg" event={"ID":"7e705c45-6819-483f-96d8-65b7cf9ae3e6","Type":"ContainerDied","Data":"e5cda37c52b3c65155c6a2ec0441807512982ad9750057129657d5738631e2f7"} Mar 11 10:50:33 crc kubenswrapper[4840]: 
I0311 10:50:33.659581 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mg4qg" Mar 11 10:50:33 crc kubenswrapper[4840]: I0311 10:50:33.821549 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kwzsb\" (UniqueName: \"kubernetes.io/projected/7e705c45-6819-483f-96d8-65b7cf9ae3e6-kube-api-access-kwzsb\") pod \"7e705c45-6819-483f-96d8-65b7cf9ae3e6\" (UID: \"7e705c45-6819-483f-96d8-65b7cf9ae3e6\") " Mar 11 10:50:33 crc kubenswrapper[4840]: I0311 10:50:33.821931 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e705c45-6819-483f-96d8-65b7cf9ae3e6-utilities\") pod \"7e705c45-6819-483f-96d8-65b7cf9ae3e6\" (UID: \"7e705c45-6819-483f-96d8-65b7cf9ae3e6\") " Mar 11 10:50:33 crc kubenswrapper[4840]: I0311 10:50:33.822202 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e705c45-6819-483f-96d8-65b7cf9ae3e6-catalog-content\") pod \"7e705c45-6819-483f-96d8-65b7cf9ae3e6\" (UID: \"7e705c45-6819-483f-96d8-65b7cf9ae3e6\") " Mar 11 10:50:33 crc kubenswrapper[4840]: I0311 10:50:33.822802 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e705c45-6819-483f-96d8-65b7cf9ae3e6-utilities" (OuterVolumeSpecName: "utilities") pod "7e705c45-6819-483f-96d8-65b7cf9ae3e6" (UID: "7e705c45-6819-483f-96d8-65b7cf9ae3e6"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 11 10:50:33 crc kubenswrapper[4840]: I0311 10:50:33.824931 4840 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e705c45-6819-483f-96d8-65b7cf9ae3e6-utilities\") on node \"crc\" DevicePath \"\""
Mar 11 10:50:33 crc kubenswrapper[4840]: I0311 10:50:33.852917 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e705c45-6819-483f-96d8-65b7cf9ae3e6-kube-api-access-kwzsb" (OuterVolumeSpecName: "kube-api-access-kwzsb") pod "7e705c45-6819-483f-96d8-65b7cf9ae3e6" (UID: "7e705c45-6819-483f-96d8-65b7cf9ae3e6"). InnerVolumeSpecName "kube-api-access-kwzsb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 11 10:50:33 crc kubenswrapper[4840]: I0311 10:50:33.926753 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kwzsb\" (UniqueName: \"kubernetes.io/projected/7e705c45-6819-483f-96d8-65b7cf9ae3e6-kube-api-access-kwzsb\") on node \"crc\" DevicePath \"\""
Mar 11 10:50:33 crc kubenswrapper[4840]: I0311 10:50:33.989841 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e705c45-6819-483f-96d8-65b7cf9ae3e6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7e705c45-6819-483f-96d8-65b7cf9ae3e6" (UID: "7e705c45-6819-483f-96d8-65b7cf9ae3e6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 11 10:50:34 crc kubenswrapper[4840]: I0311 10:50:34.027976 4840 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e705c45-6819-483f-96d8-65b7cf9ae3e6-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 11 10:50:34 crc kubenswrapper[4840]: I0311 10:50:34.599328 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mg4qg" event={"ID":"7e705c45-6819-483f-96d8-65b7cf9ae3e6","Type":"ContainerDied","Data":"e5a0e2961491b498e7cb1e6fa874bf495af92228414a3e5b2f5fd9c8e9b40f00"}
Mar 11 10:50:34 crc kubenswrapper[4840]: I0311 10:50:34.599380 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mg4qg"
Mar 11 10:50:34 crc kubenswrapper[4840]: I0311 10:50:34.599395 4840 scope.go:117] "RemoveContainer" containerID="e5cda37c52b3c65155c6a2ec0441807512982ad9750057129657d5738631e2f7"
Mar 11 10:50:34 crc kubenswrapper[4840]: I0311 10:50:34.632072 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mg4qg"]
Mar 11 10:50:34 crc kubenswrapper[4840]: I0311 10:50:34.635097 4840 scope.go:117] "RemoveContainer" containerID="2689d0312fa20759a2c591a7001744f007ef3df7446c975b2824e6f050abaaea"
Mar 11 10:50:34 crc kubenswrapper[4840]: I0311 10:50:34.639794 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-mg4qg"]
Mar 11 10:50:34 crc kubenswrapper[4840]: I0311 10:50:34.666287 4840 scope.go:117] "RemoveContainer" containerID="147e53030e7e07a42d510d8d8683ecd42578ee04dc0592292e19ff049c8a851c"
Mar 11 10:50:36 crc kubenswrapper[4840]: I0311 10:50:36.071773 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e705c45-6819-483f-96d8-65b7cf9ae3e6" path="/var/lib/kubelet/pods/7e705c45-6819-483f-96d8-65b7cf9ae3e6/volumes"
Mar 11 10:50:57 crc kubenswrapper[4840]: I0311 10:50:57.445896 4840 patch_prober.go:28] interesting pod/machine-config-daemon-brtht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 11 10:50:57 crc kubenswrapper[4840]: I0311 10:50:57.446573 4840 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 11 10:51:06 crc kubenswrapper[4840]: I0311 10:51:06.917794 4840 generic.go:334] "Generic (PLEG): container finished" podID="8873911f-dd11-461e-a6c5-a612cd94f39b" containerID="e2ad27deeb07e13af6854a4f583bccabc64422d067f5d46a46fa17b0b0af3d72" exitCode=0
Mar 11 10:51:06 crc kubenswrapper[4840]: I0311 10:51:06.918324 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-h772l/must-gather-qtsc5" event={"ID":"8873911f-dd11-461e-a6c5-a612cd94f39b","Type":"ContainerDied","Data":"e2ad27deeb07e13af6854a4f583bccabc64422d067f5d46a46fa17b0b0af3d72"}
Mar 11 10:51:06 crc kubenswrapper[4840]: I0311 10:51:06.922226 4840 scope.go:117] "RemoveContainer" containerID="e2ad27deeb07e13af6854a4f583bccabc64422d067f5d46a46fa17b0b0af3d72"
Mar 11 10:51:07 crc kubenswrapper[4840]: I0311 10:51:07.382975 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-h772l_must-gather-qtsc5_8873911f-dd11-461e-a6c5-a612cd94f39b/gather/0.log"
Mar 11 10:51:15 crc kubenswrapper[4840]: I0311 10:51:15.310442 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-h772l/must-gather-qtsc5"]
Mar 11 10:51:15 crc kubenswrapper[4840]: I0311 10:51:15.311159 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-h772l/must-gather-qtsc5" podUID="8873911f-dd11-461e-a6c5-a612cd94f39b" containerName="copy" containerID="cri-o://76aaaa6c4b47e015508b26f519ba636cc6dfe6b5000cb4623c0e9949ad6bfb31" gracePeriod=2
Mar 11 10:51:15 crc kubenswrapper[4840]: I0311 10:51:15.323066 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-h772l/must-gather-qtsc5"]
Mar 11 10:51:15 crc kubenswrapper[4840]: I0311 10:51:15.991804 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-h772l_must-gather-qtsc5_8873911f-dd11-461e-a6c5-a612cd94f39b/copy/0.log"
Mar 11 10:51:15 crc kubenswrapper[4840]: I0311 10:51:15.993727 4840 generic.go:334] "Generic (PLEG): container finished" podID="8873911f-dd11-461e-a6c5-a612cd94f39b" containerID="76aaaa6c4b47e015508b26f519ba636cc6dfe6b5000cb4623c0e9949ad6bfb31" exitCode=143
Mar 11 10:51:16 crc kubenswrapper[4840]: I0311 10:51:16.272654 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-h772l_must-gather-qtsc5_8873911f-dd11-461e-a6c5-a612cd94f39b/copy/0.log"
Mar 11 10:51:16 crc kubenswrapper[4840]: I0311 10:51:16.273383 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-h772l/must-gather-qtsc5"
Mar 11 10:51:16 crc kubenswrapper[4840]: I0311 10:51:16.446936 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r8sml\" (UniqueName: \"kubernetes.io/projected/8873911f-dd11-461e-a6c5-a612cd94f39b-kube-api-access-r8sml\") pod \"8873911f-dd11-461e-a6c5-a612cd94f39b\" (UID: \"8873911f-dd11-461e-a6c5-a612cd94f39b\") "
Mar 11 10:51:16 crc kubenswrapper[4840]: I0311 10:51:16.448009 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/8873911f-dd11-461e-a6c5-a612cd94f39b-must-gather-output\") pod \"8873911f-dd11-461e-a6c5-a612cd94f39b\" (UID: \"8873911f-dd11-461e-a6c5-a612cd94f39b\") "
Mar 11 10:51:16 crc kubenswrapper[4840]: I0311 10:51:16.465737 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8873911f-dd11-461e-a6c5-a612cd94f39b-kube-api-access-r8sml" (OuterVolumeSpecName: "kube-api-access-r8sml") pod "8873911f-dd11-461e-a6c5-a612cd94f39b" (UID: "8873911f-dd11-461e-a6c5-a612cd94f39b"). InnerVolumeSpecName "kube-api-access-r8sml". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 11 10:51:16 crc kubenswrapper[4840]: I0311 10:51:16.550864 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r8sml\" (UniqueName: \"kubernetes.io/projected/8873911f-dd11-461e-a6c5-a612cd94f39b-kube-api-access-r8sml\") on node \"crc\" DevicePath \"\""
Mar 11 10:51:16 crc kubenswrapper[4840]: I0311 10:51:16.574047 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8873911f-dd11-461e-a6c5-a612cd94f39b-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "8873911f-dd11-461e-a6c5-a612cd94f39b" (UID: "8873911f-dd11-461e-a6c5-a612cd94f39b"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 11 10:51:16 crc kubenswrapper[4840]: I0311 10:51:16.652833 4840 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/8873911f-dd11-461e-a6c5-a612cd94f39b-must-gather-output\") on node \"crc\" DevicePath \"\""
Mar 11 10:51:17 crc kubenswrapper[4840]: I0311 10:51:17.002652 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-h772l_must-gather-qtsc5_8873911f-dd11-461e-a6c5-a612cd94f39b/copy/0.log"
Mar 11 10:51:17 crc kubenswrapper[4840]: I0311 10:51:17.003014 4840 scope.go:117] "RemoveContainer" containerID="76aaaa6c4b47e015508b26f519ba636cc6dfe6b5000cb4623c0e9949ad6bfb31"
Mar 11 10:51:17 crc kubenswrapper[4840]: I0311 10:51:17.003103 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-h772l/must-gather-qtsc5"
Mar 11 10:51:17 crc kubenswrapper[4840]: I0311 10:51:17.043561 4840 scope.go:117] "RemoveContainer" containerID="e2ad27deeb07e13af6854a4f583bccabc64422d067f5d46a46fa17b0b0af3d72"
Mar 11 10:51:18 crc kubenswrapper[4840]: I0311 10:51:18.071623 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8873911f-dd11-461e-a6c5-a612cd94f39b" path="/var/lib/kubelet/pods/8873911f-dd11-461e-a6c5-a612cd94f39b/volumes"
Mar 11 10:51:27 crc kubenswrapper[4840]: I0311 10:51:27.446155 4840 patch_prober.go:28] interesting pod/machine-config-daemon-brtht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 11 10:51:27 crc kubenswrapper[4840]: I0311 10:51:27.446696 4840 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 11 10:51:57 crc kubenswrapper[4840]: I0311 10:51:57.445572 4840 patch_prober.go:28] interesting pod/machine-config-daemon-brtht container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 11 10:51:57 crc kubenswrapper[4840]: I0311 10:51:57.446227 4840 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 11 10:51:57 crc kubenswrapper[4840]: I0311 10:51:57.446283 4840 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-brtht"
Mar 11 10:51:57 crc kubenswrapper[4840]: I0311 10:51:57.447048 4840 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"282b582e15e705641c51e98813ed5073fe8b121ae821dee37bebd3923d2fd376"} pod="openshift-machine-config-operator/machine-config-daemon-brtht" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Mar 11 10:51:57 crc kubenswrapper[4840]: I0311 10:51:57.447112 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-brtht" podUID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" containerName="machine-config-daemon" containerID="cri-o://282b582e15e705641c51e98813ed5073fe8b121ae821dee37bebd3923d2fd376" gracePeriod=600
Mar 11 10:51:58 crc kubenswrapper[4840]: I0311 10:51:58.357786 4840 generic.go:334] "Generic (PLEG): container finished" podID="8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d" containerID="282b582e15e705641c51e98813ed5073fe8b121ae821dee37bebd3923d2fd376" exitCode=0
Mar 11 10:51:58 crc kubenswrapper[4840]: I0311 10:51:58.357857 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-brtht" event={"ID":"8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d","Type":"ContainerDied","Data":"282b582e15e705641c51e98813ed5073fe8b121ae821dee37bebd3923d2fd376"}
Mar 11 10:51:58 crc kubenswrapper[4840]: I0311 10:51:58.358354 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-brtht" event={"ID":"8e1dfb30-a7e9-4283-b6b5-ae3b7308e86d","Type":"ContainerStarted","Data":"4d40b29dc098814452ffc252e56893df0ab773e45eaaaeebf4e8223fcc6b1cfc"}
Mar 11 10:51:58 crc kubenswrapper[4840]: I0311 10:51:58.358374 4840 scope.go:117] "RemoveContainer" containerID="180f0de32e524184a80d118f575e43d9d75889633af9d6d017760d5bf414cc50"
Mar 11 10:52:00 crc kubenswrapper[4840]: I0311 10:52:00.139536 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29553772-glxng"]
Mar 11 10:52:00 crc kubenswrapper[4840]: E0311 10:52:00.140197 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e705c45-6819-483f-96d8-65b7cf9ae3e6" containerName="extract-utilities"
Mar 11 10:52:00 crc kubenswrapper[4840]: I0311 10:52:00.140215 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e705c45-6819-483f-96d8-65b7cf9ae3e6" containerName="extract-utilities"
Mar 11 10:52:00 crc kubenswrapper[4840]: E0311 10:52:00.140236 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8873911f-dd11-461e-a6c5-a612cd94f39b" containerName="gather"
Mar 11 10:52:00 crc kubenswrapper[4840]: I0311 10:52:00.140243 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="8873911f-dd11-461e-a6c5-a612cd94f39b" containerName="gather"
Mar 11 10:52:00 crc kubenswrapper[4840]: E0311 10:52:00.140262 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e705c45-6819-483f-96d8-65b7cf9ae3e6" containerName="extract-content"
Mar 11 10:52:00 crc kubenswrapper[4840]: I0311 10:52:00.140271 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e705c45-6819-483f-96d8-65b7cf9ae3e6" containerName="extract-content"
Mar 11 10:52:00 crc kubenswrapper[4840]: E0311 10:52:00.140291 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8873911f-dd11-461e-a6c5-a612cd94f39b" containerName="copy"
Mar 11 10:52:00 crc kubenswrapper[4840]: I0311 10:52:00.140298 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="8873911f-dd11-461e-a6c5-a612cd94f39b" containerName="copy"
Mar 11 10:52:00 crc kubenswrapper[4840]: E0311 10:52:00.140310 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e705c45-6819-483f-96d8-65b7cf9ae3e6" containerName="registry-server"
Mar 11 10:52:00 crc kubenswrapper[4840]: I0311 10:52:00.140317 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e705c45-6819-483f-96d8-65b7cf9ae3e6" containerName="registry-server"
Mar 11 10:52:00 crc kubenswrapper[4840]: I0311 10:52:00.140516 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="8873911f-dd11-461e-a6c5-a612cd94f39b" containerName="copy"
Mar 11 10:52:00 crc kubenswrapper[4840]: I0311 10:52:00.140536 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="8873911f-dd11-461e-a6c5-a612cd94f39b" containerName="gather"
Mar 11 10:52:00 crc kubenswrapper[4840]: I0311 10:52:00.140546 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e705c45-6819-483f-96d8-65b7cf9ae3e6" containerName="registry-server"
Mar 11 10:52:00 crc kubenswrapper[4840]: I0311 10:52:00.141186 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553772-glxng"
Mar 11 10:52:00 crc kubenswrapper[4840]: I0311 10:52:00.144981 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 11 10:52:00 crc kubenswrapper[4840]: I0311 10:52:00.144987 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-q6lwc"
Mar 11 10:52:00 crc kubenswrapper[4840]: I0311 10:52:00.146764 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 11 10:52:00 crc kubenswrapper[4840]: I0311 10:52:00.153740 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553772-glxng"]
Mar 11 10:52:00 crc kubenswrapper[4840]: I0311 10:52:00.226441 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hclcc\" (UniqueName: \"kubernetes.io/projected/fd862a10-849c-4cb9-ac5d-608b3621ecdf-kube-api-access-hclcc\") pod \"auto-csr-approver-29553772-glxng\" (UID: \"fd862a10-849c-4cb9-ac5d-608b3621ecdf\") " pod="openshift-infra/auto-csr-approver-29553772-glxng"
Mar 11 10:52:00 crc kubenswrapper[4840]: I0311 10:52:00.327856 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hclcc\" (UniqueName: \"kubernetes.io/projected/fd862a10-849c-4cb9-ac5d-608b3621ecdf-kube-api-access-hclcc\") pod \"auto-csr-approver-29553772-glxng\" (UID: \"fd862a10-849c-4cb9-ac5d-608b3621ecdf\") " pod="openshift-infra/auto-csr-approver-29553772-glxng"
Mar 11 10:52:00 crc kubenswrapper[4840]: I0311 10:52:00.346135 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hclcc\" (UniqueName: \"kubernetes.io/projected/fd862a10-849c-4cb9-ac5d-608b3621ecdf-kube-api-access-hclcc\") pod \"auto-csr-approver-29553772-glxng\" (UID: \"fd862a10-849c-4cb9-ac5d-608b3621ecdf\") " pod="openshift-infra/auto-csr-approver-29553772-glxng"
Mar 11 10:52:00 crc kubenswrapper[4840]: I0311 10:52:00.476982 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553772-glxng"
Mar 11 10:52:00 crc kubenswrapper[4840]: I0311 10:52:00.932731 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29553772-glxng"]
Mar 11 10:52:00 crc kubenswrapper[4840]: W0311 10:52:00.937884 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfd862a10_849c_4cb9_ac5d_608b3621ecdf.slice/crio-cfb69bd117f0634957e1b888fd71c12f0cca1a30d9cfc3db2b2e5620f0565020 WatchSource:0}: Error finding container cfb69bd117f0634957e1b888fd71c12f0cca1a30d9cfc3db2b2e5620f0565020: Status 404 returned error can't find the container with id cfb69bd117f0634957e1b888fd71c12f0cca1a30d9cfc3db2b2e5620f0565020
Mar 11 10:52:01 crc kubenswrapper[4840]: I0311 10:52:01.390511 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553772-glxng" event={"ID":"fd862a10-849c-4cb9-ac5d-608b3621ecdf","Type":"ContainerStarted","Data":"cfb69bd117f0634957e1b888fd71c12f0cca1a30d9cfc3db2b2e5620f0565020"}
Mar 11 10:52:03 crc kubenswrapper[4840]: I0311 10:52:03.777078 4840 generic.go:334] "Generic (PLEG): container finished" podID="fd862a10-849c-4cb9-ac5d-608b3621ecdf" containerID="32c507733d631ef013913e6ad9752c88e56900c15c2f22f5d0b05095f1222230" exitCode=0
Mar 11 10:52:03 crc kubenswrapper[4840]: I0311 10:52:03.777131 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553772-glxng" event={"ID":"fd862a10-849c-4cb9-ac5d-608b3621ecdf","Type":"ContainerDied","Data":"32c507733d631ef013913e6ad9752c88e56900c15c2f22f5d0b05095f1222230"}
Mar 11 10:52:05 crc kubenswrapper[4840]: I0311 10:52:05.342995 4840 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/barbican-operator-controller-manager-677bd678f7-kkplv" podUID="91aae815-00f1-46d8-8709-f212ab049fdf" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.67:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 11 10:52:05 crc kubenswrapper[4840]: I0311 10:52:05.625779 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553772-glxng"
Mar 11 10:52:05 crc kubenswrapper[4840]: I0311 10:52:05.748054 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hclcc\" (UniqueName: \"kubernetes.io/projected/fd862a10-849c-4cb9-ac5d-608b3621ecdf-kube-api-access-hclcc\") pod \"fd862a10-849c-4cb9-ac5d-608b3621ecdf\" (UID: \"fd862a10-849c-4cb9-ac5d-608b3621ecdf\") "
Mar 11 10:52:05 crc kubenswrapper[4840]: I0311 10:52:05.754284 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd862a10-849c-4cb9-ac5d-608b3621ecdf-kube-api-access-hclcc" (OuterVolumeSpecName: "kube-api-access-hclcc") pod "fd862a10-849c-4cb9-ac5d-608b3621ecdf" (UID: "fd862a10-849c-4cb9-ac5d-608b3621ecdf"). InnerVolumeSpecName "kube-api-access-hclcc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 11 10:52:05 crc kubenswrapper[4840]: I0311 10:52:05.793727 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29553772-glxng" event={"ID":"fd862a10-849c-4cb9-ac5d-608b3621ecdf","Type":"ContainerDied","Data":"cfb69bd117f0634957e1b888fd71c12f0cca1a30d9cfc3db2b2e5620f0565020"}
Mar 11 10:52:05 crc kubenswrapper[4840]: I0311 10:52:05.793766 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cfb69bd117f0634957e1b888fd71c12f0cca1a30d9cfc3db2b2e5620f0565020"
Mar 11 10:52:05 crc kubenswrapper[4840]: I0311 10:52:05.793849 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29553772-glxng"
Mar 11 10:52:05 crc kubenswrapper[4840]: I0311 10:52:05.854489 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hclcc\" (UniqueName: \"kubernetes.io/projected/fd862a10-849c-4cb9-ac5d-608b3621ecdf-kube-api-access-hclcc\") on node \"crc\" DevicePath \"\""
Mar 11 10:52:06 crc kubenswrapper[4840]: I0311 10:52:06.681676 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29553766-qkd44"]
Mar 11 10:52:06 crc kubenswrapper[4840]: I0311 10:52:06.687544 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29553766-qkd44"]
Mar 11 10:52:07 crc kubenswrapper[4840]: I0311 10:52:07.910597 4840 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-xp9gd" podUID="76a3df80-6e5e-4618-8dc4-2b697c4f73b8" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 11 10:52:07 crc kubenswrapper[4840]: I0311 10:52:07.937286 4840 trace.go:236] Trace[191349478]: "Calculate volume metrics of ovndbcluster-nb-etc-ovn for pod openstack/ovsdbserver-nb-1" (11-Mar-2026 10:52:06.657) (total time: 1280ms):
Mar 11 10:52:07 crc kubenswrapper[4840]: Trace[191349478]: [1.280222524s] [1.280222524s] END
Mar 11 10:52:07 crc kubenswrapper[4840]: I0311 10:52:07.940957 4840 trace.go:236] Trace[1110937575]: "Calculate volume metrics of mysql-db for pod openstack/openstack-galera-0" (11-Mar-2026 10:52:06.514) (total time: 1426ms):
Mar 11 10:52:07 crc kubenswrapper[4840]: Trace[1110937575]: [1.426549756s] [1.426549756s] END
Mar 11 10:52:08 crc kubenswrapper[4840]: I0311 10:52:08.073257 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a6422e4-95b1-45e6-b90c-8e72fb37b64f" path="/var/lib/kubelet/pods/6a6422e4-95b1-45e6-b90c-8e72fb37b64f/volumes"
Mar 11 10:52:14 crc kubenswrapper[4840]: I0311 10:52:14.682965 4840 scope.go:117] "RemoveContainer" containerID="d6f9bc474aaa77fdba07a4734a7cae2521afd4b9d2a0b4a471f48652a79e8643"